Category Archives: QA

Positioning Software in a Crowded Market

This is a guest post from serial software entrepreneur Dennis Gurock.

Thinking about product positioning (and matching branding) is especially important if you build a product for a crowded market with many established competitors (and there are many reasons why this can be a good idea). We were in exactly this situation when we initially thought about building and marketing our new test management tool.

Positioning allows you to focus on a specific market segment to target, makes it easier to build a clear and strong message to reach customers, and helps develop the initial product vision and feature set.

What does successful positioning mean for software products? It can mean identifying a unique angle to focus on, so your product stands out among competitors. Especially if you are entering a crowded market, this allows you to better communicate the key benefits and features you have to offer. It helps you reach the right customers and ensures that they remember you when they look for a new product to try.

To come up with positioning for your new product, you can focus on a specific customer segment or niche that you think will be easier to market to or that you think is underserved by existing offerings. It can also help you limit the initial product scope, so you can go to market faster. Then rigorously optimizing for this initial customer segment allows you to establish a market presence and expand to other segments more easily later.

Why is positioning useful?

There are many benefits of coming up with and deciding on positioning for your new software product early on. Once you decide on the positioning, many marketing, product management and sales decisions become more straightforward.

  • Clear message & benefits: it is not easy to stand out in a crowded market. Positioning allows you to come up with clear messaging so you can explain and highlight unique selling points in a few words.
  • Target and identify niche/marketing opportunities: it can be difficult to decide which marketing options to try, which campaigns to book and which niches to target. Focusing on a specific market segment based on the product positioning can be a great way to identify matching niches and opportunities.
  • Identify customer fit during sales: one of the most important aspects of the sales process is identifying and ensuring prospects are actually a great fit for your product. It’s wasted time for both you and for your prospects to invest a lot of effort evaluating and piloting a product if they will not benefit from it. Positioning can help you quickly filter and identify which customers to focus on.
  • Better focus on initial product vision: there are a lot of directions to choose when building a new product. If you don’t have a clear vision to guide you, it is easy to be distracted by different directions and work on too many things at the same time. Clear positioning makes it easy to focus product management on specific goals and use cases.
  • Easier to choose features: when you start working with customers, you will (hopefully) receive a lot of feedback on features you should add. Positioning helps you decide which of these features you should actually implement. Oftentimes the most successful products are developed by following strong opinions and saying ‘No’ to many requests.

Examples of software product positioning

Let’s look at a few examples of companies that use positioning to market and build their products. All these examples are from industries and product categories with many existing competitors and products.

  • Testmo: we entered a crowded market with many established testing tools when we developed our new product. Most existing offerings either focus on manual testing, or they offer a complete ALM toolset to handle the entire development lifecycle. With Testmo we had other ideas and wanted to position it differently, focusing on unified testing. This means we combine test cases, automation and exploratory testing in a single platform. At the same time it allows us to limit the scope of the product. We won’t add our own issue tracking, or CI pipelines, or other DevOps features. Instead, we focus on integrating with other tools customers already use.
  • Another example is the documentation and wiki product GitBook. They heavily focus on software developers and position themselves as the primary tool for developers to publish user docs and to document internal knowledge. With this positioning in mind, they can focus on features that primarily make sense for developers, such as Git synchronization, Markdown support and code snippets. It also allows them to more easily market directly to software developers with a clear message.
  • Then there’s the application monitoring service Checkly. There are many services and products that enable you to monitor apps and sites for downtime and notify you about issues. Checkly positions itself as a tool that enables end-to-end monitoring with flexible scripting. So it doesn’t just make simple web requests to see if a site is still live. It allows customers to write custom scripts to implement complex user flows and thereby not just check if a site is reachable, but also test the entire stack with the front-end, database, authentication and much more. This focus allows them to build more targeted features for advanced use cases and thereby provides more value to customers compared to simpler competitors.
  • The popular email marketing service Campaign Monitor also started with very focused positioning. In the first few years they concentrated on providing the best possible campaign tool for web designers and design agencies. This focus allowed them to invest more in features designers needed, such as white labeling, reusable themes and live email previews. Once they established their market presence, they started to expand their customer base to capture a larger part of the overall market for newsletter tools.

These are just some examples of companies and products that have benefited from clear positioning. Of course there are also countless examples of companies choosing not to have such clear positioning. There is nothing wrong with this and you can certainly be very successful even if you ignore these points. But more often than not positioning is a useful tool to improve focus on specific goals and customer needs, which increases your chance of building a successful software business.

Dennis Gurock is one of the founders of Testmo, a QA testing tool that unifies test case management with exploratory testing and test automation in one platform. He has been working on products that help teams improve software quality for more than 15 years.

30 tips for creating great software releases

If a film director is only as good as their last film, then I guess a software developer is only as good as their last software release. In more than 30 years of writing software professionally I have shipped my fair share of releases. For the last 13 years I have been shipping software as a solo developer. Here are a few things I have learned along the way. Some of them are specific to downloadable software, but some of them apply equally to SaaS products.

Use a version control system

I occasionally hear about software developers who don’t use a version control system. Instead they usually create some sort of janky system using dated copies of source folders or zip files. This sends shivers down my spine. Don’t be that guy. A version control system should be an essential part of every professional software developer’s toolkit. It matters less which version control system you use. All the cool kids now use distributed version control systems, such as git. But I find that Subversion is fine for my requirements.

Tag each release in version control

This makes it easy to go back and compare any two releases. A bug appeared in the printing between v1.1.1 and v1.1.2? Go back and diff the source files related to printing and review all the changes.

Store your release binaries in version control

I store every binary I ship to customers in my version control system. Many people will tell you that you should only store the source in version control. Then you can use this to regenerate the binaries if you need to. This was sound advice back in the day when hard disks were small, networks were slow, version control systems were clunky (SourceSafe!) and developer environments didn’t change very frequently. But I don’t think it is valid advice now. Hard disks are as cheap as chips, networks are much faster and online updates mean your SDK, compiler or some other element of your toolchain is likely to be updated between your releases, making it impossible for you to recreate an identical release binary later on.

‘Test what you ship, ship what you test’

In an analog system (such as a bridge) a tiny change in the system will usually only cause a small change in the behaviour. In a discrete system (such as software) a change to a single bit can make the difference between a solid release and a showstopper bug. The Mariner I rocket was destroyed by a single missing hyphen in the code. So test the binaries that you plan to ship to the customer. And if you change a single bit in the release, re-test it. You probably don’t need to run all the tests again. But you certainly can’t assume that a small change won’t cause a big problem.

This issue often manifests itself when the developers test the debug version of their executable and then ship the release version. They then find that the two have different behaviour, e.g. due to a compiler optimization, different memory layout or code inside an ASSERT.

Make each executable individually identifiable

As a corollary of the above, you need to be able to uniquely identify each executable. I do this by having a timestamp visible in the ‘About’ box (you can use __DATE__ and __TIME__ macros in C++ for this) and ensure that I rebuild this source file for every release.
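
For example, a minimal sketch of the idea in C++ (the function name is my own illustration; any source file that is forced to rebuild for each release will do):

// Expose the build timestamp so it can be shown in the 'About' box.
// __DATE__ and __TIME__ expand to the compile date/time of this file,
// so keep it in its own .cpp and rebuild it for every release.
const char* buildTimestamp()
{
    return __DATE__ " " __TIME__;   // e.g. "Jan 15 2024 14:02:33"
}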

Diff your release with the previous one

Do a quick diff of your new release files versus the previous ones. Have any of the files changed unexpectedly? Are any files missing?

Be more cautious as you get nearer the release

Try not to make major changes to your code or toolchain near a release. It is too risky and it means lots of extra testing. Sometimes it is better to ship a release with a minor bug than fix it near the release and risk causing a much worse problem that might not get detected in testing.

Test your release on a clean machine

Most of us have probably sent out a release that didn’t work on a customer’s machine due to a missing dynamic library. Oops. Make sure you test your release on a non-development machine. VMs are useful for this. Don’t expect customers to be very impressed when you tell them ‘It works on my machine’.

Test on a representative range of platforms

At least run a smoke test on the oldest and most recent version of each operating system you support.

Automate the testing where you can

Use unit tests and test harnesses to automate testing where practical. For example I can build a command line version of the seating optimization engine for my table plan software and run it on hundreds of sample seating plans overnight, to test changes haven’t broken anything.

If you set up a continuous integration server you can build a release and test it daily or even every commit. You can then quickly spot issues as soon as they appear. This makes bug fixing a lot easier than trying to work out what went wrong weeks down the line.

But still do manual testing

Automated tests won’t pick up everything, especially with graphical user interface issues. So you still need to do manual testing. I find it is very useful to see real-time path coverage data during testing, for which I use Coverage Validator.

Use third parties

You can’t properly test your own software or proofread your own documentation any more than you can tickle yourself. So try to get other people involved. I have found that it is sometimes useful to pay testing companies to do additional testing. But I always do this in addition to (not instead of) my own testing.

Your documentation is an important part of the release. So make sure you get it proofread by someone other than the person who wrote it.

Use your customers

Even two computers with the same hardware specifications and operating system can be set up with an almost infinite range of user options (e.g. screen resolutions, mouse and language settings) and third party software (e.g. anti-virus). Getting customers involved in beta testing means you can cover a much wider range of setups.

When I am putting out a major new release I invite customers to join a beta mailing list and email them each time there is a new version they can test. In the past I have offered free upgrades to the customers who found the most bugs.

Don’t rely only on testing

I believe in a defence in depth approach to QA. Testing is just one element.

Automate the release process as much as you can

Typically a release process involves quite a few steps: building the executable, copying files, building the installer, adding a digital signature etc. Write a script to automate as much of this as possible. This saves time and reduces the likelihood of errors.

Use a checklist for everything else

There are typically lots of tasks that can’t be automated, such as writing release notes, updating the online FAQ, writing a newsletter etc. Create a comprehensive checklist that covers all these tasks and go through it every release. Whenever you make a mistake, add an item to the checklist to catch it next time. Here is a delightfully meta checklist for checklists.

Write release notes

Customers are entitled to know what changes are in a release before they decide whether to install it. So write some release notes describing the changes. Use screen captures and/or videos, where appropriate, to break up the text. Release notes can also be very useful for yourself later on.

Email customers whose issues you have fixed

Whenever I record a customer bug report or a feature request, I also record the email of the customer. I then email them when there is a release with a fix. It seems only polite when they have taken the effort to contact me. But it also encourages them to report bugs and suggest features in future. I will also let them access the release before I make it public, so they can let me know if there are any problems with the fix that I might not have spotted.

Don’t force people to upgrade

Don’t force customers to upgrade if they don’t want to. And don’t nag them every day if they don’t. A case in point is Skype. It has (predictably) turned from a great piece of software into a piece of crap now that Microsoft have purchased it. Every release is worse than the last. And, to add insult to injury, it just keeps bleating at me to upgrade and there doesn’t seem to be any way to turn off the notifications.

Don’t promise ship dates

If you promise a ship date and you get your estimate wrong (which you will) then either:

  • You ship software that isn’t finished; or
  • You miss your ship date

Neither are good. So don’t promise ship dates. I never do and it makes my life a lot less stressful. It’s ready when it’s ready. I realize that some companies with investors, business partners and large marketing departments don’t have that luxury. I’m just glad that I am not them.

Inform existing customers of the release

There isn’t much point in putting out releases if no-one knows about them. By default my software checks an XML file on my server weekly and informs the customer if a new update is available. I also send out a newsletter with each software release. I generally get a spike in upgrades after each newsletter.
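
Once the latest version string has been fetched from the server, the comparison itself is straightforward. A minimal sketch, assuming dotted numeric versions (the function names are mine, not from PerfectTablePlan):

#include <sstream>
#include <string>
#include <vector>

// Split a dotted version string such as "4.1.0" into numeric parts,
// so that "4.10.0" correctly ranks above "4.9.0".
std::vector<int> parseVersion( const std::string& s )
{
    std::vector<int> parts;
    std::istringstream in( s );
    std::string tok;
    while ( std::getline( in, tok, '.' ) )
        parts.push_back( std::stoi( tok ) );
    return parts;
}

bool updateAvailable( const std::string& running, const std::string& latest )
{
    // std::vector compares element-by-element, which matches version ordering
    return parseVersion( running ) < parseVersion( latest );
}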

Don’t release too often

Creating a stable release is a lot of work, even if you manage to automate some of it. The more releases you do, the higher the percentage of your time you will spend testing, proofreading and updating your website.

Adobe Acrobat seems to go through phases of nagging at me almost daily for updates. Do I think “Wow, I am so happy that those Adobe engineers keep putting out releases of their useful free software”? No. I hate them for it. If you have an early stage product with early-adopters, they may be ok with an update every few days. But most mainstream customers won’t thank you for it.

Don’t release too infrequently

Fixing a bug or usability issue doesn’t help the customer until you ship it. Also a product with very infrequent updates looks dead. The appropriate release frequency will vary with the type of product and how complex and mature it is.

Digitally sign your releases

Digital certificates are a rip-off. But unsigned software makes you look like an amateur. I am wary of downloading any software that isn’t digitally signed. Apple now prevents you downloading unsigned software by default. Signing is just an extra line in your build script. It is a bit tedious getting a digital certificate though, so get one that lasts several years.

Check your binaries against major anti-virus software

Overzealous anti-virus software can be a real headache for developers of downloadable software. So it is worth checking if your release is likely to get flagged. You can do this using the free online resource virustotal.com. If you are flagged, contact the vendor and ask them to whitelist you.

‘The perfect is the enemy of the good’

Beware of the second-system effect. If you wait for perfection, then you will never ship anything. As long as this release is a significant improvement on the last release, then it is good enough to ship.

Pace yourself

Creating a release is exhausting. Even maths, physics and software prodigy Stephen Wolfram of Mathematica says so:

I’ve led a few dozen major software releases in my life. And one might think that by now I’d have got to the point where doing a software release would just be a calm and straightforward process. But it never is. Perhaps it’s because we’re always trying to do majorly new and innovative things. Or perhaps it’s just the nature of such projects. But I’ve found that to get the project done to the quality level I want always requires a remarkable degree of personal intensity. Yes, at least in the case of our company, there are always extremely talented people working on the project. But somehow there are always things to do that nobody expected, and it takes a lot of energy, focus and pushing to get them all together.

So look after yourself. Make sure you get enough sleep, exercise and eat healthily. Also things may be at their most intense straight after the release with promotion, support, bug fixing etc. So it may be a good idea to take a day or two off before you send the release out.

Don’t release anything just before you go away

There is always a chance a new release is going to mess things up. If you are a one-man band like me, you really don’t want to make a software release just before you go away on holiday or to a conference. Wait until you get back!

Fix screwups ASAP

We all make mistakes from time to time. I recently put out a release of my card planning software, Hyper Plan, that crashed on start-up on some older versions of macOS. Oops. But I got out a release with a fix as soon as I could.

Treat yourself after a release

Releases are hard work. A successful release deserves a treat!


Anything I missed?

The brutal truth about marketing your software product

We tend to hear a lot about software industry success stories. But most of us mere mortals have to fail a few times before we learn enough to succeed. In this guest post William Echlin talks about the hard lessons he has learned about creating and selling software products.

Probably, like you, I started developing my own software application a few years back. I had this dream of working for myself and becoming financially independent. The money side was a nice goal to have but ultimately I was looking for the fulfilment of working for myself. Sound familiar? Well, if it does, you may have learnt many of the lessons I’ve learnt. I don’t mind admitting now that I got carried away. I got carried away with building a test management application to the extent that I forgot about many of the key things you need in place to build a successful business.

After a few years’ work I’d created the leading open source test management application (a product called QaTraq that’s still available on SourceForge but a little dormant). It had cost me time, money and effort. I’d achieved some success with building and marketing a free product. Next stop: taking it commercial. This is where it gets brutal.

About a year into leaving a full time job I’m taking the last £1,000 out of the joint bank account. I’m making some sales but it’s damn tough. A few months later and I’m in the supermarket, £15,000 in debt, wondering if my credit card is about to be rejected for the family’s weekly shop. You read about this sort of thing in biographies of successful entrepreneurs. These guys take it to the limit and then succeed and make millions. Sounds so glamorous. When your wife, 3 year old son and 1 year old daughter depend on that credit card being accepted believe me it’s NOT glamorous.

Building a business has always been about balancing design, development, sales, marketing, support, testing, etc. When you’re a one man band that’s not easy. You try to do everything. You’re bloody brilliant at building the product. The trouble is, once you want to make a living out of it, that “building” is almost the least important bit. After I’d spent 5 years building my product I stumbled upon one very useful piece of advice. It was a little late for me but maybe it’ll help you….

“Learn how to market and sell before you build your product. Learn these crafts by picking a product that’s already been built and act as a reseller”.

That’s worth reading again (it’s counter intuitive). What’s being said here is that if you can’t market and sell a product (ANY product) then the odds of succeeding with your own product are slim. If you can’t “market and sell” what on earth is the point in wasting all that time, effort and money building your own product? If you’re never going to be able to market it, and sell it, why build it?

So find a product in a slightly different sector and sign up as a reseller. Save yourself the time and effort of building a product and practice marketing and sales with someone else’s product first. Create a web site, develop an AdWords campaign and start promoting with social media. Sell the product! If you can’t get the hang of this why bother building your own? If you can get the hang of building your own marketing machine it won’t be wasted effort. If you’re clever and pick the right product / sector you just need to switch the product on your site a year or so down the road. Once you’ve built the marketing and sales engine switch it to sell the product you’re building.

I’m not saying that this is the only way to go about it. I’m just saying that if you don’t have the determination to learn, understand and be successful with marketing and sales early on, then it’s unlikely you’ll succeed with your own product. So why waste time building it? It’s a tough lesson to learn. One I learnt the hard way.

And the specific lessons I learnt the hard way? Well I’d do these things first if I was ever to do this again:

1. Create at least one lead generation channel as an affiliate for another product. That lead generation channel will probably be a web site and as part of that you’ll need to master things like:

  • Google AdWords
  • Social media
  • Email marketing
  • Blogging
  • Link building

All these things take a lot of time. Do you have the determination to learn and execute on all of this?

2. Spend some time in a sales related role. Initially I was working in a full time job whilst building my own product in my spare time. The best thing I did was offer to help the sales team with product demos. I learnt lots from working closely with sales people (I didn’t like them very much, but that’s a different matter) and clients. If you can’t do product demos to clients, or you can’t talk to clients confidently, then you don’t stand a chance of selling anything. People buy from people, and a product demo is THE place to showcase YOU (and the product).

3. Spend time learning about re-marketing. A lot of money goes into getting that initial lead. Don’t waste it! Understand Google’s re-marketing campaigns. These allow you to follow the people that came to your site and continue serving them banner ads on other sites. Understand email marketing once you’ve captured an email address. Yes, I hate most of this when I’m on the receiving end. The reality is that it works though. That’s why companies do it (and why Google make so much money). I’ll tell you now that your business won’t survive if you don’t master some of these techniques. And if your business doesn’t survive then every ounce of effort you’ve put into building that application is wasted!

4. Spend time learning about cross selling. A significant amount of revenue can come from cross selling other products. When was the last time you went to a restaurant and they didn’t try to sell you a bread roll? When was the last time you flew somewhere and they didn’t try to sell you priority boarding? For you this might be in the guise of selling your leads to other companies that have complementary products. It might be providing different editions of your application. There are many other ways to add additional revenue streams to your prime product sale. These streams are absolutely critical to the success of your business.

5. Don’t try to become a sales person. You don’t have to be a sales man/woman to sell. Some of the best sales people I’ve worked with are those that just go out of their way to HELP the customer. They understand their niche inside out and have the gift, not to sell, but to HELP. People that are looking to buy something want help. They want an itch scratched or a problem solved. If you can help them with a solution then you’re most of the way towards making the sale. Forget all this rubbish about psychology and techniques to influence people. The best thing you can do is enter the mind set of helping! Go out of your way to help.

I don’t have all of this right by any stretch. I know one thing though. Products don’t sell themselves. And if you’re not prepared to start learning about sales and marketing you won’t sell your product.

It was all a bit ironic for me though. I spent years building my own test management product to help software testers. It even started out as the leading open source solution in its market for many years. I mastered SEO and created a great lead generation process (the oxygen of any business). I created a version which I put a price on and sold to companies. I even sold to a number of significant companies. But I just couldn’t do all of it. I couldn’t balance the design, development, testing, marketing, sales, support, etc. It’s brutally painful when this dawns on you.

In the end what I’d really mastered was lead generation. I ended up with a web site that attracted my target audience but failed to sell much. When you realise that, you realise that it’s the product. Nothing wrong with the marketing and sales. It’s the product. There were better products out there. Kind of tough to swallow but as soon as I did, I moved on. These leads, or rather people (because leads are actually real people), were looking for help. I just needed to provide them with the right product and services. So I started reselling other products and providing consultancy around those products on my test management website.

In the end I had one of the toughest bits right. If you get the lead generation right you’ve built a marketing foundation that you can build any type of business around. For me I just wished I figured the marketing piece out before I’d built my product. Now I just work on my marketing. Oh, and I help companies with their software testing and test management. For me at least, it’s much easier this way.

William Echlin has spent 20 years in testing, working on everything from air traffic control systems to anti-virus engines. He had a bad experience in his early childhood trying to effectively manage test cases with vi (he’s still a huge fan of vi but recognises that text files make a lousy repository for test cases). In an attempt to deal with these childhood demons he became a consultant on all things related to test management.

TestLab² offer

The blog is being sponsored this month by TestLab², a software testing and QA company based in Ukraine. I have used TestLab² on a number of occasions for third party testing of PerfectTablePlan releases on both Windows and Mac OS X. They found a number of bugs that I hadn’t been able to find on my own (testing your own software is always problematic) and gave me additional confidence that I hadn’t let any embarrassing bugs make it through into the final binaries. Their prices are very reasonable (from $20/hour) and I have always found them to be very professional and responsive (see my previous write-up on outsourcing testing). They also have access to operating systems that I don’t have set up, e.g. Windows 8 and Mac OS X 10.8.

Special offer

Quote “successful software” when you ask for an estimate and they will give you a 20% discount. This offer is valid for first-time customers, for the next 14 days only.

TestLab2.com website

Cppcheck – A free static analyser for C and C++

I got a tip from Anna-Jayne Metcalfe of C++ and QA specialists Riverblade to check out Cppcheck, a free static analyser for C and C++. I ran >100 kLOC of PerfectTablePlan C++ through it and it picked up a few issues, including:

  • variables uninitialised in constructors
  • classes passed by value, rather than as a const reference
  • variables whose scopes could be reduced
  • methods that could be made const

It only took me a few minutes from downloading to getting results. And the results are a lot less noisy than lint. I’m impressed. PerfectTablePlan is heavily tested and I don’t think any of the issues found are the cause of bugs in PerfectTablePlan, but it shows the potential of the tool.
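
To make those findings concrete, here is a small contrived class (my own illustration, not PerfectTablePlan code) exhibiting the kinds of issues Cppcheck reports:

#include <string>

class Employee
{
public:
    Employee() {}                           // m_id is not initialised in the constructor
    void setName( std::string name )        // passed by value: a const reference would avoid a copy
    {
        m_name = name;
    }
    std::string name() { return m_name; }   // method could be made const
private:
    int m_id;
    std::string m_name;
};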

The documentation is here. But, on Windows, you just need to start the Cppcheck GUI (in C:\Program files\Cppcheck, they appear to be too modest to add a shortcut to your desktop), select Check>Directory… and browse to the source directory you want to check. Any issues found will then be displayed.

You can also set an editor to integrate with, in Edit>Preferences>Applications. Double clicking on an issue will then display the appropriate line in your editor of choice.

Cppcheck is available with a GUI on Windows and as a command line tool on a range of platforms. There is also an Eclipse plugin. See the SourceForge page for details on platforms and IDEs supported. You can even write your own Cppcheck rules.

Cppcheck could be a very valuable additional layer in my defence in depth approach to QA. I have added it to my checklist of things to do before each new release.

New links page

I have put together a page of categorised links to blog posts and articles that I think might be useful to developers and marketers of commercial software in general, and microISVs/indie developers in particular. I intend to add more links from time-to-time. My rules for inclusion are secret, arbitrary and capricious, so please don’t ask to have your link added.

Outsourcing software testing

Every time I write a post for this blog I carefully check it for typos. I then get my wife to proof-read it. She always finds at least one typo. Often there will be whole words missing that my brain must have interpolated when I checked it. I read what I thought I had written. She is unencumbered by such preconceptions.

Similarly, it isn’t sufficient to do all your own testing on software you wrote, no matter how hard you try. You will tend to see what you intended to program, not what you actually programmed. Furthermore your users have different experiences, assumptions, and patterns of usage to you. Even in the unlikely event that you manage 100% code coverage in your testing, those pesky users won’t execute those lines of code in the same order you did. I have spent hours testing a program without finding a bug, only to see someone else break it within minutes or even seconds.

So it is essential to involve people other than the original programmer in testing, in addition to (but not instead of) the testing programmers do on their own code. This poses something of a challenge to one-man-bands such as my own. I don’t have other programmers, let alone QA staff, to call on. I can, and do, use volunteer customers for beta testing. But, in my experience, beta testing is not an effective substitute for professional testing:

  • It is haphazard. I never hear from ~90% of my beta testers.
  • You can’t control beta testers sufficiently; for example, you can’t set them tight deadlines, make them concentrate on a particular feature or do their testing on a particular operating system.
  • The quality of bug reports from customers is often poor. Customers often don’t understand (or don’t have the patience) to describe a bug in enough detail for you to reproduce it.
  • Professional testers know how to break software.
  • The new release should be as polished as possible before any customers see it. Your beta testers will be some of your most enthusiastic customers. You don’t want to use up that goodwill by sending them buggy software.

Consequently I like to pay third party testers to test my own PerfectTablePlan product after I have finished my own testing and before I do any beta testing. Previously I have used softwareexaminer.com, but they are no longer in business. So I decided to try a couple of other offshore testing companies I had heard about:

testlab2.com
qsgsoft.com

The problem with paying a testing company is that it is hard to assess the quality of their work until it is too late. If they report few bugs it could be because there are few bugs or because they didn’t do a very good job of testing. By using 2 companies to test the same software release I was also testing the testers (I didn’t tell them this).

I paid each company to do approximately 3 days testing on the Windows and Mac versions of PerfectTablePlan. I was very pleased with the results. Both companies found a useful number of bugs in the software. They were also able to test on platforms that I didn’t have access to at the time (64 bit Windows 7 and Mac OS X 10.6). I didn’t keep an exact score, but I would say that QSG found more bugs, while TestLab2 was more responsive.

QSG found some quite obscure bugs. They were even able to tell me how to reproduce a very rare and obscure bug that I had been trying to track down for months without success. Communications were sometimes a little slow (at least partly due to us being in different time zones) but it wasn’t a huge issue. My only real grumble is their billing. Despite several reminder emails from me I am still waiting to be invoiced for the work several months later. I like to pay my bills promptly and then forget about them.

TestLab2 didn’t find quite as many bugs, but I was impressed with their responsiveness. They installed Mac OS X 10.6 within a few days of it being released, so they could test PerfectTablePlan on it. When I emailed them on a Saturday about a last minute bug fix for Mac OS X 10.6 they tested the fix the same day. That is great service.

TestLab2 and QSG are based in Ukraine and India, respectively. At around $15/hour they are about a third the price of equivalent US/European companies I contacted (who might also outsource the work to Eastern Europe and India, for all I know). Some people believe outsourcing work to countries with lower costs of living is evil. I’m not one of them. I sell my software worldwide and I am also happy to buy my services worldwide, especially if I can get significantly better value for money by doing so. While there are rational arguments to be made about problems caused by differences in culture, language and time zone caused by outsourcing to other countries, I didn’t find any of these to be a major issue in this case. Most of the other arguments I have heard boil down to the simple ugly fact that some westerners feel they are entitled to a disproportionate share of the global pie. But I don’t see any reason why someone in Europe or North America is any more deserving of a job than someone in Ukraine or India.

With the help of these two companies I was able to put out a really solid PerfectTablePlan v4.1.0 release, despite the large number of new features. In fact, I am only just putting out a v4.1.1 with some bugs fixes several months later. I plan to use both companies again. I hope readers of this blog will give them some additional work to ensure they stay in business. But not so much that they don’t have time to do my next round of testing!

CoverageValidator v3

The nice folk at Software Verification have done a major new release of Coverage Validator, and the new version fixes many of the issues I noted in a previous post. In particular:

  • The instrumentation can use breakpoint functionality to get better line coverage on builds with debug information enabled.
  • Previous sessions can be automatically merged into new sessions.
  • The default colour scheme has been toned down.
  • The flashing that happened when you resized the source window has gone.
  • It is now possible to mark sections of code not to be instrumented. I haven’t had time to try this yet, as it was only introduced in v3.0.4. But it should be very useful as currently I have a lot of defensive code that should never be reached (see below). Instrumenting this code skews the coverage stats and makes it harder to spot lines that should have been executed, but weren’t.

There are still a few issues:

  • I had problems trying to instrument release versions of my code.
  • It still fails to instrument some lines (but not many).
  • I had a couple of crashes during testing that don’t seem to have been caused by my software (although I can’t prove that).

But the technical support has been very responsive and new versions are released fairly frequently. Overall version 3 is a major improvement to a very useful tool. Certainly it helped me find a few bugs during the testing of version 4 of Perfect Table Plan on Windows. I just wish there was something comparable for MacOSX.

Using defence in depth to produce high quality software

‘Defence in depth’ is a military strategy where the attacker is allowed to penetrate the defender’s lines, but is then gradually worn down by successive layers of defences. This strategy was famously used by the Soviet Army to halt the German blitzkrieg at the battle of Kursk, using a vast defensive network including trenches, minefields and gun emplacements. Defence in depth also has parallels in non-military applications. I use a defence in depth approach to detect bugs in my code. A bug has to pass through multiple layers of defences undetected before it can cause problems for my customers.

Layer 1: Compiler warnings

Compiler warnings can help to spot many potential bugs. Crank your compiler warnings up to maximum sensitivity to get the most benefit.
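
For example, here is a little function with two classic bugs that maximum warning levels (/W4 on Visual C++, -Wall -Wextra on g++) will catch:

#include <vector>

bool isEmpty( const std::vector<int>& v )
{
    int n = 0;
    if ( n = v.size() )   // warns: assignment in a condition (did you mean == ?)
        return false;     // also warns: signed/unsigned conversion from size_t to int
    return true;
}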

Layer 2: Static analysis

Static analysis takes over where compiler warnings leave off, examining code in great detail looking for potential errors. An example static analyser is Gimpel PC-Lint for C and C++. PC-Lint performs hundreds of checks for known issues in C/C++ code. The flip side of its thoroughness is that it can be difficult to spot real issues amongst the vast numbers of warnings and it can take some time to fine-tune the checking to a useful level.

Layer 3: Code review

A fresh set of eyes looking at your code will often spot problems that you didn’t see. There are various ways to go about this, including formal Fagan inspections, Extreme Programming style pair programming and informal reviews. There is quite a lot of documented evidence to suggest that this is one of the most effective ways to find bugs. It is also an excellent way to mentor less experienced programmers. But it is time consuming and can be hard on the ego of the person being reviewed. Also, it isn’t really an option for solo developers.

Layer 4: Self-checking

Of the vast space of states that a program can occupy, usually only a minority will be valid. E.g. it makes no sense to set a zero or negative radius for a circle. We can check for invalid states in C/C++ with an assert() macro:

class Circle
{
    public:
        void setRadius( double radius );
    private:
        double m_radius;
};

void Circle::setRadius( double radius )
{
    assert( radius > 0.0 );
    m_radius = radius;
}

The program will now halt with a warning message if the radius is set inappropriately. This can be very helpful for finding bugs during testing. Assertions can also be useful for setting pre-conditions and post-conditions:

    void List::remove( Item* i )
    {
        assert( contains( i ) );
        ...
        assert( !contains( i ) );
    }

Or detecting when an unexpected branch is executed:

    switch ( shape )
    {
        case Shape::Square:
            ...
        break;

        case Shape::Rectangle:
            ...
        break;

        case Shape::Circle:
            ...
        break;

        case Shape::Ellipse:
            ...
        break;

        default:
            assert( false ); // shouldn't get here
        break;
    }

Assertions are not compiled into release versions of the software, which means they don’t incur any overhead in production code. But this also means:

  • Assertions are not a substitute for proper error handling. They should only be used to check for states that should never occur, regardless of the program input.
  • Calls to an assert() must not change the program state, or the debug and release versions will behave differently (see the sketch below).
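
A minimal sketch of that second pitfall:

#include <cassert>
#include <cstdio>

int readCount( FILE* f )
{
    int count = 0;

    // Wrong: the fscanf() call disappears in release builds, because the
    // whole expression is compiled away when NDEBUG is defined:
    // assert( std::fscanf( f, "%d", &count ) == 1 );

    // Right: do the work unconditionally, then assert on the result.
    int fieldsRead = std::fscanf( f, "%d", &count );
    assert( fieldsRead == 1 );
    (void)fieldsRead;   // avoid an 'unused variable' warning in release builds
    return count;
}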

Different languages have different approaches, for example pre and post conditions are built into the Eiffel language.

Layer 5: Dynamic analysis

Dynamic checking usually involves automatically instrumenting the code in some way so that its runtime behaviour can be checked for potential problems such as: array bound violations, reading memory that hasn’t been written to and memory leaks. An example dynamic analyser is the excellent and free Valgrind for Linux. There are a few dynamic analysers for Windows, but they tend to be expensive. The only one I have tried in the last few years was Purify and it was flaky (do IBM/Rational actually use their own tools?).

Layer 6: Unit testing

Unit testing requires the creation of a test harness to execute various tests on a small unit of code (typically a class or function) and flag any errors. Ideally the unit tests should then be executed every time you make a change to the code. You can write your own test harnesses from scratch, but it probably makes more sense to use one of the existing frameworks, such as: NUnit (.NET), JUnit (Java), QUnit (Qt) etc.

According to the Test Driven Development approach you should write your unit tests before you write the code. This makes a lot of sense, but requires discipline.
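
As a minimal illustration, here is the skeleton of a hand-rolled harness (the names and the unit under test are my own; an established framework gives you this and much more):

#include <cstdio>

static int g_failures = 0;

// Report and count a failure, but keep running the remaining tests.
#define CHECK( expr ) \
    do { \
        if ( !(expr) ) { \
            std::printf( "FAIL %s:%d: %s\n", __FILE__, __LINE__, #expr ); \
            ++g_failures; \
        } \
    } while ( 0 )

// Hypothetical unit under test
double clamp( double value, double lo, double hi )
{
    return value < lo ? lo : ( value > hi ? hi : value );
}

int main()
{
    CHECK( clamp( 5.0, 0.0, 10.0 ) == 5.0 );
    CHECK( clamp( -1.0, 0.0, 10.0 ) == 0.0 );
    CHECK( clamp( 99.0, 0.0, 10.0 ) == 10.0 );
    std::printf( "%d failure(s)\n", g_failures );
    return g_failures ? 1 : 0;   // non-zero exit code can fail the build script
}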

Layer 7: Integration testing

Integration testing involves testing that different modules of the system work correctly together, particularly the interfaces between your code and hardware or third party libraries.

Layer 8: System testing

System testing is testing the system in its entirety, as delivered to the end-user. System testing can be done manually or automatically, using a test scripting tool.

Unit, integration and system testing should ideally be done using a coverage tool such as Coverage Validator to check that the testing is sufficiently thorough.

Layer 9: Regression testing

Regression testing involves running a series of tests and comparing the results to the same input data run on the previous release of the system. Any differences may be the result of bugs introduced since the last release. Regression testing works particularly well on systems that take a single input file and produce a single output file – the output file can just be diff’ed against the previous output.
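
The heart of such a regression check can be very simple. A sketch, assuming the output is deterministic (timestamps, version strings and the like must be stripped out first):

#include <fstream>
#include <iterator>
#include <string>

// Returns true if the new release's output file is byte-for-byte identical
// to the stored output from the previous release.
bool outputsMatch( const std::string& oldPath, const std::string& newPath )
{
    std::ifstream oldFile( oldPath.c_str(), std::ios::binary );
    std::ifstream newFile( newPath.c_str(), std::ios::binary );
    using It = std::istreambuf_iterator<char>;
    return std::string( It( oldFile ), It() ) ==
           std::string( It( newFile ), It() );
}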

Layer 10: Third party testing

Different users have different patterns of usage. You might prefer drag and drop, someone else might use right-click a lot and yet another person might prefer keyboard accelerators. So it would be unwise to release a system that has only ever been tested by the developer. Furthermore, the developer inevitably makes all sorts of assumptions about how the software will be used. Some of those assumptions will almost certainly be wrong.

There are a number of companies that can be paid by the day to do third party testing. I have used softwareexaminer.com in the past with some success.

Layer 11: Beta testing

End-user systems can vary in processor speed, memory, screen resolution, video card, font size, language choice, operating system version/update level and installed software. So it is necessary to test your software on a representative range of supported hardware + operating system + installed software. Typically this is done by recruiting users who are keen to try out new features, for example through a newsletter. Unfortunately it isn’t always easy to get good feedback from beta testers.

Layer 12: Crash reporting

If each of the above 11 layers of defence catches 50% of the bugs missed by the previous layer, we would expect only 1 bug in 2,048 to make it into production code undetected. Assuming your coding isn’t spectacularly sloppy in the first place, you should end up with very few bugs in your production code. But, inevitably, some will still slip through. You can catch the ones that crash your software with built-in crash reporting. This is less than ideal for the person whose software crashed. But it allows you to get detailed feedback on crashes and consequently get fixes out much faster.

I rolled my own crash reporting for Windows and MacOSX. On Windows the magic function call is SetUnhandledExceptionFilter. You can also sign up to the Windows Winqual program to receive crash reports via Windows’ own crash reporting. But, after my deeply demoralising encounter with Winqual as part of getting the “works with Vista” logo, I would rather take dance lessons from Steve Ballmer.
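
A minimal sketch of the Windows half (the handler body is illustrative; a real one would write a minidump with MiniDumpWriteDump from dbghelp.dll and offer to submit it):

#include <windows.h>
#include <cstdio>

// Invoked by Windows when no other handler deals with the exception.
LONG WINAPI crashHandler( EXCEPTION_POINTERS* info )
{
    std::fprintf( stderr, "Crash: exception code 0x%08lX\n",
                  (unsigned long)info->ExceptionRecord->ExceptionCode );
    // ...write a minidump and invite the user to submit a crash report...
    return EXCEPTION_EXECUTE_HANDLER;   // then let the process terminate
}

int main()
{
    SetUnhandledExceptionFilter( crashHandler );
    // ...run the application...
    return 0;
}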

Test what you ship, ship what you test

A change of a single byte in your binaries could be the difference between a solid release and a release with a showstopper bug. Consequently you should only ship the binaries you have tested. Don’t ship the release version after only having tested the debug version and don’t ship your software after a bug fix without re-doing the QA, no matter how ‘trivial’ the fix. Sometimes it is better to ship with minor (but known) bugs than to try to fix these bugs and risk introducing new (and potentially much worse) bugs.

Cross-platform development

I find that shipping my software on Windows and MacOSX from a single code base has advantages for QA.

  • different tools with different strengths are available on each platform
  • the Gnu C++ compiler may warn about issues that the Visual Studio C++ compiler doesn’t (and vice versa)
  • a memory error that is intermittent and hard to track down on Windows might be much easier to find on MacOSX (and vice versa)

Conclusion

For the best results you need your layers of checks to be part of your day-to-day development, not something you do just before a release. This is best done by automating them as much as possible, e.g.:

  • setting the compiler to treat warnings as errors
  • performing static analysis and unit tests on code check-in
  • running regression tests on the latest version of the code every night

Also you should design your software in such a way that it is easy to test. E.g. building in log file output can make it much easier to perform regression tests.

Defence in depth can find a high percentage of bugs. But obviously the more bugs you start with the more bugs that will end up in your code. So it doesn’t remove the need for good coding practices. Quality can’t be ‘tested in’ to code afterwards.

I have used all 12 layers of defence above at some point in my career. Currently I am not using static analysis (I must update that PC-Lint licence), code review (I am a solo developer) and dynamic analysis (I don’t currently have a dynamic analyser for Windows or MacOSX). I could also do better on unit testing. But according to my crash reporting, the latest version of PerfectTablePlan has crashed just three times in the last 5,000+ downloads (the same bug each time, somewhere deep down in the Qt print engine). Not all customers click the ‘Submit’ button to send the crash reports and crashes aren’t the only type of bug, but I think this is indicative of a good level of quality. It is probably a lot better than most of the other consumer software my customers use[1]. Assuming the crash reporting isn’t buggy, of course…

[1]Windows Explorer and Microsoft Office crash on a daily basis on my current machine.

Coverage Validator

The sink is full of washing, I am wearing odd socks and I haven’t been out of the house in days. It must be time to put out that new release. But how can I be sure my testing hasn’t missed a hideously embarrassing bug? Maybe I introduced a major bug when I made that ‘cosmetic’ change at 2am?

In an ideal world I would just run a comprehensive automated regression test suite. Unfortunately it is difficult to automate graphical user interface (GUI) testing and the majority of lines of code in most applications are GUI. I estimate that the code for my own table planner software is at least 75% GUI code (not including generated code, which would push it even higher).

So I try to manually execute every line of my application before I release it. If I have to make any changes to the code, I start over again. This is very dull, but at least I have a tool to help me: Coverage Validator. Coverage Validator instruments code and shows, in real time, which lines have been executed. Click a few buttons on your application and watch the executed lines of code change colour from pink to yellow. Execute every line in the file and all the lines change colour to cyan. No recompilation or relinking is required and it doesn’t slow down the tested application too much. This real-time feedback is incredibly powerful for testing.


Unfortunately it also has a lot of shortcomings:

  • The usability isn’t great. There is a confusing plethora of options for instrumenting your code that I would rather not have to know about.
  • It isn’t able to ‘hook’ (instrument) all the lines of code. Whole blocks get missed out for reasons I don’t fully understand. Single line branches are particularly likely to be missed.
  • The GUI isn’t great. For example, the display flashes horribly if you resize it.
  • The automatic results merging is just plain weird. At the end of a session it can merge your coverage results into a previous session. This information isn’t much use to me at the end of a session. I want to merge previous results at the start of a session so I know which lines I haven’t tested.
  • The GUI is quite ugly. They really need to update those tired old icons.

However being able to see line coverage information in real time is just so incredibly useful that I am prepared to put up with the many shortcomings. I just run my application alongside Coverage Validator and, file-by-file and function-by-function, I try to turn the lines of code yellow (or, better still, cyan). Every time I have used Coverage Validator I have found at least one potentially embarrassing bug that I hadn’t discovered by any other means. The support has also been responsive. It is just a pity about the flaws, without them this would be a ‘killer app’ for testing.

Coverage Validator works with C++, Delphi and VB on Windows NT4, 2000, 2003 and XP[1]. A single licence costs $199. A free 30-day evaluation licence is available.

[1]I am using it on Vista currently, and it seems to work fine.