If a film director is only as good as their last film, then I guess a software developer is only as good as their last software release. In more than 30 years of writing software professionally I have shipped my fair share of releases. For the last 13 years I have been shipping software as a solo developer. Here are a few things I have learned along the way. Some of them are specific to downloadable software, but some of them apply equally to SaaS products.
Use a version control system
I occasionally hear about software developers who don’t use a version control system. Instead they usually create some sort of janky system using dated copies of source folders or zip files. This sends shivers down my spine. Don’t be that guy. A version control system should be an essential part of every professional software developer’s toolkit. It matters less which version control system you use. All the cool kids now use distributed version control systems, such as git. But I find that Subversion is fine for my requirements.
Tag each release in version control
This makes it easy to go back and compare any two releases. A bug appeared in the printing between v1.1.1 and v1.1.2? Go back and diff the source files related to printing and review all the changes.
Store your release binaries in version control
I store every binary I ship to customers in my version control system. Many people will tell you that you should only store the source in version control, and use that to regenerate the binaries if you need to. This was sound advice back in the day when hard disks were small, networks were slow, version control systems were clunky (SourceSafe!) and developer environments didn’t change very frequently. But I don’t think it is valid advice now. Hard disks are as cheap as chips, networks are much faster, and online updates mean your SDK, compiler or some other element of your toolchain is likely to be updated between your releases, making it impossible for you to recreate an identical release binary later on.
‘Test what you ship, ship what you test’
In an analog system (such as a bridge) a tiny change in the system will usually only cause a small change in the behaviour. In a discrete system (such as software) a change to a single bit can make the difference between a solid release and a showstopper bug. The Mariner I rocket was destroyed by a single missing hyphen in the code. So test the binaries that you plan to ship to the customer. And if you change a single bit in the release, re-test it. You probably don’t need to run all the tests again. But you certainly can’t assume that a small change won’t cause a big problem.
This issue often manifests itself when the developers test the debug version of their executable and then ship the release version. They then find that the two have different behaviour, e.g. due to a compiler optimization, different memory layout or code inside an ASSERT.
Make each executable individually identifiable
As a corollary of the above, you need to be able to uniquely identify each executable. I do this by having a timestamp visible in the ‘About’ box (you can use __DATE__ and __TIME__ macros in C++ for this) and ensure that I rebuild this source file for every release.
Diff your release with the previous one
Do a quick diff of your new release files versus the previous ones. Have any of the files changed unexpectedly? Are any files missing?
Be more cautious as you get nearer the release
Try not to make major changes to your code or toolchain near a release. It is too risky and it means lots of extra testing. Sometimes it is better to ship a release with a minor bug than to fix it at the last minute and risk causing a much worse problem that might not get detected in testing.
Test your release on a clean machine
Most of us have probably sent out a release that didn’t work on a customer’s machine due to a missing dynamic library. Oops. Make sure you test your release on a non-development machine. VMs are useful for this. Don’t expect customers to be very impressed when you tell them ‘It works on my machine’.
Test on a representative range of platforms
At least run a smoke test on the oldest and most recent version of each operating system you support.
Automate the testing where you can
Use unit tests and test harnesses to automate testing where practical. For example, I can build a command line version of the seating optimization engine for my table plan software and run it on hundreds of sample seating plans overnight, to check that changes haven’t broken anything.
If you set up a continuous integration server you can build a release and test it daily or even every commit. You can then quickly spot issues as soon as they appear. This makes bug fixing a lot easier than trying to work out what went wrong weeks down the line.
But still do manual testing
Automated tests won’t pick up everything, especially graphical user interface issues. So you still need to do manual testing. I find it is very useful to see real-time path coverage data during testing, for which I use Coverage Validator.
Use third parties
You can’t properly test your own software or proofread your own documentation any more than you can tickle yourself. So try to get other people involved. I have found that it is sometimes useful to pay testing companies to do additional testing. But I always do this in addition to (not instead of) my own testing.
Your documentation is an important part of the release. So make sure it is proofread by someone other than the person who wrote it.
Use your customers
Even two computers with the same hardware specifications and operating system can be set up with an almost infinite range of user options (e.g. screen resolutions, mouse and language settings) and third party software (e.g. anti-virus). Getting customers involved in beta testing means you can cover a much wider range of setups.
When I am putting out a major new release I invite customers to join a beta mailing list and email them each time there is a new version they can test. In the past I have offered free upgrades to the customers who found the most bugs.
Don’t rely only on testing
I believe in a defence in depth approach to QA. Testing is just one element.
Automate the release process as much as you can
Typically a release process involves quite a few steps: building the executable, copying files, building the installer, adding a digital signature etc. Write a script to automate as much of this as possible. This saves time and reduces the likelihood of errors.
Use a checklist for everything else
There are typically lots of tasks that can’t be automated, such as writing release notes, updating the online FAQ, writing a newsletter etc. Create a comprehensive checklist that covers all these tasks and go through it every release. Whenever you make a mistake, add an item to the checklist to catch it next time. Here is a delightfully meta checklist for checklists.
Write release notes
Customers are entitled to know what changes are in a release before they decide whether to install it. So write some release notes describing the changes. Use screen captures and/or videos, where appropriate, to break up the text. Release notes can also be very useful for yourself later on.
Email customers whose issues you have fixed
Whenever I record a customer bug report or a feature request, I also record the email address of the customer. I then email them when there is a release with a fix. It seems only polite when they have made the effort to contact me. But it also encourages them to report bugs and suggest features in future. I will also let them access the release before I make it public, so they can let me know if there are any problems with the fix that I might not have spotted.
Don’t force people to upgrade
Don’t force customers to upgrade if they don’t want to. And don’t nag them every day if they don’t. A case in point is Skype. It has (predictably) turned from a great piece of software into a piece of crap now that Microsoft have purchased it. Every release is worse than the last. And, to add insult to injury, it just keeps bleating at me to upgrade and there doesn’t seem to be any way to turn off the notifications.
Don’t promise ship dates
If you promise a ship date and you get your estimate wrong (which you will) then either:
- You ship software that isn’t finished; or
- You miss your ship date
Neither are good. So don’t promise ship dates. I never do and it makes my life a lot less stressful. It’s ready when it’s ready. I realize that some companies with investors, business partners and large marketing departments don’t have that luxury. I’m just glad that I am not them.
Inform existing customers of the release
There isn’t much point in putting out releases if no-one knows about them. By default my software checks an XML file on my server weekly and informs the customer if a new update is available. I also send out a newsletter with each software release. I generally get a spike in upgrades after each newsletter.
Don’t release too often
Creating a stable release is a lot of work, even if you manage to automate some of it. The more releases you do, the higher the percentage of your time you will spend testing, proofreading and updating your website.
Adobe Acrobat seems to go through phases of nagging at me almost daily for updates. Do I think “Wow, I am so happy that those Adobe engineers keep putting out releases of their useful free software”? No. I hate them for it. If you have an early stage product with early-adopters, they may be ok with an update every few days. But most mainstream customers won’t thank you for it.
Don’t release too infrequently
Fixing a bug or usability issue doesn’t help the customer until you ship it. Also a product with very infrequent updates looks dead. The appropriate release frequency will vary with the type of product and how complex and mature it is.
Digitally sign your releases
Digital certificates are a rip-off. But unsigned software makes you look like an amateur. I am wary of downloading any software that isn’t digitally signed. Apple now prevents you from downloading unsigned software by default. Signing is just an extra line in your build script. It is a bit tedious getting a digital certificate though, so get one that lasts several years.
Check your binaries against major anti-virus software
Overzealous anti-virus software can be a real headache for developers of downloadable software. So it is worth checking whether your release is likely to get flagged. You can do this using the free online resource virustotal.com. If you are flagged, contact the vendor and ask them to whitelist you.
‘The perfect is the enemy of the good’
Beware of the second-system effect. If you wait for perfection, then you will never ship anything. As long as this release is a significant improvement on the last release, then it is good enough to ship.
Creating a release is exhausting. Even maths, physics and software prodigy Stephen Wolfram of Mathematica says so:
I’ve led a few dozen major software releases in my life. And one might think that by now I’d have got to the point where doing a software release would just be a calm and straightforward process. But it never is. Perhaps it’s because we’re always trying to do majorly new and innovative things. Or perhaps it’s just the nature of such projects. But I’ve found that to get the project done to the quality level I want always requires a remarkable degree of personal intensity. Yes, at least in the case of our company, there are always extremely talented people working on the project. But somehow there are always things to do that nobody expected, and it takes a lot of energy, focus and pushing to get them all together.
So look after yourself. Make sure you get enough sleep, exercise and eat healthily. Also things may be at their most intense straight after the release with promotion, support, bug fixing etc. So it may be a good idea to take a day or two off before you send the release out.
Don’t release anything just before you go away
There is always a chance a new release is going to mess things up. If you are a one-man band like me, you really don’t want to make a software release just before you go away on holiday or to a conference. Wait until you get back!
Fix screwups ASAP
We all make mistakes from time to time. I recently put out a release of my card planning software, Hyper Plan, that crashed on start-up on some older versions of macOS. Oops. But I got out a release with a fix as soon as I could.
Treat yourself after a release
Releases are hard work. A successful release deserves a treat!
Anything I missed?
Hi Andy, that is a great list! Almost all the items you mention already have a place in our process.
Here are a couple of additional guidelines that work for us:
Release on the weekend (or Friday evening)
For our commercial application, releasing on the weekend ensures that mostly home users and enthusiasts will try it first — ahead of the more valuable corporate customers. Weekend releases give us time to fix (or retract!) the release early if there is a major problem.
Monitor resources (memory, handles, threads, etc.) during testing
One of our releases introduced a memory leak that we didn’t notice until our most important customers started complaining. Oops!
Releasing at the weekend is an interesting approach. I have both consumer and business users, so I might try that for the next major release.
I used to spend a lot of time checking for and debugging memory leaks. But I find they almost never occur now that I am so used to the Qt idiom of every QObject having a parent that deletes its children. Or maybe they do, but I haven’t noticed. I should probably add it to my checklist!
Lots of good advice!
I’d also add A/B testing. I always A/B test new vs old version after adding a new major feature. This is also a good way to verify nothing’s broken and people like the new feature.
Code signing is an interesting topic. Recently Windows doesn’t let me launch any freshly downloaded software without a warning screen, and this screen looks much better with signed executables. Before this warning screen existed, I A/B tested and found that code-signed executables got installed LESS often than unsigned ones. Code signing always showed a dialog stating the company name etc., while unsigned executables that SmartScreen rated as safe downloaded without any dialogs. That single extra click prevented about 10% of installs for some reason. But now apparently all software downloads pop this warning dialog, so I guess code signing no longer has negative side effects. I also see this getting more and more restrictive. It’s possible that in a few years you won’t be able to run unsigned code without tricks. Better be prepared.
Are you A/B testing the downloadable software? How do you do that – show 50% of visitors a download link with the old version? Doesn’t that cause confusion?
>Before this warning screen I A/B tested that code signed executables got installed LESS often than unsigned ones.
Interesting. I hadn’t heard that before.
I have a completely custom backend, downloads have always been done via a php script. When a new user arrives I create a session and flag the session A or B. Then this php script returns a different installer depending on the flag. Most users download only once, so there’s not really any confusion. New version is only announced after the A/B test is closed.
Does that mean you will have at least two, and possibly three versions of each release out in the wild? A number of A, a number of B, and a number of whatever the release ends up being?
I have a list of what I need to do for a release. It covers the sequence: updating the manual, building the app, what to update on the website, etc.
Don’t update plugins in a major release. This was my worst release ever. UTF-8 was saved as UTF-16 and some data turned into Chinese characters. I was lucky that I did the release just after Christmas, so not many users had updated.
Plugins must add a whole extra level of difficulty!
Incredible advice! Thanks Andy.
Interesting, different concerns than web application releases. How much time would you say this takes from start to finish?
Regarding manually contacting users about bug reports and feature requests: have you considered automating this? My product does this for you, as well as capturing feedback from multiple channels. It’s Roadmap at https://roadmap.space.
Might fit with your point of trying to automate as much as possible.
>How much time would you say this takes from start to finish?
It depends on how big a release it is. From final testing to public release might be anything from a few hours to a week or more. But it depends where you draw the boundaries. Do you include the testing, writing the documentation, writing the release notes, writing the newsletter?
>Regarding bug reports and feature requests: have you considered automating this?
No. I personalize the email for each customer and I am typically only sending tens of emails per release.