
Beware upgradeware

fungi.jpg

Some years back my wife bought a PC and got a ‘free’ inkjet printer with it. It was a really lousy printer, but hey, it was free. When it ran out of ink we tried to get a new inkjet cartridge, but the cheapest set of cartridges we could find was £80. That was 4 times the price of other comparable cartridges at the time. Some further research showed that you could buy the printer for £20 – with cartridges! Their ugly sales tactics didn’t work. We threw it in the dustbin and bought an Epson inkjet, which gave years of sterling service using third party sets of cartridges costing less than £10.

When I started my company I had a thousand decisions to make. One of them was which software to use to create and maintain my new product website. It just so happened that my new ISP (1and1.co.uk) was offering a bundle of ‘free software worth £x’ when you signed up (I forget the amount). It included a web design package (NetObjects Fusion 8) and an FTP package (WISE-FTP). Hoorah, free (as in beer) software and two fewer decisions to make. I was weak. Instead of spending time checking out reviews and evaluating competitors, I just installed and started using them. It didn’t occur to me that they might be using the same sales tactics as the manufacturer of the lousy printer. In this imperfect world, if something appears too good to be true, it usually is. And so it was in this case. I grew to hate both these pieces of software.

WISE-FTP was just flaky. It kept crashing and displaying German error messages, despite the fact that I had installed the English version. No problem, I just uninstalled it and installed FileZilla, which is free (as in beer and speech), stable and does everything I need and more.

NetObjects Fusion was flaky and hard to use. By saving after every edit I could minimise the effects of the regular crashes, and I assumed that I would learn how to work around other problems in time. But I never did. By the time I decided that the problems were due more to the shortcomings of NetObjects Fusion as a software package than to my (many) shortcomings as a web designer, it was a little late. I had already created an entire website, which was now stored in NetObjects Fusion’s proprietary database. Some of the bugs in NetObjects Fusion are so major that one wonders how much testing the developers did. My ‘favourite’ is the one where clicking a row in a table causes the editor to scroll to the top of the table. This is infuriating when you are editing a large table (my HTML skills haven’t yet reached the 21st century).

In despair I eventually paid good money to upgrade to NetObjects Fusion 10. Surely it would be more stable and less buggy after two major version releases? Bzzzzt, wrong. The table scrolling bug is still there and it crashed 3 times this morning in 10 minutes. Also, every time I start it up the screen flashes and I get the ominous Vista warning message “The color scheme has been changed to Windows Vista Basic. A running program isn’t compatible with certain visual elements of Windows”. Even just trying to buy the software upgrade off their website was a confusing nightmare. The trouble is that it is always easier in the short term to put up with NetObjects Fusion’s many shortcomings than to create the whole site anew in another package.

For want of a better term I call this sort of software ‘upgradeware’ – commercial software that is given away free in the hope that you will buy upgrades. This is quite distinct from the ‘try before you buy’ model, where the free version is crippled or time-limited, or freeware, for which there is no charge ever. Upgradeware is the software equivalent of giving away a printer in the hope that you will buy overpriced cartridges. Only it is less risky, as the cost of giving away the software is effectively zero. It seems to be a favoured approach for selling inferior products and it is particularly successful when there is some sort of lock-in. It certainly worked for NetObjects in my case.

Symantec, makers of Norton Anti-virus, are the masters of upgradeware. Norton Anti-virus frequently comes pre-installed on new PCs with a free 1-year subscription. The path of least resistance is to pay for upgrades when your free subscription runs out. By doing these deals with PC vendors, Symantec sell vast numbers of subscriptions, despite the fact that Norton Anti-virus has been shown in test after test to be more bloated and less effective than many of its competitors. And if you think Norton Anti-virus doesn’t have any lock-in, just try uninstalling it and installing something else. It is almost impossible to get rid of fully. Last time I tried I ended up in a situation where it said I couldn’t uninstall it, because it wasn’t installed, and I couldn’t re-install it, because it was still installed.

I feel slightly better now that I have had a rant about some of my least favourite software. But there is also a more general point – ‘free’ commercial software can end up being very expensive. Time is money and I hate to think how much time I have wasted struggling with upgradeware. So be very wary of upgradeware, especially if there is any sort of lock-in. When I purchased a new Vista PC, the first thing I did was to reinstall Vista to get rid of all the upgradeware that Dell had installed (Dell wouldn’t supply the PC without it). You could also draw the alternative conclusion that upgradeware might be a good approach for making money from lousy software. But hang your head in shame if you are even thinking about it. It would be better for everyone if you just created a product good enough for customers to pay for up-front.

PS: If you fancy the job of converting www.perfecttableplan.com to beautiful, sparkly clean XHTML/CSS and your rates are reasonable, feel free to contact me with a quote.

The other side of the interface

all_seeing_eyes.jpg

While researching my talk on usability for ESWC2007 I came across this article I wrote some years ago. It has quite a lot of material I would have liked to have included, but there is only so much you can fit into a 60 minute talk. I am putting it here as a supplement to the talk and as a tribute to the late lamented EXE magazine which first published the article in June 1998. EXE went under in 2000 and I can’t find anyone to ask permission to republish it. I think they wouldn’t have minded. It is quite a long article and may be truncated by feed readers. Click through to the site to read the whole article.

It has been said that if users were meant to understand computers they would have been given brains. But, in fairness to users, the problem is often that interfaces are not designed to take account of their strengths and weaknesses. I have struggled with my fair share of dire user interfaces, and I’m supposed to be an expert user.

An interface is, by definition, a boundary between two systems. On one side of a user interface is the computer hardware and software. On the other side is the user with (hopefully) a brain and associated sensory systems. To design a good interface it is necessary to have some understanding of both of these systems. Programmers are familiar with the computer side (it is their job after all) but what about the other side? The brain is a remarkable organ, but to own one is not necessarily to understand how it works. Cognitive psychologists have managed to uncover a fair amount about thought processes, memory and perception. As computer models have played quite a large role in understanding the brain, it seems only fair to take something back. With apologies to psychologists everywhere, I will try to summarise some of the most important theory in the hope that this will lead to a better understanding of what makes a good user interface. Also, I think it is interesting to look at the remarkable design of a computer produced by millions of years of evolution, and possibly the most sophisticated structure in the universe (or at least in our little cosmic neighbourhood).

The human brain is approximately 1.3kg in weight and contains approximately 10,000,000,000 neurons. Processing is basically digital, with ‘firing’ neurons triggering other neurons to fire. A single neuron is rather unimpressive compared with a modern CPU. It can only fire a sluggish maximum of 1000 times a second, and impulses travel down it at a painfully slow maximum of 100 metres per second. However, the brain’s architecture is staggeringly parallel, with every neuron having a potential 25,000 interconnections with neighbouring neurons. That’s up to 2.5 x 10^14 interconnections. This parallel construction means that it has massive amounts of store, fantastic pattern recognition abilities and a high degree of fault tolerance. But the poor performance of the individual neurons means that the brain performs badly at tasks that cannot be easily parallelised, for example arithmetic. Also the brain carries out its processing and storage using a complex combination of electrical, chemical, hormonal and structural processes. Consequently the results of processing are probabilistic, rather than deterministic, and the ability to store information reliably and unchanged for long periods is not quite what one might hope for.

Perhaps unsurprisingly, the brain has a similar multi-level storage approach to a modern computer. Where a computer has cache, RAM and hard-disk memory (in increasing order of capacity and decreasing order of access speed) the brain has sensory memory, short-term memory and long-term memory. Sensory memory has a large capacity, but a very short retention period. Short-term memory has a very small capacity but can store and retrieve quickly. Long-term memory has a much larger capacity, but storage and retrieval is more difficult. New information from sensory memory and knowledge from long-term memory are integrated with information in short-term memory to produce solutions.

memory_model.gif

A simple model of memory and problem solving[1].

Sensory memory acts like a huge register, retaining large amounts of sensory data very briefly so that it can be processed into a meaningful form, e.g. to recognise a face, which is transferred to short-term memory. The sensory data is then quickly replaced with new incoming data.

Short-term memory acts like a very small queue with a limited retention period. It can hold only 7±2 items of information, with new items added into short-term memory displacing older ones once this limit has been reached. Items disappear after approximately 30 seconds if not rehearsed. The items of information in short-term memory act as ‘pointers’ to arbitrarily large and complex pieces of information stored in long-term memory. For example the seventh of January is one chunk for me (it’s my birthday), two chunks for you (one for each familiar word) and 14 chunks for a non-English speaker familiar with our alphabet (one for each character). The number 7±2 may seem rather arbitrary, but experimentation shows it is remarkably consistent across a wide range of individuals and cultures. Short-term memory acts as a workspace for problem solving. The more items that are held in short-term memory the longer it takes to process them.
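
If the ‘small queue’ analogy helps, here is a throwaway Python sketch of it (mine, not from the original article) using a bounded deque: once seven chunks are held, each new chunk displaces the oldest.

```python
# Illustration only: short-term memory modelled as a queue of ~7 'chunks'.
from collections import deque

short_term_memory = deque(maxlen=7)
chunks = ["7th Jan", "buy milk", "room 214", "blue icon",
          "Ctrl+S", "ext. 4401", "meeting 3pm", "new request"]
for chunk in chunks:
    short_term_memory.append(chunk)  # the 8th append silently evicts "7th Jan"

print(list(short_term_memory))  # ["buy milk", ..., "new request"]
```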

It is important not to overload short-term memory. The limited size of short-term memory is a critical bottleneck in problem solving and one of the main constraints to consider for any user interface (designed for human users at least). Don’t force the user to try to hold lots of items in short-term memory. If they have to think about more than 7±2 items then new items will displace old ones. Also the more items that are in short-term memory the slower their response time will be. Having lots of ‘open’ tasks puts a big burden on short-term memory, so tasks should be grouped into well-defined ‘transactions’. Complex tasks can almost always be broken down into simpler sub-tasks.

Long-term memory acts like a huge network database. It has a complex structure and massive capacity, but storing and retrieving information is slow and not always reliable. Items of information are apparently interconnected and accessed by some form of pointer. Some psychologists believe that long-term memory may be permanent, and only the ability to retrieve it may be lost (a bad case of ‘dangling pointers’ perhaps?). Dreaming may be a side-effect of routine re-structuring of long-term memory (garbage collection?) while we are asleep. Transferring information to long-term memory seems to be a process of encoding the memory and creating pointers to access it. The more often an item of information is accessed the easier it becomes to access in future. Each item of information may be accessible by many different routes. Consequently the context in which information is presented can be an important factor in remembering. The more context cues that are available the easier it is to retrieve an item from long-term memory. For example, experiments show that students perform better in exams held in the classroom where they learnt the information than elsewhere. So if an item was presented in a particular font, colour and size, it will be easier to remember its meaning if the same font, colour and size are used.

There is some evidence that image and verbal memories are stored in different parts of the brain. We can often remember the faces of people we have met better than their names. Experiments show that it is easier to remember an image than a concrete word, for example it is easier to remember ‘car’ when shown an image of a car than when shown the word ‘car’. It is also easier to remember a concrete word than an abstract word, for example it is easier to remember the word ‘car’, than the word ‘transport’. This implies that the iconic representation of commands on toolbars has value beyond just looking nice. Also keywords used in a command line interface should where possible be concrete, rather than abstract.

The different types of memory are stored using different physical mechanisms, probably electrical, chemical and structural. As proof of this you can train an animal to run a maze, cool it down to the point where all brain activity ceases and then warm it up again. It will have forgotten how to run the maze, but remember things it learnt days before (I don’t recommend you try this with users). Also some diseases have been observed to affect short-term memory without affecting long-term memory. Transferring information from short-term to long-term memory and retrieving it again is not very reliable. It is better to allow the user to select from alternatives rather than force them to commit items to long-term memory and then retrieve them. At work, the interface of our old accountancy package had many shortcomings. Projects had to be identified by 5-digit numerical codes, even though alphabetic codes would have been easier to remember. Users also had to enter project numbers from memory; no facility for selecting from available projects was provided. It wouldn’t have taken much effort to produce a better interface design, just a little thought. For example the Microsoft Word print dialog cues the user as to the permitted format for specifying pages to be printed.

example.gif

A useful aid to memory.

The brain gets its input from the outside world through the senses. Of the senses, vision is the most important, with some 70% of all sensory receptors in the eyes. The importance of vision is also reflected in the design of modern computers. Other than the odd beep, the computer communicates with the user almost entirely through the VDU. Consequently I will confine the scope of the discussion on the senses to vision alone.

The eye is an impressive sensing device by any standards. Tests show that it is possible for a human eye to detect a candle flame at a range of 30 miles on a dark, still night. This corresponds to detecting a signal as low as a few photons entering the eye. Incoming light is focused onto the retina at the back of the eye, which contains the light receptors. The retina is actually an extension of the brain. Observation of growing embryos shows that the tissue that forms the retina extends from the brain; it is not formed from the tissue that turns into the rest of the eye. The retina contains some 5 million ‘cone’ receptors and 100 million ‘rod’ receptors. The cones are sensitive to colour, while the rods are sensitive to brightness. Some cones are sensitive to red, some to green and some to blue, depending on the pigment they contain. The cones are much more liberally supplied with nerve cells and are able to discern detail, but they don’t function in low light levels. The cones are densest in the centre of the retina, and virtually absent at the outer edge. The fovea centralis, a spot 1 millimetre across at the centre of the retina, contains some 200,000 cones and no rods. The rods only detect light at the blue end of the spectrum, but they are extremely sensitive and can detect a single photon of light. The uneven distribution of rods and cones is easy to test. Look slightly away from this page and try to read it ‘out of the corner of your eye’ – it’s not possible. Only the fovea has sufficient acuity to discern this level of detail. You may also notice that it is easiest to see poorly illuminated objects out of the corner of your eye. A very dim star, visible out of the corner of your eye, disappears when looked straight at.

Because the fovea is so small we are only able to distinguish detail over a range of approximately 2 degrees. This equates to about 2.5cm at the normal distance from user to VDU. To build up a detailed picture of what is on the screen we have to scan it. It therefore makes sense to have single items on the interface no bigger than 2.5cm, so they can be recognised without having to scan them. Games and simulators that perform real-time rendering are wasting a lot of processing power by rendering the whole picture at the same level of detail. What they should ideally be doing is performing very detailed rendering at the point where the user’s fovea is pointing and progressively less detailed rendering further away from this. This would allow a much more efficient use of available processing power. It is possible to detect where the user is looking by bouncing an infrared beam off their retina. If this technology becomes widely available it could be used to perform differential rendering, with the result appearing much more detailed without any increase in processing power.
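
To make the ‘differential rendering’ idea concrete, here is a minimal Python sketch (my own, not from the article; the viewing distance and pixel density are assumed values, while the 2 degree foveal figure comes from the text) that picks a level of detail based on how far a screen point is from the gaze position:

```python
import math

VIEW_DISTANCE_CM = 60   # assumed eye-to-screen distance
PIXELS_PER_CM = 40      # assumed screen pixel density

def eccentricity_deg(gaze_px, point_px):
    """Angle in degrees between the gaze point and another screen point."""
    dx = point_px[0] - gaze_px[0]
    dy = point_px[1] - gaze_px[1]
    dist_cm = math.hypot(dx, dy) / PIXELS_PER_CM
    return math.degrees(math.atan2(dist_cm, VIEW_DISTANCE_CM))

def level_of_detail(gaze_px, point_px):
    """Full detail inside the ~2 degree foveal region, coarser further out."""
    e = eccentricity_deg(gaze_px, point_px)
    if e < 2.0:
        return "high"    # fovea: render at full detail
    if e < 10.0:
        return "medium"  # parafovea: reduced detail
    return "low"         # periphery: cheap rendering is good enough

print(level_of_detail((500, 400), (520, 410)))   # near the gaze point -> high
print(level_of_detail((500, 400), (1800, 900)))  # far periphery -> low
```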

The receptors in the retina, in common with other sense receptors, are only sensitive to change. Using special optical equipment it is possible to project a ‘stabilised’ image onto the retina that does not change, regardless of eye movements. A stabilised image fades to a formless grey and is no longer discernible after only 2-3 seconds. It turns out that the constant movement of the eye, originally thought to be an imperfection of the optical system, is essential for sensing unchanging images. Perversely, light has to pass through 9 layers of nerve cells and blood vessels in the retina before it reaches the light receptors (I guess evolution isn’t perfect). Because the network of nerves and blood vessels is unchanging, we don’t normally perceive it[2]. The practical consequence is that any form of movement, animation, change in intensity or flashing on a user interface is extremely noticeable. Flashing should be used sparingly as it can be distracting and fatiguing to users. Quickly changing text is also difficult to read; this is why, in our digital age, car speedometers remain as analogue dials rather than numerical LEDs. It may be better to put a flashing symbol next to steady text; this draws attention to the text without reducing its legibility. Mosier and Smith[3] recommend a flash rate of 2-5 Hz, with a minimum ‘on’ time of at least 50 percent. Large flashing areas of colour are believed to aggravate epilepsy (particularly at certain frequencies) and should not be used.

While sensation happens in the eye, perception happens in the brain. The receptors in the retina convert the light to electrical signals which they pass to the brain through the optic nerve, a bundle of approximately 1,000,000 neurons. The information is processed in the visual cortex, the surface of the brain at the back of the head. Our perception is incredibly sophisticated, as artificial intelligence researchers have found to their cost. Experiments on the cortex show that it has evolved with built-in ‘feature detectors’. A feature detector is a neuron that fires for a very particular stimulus. For example, one neuron in the visual cortex may fire if there is a horizontal line at the top-left of the visual field. Next to it will be a neuron that fires for a slightly different orientation, length or position. Additional processing is then carried out to integrate all the information from the different feature detectors.

As you are reading this page your eye is making rapid movements, with your brain recognising the shape of 2-3 words at a time before moving on to the next group of words (the maximum number of words recognised at a time presumably being limited by the size of the fovea). This is apparently being done by information from different feature detectors being integrated very quickly. For example the word ‘FIX’ can be broken down into six straight lines at different positions in the visual field. We are able to recognise this word in about a third of a second, even though the size and font may vary. Shape recognition is therefore incredibly efficient and seems to be one of the best developed features of our visual system. Tests show that objects can be recognised just as well from line drawings as from colour photographs. A cup is recognisable as a cup because of its shape, not because of its colour, orientation etc. Textual representations are not always the best way to convey information. A map, chart, diagram or other form of image will often convey the same information quicker.

icons-in-explorer.gif

The use of icons in Windows Explorer makes it easier to browse document types than would be possible by reading the file extensions.

Tests show that our ability to pick out simple features such as length, orientation, curvature and brightness is carried out at a very low level, in parallel. Consequently we can pick out items based on these features in a constant time, regardless of the number of other items in the image. Careful use of these abilities allows a great deal of information to be filtered very rapidly by the user.

shapes1.gif

The anomalous shape is detected as quickly in b) as in a), even though there are three times as many targets.

But the brain is not so good at integrating (‘conjoining’) different types of feature, for example shape and brightness. It is easy to pick out a black icon or a circular icon, but picking out a black circular icon is more difficult and time consuming.

shapes2.gif

Time taken to pick out the black circle increases as the number of targets increases.

It follows from this that you should try to distinguish features of the interface by shape or brightness or orientation, but not a combination of these factors.

optical.gif

a) the horizontal and vertical lines are the same length. b) the vertical lines are the same length.

The visual cortex carries out a great deal of processing that we are unaware of, not least of which is turning the image of the world the right way up. Even though we can understand the nature of illusions, our visual system is still fooled. This is because it is not just sensing the world, but trying to interpret it, making use of all sorts of cues and in-built knowlege, and this is happening at a lower level than we can consciously control. You may not have even noticed that there was a deliberate spelling mistake in the last sentence because your perceptual system made a sensible guess.

Although the image projected onto our retina is two dimensional we have very well developed depth perception, our ancestors wouldn’t have been able to swing through the trees without it. Partly this is because having two eyes allows stereoscopic vision, but also because our brain processes lots of other visual cues that produce a sensation of depth, even where it doesn’t exist (for example in a photograph). The main cues are:

  • More distant objects are smaller
  • More distant objects appear closer to the ‘vanishing point’ created by converging parallels
  • More distant objects move across the visual field more slowly
  • Superposition: if A overlaps B then A must be closer
  • Shadows and highlights
  • Chromostereopsis: long wavelength colours (e.g. red) appear closer than shorter wavelength colours (e.g. blue) because shorter wavelength light is refracted more strongly by the lens of the eye (but this is rather weak compared to the other effects)

depth-cues.gif

Use of depth cues makes one shape appear closer than the other.

Using these cues can give a very effective illusion of depth, without specialised equipment such as stereoscopic goggles. This built-in depth perception is currently taken advantage of only in a very limited way in most GUI environments, for example the use of highlights and shadows to give controls a three dimensional appearance. Many applications would benefit from a three dimensional representation. For example the structure of a complex web site could be better presented in three dimensions than two. The availability of VRML and other technologies is likely to make three dimensional interfaces increasingly common.

buttons.gif

An illusion of depth.

Interestingly, it is purely a matter of convention and practice that makes us imagine the light source as being at the top-left and see the top button as sticking out and the bottom button as sticking in[4]. You can also see them the other way around if you try.

Layout is an important feature of an interface. Western users will tend to scan a screen as if they were reading a page, starting from the top-left. Scanning can be made easier by aligning controls in rows. Complex displays can be made easier to scan by adding additional cues, for example a timesheet could have a thicker line denoting the end of each week.

Both layout and similarity can be used to group items on an interface.

grouping.gif

In a) the shapes are perceived as 3 rows, while in b) they are perceived as 3 columns, due to proximity. In c) the shapes are perceived as 3 columns, due to similarity. d) gives a mixed message.

A colour is perceived according to how strongly it activates the red, green and blue cone receptors in our eyes. From this we perceive its intensity (how bright it is), its hue (the dominant wavelength) and saturation (how wide a range of wavelengths make it up). Within the 400-700 nanometre visible range we can distinguish wavelengths 2 nanometres apart. Combined with differing levels of hue and saturation, the estimated number of colours we can discriminate is 7,000,000. According to the US National Bureau of Standards there are some 7,500 colours with names. But colour should be used sparingly in interfaces. I once worked on an application where a very extrovert student with dubious taste (as evidenced by his choice of ties) had designed the user interface. Each major type of window had a different lurid background colour. This was presumably to make it easy to tell them apart, but the overall effect was highly distracting.
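
For anyone who wants to play with the intensity / hue / saturation decomposition described above, here is a small sketch using Python's standard colorsys module (the RGB values are just examples, and colorsys's HSV model is only a rough stand-in for the perceptual quantities):

```python
import colorsys

examples = {
    "saturated red":   (1.0, 0.0, 0.0),
    "pale red (pink)": (1.0, 0.7, 0.7),
    "saturated blue":  (0.0, 0.0, 1.0),
}

for name, (r, g, b) in examples.items():
    # colorsys works in the 0.0-1.0 range and returns (hue, saturation, value)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    print(f"{name:15} hue={h * 360:5.1f} deg  saturation={s:.2f}  intensity={v:.2f}")
```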

Colour perception, like everything else to do with perception, is complex. Experiments show that how we perceive a colour depends on the other colours surrounding it. If you look through a pinhole at a sheet of green or red paper it doesn’t appear to have a very strong colour. But if you put the sheets next to each other and look at them both through the pinhole the colours appear much stronger. So if you want to make a colour highly visible, put it next to a complementary colour, for example yellow is perceived by red and green cone cells, so to make it more visible put it next to an area of saturated blue.

Colour can be used with text and symbols to add information without making them less legible, as long as a careful choice of colours is used. Some combinations of colours work better than others. Saturated blue appears dimmer to the human eye than other saturated colours and is more difficult to focus on. Blue symbols and text are therefore probably best avoided. However, for the same reasons, blue can make a background that is easy on the eye. Saturated yellow appears brighter than all the other colours for the same intensity.

colours1.gif

Ill-advised colour combinations.

colours2.gif

Better colour combinations.

Designers should remember that a significant proportion of the population has deficient colour vision (some 6% of males and 0.4% of females, the difference being due to the way the defective gene is inherited). This is caused by problems with pigmentation in one or more of the red, green and blue cone cells in the eye. While there are a range of different types of colour deficiency, the most common is the inability to distinguish between red and green. This raises some questions about the design of traffic lights (some colour-deficient drivers have to rely on the position, rather than the colour, of the lights). Some individuals may not be able to distinguish one or more primary colours from grey; it is therefore unwise to put a primary colour on a dark background. Allowing users to customise colours goes some way to alleviating this problem.

Other forms of vision defect are also common, as evidenced by the number of people wearing glasses. Something that is easily visible on the programmer’s 17 inch screen may be almost impossible to read on a user’s LCD laptop screen. This problem is further compounded by the fact that eyesight deteriorates with age and programmers tend to be younger on average than users. There also seems to be a tendency to use ever smaller fonts even though screen sizes are increasing. Perhaps this is based on the assumption that large fonts make things look childish and unsophisticated, so small fonts must look professional. Ideally the user should be able to customise screen resolution and font sizes.

Meaning can sometimes be conveyed with colour, for example a temperature scale may be graded from blue (cold) to red (hot) as this has obvious physical parallels. But the meaning of colour can be very culturally dependent. For example, red is often used to imply danger in the west, but this does not necessarily carry over into other cultures. The relative commonness of defective colour vision and the limited ability of users to attach meaning to colour means that it should be used as an additional cue, and should not be relied on as the primary means of conveying information. Furthermore colour won’t be visible on a monochrome display (now relatively rare) or a monochrome printer (still very common).

Humans are good at recognising patterns, making creative decisions and filtering huge amounts of information. Humans are not so good at arithmetic, juggling lots of things at once and committing them to long-term memory. Computers are the opposite. A good interface design should reflect the respective strengths and weaknesses of human and computer. Just as a well crafted graphical user interface will minimise the amount of machine resources required to run it, it should also minimise the amount of brain resources required to use it, leaving as much brain capacity as possible for the user to solve their actual problem.

[1] After “Psychology”, 2nd Ed, C.Wade and C.Tavris.

[2] However it can be seen under certain conditions. Close one eye and look through a pinhole in a piece of card at a well illuminated sheet of white paper. If you waggle the card from side to side you start to see the network of blood vessels.

[3] “Guidelines for Designing User Interface Software” by Smith and Mosier (1986). Several hundred pages of guidelines for user interface design. They betray their 1980s US Air Force-sponsored origins in places, but are still excellent. For the dedicated.

[4] I have since found out that this may not be true. Our brains appear to be hardwired to assume that the lighting comes from above. For more details see: “Mind Hacks” T.Stafford & M.Webb (2005).

Selling your own software vs working for the man

nz_beach.jpg

You’ve got this great idea for a software product. You are pretty confident that you can crank out version 1.0 working full-time on your own from the spare room, and you are fairly confident that people will buy it. But you’ve also got a well paid full-time job ‘working for the man’. It’s cosy and familiar in that cubicle. Is it worth risking your career and savings to set out into uncharted waters on your own? Do you take the red pill or the blue pill?

The aim of this article is just to give you some insight into the economic realities of becoming a one man software company (a microISV). The results might surprise you. ‘Working for the man’ you get a steady income every month. Working for yourself you start off with no income, while you create your product. If all goes well you start to make sales when you release v1.0 and these sales gradually improve over time until you are earning the same amount each month as when you were working for the man. As the sales continue to improve you (hopefully) reach the point where you have made as much money as if you had stayed in your old job for the same period of time. From here on it’s all gravy. Here is a very simple model:

simple microISV income model

Monthly income as microISV vs WFTM (T0=version 1.0 release, T1=monthly income equal to WFTM, T2=areas under the red and blue lines are the same)

Obviously I am making a lot of assumptions and simplifications here. In particular I am assuming:

  • Net income from microISV sales rises linearly month-on-month as soon as you release v1.0. Obviously this can’t happen forever (or you will be richer than Bill Gates) but it seems as good a guess as any and it keeps the mathematics simple.
  • MicroISV start-up expenses (buying a domain name, starting your company, buying equipment and software, getting an Internet connection etc) are fairly low compared to your monthly WFTM salary.

Even though the model is embarrassingly over-simplified, I think it can still give some insights. If I plug some numbers for T0 and T1 into a simple spreadsheet I can come up with values for T2. I’ll choose numbers that I consider optimistic, realistic and pessimistic for each. For T0 (time to V1.0) I choose 3, 6 and 12 months. For T1 (time from releasing v1.0 to reaching the same monthly income as WFTM) I choose 12, 18 and 24 months.

T2 calculation

Months required to reach T2

i.e. if it takes you 6 months to get V1.0 out and then another 18 months until it is making the same monthly income (after expenses) as WFTM then it will take you 47 months to reach the point where a microISV has made you more money than WFTM.

So how much do you need in the way of savings to survive until you have a decent income? I can work this out by assuming living expenses as some proportion of WFTM monthly income. Calculating for 50% (living on noodles) and 100% (full speed ahead and damn the torpedoes):

debt incurred with living expenses=50% of WFTM income

Maximum debt in months of WFTM income with living expenses=50% of WFTM income

debt incurred with living expenses=100% of WFTM income

Maximum debt in months of WFTM income with living expenses=100% of WFTM income

i.e. if it takes you 6 months to get V1.0 out and then another 18 months until it is making the same monthly income (after expenses) as WFTM and your living expenses are 50% of your WFTM income then your maximum debt is 5 months of WFTM income.
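
For those who prefer code to spreadsheets, here is a minimal Python sketch of the same model (my own reconstruction, not the spreadsheet linked below). It assumes net microISV income rises linearly from zero at release, reaching the WFTM salary after the ‘time to parity’, and keeps rising at that rate:

```python
import math

def months_to_break_even(months_to_v1, months_to_parity):
    """T2: month where cumulative microISV income catches up with WFTM
    (salary normalised to 1 per month)."""
    t0, ramp = months_to_v1, months_to_parity
    # Solve t = (t - t0)^2 / (2 * ramp) for t, taking the larger root.
    return (t0 + ramp) + math.sqrt((t0 + ramp) ** 2 - t0 ** 2)

def max_debt(months_to_v1, months_to_parity, living_fraction):
    """Maximum debt, in months of WFTM salary, if living expenses are a
    fixed fraction of the WFTM salary. Debt peaks when microISV income
    first covers living expenses."""
    t0, ramp = months_to_v1, months_to_parity
    t_peak = t0 + living_fraction * ramp
    earned = (t_peak - t0) ** 2 / (2 * ramp)   # cumulative microISV income so far
    return living_fraction * t_peak - earned

print(round(months_to_break_even(6, 18)))  # ~47 months, as in the example above
print(round(max_debt(6, 18, 0.5)))         # ~5 months of WFTM income (noodles)
print(round(max_debt(6, 18, 1.0)))         # full living expenses
```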

I think the results of this simple little model make a few points:

  • Rate of sales growth is critical, but the time to getting v1.0 out is also very important. The longer it takes, the more you have to catch up later.
  • You are unlikely to come out financially ahead after 2 years as a microISV, even with fairly optimistic sales figures. It could easily take 3 or 4 years and, if the sales don’t take off or level out too early, you may never get there. There are many reasons to start a microISV, but getting rich quick isn’t one of them.
  • Given that you can’t know what T1 will be for your product, you should probably have at least 6 months WFTM income in the bank. Preferably 12 months.
  • Learn to love noodles.

You can download my Excel spreadsheet here (it’s a quick hack, so don’t expect too much).

So which is it going to be, the red pill or the blue pill?

man in costume holds red and blue pills

digg vs reddit vs slashdot vs stumbleupon – who’s the daddy?

traffic spike from digg reddit stumbleupon and slashdot

Social news and bookmarking sites, such as reddit.com, digg.com, slashdot.org and stumbleupon.com, use voting by users or selection by editors to rank interesting stories. Much to my surprise, I recently had an article from this blog featured prominently on all four of these popular sites. This generated a large amount of traffic and gave me an interesting opportunity to turn the tables, by using my hit statistics to rank these sites.

On the 16th August I published an article about a little experiment I did to prove that many software download sites hand out awards automatically, without reviewing the software. Most developers who have submitted software to such sites probably suspected this already. But the experiment proved it conclusively by garnering awards for software that didn’t even run.

I wrote the article because I wanted to shine some light on this unsavoury practise. I wanted it to be as widely read as possible, so I posted a link to it on a few software developer and entrepreneur forums that I frequent. Later in the day I posted it to reddit.com. I also added my vote to the people who had already posted it to digg.com and programming.reddit.com. I expected a few hundred people would read the article, mostly regular readers of my blog. But it got voted up and made its way on to the home page of reddit.com. Traffic started to flood in.

My recollections of the next few days are a bit hazy as it all happened rather quickly. From the front page of reddit.com the article made its way across the front pages of digg.com, and then slashdot.org, like an electronic Mexican wave. The article also appeared on the home page of WordPress.com and received traffic from social bookmarking sites stumbleupon.com and del.icio.us. Large numbers of blogs and forums also linked to the article. Hits on my blog peaked on the 17th at 53,422 hits for the day.

total blog hits per day

blog hits from reddit, digg, slashdot and stumbleupon

A few observations from the data:

  • The social news sites have the attention span of a one-year-old on amphetamines. The hits from digg.com went from 15,161 on Friday to just 648 on Saturday.
  • The article was linked to from 375 blogs (according to technorati.com) and an unknown number of forums and other sites. The top 4, 10 and 20 sites account for 52%, 61% and 65% of the total traffic, respectively. A long tail of less popular sites makes up the rest.
  • Things really took off once the article reached the front page of reddit.com. I visualise the links spreading across the Internet like a sub-atomic chain reaction. Just as energetic particles decompose into cascades of ever smaller particles, bigger sites propagate their links to ever larger numbers of smaller and smaller sites.
  • The onslaught was wide, but not deep. A relatively low percentage of readers followed links in the article or read other articles on my blog. While that still made quite an impact on the number of visitors to the home page of my seating planner software PerfectTablePlan, there were few additional downloads and (according to my cookie tracking) 0 additional sales. This is not too surprising when you think how untargeted the traffic is. Experience has shown me that small volumes of targeted traffic make more sales than large volumes of untargeted traffic. But still, one of you must know someone who is getting married? ;0)

perfecttableplan_hits.gif

Totalling all the visitors to the blog over the 5 days, I give you the successfulsoftware.net official ranking for social news and bookmarking sites.

stumbleupon, digg, reddit and slashdot

Here is the full top 20:

top 20 referrer sites

The article has generated a lot of comments. I particularly enjoyed the reviews here (I hope they haven’t been deleted). Interestingly, the ranking of the 4 top sites by number of comments/reviews is very different to their ranking by number of hits.

comments and reviews on stumbleupon, digg, reddit and slashdot

Please don’t take my ranking too seriously. The story reached similar positions on the reddit, digg and slashdot home pages[1], but my methodology here is far from rigorous. A different type of story on a different day might have resulted in a quite different ranking. Amongst other issues:

  • The WordPress stats only show the top 40 referrers for each day.
  • The article made the front page of different sites at different times.
  • Just because someone clicked through, doesn’t mean they actually read the article.
  • My article might simply have been more interesting to the type of people who read one site than the type of people who read another.
  • I have no way of knowing whether any of the visitors were bots.

But social news sites aren’t exactly rigorous in their ranking either.

Please note that I created this blog to write about what it takes to successfully create and market commercial software. I don’t intend to become another blogger blogging about blogging. It’s bad for your eyesight (see point #10 here). Normal service will be resumed shortly.

[1] To the best of my knowledge the article reached a highwater mark of positions 1, 2 and 2 on slashdot.org, reddit.com and digg.com respectively and was featured in one of the ‘popular’ pages on stumbleupon.

The software awards scam

software award

I put out a new product a couple of weeks ago. This new product has so far won 16 different awards and recommendations from software download sites. Some of them even emailed me messages of encouragement such as “Great job, we’re really impressed!”. I should be delighted at this recognition of the quality of my software, except that the ‘software’ doesn’t even run. This is hardly surprising when you consider that it is just a text file with the words “this program does nothing at all” repeated a few times and then renamed as an .exe. The PAD file that described the software contains the description “This program does nothing at all”. The screenshot I submitted (below) was similarly blunt and to the point:

awardmestars_screenshot.gif

Even the name of the software, “awardmestars”, was a bit of a giveaway. And yet it still won 16 ‘awards’. Here they are:

all_awards2.gif

Some of them look quite impressive, but none of them are worth the electrons it takes to display them.

The obvious explanation is that some download sites give an award to every piece of software submitted to them. In return they hope that the author will display the award with a link back to them. The back link then potentially increases traffic to their site directly (through clicks on the award link) and indirectly (through improved page rank from the incoming links). The author gets some awards to impress their potential clients and the download site gets additional traffic.

This practice is blatantly misleading and dishonest. It makes no distinction between high quality software and any old rubbish that someone was prepared to submit to a download site. The download sites that practise this deceit should be ashamed of themselves. Similarly, any author or company that displays one of these ‘awards’ is either being naive (at best) or knowingly colluding in the scam (at worst).

My suspicions were first aroused by the number of five star awards I received for my PerfectTablePlan software. When I went to these sites all the other programs on them seemed to have five star awards as well. I also noticed that some of my weaker competitors were proudly displaying pages full of five star awards. I saw very few three or four star awards. Something smelled fishy. Being a scientist by original training, I decided to run a little experiment to see if a completely worthless piece of software would win any awards.

Having seen various recommendations for the rundenko.com submit-everywhere.com submission service on the ASP forums I emailed the owner, Mykola Rudenko, to ask if he could help with my little experiment. To my surprise, he generously agreed to help by submitting “awardmestars” to all 1033 sites on their database, free of charge.

According to the report I received 2 weeks after submissions began, “awardmestars” is now listed on 218 sites, pending on 394 sites and has been rejected by 421 sites. Approximately 7% of the sites that listed the software emailed me that it had won an award (I don’t know how many have displayed it with an award, without informing me). With 394 pending sites it might win quite a few more awards yet. Many of the rejections were on the grounds of “The site does not accept products of this genre” (it was listed as a utility) rather than quality grounds.

The truth is that many download sites are just electronic dung heaps, using fake awards, dubious SEO and content misappropriated from PAD files in a pathetic attempt to make a few dollars from Google Adwords. Hopefully these bottom-feeders will be put out of business by the continually improving search engines, leaving only the better sites. I think there is still a role for good quality download sites. But there needs to be more emphasis on quality, classification, and additional content (e.g. reviews). Whether it is possible for such a business to be profitable, I don’t know. However, it seems to work in the MacOSX world where the download sites are much fewer in number, but with much higher quality and more user interaction.

Some download site owners did email me to say either “very funny” or “stop wasting my time”. Kudos to them for taking the time to check every submission. I recommend you put their sites high on your list next time you are looking for software:

www.filecart.com

www.freshmeat.net

www.download-tipp.de (German)

This is the response I got from Lothar Jung of download-tipp.de when I showed him a draft of this article:

“The other side for me as a website publisher is that if you do not give each software 5 stars, you don’t get so many back links and some authors are not very pleased with this and your website. When I started download-tipp.de, I wanted to create a site where users can find good software. So I decided the visitor is important, and not the number of backlinks. Only 10% of all programs submitted get the 5 Suns Award.”

Another important issue for download sites is trust. I want to know that the software I am downloading doesn’t contain spyware, trojans or other malware. Some of the download sites have cunningly exploited this by awarding “100% clean” logos. I currently use the Softpedia one on the PerfectTablePlan download page. It shouldn’t be too difficult in principle to scan software for known malware. But now I am beginning to wonder if these 100% clean logos have any more substance than the “five star” awards. The only way to find out for sure would be to submit a download with malware, which would be unethical. If anyone has any information about whether these sites really check for malware, I would be interested to know.

My thanks to submit-everywhere.com for making this experiment possible. I was favourably impressed by the thoroughness of their service. At only $70 I think it is excellent value compared to the time and hassle of trying to do it yourself. I expect to be a paying customer in future.

** Addendum 1 **

This little experiment has been featured on reddit.com, digg.com, slashdot.org, stumbleupon.com and a number of other popular sites and blogs. Consequently there have been hundreds of comments on this blog and on other sites. I am very flattered by the interest. But I also feel like Dr Frankenstein, looking on as my experiment gains a life of its own. If I had known the article was going to be read by so many people I would have taken a bit more time to clarify the following points:

  • I have no commercial interest in, or prior relationship with, the three download sites mentioned. I singled them out because I infer from emails received that they have a human-in-the-loop, checking all submissions (or a script that passes the Turing test, which is even more praiseworthy). I offered all three a chance to be quoted in the article. Today I received a similar email from tucows.com, but they were too late to make the article. I don’t know if they read the article before they emailed me.
  • I have no commercial interest in, or prior relationship with, the automatic submission service mentioned. I approached them for help, which they generously provided, free of charge.
  • The only business mentioned in which I have a commercial interest is my own table planning software, PerfectTablePlan.

** Addendum 2 **

23 awards ‘won’ at the latest count.

If you aren’t embarrassed by v1.0 you didn’t release it early enough

releasing v1.0

I cringe every time I hear about someone who has spent years writing their ‘killer app’, but still hasn’t released it. My preferred approach is to get a solid, but minimally featured, v1.0 out there and then iterate like crazy based on real customer feedback. There are a number of arguments for and against releasing early:

Against: Feature poverty

A common reason for holding back on a release is “my competitor has features A and B, so I have to have A and B”. BZZZZT. Wrong. If you are trying to compete feature for feature with a competitor who is already in the market, you are at a big disadvantage. By the time you have added A your competitor will have added B. Anyway, maybe some of your potential customers don’t want A or B. Perhaps they actually want something simpler. Or they really want C, which you can do in half the time of doing A and B. Microsoft has released a number of products that were derided at v1.0, but went on to dominate the market (Windows, for one).

Against: Reputation

If you release early, won’t you get a bad reputation? Only if you produce shoddy software that crashes all over the place. There is no excuse for that, even at version 1.0. The key is to pare down the features without sacrificing quality. Pick the smallest sub-set of features that will be useful. Then add more features at each subsequent release, based on user feedback.

The truth is, unless you are a big company with a lot of marketing muscle or you have picked a tiny niche, very few of your potential customers will ever hear about version 1.0 of your software anyway.

Against: Support overheads

As soon as you have customers you will have to spend considerable amounts of effort supporting them. The sooner you release the software the sooner you get this overhead.

Against: Release overheads

Creating a stable release is a lot of work, even if you manage to automate some of it. If you do more releases in a given period of time than your competitor you will inevitably spend a higher percentage of your time testing, proof reading and updating your website.

For: Feedback

Every product launch is a huge guess. If you have lots of competitors, you don’t know if you will be able to take customers from them. If you don’t have many competitors, you don’t know if there is a real market for your product. It is also tough to know what features people really want and how much they are prepared to pay. What people say and what they do are often quite different. Even if you manage to figure all that out, every market is constantly changing.

The only reliable way to find out if people will buy your product is to release it. As soon as you have paying customers they will let you know what you need to improve. Even emails from prospective buyers asking “does it do X?” can be very valuable. Many (perhaps most) successful products have ended up quite different to what the developers originally intended.

For: Motivation

Having customers is great for motivation. If you are working for a year or two on a project without customers to push you on, it is very easy to lose focus or run out of steam.

For: Failing faster

Despite the best efforts of all concerned many products fail. In fact I would guess that the majority of commercial products fail to recoup the initial investment. Yours could be one of them. If you are going to fail, you should fail as fast as you can so you can start over on something more profitable. The sooner you release and start asking people for money the sooner you will know if your product is a dog.

For: Cash-flow

The sooner you start selling, the quicker you start to recoup your investment. As a simple (contrived) illustration:

Company1 release v1.0 after 6 months. As they improve the product and website, and word-of-mouth kicks in, sales increase linearly for the next 18 months: 10 sales in month 1, 20 sales in month 2, 30 sales in month 3 etc.

Company2 release v1.0 after 12 months. They have the same sales: 10 sales in month 1, 20 sales in month 2, 30 sales in month 3 etc.

In 18 months Company1 will sell 1900 licences, whereas Company2 will sell only 910 licences. But surely Company2 will have a better product when they release it? Even if Company2 sales grow twice as fast (20 sales in month 1, 40 sales in month 2, 60 sales in month 3 etc) they will still sell 80 fewer licences than Company1. Also I believe Company1 will probably have a better product than Company2 12 months or 24 months in, because they have 6 months more feedback on what customers really want and will pay for.
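
As a quick sanity check of those numbers, here is a small Python sketch (the month-counting convention isn't spelled out above; the figures are reproduced if the release month itself counts as the first month of sales and both companies are compared at the end of month 24):

```python
def licences_sold(release_month, horizon=24, growth=10):
    """Total licences sold by `horizon`, with sales of growth, 2*growth,
    3*growth, ... per month, starting in the month of release."""
    selling_months = horizon - release_month + 1
    return growth * selling_months * (selling_months + 1) // 2

print(licences_sold(6))               # Company1: 1900
print(licences_sold(12))              # Company2: 910
print(licences_sold(12, growth=20))   # Company2 growing twice as fast: 1820
```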

Conclusion

It is well known that the sooner you catch a mistake in development, the cheaper it is to fix. I believe this is just as true in marketing. A sure way to find these marketing mistakes is to release. You wouldn’t write a thousand lines of code before you tried to compile it. Why would you spend a year or more on development before testing it in the market? Creating software should be an incremental process.

The best time to release is a trade-off between the various factors above. Obviously your software has to be able to solve a real problem, or no-one is going to buy it. This is going to take longer for an air traffic control system than a back-up utility. But I would always try to release v1.0 in less than 6 months of elapsed time if it is my money paying for the development (I don’t write air traffic control systems). Spending a year or more writing something with no real customer feedback is more risk than I am prepared to accept. If you think it isn’t possible to produce something useful in that time, then maybe you aren’t being creative or brutal enough with the feature set. As a rule of thumb, I would say that if you aren’t embarrassed by the lack of features in v1.0, then you didn’t release it early enough.

Cost effective software registration with ejunkie

ejunkie

Most small software vendors don’t want all the hassle of taking payments direct from customers, so they use a third party registration service. Registration services provide payment processing plus additional services, including handling of:

  • licence key emails
  • coupon codes
  • affiliate payments
  • taxes
  • invoice sales

But these services don’t come cheap. According to this calculator some registration services charge as much as 15% commission on every £20/$40 sale. 15%! I find that quite staggering. 10% is more typical, but personally I don’t intend to give 10+% of my hard earned income to anyone, except my wife and the government. To add insult to injury some of these services also try to upsell questionable ‘offers’ to your customers. For example KAGI upsell a licence look-up service for which the software vendor gets a, frankly insulting, $1. I understand from reading the macsb forum that the upsell will be added automatically to the shopping carts of all software vendors selling downloads and will be checked by default. You then have to opt out if you don’t want it. Personally I think every software vendor should offer licence retrieval for free. And don’t even get me started on Digital River/SWREG and their Reservation Rewards ‘offer’.

PayPal and GoogleCheckout are much cheaper, with rates of approximately 3.4%[1] and 2.25%[2] respectively on a £20/$40 sale. But PayPal and GoogleCheckout are just payment processors and don’t provide all the additional services most software vendors need. They provide extensive APIs so you can ‘roll your own’ service, but this sounds like a lot of work reinventing the same old wheels.

Alternatively you can use a third party to provide additional services on top of PayPal and/or GoogleCheckout. I use e-junkie, which provides most of the services you would expect from a fully-fledged registration service, from just $5 per month[3]. The savings can be considerable, for example (all figures approximate):

Yearly costs by number of $40 licences sold per year:

    licences/year    10% commission registration service    PayPal + e-junkie[4]    GoogleCheckout + e-junkie[5]
    1,000            $4,000                                  $1,420                  $1,060
    5,000            $20,000                                 $6,820                  $5,060
    10,000           $40,000                                 $13,660                 $10,060

If you can offset your GoogleCheckout processing fees against your Google AdWords spend, your monthly costs could be as little as the $5 e-junkie fee.
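As a rough sanity check on the table, here is a sketch of the calculation. The rates are the headline figures quoted above; real PayPal rates drop with volume and GoogleCheckout includes a small per-transaction element, so the table's figures differ slightly:

    def yearly_cost(licences_per_year, price=40.0, commission=0.10, monthly_fee=0.0):
        # Commission taken on every sale plus any fixed monthly service fee.
        return licences_per_year * price * commission + 12 * monthly_fee

    for n in (1000, 5000, 10000):
        print(n,
              yearly_cost(n, commission=0.10),                   # 10% registration service
              yearly_cost(n, commission=0.034, monthly_fee=5),   # PayPal + e-junkie
              yearly_cost(n, commission=0.0225, monthly_fee=5))  # GoogleCheckout + e-junkie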

On the whole I have been very happy with the service I have received from e-junkie, once I got it all working. It has been reliable and the support has been responsive. e-junkie does seem to be more geared to selling downloads (e.g. e-books and MP3s) than licence keys, and the documentation is thin in places. Consequently I had a few issues trying to bend it to my particular requirements. I will try to find time to cover these issues in another article.

You can find out more about e-junkie and try their 1-week free trial here.

Other possible third party integration solutions are PayLoadz and Linklok. For those of you who prefer a more traditional registration service, I have heard some good reports about Plimus and Avangate on various forums. Neither of these companies has been bought out by SWREG owner Digital River (yet). I haven't used any of these services myself.

It remains to be seen whether pressure from PayPal and Google forces registration companies to reduce their fees, add more services or just puts them out of business.

Thanks to Patrick for first alerting me to e-junkie.

Full disclosure: The above e-junkie links are affiliate links. If you follow these links and sign up with e-junkie I will get a commission. It is not a lot, but I won't need many people to sign up to cover my e-junkie fees completely.

[1] PayPal rates vary according to volume. Currency conversions cost an extra 2.5%.

[2] Google have sweetened the deal by offsetting processing fees against AdWords fees until the end of 2007. This means the rate is effectively 0% if you have a moderate spend on Google AdWords each month.

[3] The monthly fee depends on the number of products. $5 per month covers 10 products and 50MB of storage.

[4] Based on a 3.4% PayPal fee + $5 per month e-junkie fee.

[5] Based on a 2.25% GoogleCheckout fee + $5 per month e-junkie fee.

Software piracy

barrier_reef_2.jpg‘Software piracy’ is a colourful term for people using your software without paying the appropriate fee for all your hard work. It includes using cracks (versions with the security removed), keygens (software that can generate valid licence keys) and sharing licence keys in contravention of the licencing terms. Parrots, eye patches and attacking ships rarely feature prominently.

You might think that software piracy is only an issue for the Microsofts and Adobes of this world. But it is a real issue for all sizes of software vendor, even for small companies selling niche products such as mine. If you don't believe me, check the logs for the crack 'honey-pot' page I created[1] (IP addresses obscured to protect the guilty); click the image to enlarge:

piracy_logs.gif

That's only the people who clicked through to my honey-pot page. It really isn't very inviting when displayed in a search engine, so I am sure there are many more who searched for a crack but didn't click through.

piracy_search2.gif

A quick look at this small sample of traffic shows that people looking for cracks come from all over the world, not just poorer countries. It also shows that Mac users look for cracks just the same as Windows users. In fact Mac users are a larger proportion of visitors to this page than you would expect from market share alone. I’m not saying that Mac users are less honest than Windows users, just that you shouldn’t be complacent about piracy just because you write software for the Mac.

I know from cookie tracking that some of the people who look for cracks go on to buy a licence (yes, I know who you are). Ergo, if there is a crack for the latest version out there it would definitely be costing me sales. So what can a vendor do to minimise sales lost to piracy? The first step is to understand the motivations of the people involved.

People crack software for many reasons. Some undoubtedly do it for commercial profit, e.g. so they can illegally sell the cracked version. But I understand the main reason is the challenge of cracking the software and the resulting kudos from the cracking 'community'. Some crackers are skilled and use sophisticated tools that emulate the computer environment, allowing them to quickly find and remove your security code. Although there is quite a lot you can do to make a cracker's job more difficult, this will just make cracking your software more of a challenge and therefore more desirable to some. It is highly unlikely that even the best security is going to defeat a skilled cracker for long. If Microsoft and Adobe can't write uncrackable applications, what chance have we got? Trying to defeat piracy from the supply side is a fool's errand. Just make sure your security is good enough to foil an unskilled cracker – if your average customer can bypass your security you are really in trouble.

On the demand side, people use cracked software simply because they don't want to pay for it. But they can end up paying in other ways. If we look at the costs and benefits in the wider sense:

costs of legitimate purchase:

  • purchase price
  • time taken to purchase

benefits of legitimate purchase:

  • use of current version
  • free upgrades

costs of pirate version:

  • time taken to locate crack
  • risk of malware in crack
  • risk of prosecution
  • guilty conscience?

benefits of pirate version:

  • use of current version

If your software is successful it will almost certainly be cracked at some point. Perhaps repeatedly. Congratulations! Somebody thought your software was worth cracking. We can't stop cracks appearing. The best we can do is to make sure that benefits minus costs is greater for a legitimate purchase than for a pirate version. Ways in which we can tip this equation in our favour are:

  • having cracks removed – Demand that ISPs remove cracks as soon as they appear (likely to be a lot more successful if the ISP is in Europe or North America). To find out when cracks appear you need to check your web logs regularly for unusual activity (see the sketch after this list). For example, a sudden flurry of downloads from countries that don't normally buy your software could signal that a crack has appeared. You can also set up a Google alert for '<app name> crack'.
  • make existing cracks hard to find – Register your software with lots of download sites. Many of them search engine optimise their pages for phrases such as 'crack' or 'keygen', making real cracks hard to find.
  • price appropriately – Price your software at a level people will consider fair. Perhaps offer a ‘lite’ version at a lower cost.
  • make your software easy to purchase – The slicker and simpler the purchase process the less temptation to stray.
  • display the user name – Deter casual key swapping by displaying the licensee name prominently, for example in the splash screen and status bar.
  • use a digital certificate – A digital certificate reassures users that your installer hasn’t been tampered with and is free from malware.
  • release regularly – Crackers generally don't want to pay for the bandwidth of lots of people downloading your software, so they will usually post patches and direct people to download the original software from your site. The patch becomes useless as soon as you release a new version and remove the old one. Making new and improved releases available to legitimate users also makes buying a licence more attractive.
  • create a honey-pot page – Make the case for buying your software and try to win over potential pirates. Point out the dangers of using cracks and emphasise that it isn't a victimless crime.
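As an illustration of the log-checking suggestion above, here is a rough sketch that scans a web server access log for referrers suggesting a visitor arrived searching for a crack. The file name, keyword list and 'combined' log format are my own assumptions:

    import re
    from collections import Counter

    SUSPICIOUS = ("crack", "keygen", "serial", "warez")
    # Matches the IP address and referrer field of a 'combined' format log line.
    LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d+ \S+ "([^"]*)"')

    hits = Counter()
    with open("access.log") as log:
        for line in log:
            match = LOG_LINE.match(line)
            if match:
                ip, referrer = match.groups()
                if any(word in referrer.lower() for word in SUSPICIOUS):
                    hits[ip] += 1

    # The most frequent offenders; a sudden spike is worth investigating.
    for ip, count in hits.most_common(20):
        print(ip, count)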

Whatever we do, there will always be some people who are never going to pay for our software, due to some combination of lack of means (e.g. people in developing countries) and lack of scruples. There is not much point worrying about these people. In fact we could look on them from a 'glass half-full' perspective as potential free marketing – even though they are never going to pay for a licence they might recommend the software to someone else who will.

We also need to do our own little bit to educate people that software piracy isn’t a victimless crime. That means doing our best to ensure that our family, friends and work colleagues don’t use pirated software. It also goes without saying that we shouldn’t use pirated software ourselves – that would be the height of hypocrisy.

What we mustn't do is make life difficult for our paying customers. Complex, intrusive and restrictive security schemes may reduce piracy a little, but they will probably have a much larger negative impact on our honest customers. If you are going to use 'phone home' or hardware-based licensing you had better be absolutely sure there is no chance of false positives. It is hard to think of a better way to annoy an honest customer than to disable the software they paid for and brand them a thief. That would be enough to make anyone turn to crime. Shiver me timbers!

[1] I got the idea of a honey-pot page from another site. Unfortunately I can’t remember the name of the site to give them the appropriate credit.

The importance of targeted website traffic

Anyone who has a website can't help but care how many people visit it. It's great for our vanity to know that someone else is seeing our creation. Also, if you are running a business, more hits equals more sales, doesn't it? Well, not necessarily.

Take as many hits as you like, multiply them by a 0% chance of purchasing and you still end up with no sales. What matters is not just the number of visitors, but also their quality. Visitors who have a high likelihood of buying your software are said to be 'targeted'. What you need is targeted traffic (preferably lots of it).

Targeted traffic is particularly important when you are paying per visitor, e.g. using pay-per-click schemes such as Google AdWords. If you are paying $0.40 per click and your software retails at $40 you need more than a 1% conversion rate, or you will just be donating to Larry and Sergey's 767 fund.
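The break-even arithmetic is simple enough to sketch (figures as in the example above):

    def break_even_conversion(cost_per_click, revenue_per_sale):
        # The fraction of visitors who must buy just to cover the click cost.
        return cost_per_click / revenue_per_sale

    print(break_even_conversion(0.40, 40.0))  # 0.01, i.e. a 1% conversion rate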

I can illustrate the importance of targeted traffic with three examples from my own table plan software website.

1. Earlier this year I agreed for PerfectTablePlan[1] to be used as part of the demonstration of D-Wave’s prototype quantum computer in return for some free publicity. The controversial demonstration got huge publicity in the IT and high-tech communities. Sadly PerfectTablePlan didn’t get a mention in the press release as I was expecting, but it did get a mention in the D-Wave founder’s blog.

  • Click throughs from D-Wave founder’s blog: 566
  • Sales: 0
  • Conversion rate: 0%

2. I hang out on the JoelOnSoftware Business of Software forum quite a bit (especially when there are boring jobs I should really be doing). People often click through to the PerfectTablePlan website from my signature.

  • Click throughs from Business of Software forum: 4,757
  • Sales: 1
  • Conversion rate: 0.02%

3. PerfectTablePlan is built using the Qt framework by Trolltech and gets a mention on their cool apps page.

  • Click throughs from Trolltech coolapps page: 1,922
  • Sales: 2
  • Conversion rate: 0.1%

The data above is based on web stats and on using cookies to track the initial referrer of each sale. I don't pretend it is hugely accurate; for example, it doesn't take account of someone clicking through to my site and then emailing the URL to a friend who then buys the software. But it is accurate enough for current purposes.

Adding all 3 examples together, the conversion rate averages 0.04%, or about £0.01 of revenue per click. So I would have lost a lot of money if I had been paying for these clicks through AdWords. What these 3 examples have in common is that they were untargeted. The people clicking through to my site were primarily interested in quantum computing, selling software or creating cross-platform software, not creating table plans.
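For the curious, the blended figures come straight from the three examples above (a quick sketch, using the approximate £20/$40 exchange rate mentioned earlier):

    clicks = [566, 4757, 1922]  # D-Wave blog, Business of Software forum, Trolltech
    sales = [0, 1, 2]
    price_dollars = 40.0

    conversion = sum(sales) / sum(clicks)
    revenue_per_click_gbp = conversion * price_dollars / 2  # roughly $2 to £1
    print(round(conversion * 100, 2), round(revenue_per_click_gbp, 3))
    # about 0.04% conversion and roughly £0.01 of revenue per click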

From better targeted traffic (e.g. people searching for "table plans" on Google) I do much better, with a conversion rate typically in the range 1% to 10%, depending on how well targeted it is. That is 25 to 250 times better than the less targeted traffic.

So, next time you are boasting about the number of hits on your site (or bemoaning the lack of them) remember that hit count is a flawed metric. Like LOC (lines of code) it is easy to measure, but not terribly meaningful. You need quality as well as quantity.

[1] Actually an adapted version interfacing with their D-Wave quantum solver, rather than using PerfectTablePlan’s own genetic algorithm.

Promoting your software (part 6)

21. Viral marketing

Viral marketing is where you use the software to promote itself. For example an anti-virus product could append to each outgoing email “this email scanned for viruses by <product URL>”.

Paying customers might resent providing free promotion, so you should give them the option to disable any viral marketing elements. You probably don't have to worry about this for customers using a free trial or 'lite' version.
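As a sketch of the idea (the application name, URL and option are hypothetical, not any product's actual implementation):

    def with_promo_footer(message, allow_promotion=True,
                          download_url="http://www.example.com/download"):
        # Append a short promotional footer to an outgoing message,
        # unless a paying customer has opted out.
        if not allow_promotion:
            return message
        return message + "\n\n--\nCreated with ExampleApp. Free trial: " + download_url

    print(with_promo_footer("Here is the seating plan for Saturday."))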

Pros: Free. Requires little or no effort once set up.

Cons: Not all applications lend themselves well to this.

Data point: If you email a table plan from within PerfectTablePlan it appends a URL for downloading the free trial so you can view the plan. I have no idea whether this has resulted in any extra sales.

22. Cover CDs

Many computer magazines come with a CD or DVD full of software attached. These seem less and less relevant with the increasing availability of broadband connections.

Pros: Could be useful if your trial download is very large.

Cons: I am guessing that the conversion rate is very low, especially when there are lots of other products competing for attention on the same CD.

Data point: I haven’t tried this myself.

23. Online directories

A listing in an online directory, such as dmoz.org or Yahoo, can increase the visibility of your product. There are also specialist directories for different markets. Directories used to be considered important, but they seem to be becoming less important as search engines improve.

Pros: A directory listing can bring you additional traffic directly (from clickthroughs) and indirectly (by improving your search engine page rank).

Cons: dmoz is free, but whether you get in depends on the whims of the editor in charge of that section. A Yahoo listing costs $299 and doesn't guarantee that you will be included (maintaining the entry costs a further $299 per year).

Data point: I have applied to be added to dmoz.org every 6 months or so for 2 years, but to no avail. I'm not convinced that a Yahoo entry is worth the cost. A listing I paid for in a business entertainment directory was a total waste of money (and time, due to the incompetence of their accounts department).

24. Word of mouth

If your product is good customers will often recommend it to other people, face-to-face, by email and on forums.

Pros: A happy customer is the best possible form of promotion.

Cons: You need to get everything right (website, installer, software, documentation, support etc) and that isn’t easy.

Data point: Customers email me to tell me that they have recommended my table planning software to others and I see quite a few favourable comments in forums.

You need to try different forms of promotion to find out which ones work best for your product. But in general you should prefer forms of promotion that keep working day after day with a minimum of effort and where you can easily measure the results.

Which forms of promotion are most effective in terms of time and cost vs sales will obviously depend on your market and budget. If your ticket price is $100,000 then expensive print ads are more likely to be effective than if your ticket price is $30.

Obviously I haven’t listed every possible method of promotion. Perhaps the one I haven’t listed is the one that will work best for your product. Feel free to comment if you think I have missed any important ones or if your experiences are different from mine.