Category Archives: software

The other side of the interface

all_seeing_eyes.jpg

While researching my talk on usability for ESWC2007 I came across this article I wrote some years ago. It contains quite a lot of material I would have liked to include, but there is only so much you can fit into a 60 minute talk. I am putting it here as a supplement to the talk and as a tribute to the late, lamented EXE magazine, which first published the article in June 1998. EXE went under in 2000 and I can’t find anyone to ask permission to republish it. I think they wouldn’t have minded. It is quite a long article and may be truncated by feed readers. Click through to the site to read the whole article.

It has been said that if users were meant to understand computers they would have been given brains. But, in fairness to users, the problem is often that interfaces are not designed to take account of their strengths and weaknesses. I have struggled with my fair share of dire user interfaces, and I’m supposed to be an expert user.

An interface is, by definition, a boundary between two systems. On one side of a user interface is the computer hardware and software. On the other side is the user with (hopefully) a brain and associated sensory systems. To design a good interface it is necessary to have some understanding of both of these systems. Programmers are familiar with the computer side (it is their job after all) but what about the other side? The brain is a remarkable organ, but to own one is not necessarily to understand how it works. Cognitive psychologists have managed to uncover a fair amount about thought processes, memory and perception. As computer models have played quite a large role in understanding the brain, it seems only fair to take something back. With apologies to psychologists everywhere, I will try to summarise some of the most important theory in the hope that this will lead to a better understanding of what makes a good user interface. Also, I think it is interesting to look at the remarkable design of a computer produced by millions of years of evolution, and possibly the most sophisticated structure in the universe (or at least in our little cosmic neighbourhood).

The human brain is approximately 1.3kg in weight and contains approximately 10,000,000,000 neurons. Processing is basically digital, with ‘firing’ neurons triggering other neurons to fire. A single neuron is rather unimpressive compared with a modern CPU. It can only fire a sluggish maximum of 1000 times a second, and impulses travel down it a painfully slow maximum of 100 meters per second. However, the brain’s architecture is staggeringly parallel, with every neuron having a potential 25,000 interconnections with neighbouring neurons. That’s up to 2.5 x 10^14 interconnections. This parallel construction means that it has massive amounts of store, fantastic pattern recognition abilities and a high degree of fault tolerance. But the poor performance of the individual neurons means that the brain performs badly at tasks that cannot be easily parallelised, for example arithmetic. Also the brain carries out its processing and storage using a complex combination of electrical, chemical, hormonal and structural processes. Consequently the results of processing are probabilistic, rather than deterministic and the ability to store information reliably and unchanged for long periods is not quite what one might hope for.

Perhaps unsurprisingly, the brain has a similar multi-level storage approach to a modern computer. Where a computer has cache, RAM and hard-disk memory (in increasing order of capacity and decreasing order of access speed) the brain has sensory memory, short-term memory and long-term memory. Sensory memory has a large capacity, but a very short retention period. Short-term memory has a very small capacity but can store and retrieve quickly. Long-term memory has a much larger capacity, but storage and retrieval is more difficult. New information from sensory memory and knowledge from long-term memory are integrated with information in short-term memory to produce solutions.

memory_model.gif

A simple model of memory and problem solving[1].

Sensory memory acts like a huge register, retaining large amounts of sensory data very briefly so that it can be processed into a meaningful form, e.g. to recognise a face, which is transferred to short-term memory. The sensory data is then quickly replaced with new incoming data.

Short-term memory acts like a very small queue with a limited retention period. It can hold only 7±2 items of information, with new items added into short-term memory displacing older ones once this limit has been reached. Items disappear after approximately 30 seconds if not rehearsed. The items of information in short-term memory act as ‘pointers’ to arbitrarily large and complex pieces of information (‘chunks’) stored in long-term memory. For example the seventh of January is one chunk for me (it’s my birthday), two chunks for you (one for each familiar word) and 14 chunks for a non-English speaker familiar with our alphabet (one for each character). The number 7±2 may seem rather arbitrary, but experimentation shows it is remarkably consistent across a wide range of individuals and cultures. Short-term memory acts as a workspace for problem solving. The more items that are held in short-term memory the longer it takes to process them.
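The arithmetic of chunking is easy to demonstrate in code. The sketch below is purely illustrative; the function and the three-digit grouping are my own choices, not from the article:

```python
def chunk(digits, size=3):
    """Group a flat digit string into fixed-size chunks.

    Ungrouped, each digit occupies one short-term memory item;
    grouped, each chunk can be held as a single item.
    """
    return [digits[i:i + size] for i in range(0, len(digits), size)]

ungrouped = list("4915550217")   # 10 items -- over the 7±2 limit
grouped = chunk("4915550217")    # ['491', '555', '021', '7'] -- 4 items
```

This is why phone numbers are conventionally printed in groups: four chunks fit comfortably inside short-term memory where ten separate digits do not.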

It is important not to overload short-term memory. The limited size of short-term memory is a critical bottleneck in problem solving and one of the main constraints to consider for any user interface (designed for human users at least). Don’t force the user to try to hold lots of items in short-term memory. If they have to think about more than 7±2 items then new items will displace old ones. Also the more items that are in short-term memory the slower their response time will be. Having lots of ‘open’ tasks puts a big burden on short-term memory, so tasks should be grouped into well-defined ‘transactions’. Complex tasks can almost always be broken down into simpler sub-tasks.

Long-term memory acts like a huge network database. It has a complex structure and massive capacity, but storing and retrieving information is slow and not always reliable. Items of information are apparently interconnected and accessed by some form of pointer. Some psychologists believe that long-term memory may be permanent, and only the ability to retrieve it may be lost (a bad case of ‘dangling pointers’ perhaps?). Dreaming may be a side-effect of routine re-structuring of long-term memory (garbage collection?) while we are asleep. Transferring information to long-term memory seems to be a process of encoding the memory and creating pointers to access it. The more often an item of information is accessed the easier it becomes to access in future. Each item of information may be accessible by many different routes. Consequently the context in which information is presented can be an important factor in remembering. The more context cues that are available, the easier it is to retrieve an item from long-term memory. For example, experiments show that students perform better in exams held in the classroom where they learnt the information than elsewhere. So if an item was presented in a particular font, colour and size, it will be easier to remember its meaning if the same font, colour and size are used.

There is some evidence that image and verbal memories are stored in different parts of the brain. We can often remember the faces of people we have met better than their names. Experiments show that it is easier to remember an image than a concrete word, for example it is easier to remember ‘car’ when shown an image of a car than when shown the word ‘car’. It is also easier to remember a concrete word than an abstract word, for example it is easier to remember the word ‘car’, than the word ‘transport’. This implies that the iconic representation of commands on toolbars has value beyond just looking nice. Also keywords used in a command line interface should where possible be concrete, rather than abstract.

The different types of memory are stored using different physical mechanisms, probably electrical, chemical and structural. As evidence of this you can train an animal to run a maze, cool it down to the point where all brain activity ceases and then warm it up again. It will have forgotten how to run the maze, but remember things it learnt days before (I don’t recommend you try this with users). Also some diseases have been observed to affect short-term memory without affecting long-term memory. Transferring information from short-term to long-term memory and retrieving it again is not very reliable. It is better to allow the user to select from alternatives than to force them to commit items to long-term memory and then retrieve them. At work, the interface of our old accountancy package had many shortcomings. Projects had to be identified by 5-digit numerical codes, even though alphabetic codes would have been easier to remember. Users also had to enter project numbers from memory; no facility for selecting from available projects was provided. It wouldn’t have taken much effort to produce a better interface design, just a little thought. For example, the Microsoft Word print dialog cues the user as to the permitted format for specifying pages to be printed.

example.gif

A useful aid to memory.

The brain gets its input from the outside world through the senses. Of the senses vision is the most important, with some 70% of all sensory receptors in the eyes. The importance of vision is also reflected in the design of modern computers. Other than the odd beep the computer communicates with the user almost entirely through the VDU. Consequently I will confine the scope of the discussion on the senses to vision alone.

The eye is an impressive sensing device by any standards. Tests show that it is possible for a human eye to detect a candle flame at a range of 30 miles on a dark, still night. This corresponds to detecting a signal as low as a few photons entering the eye. Incoming light is focused onto the retina at the back of the eye, which contains the light receptors. The retina is actually an extension of the brain. Observation of growing embryos shows that the tissue that forms the retina extends from the brain; it is not formed from the tissue that turns into the rest of the eye. The retina contains some 5 million ‘cone’ receptors and 100 million ‘rod’ receptors. The cones are sensitive to colour, while the rods are sensitive to brightness. Some cones are sensitive to red, some to green and some to blue, depending on the pigment they contain. The cones are much more liberally supplied with nerve cells and are able to discern detail, but they don’t function in low light levels. The cones are densest in the centre of the retina, and virtually absent at the outer edge. The fovea centralis, a spot 1 millimetre across at the centre of the retina, contains some 200,000 cones and no rods. The rods only detect light at the blue end of the spectrum, but they are extremely sensitive and can detect a single photon of light. The uneven distribution of rods and cones is easy to test. Look slightly away from this page and try to read it ‘out of the corner of your eye’ – it’s not possible. Only the fovea has sufficient acuity to discern this level of detail. You may also notice that it is easiest to see poorly illuminated objects out of the corner of your eye. A very dim star, visible out of the corner of your eye, disappears when looked at directly.

Because the fovea is so small we are only able to distinguish detail over a range of approximately 2 degrees. This equates to about 2.5cm at the normal distance from user to VDU. To build up a detailed picture of what is on the screen we have to scan it. It therefore makes sense to make single items on the interface no bigger than 2.5cm, so they can be recognised without having to scan them. Games and simulators that perform real-time rendering waste a lot of processing power by rendering the whole picture at the same level of detail. What they should ideally be doing is performing very detailed rendering at the point where the user’s fovea is pointing and progressively less detailed rendering further away from this. This would allow a much more efficient use of available processing power. It is possible to detect where the user is looking by bouncing an infrared beam off their retina. If this technology becomes widely available it could be used to perform differential rendering, with the result appearing much more detailed without any increase in processing power.
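The differential rendering idea can be sketched in a few lines. This is an illustrative calculation, not a real renderer; the function name, parameters and fall-off curve are all my own assumptions:

```python
import math

def detail_level(gaze, point, viewer_distance, fovea_deg=2.0):
    """Fraction of full rendering detail (1.0 = full) for a screen point,
    based on its angular distance from the user's gaze point.

    gaze and point are (x, y) screen coordinates in cm;
    viewer_distance is the eye-to-screen distance in cm.
    """
    offset = math.hypot(point[0] - gaze[0], point[1] - gaze[1])
    angle = math.degrees(math.atan2(offset, viewer_distance))
    if angle <= fovea_deg:
        return 1.0                         # full detail within the fovea
    return max(0.1, fovea_deg / angle)     # taper detail with eccentricity

# Sanity check: at a typical ~70cm viewing distance, a 2 degree
# field of view spans roughly 2.5cm of screen.
span_cm = 2 * 70 * math.tan(math.radians(1.0))   # ~2.4cm
```

A renderer could then spend its polygon budget in proportion to `detail_level` for each object, concentrating effort where the fovea is pointing.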

The receptors in the retina, in common with other sense receptors, are only sensitive to change. Using special optical equipment it is possible to project a ‘stabilised’ image onto the retina that does not change, regardless of eye movements. A stabilised image fades to a formless grey and is no longer discernible after only 2-3 seconds. It turns out that the constant movement of the eye, originally thought to be an imperfection of the optical system, is essential for sensing unchanging images. Perversely, light has to pass through 9 layers of nerve cells and blood vessels in the retina before it reaches the light receptors (I guess evolution isn’t perfect). Because the network of nerves and blood vessels is unchanging, we don’t normally perceive it[2]. The practical consequence is that any form of movement, animation, change in intensity or flashing on a user interface is extremely noticeable. Flashing should be used sparingly as it can be distracting and fatiguing to users. Quickly changing text is also difficult to read; this is why, in our digital age, car speedometers remain analogue dials rather than numerical LEDs. It may be better to put a flashing symbol next to steady text; this draws attention to the text without reducing its legibility. Mosier and Smith[3] recommend a flash rate of 2-5 Hz, with a minimum ‘on’ time of at least 50 percent. Large flashing areas of colour are believed to aggravate epilepsy (particularly at certain frequencies) and should not be used.
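Mosier and Smith’s figures translate directly into timer settings for a flashing indicator. A minimal sketch (the function is my own; the 2-5 Hz and 50% constraints are theirs):

```python
def blink_timings(rate_hz, on_fraction=0.5):
    """Return (on_ms, off_ms) durations for a flashing indicator,
    enforcing the Mosier and Smith guidelines."""
    if not 2 <= rate_hz <= 5:
        raise ValueError("recommended flash rate is 2-5 Hz")
    if on_fraction < 0.5:
        raise ValueError("'on' time should be at least 50% of each cycle")
    period_ms = 1000.0 / rate_hz
    return period_ms * on_fraction, period_ms * (1.0 - on_fraction)

on_ms, off_ms = blink_timings(3)   # 3 Hz: ~167ms on, ~167ms off
```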

While sensation happens in the eye, perception happens in the brain. The receptors in the retina convert the light to electrical signals which they pass to the brain through the optic nerve, a bundle of approximately 1,000,000 neurons. The information is processed in the visual cortex, the surface of the brain at the back of the head. Our perception is incredibly sophisticated, as artificial intelligence researchers have found to their cost. Experiments on the cortex show that it has evolved with built-in ‘feature detectors’. A feature detector is a neuron that fires for a very particular stimulus. For example, one neuron in the visual cortex may fire if there is a horizontal line at the top-left of the visual field. Next to it will be a neuron that fires for a slightly different orientation, length or position. Additional processing is then carried out to integrate all the information from the different feature detectors.

As you are reading this page your eye is making rapid movements, with your brain recognising the shape of 2-3 words at a time before moving on to the next group of words (the maximum number of words recognised at a time presumably being limited by the size of the fovea). This is apparently done by information from different feature detectors being integrated very quickly. For example the word ‘FIX’ can be broken down into six straight lines at different positions in the visual field. We are able to recognise this word in about a third of a second, even though the size and font may vary. Shape recognition is therefore incredibly efficient and seems to be one of the best developed features of our visual system. Tests show that objects can be recognised just as well from line drawings as from colour photographs. A cup is recognisable as a cup because of its shape, not because of its colour, orientation etc. Textual representations are not always the best way to convey information. A map, chart, diagram or other form of image will often convey the same information quicker.

icons-in-explorer.gif

The use of icons in Windows Explorer makes it easier to browse document types than would be possible by reading the file extensions.

Tests show that detection of simple features such as length, orientation, curvature and brightness is carried out at a very low level, in parallel. Consequently we can pick out items based on these features in constant time, regardless of the number of other items in the image. Careful use of these abilities allows a great deal of information to be filtered very rapidly by the user.

shapes1.gif

The anomalous shape is detected as quickly in b) as in a), even though there are three times as many targets.

But the brain is not so good at integrating (‘conjoining’) different types of feature, for example shape and brightness. It is easy to pick out a black icon or a circular icon, but picking out a black circular icon is more difficult and time consuming.

shapes2.gif

Time taken to pick out the black circle increases as the number of targets increases.

It follows from this that you should try to distinguish features of the interface by shape or brightness or orientation, but not a combination of these factors.

optical.gif

a) the horizontal and vertical lines are the same length. b) the vertical lines are the same length.

The visual cortex carries out a great deal of processing that we are unaware of, not least of which is turning the image of the world the right way up. Even though we can understand the nature of illusions, our visual system is still fooled. This is because it is not just sensing the world, but trying to interpret it, making use of all sorts of cues and in-built knowlege, and this is happening at a lower level than we can consciously control. You may not have even noticed that there was a deliberate spelling mistake in the last sentence because your perceptual system made a sensible guess.

Although the image projected onto our retina is two dimensional we have very well developed depth perception, our ancestors wouldn’t have been able to swing through the trees without it. Partly this is because having two eyes allows stereoscopic vision, but also because our brain processes lots of other visual cues that produce a sensation of depth, even where it doesn’t exist (for example in a photograph). The main cues are:

  • More distant objects are smaller
  • More distant objects appear closer to the ‘vanishing point’ created by converging parallels
  • More distant objects move across the visual field more slowly
  • Superposition, if A overlaps B then A must be closer
  • Shadows and highlights
  • Chromostereopsis, long wavelength colours (e.g. red) appear closer than shorter wavelength colours (e.g. blue) because shorter wavelength light is refracted more strongly by the lens of the eye (but this is rather weak compared to the other effects)

depth-cues.gif

Use of depth cues make the one shape appear closer than the other.

Using these cues can give a very effective illusion of depth, without specialised equipment such as stereoscopic goggles. This built-in depth perception is currently taken advantage of only in a very limited way in most GUI environments, for example the use of highlights and shadows to give controls a three-dimensional appearance. Many applications would benefit from a three dimensional representation. For example the structure of a complex web site could be better presented in three dimensions than two. The availability of VRML and other technologies is likely to make three dimensional interfaces increasingly common.

buttons.gif

An illusion of depth.

Interestingly, it is purely a matter of convention and practice that makes us imagine the light source at the top-left and see the top button as sticking out and the bottom button as sticking in[4]. You can also see them the other way around if you try.

Layout is an important feature of an interface. Western users will tend to scan a screen as if they were reading a page, starting from the top-left. Scanning can be made easier by aligning controls in rows. Complex displays can be made easier to scan by adding additional cues, for example a timesheet could have a thicker line denoting the end of each week.

Both layout and similarity can be used to group items on an interface.

grouping.gif

In a) the shapes are perceived as 3 rows, while in b) they are perceived as 3 columns, due to proximity. In c) the shapes are perceived as 3 columns, due to similarity. d) gives a mixed message.

A colour is perceived according to how strongly it activates the red, green and blue cone receptors in our eyes. From this we perceive its intensity (how bright it is), its hue (the dominant wavelength) and its saturation (how narrow a range of wavelengths makes it up). Within the 400-700 nanometer visible range we can distinguish wavelengths 2 nanometers apart. Combined with differing levels of intensity and saturation, the estimated number of colours we can discriminate is 7,000,000. According to the US National Bureau of Standards there are some 7,500 colours with names. But colour should be used sparingly in interfaces. I once worked on an application where a very extrovert student with dubious taste (as evidenced by his choice of ties) had designed the user interface. Each major type of window had a different lurid background colour. This was presumably to make it easy to tell them apart, but the overall effect was highly distracting.
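The intensity/hue/saturation decomposition can be demonstrated with Python’s standard colorsys module, which implements the closely related hue/saturation/value model (the helper function is my own):

```python
import colorsys

def describe(r, g, b):
    """Return (hue in degrees, saturation %, value/intensity %) for an
    RGB colour with 0-255 components."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return round(h * 360), round(s * 100), round(v * 100)

print(describe(255, 0, 0))      # (0, 100, 100): saturated, bright red
print(describe(255, 200, 200))  # (0, 22, 100): same hue, far less saturated
```

The same hue at different saturations reads as ‘red’ versus ‘pink’: saturation, not hue, is what changed.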

Colour perception, like everything else to do with perception, is complex. Experiments show that how we perceive a colour depends on the other colours surrounding it. If you look through a pinhole at a sheet of green or red paper it doesn’t appear to have a very strong colour. But if you put the sheets next to each other and look at them both through the pinhole the colours appear much stronger. So if you want to make a colour highly visible, put it next to a complementary colour, for example yellow is perceived by red and green cone cells, so to make it more visible put it next to an area of saturated blue.

Colour can be used with text and symbols to add information without making them less legible, as long as a careful choice of colours is used. Some combinations of colours work better than others. Saturated blue appears dimmer to the human eye than other saturated colours and is more difficult to focus on. Blue symbols and text are therefore probably best avoided. However, for the same reasons, blue can make a background that is easy on the eye. Saturated yellow appears brighter than all the other colours for the same intensity.
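The legibility point can be quantified. The W3C’s WCAG accessibility guidelines (which postdate this article) define a relative-luminance contrast ratio between text and background; a sketch of that calculation:

```python
def _linear(c):
    """Convert one sRGB channel (0-255) to linear light, per WCAG."""
    c /= 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b   # the eye weights green most

def contrast(fg, bg):
    """WCAG contrast ratio, from 1 (identical) to 21 (black on white)."""
    hi, lo = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

print(contrast((0, 0, 0), (255, 255, 255)))      # 21.0: maximum contrast
print(contrast((255, 255, 0), (255, 255, 255)))  # ~1.1: yellow on white, illegible
print(contrast((0, 0, 255), (255, 255, 255)))    # ~8.6: blue on white is readable
```

Note how little yellow-on-white contrast the formula reports, matching the ill-advised combinations shown below it here.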

colours1.gif

Ill-advised colour combinations.

colours2.gif

Better colour combinations.

Designers should remember that a significant proportion of the population has deficient colour vision (some 6% of males and 0.4% of females, the difference being due to the way the defective gene is inherited). This is caused by problems with pigmentation in one or more of the red, green and blue cone cells in the eye. While there is a range of different types of colour deficiency, the most common is the inability to distinguish between red and green. This raises some questions about the design of traffic lights (some colour-deficient drivers have to rely on the position, rather than the colour, of the lights). Some individuals may not be able to distinguish one or more primary colours from grey; it is therefore unwise to put a primary colour on a dark background. Allowing users to customise colours goes some way towards alleviating this problem.

Other forms of vision defect are also common, as evidenced by the number of people wearing glasses. Something that is easily visible on the programmer’s 17 inch screen may be almost impossible to read on a user’s LCD laptop screen. This problem is further compounded by the fact that eyesight deteriorates with age and programmers tend to be younger on average than users. There also seems to be a tendency to use ever smaller fonts even though screen sizes are increasing. Perhaps this is based on the assumption that large fonts make things look childish and unsophisticated, so small fonts must look professional. Ideally the user should be able to customise screen resolution and font sizes.

Meaning can sometimes be conveyed with colour, for example a temperature scale may be graded from blue (cold) to red (hot) as this has obvious physical parallels. But the meaning of colour can be very culturally dependent. For example, red is often used to imply danger in the west, but this does not necessarily carry over into other cultures. The relative commonness of defective colour vision and the limited ability of users to attach meaning to colour means that it should be used as an additional cue, and should not be relied on as the primary means of conveying information. Furthermore colour won’t be visible on a monochrome display (now relatively rare) or a monochrome printer (still very common).

Humans are good at recognising patterns, making creative decisions and filtering huge amounts of information. Humans are not so good at arithmetic, juggling lots of things at once and committing them to long-term memory. Computers are the opposite. A good interface design should reflect the respective strengths and weaknesses of human and computer. Just as a well crafted graphical user interface will minimise the amount of machine resources required to run it, it should also minimise the amount of brain resources required to use it, leaving as much brain capacity as possible for the user to solve their actual problem.

[1] After “Psychology”, 2nd Ed, C.Wade and C.Tavris.

[2] However it can be seen under certain conditions. Close one eye and look through a pinhole in a piece of card at a well illuminated sheet of white paper. If you waggle the card from side to side you will start to see the network of blood vessels.

[3] “Guidelines for Designing User Interface Software” by Smith and Mosier (1986). Several hundred pages of guidelines for user interface design. They betray their 80’s US Air Force sponsored origins in places, but are still excellent. For the dedicated.

[4] I have since found out that this may not be true. Our brains appear to be hardwired to assume that lighting comes from above. For more details see: “Mind Hacks” T.Stafford & M.Webb (2005).

Moving from POP3 to IMAP

palm.jpg

I have been using the POP3 protocol to collect all my emails from my ISP for the last few years. POP3 stores emails locally once they have been read from the server. This works great if you have a single PC, but it is a bit of a disaster if you check your email from multiple PCs. For example, trying to synchronise the emails on my ‘master’ desktop PC after using the laptop for a week on holiday was a royal pain. I would set the laptop not to remove messages from the POP3 server when read (unless deleted) and then re-do all the marking as read, tagging and sorting into sub-folders when I got home. Groan.

I chose POP3 because I was familiar with it and because I was using some auto-responder software that only worked with POP3. Now that I use e-junkie for sending out licence keys I don’t really need the auto-responder. So I decided to try IMAP, an alternative protocol that stores emails on a central server. So far I am very pleased with the move.

I use Mozilla Thunderbird on all my computers and my email is hosted by 1and1.co.uk. Both Thunderbird and 1and1 support POP3 and IMAP, so this made the transition very easy. I just set-up new IMAP accounts for each email account on each machine in Thunderbird. The POP3 accounts are still there so I can search them, but they no longer retrieve new emails.

Now, when I mark an email read or move it to a sub-folder, the change is immediately visible across all my email clients. Hoorah. I realise the same could be said for webmail. But then I would have to use webmail. Ugh. Let’s not go there.

I was a bit worried about network latency issues with IMAP, but I haven’t noticed any problems so far, and searching IMAP emails on the 1and1 server seems similar in performance to searching POP3 emails locally.

I haven’t quite worked out what to do about backing up my email yet. With POP3 it was easy, as the data was stored as files on my local machine. I am not sure what the best way to achieve this is with IMAP. In theory my ISP should be taking care of backing up my IMAP data, but I am a bit paranoid after the recent disappearance of this blog. It is something I need to investigate further.
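One low-tech option is to periodically download everything to local .eml files with a script. This is a sketch using Python’s standard imaplib; the server, login and password here are hypothetical placeholders, not real account details:

```python
import imaplib
import pathlib

# Hypothetical account details -- substitute your own.
HOST, USER, PASSWORD = "imap.example.com", "me@example.com", "secret"

def eml_name(msg_id):
    """Local file name for one downloaded message."""
    return f"{msg_id.decode()}.eml"

def backup_folder(folder="INBOX", dest="mail_backup"):
    """Save every message in an IMAP folder as a raw .eml file."""
    out = pathlib.Path(dest, folder)
    out.mkdir(parents=True, exist_ok=True)
    with imaplib.IMAP4_SSL(HOST) as conn:
        conn.login(USER, PASSWORD)
        conn.select(folder, readonly=True)   # don't disturb read/unread flags
        _, (ids,) = conn.search(None, "ALL")
        for msg_id in ids.split():
            _, data = conn.fetch(msg_id, "(RFC822)")
            (out / eml_name(msg_id)).write_bytes(data[0][1])
```

Run on a schedule, this keeps local copies independent of the ISP while the server remains the master copy.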

I am fairly conservative when it comes to adopting new technologies. Most of you reading this probably moved to IMAP ages ago. But if you didn’t, you might want to give IMAP a try. Even if you are currently a one-person company with a single PC/Mac (unlikely) it is going to make life easier if you later grow to multiple machines and/or people.

GoogleCheckout takes 22 hours 28 minutes to clear a payment

GoogleCheckout

I am a big believer in having more than one payment processor. I use PayPal as my main processor with GoogleCheckout and 2Checkout as alternatives (GoogleCheckout for pounds sterling and 2Checkout for dollars). But I haven’t been overwhelmed by GoogleCheckout so far. This is how long the last 10 payments for PerfectTablePlan through GoogleCheckout took to clear:

  • 4 hours 5 minutes
  • 5 minutes
  • <1 minute
  • <1 minute
  • 22 hours 28 minutes
  • <1 minute
  • <1 minute
  • <1 minute
  • 1 minute
  • <1 minute

That is quite some variation. I assume it is due to some orders being flagged for manual fraud checking. This is the response I got from Google when I complained:

…for your protection, Google may review certain orders before passing them to you for processing. Some reviews may take slightly longer as Google performs more comprehensive analysis of the order to minimise your exposure to fraud risk.

Our specialists are working hard to address all orders in a ‘Reviewing’ state as quickly as possible. These reviews may take up to 24 hours…

So 22.5 hours appears to be acceptable as far as Google is concerned. But they managed to reply to my support email within a few minutes.
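To put rough numbers on the clearance times listed above (counting ‘<1 minute’ as 1 minute):

```python
from statistics import mean, median

# The last 10 GoogleCheckout clearance times, in minutes.
times = [245, 5, 1, 1, 1348, 1, 1, 1, 1, 1]

print(f"median: {median(times)} min")  # 1.0 -- most orders clear instantly
print(f"mean: {mean(times)} min")      # 160.5 -- dragged up by two reviewed orders
```

The median order clears immediately; the delays come entirely from the handful flagged for review.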

GoogleCheckout may be cheap (effectively free to Google Adwords customers at present) but keeping my customers waiting up to 24 hours for their licence isn’t acceptable to me. It makes me look bad. Go and hire some more people Google – you can afford it. Otherwise PayPal are going to wipe the floor with you as soon as you start charging comparable fees.

Despite the leisurely time they take over fraud checks they still managed to pass a payment with a postal address in Scotland, an IP address in the Netherlands and a Romanian email address. I am still waiting to see if I am going to be charged a £7.50 fee by Google for the privilege.

Patently absurd

I saw this list of patents on the back of something I bought recently:

patent

(click for larger image)

That is 40 patents, with 28 in the US alone and “other U.S. and foreign patents pending”. But it isn’t a flying car, a teleporter or a death ray. It is a child’s toy that blows bubbles.

bubbles

bubbles

It can blow bubbles within bubbles, which is quite neat, and it includes the weasel words “One or more of [the] following patents apply”. But still, 40 patents?

Patents have also got rather out of hand in the software world. The Amazon patent on “1-click ordering” is one of the more flagrant examples. This patent is now being challenged, but such a trivial patent should never have been granted in the first place. I am decidedly uneasy about the whole concept of patenting software. Software is closer to literature and mathematics than it is to invention, and it makes no sense to patent literature or mathematics. Copyright protection seems adequate to me. But, even if we allow that some forms of software innovation might be patentable, it serves no-one’s interests (apart from the lawyers) to allow trivial patents.

The companies that apply for these patents are only partly to blame. If ridiculous patents are being granted, companies will inevitably have to join in to provide some protection against the patent portfolios of their competitors. Most of the blame has to lie with the institutions granting the patents, and the US patent office seems to be far and away the worst offender. If the US patent office doesn’t have the resources and expertise to evaluate software patents, then it should acquire the resources and expertise, and soon. The purpose of patents should be to foster innovation, not to stifle it.

 

Software audio and video resources

The Internet is a cornucopia of useful resources for software developers and marketers. As well as all the documentation, forums, blogs and wikis there are some great audio and video resources. Here are some of my favourites:

NerdTV – Robert Cringely interviews famous names from the software industry.

Shareware Radio – Mike Dullin interviews shareware authors and microISVs in his inimitable style.

The MicroISV show on channel 9 – Michael Lehman and Bob Walsh interview people of interest to microISVs.

.Net rocks and Hanselminutes – Carl Franklin and Scott Hanselman interview people of interest to .Net/Windows developers. Some of the programs are Microsoft-heavy Silverlight/Orcas/WPF alphabetti spaghetti yawn-athons, but others are of more general interest.

TED talks – The great and the good talk on a wide range of subjects, including technology.

These sites contain hours of great material. Long drives/walks/waits need never be boring again. Please add a comment if I have missed any good ones.

Business of Software microISV survey

microISV sales per hour worked

The Business of Software blog has published the results of a survey of 96 microISVs:

survey results – part 1
survey results – part 2

 

As the survey is self-selecting it is hard to know how representative the results are for microISVs in general, but it makes interesting reading.

Of respondents whose microISVs had been running 6 months or more, 50% made less than $25 in sales per hour worked. Assuming modest expenses of 20% that means that the majority of microISVs are making less than $20 per hour worked, before tax. This sounds rather discouraging, but some claim to be making >$200 per hour. The author has kindly provided the raw stats for download, so I looked at them in a bit more detail. According to my quick analysis the situation is, unsurprisingly, more encouraging for established microISVs. If you take all the respondents who have been in business at least 12 months, are working at least 30 hours per week and are making any sales at all, the average is around $60 in sales per hour worked. This is not too bad for an indoor job, with no heavy lifting, that you can do in your underwear.
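If you want to repeat this sort of filtering on the raw stats yourself, it only takes a few lines of Python. The field names (`months_in_business`, `hours_per_week`, `monthly_sales`) and the sample rows below are my own invention for illustration – they stand in for whatever columns the actual survey download uses:

```python
# Hypothetical survey rows; the field names are assumptions, not the
# actual column names in the raw survey data.
respondents = [
    {"months_in_business": 18, "hours_per_week": 40, "monthly_sales": 9000},
    {"months_in_business": 6,  "hours_per_week": 40, "monthly_sales": 5000},
    {"months_in_business": 24, "hours_per_week": 20, "monthly_sales": 4000},
    {"months_in_business": 36, "hours_per_week": 50, "monthly_sales": 0},
    {"months_in_business": 12, "hours_per_week": 30, "monthly_sales": 10000},
]

def sales_per_hour(r):
    # Convert a weekly hours figure to monthly hours (52 weeks / 12 months).
    hours_per_month = r["hours_per_week"] * 52 / 12
    return r["monthly_sales"] / hours_per_month

# Keep only established, near-full-time microISVs making any sales at all,
# matching the filter described above.
established = [
    r for r in respondents
    if r["months_in_business"] >= 12
    and r["hours_per_week"] >= 30
    and r["monthly_sales"] > 0
]

average = sum(sales_per_hour(r) for r in established) / len(established)
print(round(average, 2))  # ≈ 64.42 for this made-up sample
```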

The data also shows an interesting difference in sales by category. I took the data for all the 1-man companies with monthly sales >0, divided them by category and then removed the top and bottom performers in each category (to prevent outliers distorting the averages).

hourly_sales_by_category.gif

Average sales per hour worked ($), by category, click to enlarge

I am not surprised that average sales are relatively low in the ‘Developer tools’ market given the fierce competition, prevalence of free tools and the effects of developer ‘not invented here’ syndrome. I am rather surprised that consumer software appears to pay better than business software. This seems to turn conventional wisdom on its head (assuming I got the numbers right, it was after midnight). Of course, sales is not the same as profit. There appears to be little (if any) correlation between the ticket price of an item and the total monthly sales.

Digging a bit further, the stats also show some correlation between marketing spend and sales:

microISV marketing v sales

Monthly marketing spend ($/month) vs monthly sales ($/month), click to enlarge

Of course (repeat after me) correlation does not imply causation.
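For anyone who wants to quantify the marketing-vs-sales relationship in the raw data themselves, the usual measure is the Pearson correlation coefficient. The spend and sales figures below are invented purely for illustration, not taken from the survey:

```python
# Pearson correlation coefficient, computed from first principles.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up (marketing spend $/month, sales $/month) pairs.
spend = [0, 50, 100, 200, 400]
sales = [500, 900, 1400, 2600, 5100]

r = pearson(spend, sales)
print(round(r, 3))  # close to 1 for this contrived data
```

A value near +1 means a strong positive linear relationship – which, repeat after me, still wouldn’t tell you which way the causation runs.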

Thanks to Neil for taking the time to do the survey and publish the results.

Selling your own software vs working for the man

nz_beach.jpg

You’ve got this great idea for a software product. You are pretty confident that you can crank out version 1.0 working full-time on your own from the spare room, and you are fairly confident that people will buy it. But you’ve also got a well paid full-time job ‘working for the man’. It’s cosy and familiar in that cubicle. Is it worth risking your career and savings to set out into uncharted waters on your own? Do you take the red pill or the blue pill?

The aim of this article is just to give you some insight into the economic realities of becoming a one man software company (a microISV). The results might surprise you. ‘Working for the man’ you get a steady income every month. Working for yourself you start off with no income while you create your product. If all goes well you start to make sales when you release v1.0, and these sales gradually improve over time until you are earning the same amount each month as when you were working for the man. As the sales continue to improve you (hopefully) reach the point where you have made as much money as if you had stayed in your old job for the same period of time. From here on it’s all gravy. Here is a very simple model:

simple microISV income model

Monthly income as microISV vs WFTM (T0=version 1.0 release, T1=monthly income equal to WFTM, T2=areas under the red and blue lines are the same)

Obviously I am making a lot of assumptions and simplifications here. In particular I am assuming:

  • Net income from microISV sales rises linearly month-on-month as soon as you release v1.0. Obviously this can’t happen forever (or you will be richer than Bill Gates) but it seems as good a guess as any and it keeps the mathematics simple.
  • MicroISV start-up expenses (buying a domain name, starting your company, buying equipment and software, getting an Internet connection etc) are fairly low compared to your monthly WFTM salary.

Even though the model is embarrassingly over-simplified, I think it can still give some insights. If I plug some numbers for T0 and T1 into a simple spreadsheet I can come up with values for T2. I’ll choose numbers that I consider optimistic, realistic and pessimistic for each. For T0 (time to V1.0) I choose 3, 6 and 12 months. For T1 (time to same income as WFTM) I choose 12, 18 and 24 months.

T2 calculation

Months required to reach T2

i.e. if it takes you 6 months to get V1.0 out and then another 18 months until it is making the same monthly income (after expenses) as WFTM then it will take you 47 months to reach the point where a microISV has made you more money than WFTM.
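Taking the model at face value, T2 actually has a closed form, so you don’t strictly need the spreadsheet. Equating the area under the microISV income line (zero until T0, then rising linearly to reach the WFTM salary T1 months later, and continuing at the same rate) with the area under the flat WFTM line gives a quadratic in T2. A minimal sketch, assuming exactly the linear ramp described above:

```python
import math

def break_even_month(t0, t1):
    """Months until cumulative microISV income equals cumulative WFTM income.

    Assumes zero income until v1.0 at month t0, then income rising
    linearly to reach the WFTM salary t1 months later and continuing
    to rise at the same rate. Equating areas under the two income
    curves gives (T2 - t0)^2 / (2*t1) = T2, a quadratic whose
    positive root is t0 + t1 + sqrt(t1^2 + 2*t0*t1).
    """
    return (t0 + t1) + math.sqrt(t1**2 + 2 * t0 * t1)

# e.g. 6 months to v1.0, then 18 more months to match the WFTM salary:
print(round(break_even_month(6, 18)))  # ~47 months, as in the table above
```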

So how much do you need in the way of savings to survive until you have a decent income? I can work this out by assuming living expenses as some proportion of WFTM monthly income. Calculating for 50% (living on noodles) and 100% (full speed ahead and damn the torpedoes):

debt incurred with living expenses=50% of WFTM income

Maximum debt in months of WFTM income with living expenses=50% of WFTM income

debt incurred with living expenses=100% of WFTM income

Maximum debt in months of WFTM income with living expenses=100% of WFTM income

i.e. if it takes you 6 months to get V1.0 out and then another 18 months until it is making the same monthly income (after expenses) as WFTM and your living expenses are 50% of your WFTM income then your maximum debt is 5 months of WFTM income.
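The peak-debt figures can be derived the same way. Debt stops growing at the month when the rising microISV income first covers the living expenses; summing up the shortfall to that point gives a simple formula. Note this is a continuous-time sketch of my spreadsheet calculation, so for the 6/18/50% case it gives 5.25 rather than the rounded 5 months quoted above:

```python
def max_debt_months(t0, t1, f):
    """Peak debt, in months of WFTM income.

    t0 = months to v1.0, t1 = further months until microISV income
    matches the WFTM salary, f = living expenses as a fraction of
    WFTM income. Income first covers expenses at month t0 + f*t1;
    integrating the shortfall up to then gives f*(t0 + f*t1/2).
    """
    return f * (t0 + f * t1 / 2)

print(max_debt_months(6, 18, 0.5))  # 5.25 - living on noodles
print(max_debt_months(6, 18, 1.0))  # 15.0 - damn the torpedoes
```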

I think the results of this simple little model make a few points:

  • Rate of sales growth is critical, but the time to getting v1.0 out is also very important. The longer it takes, the more you have to catch up later.
  • You are unlikely to come out financially ahead after 2 years as a microISV, even with fairly optimistic sales figures. It could easily take 3 or 4 years and, if the sales don’t take off or level out too early, you may never get there. There are many reasons to start a microISV, but getting rich quick isn’t one of them.
  • Given that you can’t know what T1 will be for your product, you should probably have at least 6 months WFTM income in the bank. Preferably 12 months.
  • Learn to love noodles.

You can download my Excel spreadsheet here (it’s a quick hack, so don’t expect too much).

So which is it going to be, the red pill or the blue pill?

man in costume holds red and blue pills

Codekana

codekana

I don’t remember when or where I first saw an editor with syntax highlighting. But I do remember that I was ‘blown away’ by it. It was immediately obvious that it was going to make code easier to understand and syntax errors easier to spot. I would now hate to have to program without it. So I was interested to try version 1.1 of Codekana, a recently released C/C++/C# syntax highlighting add-in for Visual Studio.

Codekana features include:

  • Finer grained syntax highlighting than VS2005 provides.
  • Highlighting of non-matching brackets and braces as you type.
  • Easy switching between header and body files.

In the code below Codekana colours the if/else/while blocks differently and visually pairs the braces:

syntax highlighting

I have only been using Codekana a few hours, but I am already impressed. I find the ability to quickly switch between C++ header and body files particularly useful. VS2005 only appears to allow switching from body to header, not header to body (doh!). You need the dexterity of a concert pianist for the default Codekana keyboard shortcut (Ctrl-Shift-Alt-O), but it can be customised. I changed it to Ctrl+. (dot).

Codekana also has other features, such as the ability to zoom in/out on code. This is quite ‘cool’, but I’m not sure yet whether it will be of much use. Time will tell.

I am new to VS2005 and I have yet to try out other add-ins, such as Visual Assist, but Codekana certainly seems to have a lot of potential and is excellent value at $39. I look forward to seeing what other features get added in future versions. Find out more and download the free trial here.

Disclosure: The author of Codekana is a JoS regular who I have corresponded with in the past and was kind enough to send me a complimentary licence.

Having a crack at the crackers

crack site

Software cracks are a real problem for software vendors large and small. I have discussed in a previous article some of the ways in which developers can try to mitigate their effects. A fellow ASP member (who might wish to remain nameless) has gone a step further by creating a fake crack site, serialsgalore.com. It looks quite convincing, but when you try to download a crack it gives you an ominous message about the error of your ways and logs your IP address. I would have gone for a less confrontational message, but it will be interesting to see how effective this approach is.

I think serialsgalore.com is worthy of support by developers. Please consider giving the site some Google juice by linking to it from your site or blog using link words such as crack, keygen and/or serials. If you don’t want to do this on a main page of your site, link to it only from your site map page. Alternatively create a Google site map (a good idea anyway) and only reference the page with the links from there. I believe the site owner is going to try to cover his costs by donations, Google ads and possibly referral fees. I certainly don’t begrudge him some return on his efforts. I also don’t feel bad about him playing a little trick on someone looking for illegal cracks. It might even save them from downloading malware.

The software awards scam

software award

I put out a new product a couple of weeks ago. This new product has so far won 16 different awards and recommendations from software download sites. Some of them even emailed me messages of encouragement such as “Great job, we’re really impressed!”. I should be delighted at this recognition of the quality of my software, except that the ‘software’ doesn’t even run. This is hardly surprising when you consider that it is just a text file with the words “this program does nothing at all” repeated a few times and then renamed as an .exe. The PAD file that described the software contains the description “This program does nothing at all”. The screenshot I submitted (below) was similarly blunt and to the point:

awardmestars_screenshot.gif

Even the name of the software, “awardmestars”, was a bit of a giveaway. And yet it still won 16 ‘awards’. Here they are:

all_awards2.gif

Some of them look quite impressive, but none of them are worth the electrons it takes to display them.

The obvious explanation is that some download sites give an award to every piece of software submitted to them. In return they hope that the author will display the award with a link back to them. The back link then potentially increases traffic to their site directly (through clicks on the award link) and indirectly (through improved page rank from the incoming links). The author gets some awards to impress their potential clients and the download site gets additional traffic.

This practice is blatantly misleading and dishonest. It makes no distinction between high quality software and any old rubbish that someone was prepared to submit to a download site. The download sites that practise this deceit should be ashamed of themselves. Similarly, any author or company that displays one of these ‘awards’ is either being naive (at best) or knowingly colluding in the scam (at worst).

My suspicions were first aroused by the number of five star awards I received for my PerfectTablePlan software. When I went to these sites all the other programs on them seemed to have five star awards as well. I also noticed that some of my weaker competitors were proudly displaying pages full of five star awards. I saw very few three or four star awards. Something smelled fishy. Being a scientist by original training, I decided to run a little experiment to see if a completely worthless piece of software would win any awards.

Having seen various recommendations for the rundenko.com submit-everywhere.com submission service on the ASP forums I emailed the owner, Mykola Rudenko, to ask if he could help with my little experiment. To my surprise, he generously agreed to help by submitting “awardmestars” to all 1033 sites on their database, free of charge.

According to the report I received 2 weeks after submissions began “awardmestars” is now listed on 218 sites, pending on 394 sites and has been rejected by 421 sites. Approximately 7% of the sites that listed the software emailed me that it had won an award (I don’t know how many have displayed it with an award, without informing me). With 394 pending sites it might win quite a few more awards yet. Many of the rejections were on the grounds of “The site does not accept products of this genre” (it was listed as a utility) rather than quality grounds.

The truth is that many download sites are just electronic dung heaps, using fake awards, dubious SEO and content misappropriated from PAD files in a pathetic attempt to make a few dollars from Google Adwords. Hopefully these bottom-feeders will be put out of business by the continually improving search engines, leaving only the better sites. I think there is still a role for good quality download sites. But there needs to be more emphasis on quality, classification, and additional content (e.g. reviews). Whether it is possible for such a business to be profitable, I don’t know. However, it seems to work in the Mac OS X world where the download sites are much fewer in number, but with much higher quality and more user interaction.

Some download site owners did email me to say either “very funny” or “stop wasting my time”. Kudos to them for taking the time to check every submission. I recommend you put their sites high on your list next time you are looking for software:

www.filecart.com

www.freshmeat.net

www.download-tipp.de (German)

This is the response I got from Lothar Jung of download-tipp.de when I showed him a draft of this article:

“The other side for me as a website publisher is that if you do not give each software 5 stars, you don’t get so many back links and some authors are not very pleased with this and your website. When I started download-tipp.de, I wanted to create a site where users can find good software. So I decided the visitor is important, and not the number of backlinks. Only 10% of all programs submitted get the 5 Suns Award.”

Another important issue for download sites is trust. I want to know that the software I am downloading doesn’t contain spyware, trojans or other malware. Some of the download sites have cunningly exploited this by awarding “100% clean” logos. I currently use the Softpedia one on the PerfectTablePlan download page. It shouldn’t be too difficult in principle to scan software for known malware. But now I am beginning to wonder if these 100% clean logos have any more substance than the “five star” awards. The only way to find out for sure would be to submit a download with malware, which would be unethical. If anyone has any information about whether these sites really check for malware, I would be interested to know.

My thanks to submit-everywhere.com for making this experiment possible. I was favourably impressed by the thoroughness of their service. At only $70 I think it is excellent value compared to the time and hassle of trying to do it yourself. I expect to be a paying customer in future.

** Addendum 1 **

This little experiment has been featured on reddit.com, digg.com, slashdot.com, stumbleupon.com and a number of other popular sites and blogs. Consequently there have been hundreds of comments on this blog and on other sites. I am very flattered by the interest. But I also feel like Dr Frankenstein, looking on as my experiment gains a life of its own. If I had known the article was going to be read by so many people I would have taken a bit more time to clarify the following points:

  • I have no commercial interest in, or prior relationship with, the three download sites mentioned. I singled them out because I infer from emails received that they have a human-in-the-loop, checking all submissions (or a script that passes the Turing test, which is even more praiseworthy). I offered all three a chance to be quoted in the article. Today I received a similar email from tucows.com, but they were too late to make the article. I don’t know if they read the article before they emailed me.
  • I have no commercial interest in, or prior relationship with, the automatic submission service mentioned. I approached them for help, which they generously provided, free of charge.
  • The only business mentioned in which I have a commercial interest is my own table planning software, PerfectTablePlan.

** Addendum 2 **

23 awards ‘won’ at the latest count.