The psychology of successful bootstrappers

I am curious about how the people who bootstrap software businesses are different to the general population, and to each other. I investigated this using a standard (‘big 5’) personality test. I think the results make for interesting reading.

I asked a number of software company founders to complete an online personality test and send me their results. 18 of them did (19 including me). You have probably heard of some of them, but I promised anonymity. We are all founders of bootstrapped (i.e. not VC funded) software product companies and have been involved in programming a significant portion of our products. Most of us are solo founders. Some of us (including myself) are lifestyle programmers, others have employees. We are all successful to the extent that we make a living from our software product sales. None of us are billionaires (Bill Gates probably wouldn’t return my email).

The test measures personality across the 5 major axes identified by psychologists:

  • Extraversion (outgoing/energetic vs. solitary/reserved) – how much you derive satisfaction from interacting with other people.
  • Conscientiousness (efficient/organized vs. easy-going/careless) – how careful and orderly you are.
  • Neuroticism (sensitive/nervous vs. secure/confident) – how much you tend to experience negative emotions.
  • Agreeableness (friendly/compassionate vs. analytical/detached) – how much you like and try to please others.
  • Openness (inventive/curious vs. consistent/cautious) – how much you seek out new experiences.

See Wikipedia for more details.

For each personality axis I have created a histogram of the results, showing how many founders fit in each 10% ‘bin’ compared to the general population. For example, for extraversion: 0 bootstrappers were in the 1-10 percentile (i.e. the least extrovert 10% of the general population), 1 founder was in the 11-20 percentile, 2 were in the 21-30 percentile, and so on. The binning is sketched in code below.
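
To make the binning concrete, here is a minimal Python sketch, assuming percentile scores from 1 to 100. The example scores are made-up placeholders, not the survey data.

```python
# Minimal sketch: grouping percentile scores (1-100) into 10%-wide bins.
# The example scores are hypothetical placeholders, not the survey results.
from collections import Counter

extraversion_percentiles = [12, 25, 28, 41, 47, 52, 55, 58, 61, 63,
                            66, 70, 72, 75, 78, 81, 84, 88, 93]  # 19 made-up values

# Map each percentile to a bin: 1-10 -> bin 1, 11-20 -> bin 2, ..., 91-100 -> bin 10.
bin_counts = Counter((p - 1) // 10 + 1 for p in extraversion_percentiles)

for b in range(1, 11):
    low, high = (b - 1) * 10 + 1, b * 10
    print(f"{low:3d}-{high:3d} percentile: {bin_counts[b]} founders")
```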

[Histograms of founder percentile scores vs. the general population: extraversion, conscientiousness, neuroticism, agreeableness, openness]

                    Extraversion   Conscientiousness   Neuroticism   Agreeableness   Openness
average (mean)          59.9             61.7              37.6           48.3          50.3
standard deviation      23.0             21.9              23.1           21.1          23.2

If bootstrappers were like the general population we would expect each bar to be the same height, with a bit of random variation, and the average score to be 50. Clearly this is not the case.

We are more extrovert on average than the general population. Although programming is stereotypically a profession for introverts and quite a few of us work alone, you need to get yourself noticed and interact with customers and partners to be a successful bootstrapper.

We are more conscientious on average than the general population. Shipping a software product requires a lot of attention to detail.

We are less neurotic on average than the general population. You need some self-belief and a thick skin to weather the ups and downs of being a bootstrapper.

We are about average for agreeableness. However the scores are not evenly distributed: only 1 of us scored above the 70th percentile. Perhaps being too ready to please, rather than following your own vision, is a handicap for bootstrappers.

We are about average for openness. But the scores are clumped around the centre. Initially I was a bit surprised by this result. I expected bootstrappers to be inventive/ideas people and to score well above average. But perhaps the people who score very highly on openness are easily distracted (squirrel!), and never get anything finished.

The 5 personality traits are supposed to be orthogonal (not correlated). Plotting some random pairs of traits against each other suggests that this is indeed the case for our sample. For example, extraversion doesn’t appear to be correlated with conscientiousness, as the scatter plot below shows. A quick numerical check is sketched here:
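
This is a rough sketch of how the correlation could be checked numerically, assuming the anonymised results have been saved as results.csv with one column per trait (the file name and column headings are assumptions about the layout):

```python
# Rough sketch: Pearson correlation between two traits from the results CSV.
# "results.csv" and the column names are assumptions about the file layout.
import csv

with open("results.csv", newline="") as f:
    rows = list(csv.DictReader(f))

x = [float(r["Extraversion"]) for r in rows]
y = [float(r["Conscientiousness"]) for r in rows]

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / n
std_x = (sum((a - mean_x) ** 2 for a in x) / n) ** 0.5
std_y = (sum((b - mean_y) ** 2 for b in y) / n) ** 0.5

print("Pearson r =", cov / (std_x * std_y))  # values near 0 indicate little linear correlation
```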

[Scatter plot: extraversion vs conscientiousness]

I am aware that this survey suffers from some shortcomings:

  • The test is fairly simplistic. It doesn’t begin to capture what unique and precious little snowflakes we all are. However I don’t think I would have any results at all if I asked people to complete a massive survey. We are busy people.
  • Any survey suffers from selection bias. I am more likely to know other founders who are extroverts (the introverts probably go to fewer conferences). It is also likely that the people who responded were more conscientious and agreeable than those who didn’t!
  • 19 is a small sample size.

Correlation doesn’t imply causation. So these results don’t prove that high levels of conscientiousness and extraversion and low levels of neuroticism make you proportionally more likely to succeed at bootstrapping a software company. But, given that personality is considered fairly stable over time, it seems unlikely that the success caused the personality traits. However both could be correlated to some underlying factor, e.g. these traits could conceivably make you more likely to try starting a software business, but no more likely to succeed. Or the correlations could conceivably be a statistical fluke. I leave it as an exercise for an interested reader to work out the exact level of statistical significance of these results (a possible starting point is sketched below). It would be interesting to compare these results with those of people who tried to bootstrap a business, but failed. However such data might not be easy to come by.
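
As one possible starting point for the significance question, the sketch below runs a one-sample t-test of whether the founders’ mean percentile for a trait differs from the population mean of 50. It assumes scipy is available and the same results.csv layout as above (both assumptions):

```python
# Possible starting point: is the founders' mean percentile for a trait
# significantly different from the population mean of 50?
# Assumes scipy is installed and the CSV layout described above.
import csv
from scipy import stats

with open("results.csv", newline="") as f:
    scores = [float(row["Neuroticism"]) for row in csv.DictReader(f)]

t_stat, p_value = stats.ttest_1samp(scores, popmean=50)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Percentile scores aren't normally distributed, so treat this as a rough check.
# A non-parametric alternative is the Wilcoxon signed-rank test against 50:
w_stat, w_p = stats.wilcoxon([s - 50 for s in scores])
print(f"Wilcoxon W = {w_stat:.2f}, p = {w_p:.3f}")
```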

Given what I know about the trials of starting your own software business I think above-average levels of conscientiousness and extraversion and a low level of neuroticism are a real asset. However it is also clear that the personalities of individual founders vary a lot. So don’t be disheartened if you don’t fit this profile. There are successful bootstrappers who don’t fit the profile. Personality is not destiny. And you can always partner with or employ someone who has complementary personality traits. But if you are a slap-dash neurotic who doesn’t like talking to other people, perhaps bootstrapping a software company isn’t for you. A career in government funded IT projects might be more suitable.

I sent a draft of this post to Dr Sherry Walling for feedback. Sherry is particularly well qualified to comment as she is both an adjunct Professor of Psychology and married to well-known bootstrapper/micropreneur Rob Walling. Her response (paraphrased a bit) was:

“Your standard deviations are quite large which indicates that there is quite a lot of variability in your data. You would much rather have standard deviations between 0-10 when working with this kind of scale.

From my perspective, the only domain where I would expect significant difference is Conscientiousness. Conscientiousness is an essential bootstrapper trait. I am not sure how a solo founder could be successful if he/she is not naturally conscientious.

There are so many ways to be a successful bootstrapper. A neurotic person can fuel his sensitivity to negative emotions into hard work. A less neurotic person may not have enough anxiety to get up early and get to work. On the other hand too much neuroticism can be very debilitating. I don’t think there is a formula. The combination of factors could vary tremendously with each person, but conscientiousness is the one that seems essential.”

If you want to do your own analysis, the anonymised results are available to download as a CSV file here.
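
If you have pandas installed, something like this would be a quick way to get started (again assuming the download is saved as results.csv with one column per trait; the file and column names are assumptions):

```python
# Quick starter for your own analysis of the anonymised results.
# "results.csv" and the column names are assumptions about the file layout.
import pandas as pd

df = pd.read_csv("results.csv")
traits = ["Extraversion", "Conscientiousness", "Neuroticism", "Agreeableness", "Openness"]

# Mean and standard deviation per trait; should roughly match the table above.
print(df[traits].agg(["mean", "std"]).round(1))

# Pairwise correlations between the traits (close to 0 if they really are orthogonal).
print(df[traits].corr().round(2))
```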

Many thanks to everyone who took part in the test.

You can do the test yourself. You don’t have to give your email address or answer the additional questions at the end. How do you compare?

The microISV test

Ok, so you’ve set yourself up as a one-man software company and you’ve made some sales. But are you a real microISV/micropreneur/indie/startup? Take the test below and find out.

  1. You checked the number of sales you made overnight before you had your breakfast this morning.
  2. You measure the price of desirable objects (cars, houses, Xboxes) in terms of the number of licences you need to sell.
  3. You’ve outsourced some work to someone despite having no idea what they look like and only a vague idea where they live.
  4. When booking a hotel you are more interested in how good the Internet connection is than how good the restaurant is.
  5. Your product has at least 20 five star awards from download sites.
  6. You know what CTR, CPC and CPM mean.
  7. You have begged all your friends and family to ‘like’ your product’s Facebook page.
  8. You set up your computer or phone so it makes a special noise each time you get a sale.
  9. Your software has been cracked at least once.
  10. You have suggested to a particularly problematic customer that one of your competitors might have a more suitable product.
  11. You’ve done technical support while wearing a dressing gown/bathrobe (or less).
  12. You have Google alerts and Twitter searches set up for your product name.
  13. You start to get anxious after not checking your email for more than half a day.
  14. The last time you set an alarm clock it was because you were going on holiday and didn’t want to miss the flight.
  15. Your relatives think you don’t have a ‘real job’.
  16. You own at least 10 domain names.
  17. You have had to fix problems with your software or website while on holiday.
  18. You have had at least one chargeback.
  19. Your software has been flagged as malware by at least one anti-virus package.
  20. You use at least 3 different email addresses in the course of a day.
  21. You have explained what you do to someone and they said “And you make a living from that???”.
  22. You have used Google translate to answer a support email in a language you don’t understand.
  23. You use “we” when talking about your company, even though it’s really only you.
  24. Someone told you a half-baked idea they had in the shower that morning and said they would be willing to give you 50% of the profit if you did 100% of the work to implement it.
  25. The last time you wore a suit and tie was to a wedding or a funeral.

I scored 25/25, of course (it’s my test). How did you do? Are there any other questions I should have added? Let me know in the comments.

Thanks to fellow microISVs Steph, Oliver, Terrell, Clay and Ian for suggesting some of the above.

A test of Cost Per Action (CPA) vs Cost Per Click (CPC) in Google Adwords

The traditional approach to Google Adwords is to set a bid price for each keyword. This is known as Cost Per Click (CPC). Google then uses the bid prices in conjunction with a secret formula (the quality score) to decide how high to rank your ad in the Adwords results. If you bid more, your ad will appear higher and typically get more clicks, but your cost per click will increase. So setting an optimal bid price is important. Bid too little and you won’t rank high enough to get a decent number of clickthroughs. Bid too much and you will potentially end up paying more to Google than you recoup in sales.

An alternative approach is to tell Google Adwords how much you are prepared to pay for a particular action, e.g. a sign-up, download or sale. This is known as Cost Per Action (CPA) or Conversion Optimizer. Google will then automatically calculate your bid prices and attempt not to exceed the CPA you set (although this isn’t guaranteed).

CPA sounds great. I can stay in bed a bit longer while the mighty Google brain does the bid tweaking for me. Unfortunately I wasn’t able to use CPA. I count sales as conversions (not downloads) and I have my Adwords account split into a number of campaigns by geographic region and by type (e.g. search vs content). Having my campaigns structured like this, rather than one monolithic campaign, makes for more flexibility (e.g. different ads, phrases and bid prices for different geographical areas) and more useful reports (e.g. separate reports for search and content). But it also meant none of my Adwords campaigns made the minimum threshold for conversions per month.

When Google dropped the minimum threshold for CPA to 30 conversions per campaign per month, one of my PerfectTablePlan search campaigns became eligible. So I did an experiment. I ran a campaign for 4 weeks using CPC, then 9 weeks using CPA, then another 4 weeks using CPC. I set the CPA bid to roughly the average cost per conversion I got for CPC. I was curious to see if Google would find sweet spots that I had been missing or whether it would bid as high as it could to take as much money off me as possible. Summary: CPC outperformed CPA on all the key metrics, including 4.4% higher conversions, 9.4% lower cost per conversion and 8.0% higher profit.

The detailed results are as follows:

metric                 CPC (vs CPA)
impressions/day            +13.9%
clicks/day                  +1.3%
conversions/day             +4.4%
CTR                        -11.1%
conv rate                   +3.1%
income/day                  +4.4%
cost/day                    -5.5%
CPC                         -6.6%
profit/day                  +8.0%
PKI                         -5.2%
ROI                        +10.4%
cost per conversion         -9.4%

In graphical form:

[Graph: CPA vs CPC comparison of the above metrics]

Notes:

  • The values given were computed as (CPC metric – CPA metric)/(CPA metric). E.g. ROI of +10.4% means that CPC had a 10.4% higher ROI than CPA. A small worked sketch of the calculations follows these notes.
  • Only a single (geographically based) search campaign was measured. The total number of conversions during the time period of the test was in 3 figures.
  • I only measured sale conversions. This gives me less data than measuring downloads, but I think it is unsafe to assume the number of downloads correlates closely to the number of sales.
  • The PerfectTablePlan sale price is £19.95/$29.95. To calculate profit I only counted 75% of the price of a sale (the other 25% was assumed to cover the cost of support, ecommerce fees and other overheads associated with the sale).
  • Each of the time periods was a multiple of 7 days to avoid any issues with different results on different days of the week.
  • I ran CPC for an equal amount of time either side of the CPA test to try to balance out any seasonal factors.
  • Google conversion tracking uses Cookies and is therefore not 100% accurate.
  • PKI is Profit Per Thousand Impressions.
  • ROI is Return On Investment.
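
To make the calculations in these notes concrete, here is a minimal sketch of how the derived metrics and relative differences could be computed. The input figures are placeholders for illustration, and the exact definitions of profit, PKI and ROI are my interpretation of the notes above:

```python
# Minimal sketch of the derived metrics and relative differences described in the
# notes above. The sample inputs are hypothetical, not the actual campaign data.

SALE_PRICE = 29.95    # $ per licence (the post quotes £19.95/$29.95)
PROFIT_MARGIN = 0.75  # only 75% of a sale is counted as profit (see notes)

def derived_metrics(impressions, clicks, conversions, cost):
    """Per-day metrics, given per-day impressions, clicks, conversions and Adwords cost."""
    income = conversions * SALE_PRICE        # income/day (gross sale revenue)
    profit = income * PROFIT_MARGIN - cost   # profit/day (75% of revenue minus Adwords spend)
    return {
        "CTR": clicks / impressions,
        "conv rate": conversions / clicks,
        "income/day": income,
        "cost per conversion": cost / conversions,
        "profit/day": profit,
        "PKI": profit / impressions * 1000,  # Profit per thousand (K) Impressions
        "ROI": profit / cost,                # one common definition of Return On Investment
    }

def relative_difference(cpc_value, cpa_value):
    """(CPC metric - CPA metric) / (CPA metric), as used in the results table."""
    return (cpc_value - cpa_value) / cpa_value

# Hypothetical per-day averages for each bidding method, for illustration only:
cpc = derived_metrics(impressions=5000, clicks=100, conversions=3.0, cost=40.0)
cpa = derived_metrics(impressions=4500, clicks=98, conversions=2.9, cost=42.0)

for metric in cpc:
    print(f"{metric}: {relative_difference(cpc[metric], cpa[metric]):+.1%}")
```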

It wouldn’t be wise to draw any sweeping conclusions from one test with a limited amount of data. However I believe the results show:

  • A CPA campaign running for 9 weeks wasn’t able to outperform a mature CPC campaign. The CPC campaign had been running for over 4 years, but one would have thought CPA would have been able to use that pre-existing data. CPA might have performed better if given longer. It would probably also have done better against a less mature CPC campaign.
  • Google didn’t rob me blind using CPA bidding. The CPA cost per day was only 5.5% higher.
  • The results weren’t hugely different. On the basis of the above results one might still conclude that CPA is preferable to CPC overall, as the performance difference was small and CPA requires less time to manage.