Category Archives: Easy Data Transform

Visual vs text based programming, which is better?

Visual programming tools (also called ‘no-code’ or ‘low-code’) have been getting a lot of press recently. This, in turn, has generated a lot of discussion about whether visual or text based programming (coding) is ‘best’. As someone who uses text programming (C++) to create a visual programming data wrangling tool (Easy Data Transform) I have some skin in this game and have thought about it quite a bit.

At some level, everything is visual. Text is still visual (glyphs). By visual programming here I specifically mean software that allows you to program using nodes (boxes) and edges (arrows), laid out on a virtual canvas using drag and drop.

A famous example of this sort of drag and drop visual programming is Yahoo Pipes:

Yahoo Pipes
Credit: Tony Hirst

But there are many others, including my own Easy Data Transform:

Note that I’m not talking about Excel, Scratch or drag and drop GUI designers, although some of the discussion might apply to them.

By text programming, I mean mainstream programming languages such as Python, Javascript or C++, and associated tools. Here is the QtCreator Integrated Development Environment (IDE) that I use to write the C++ for Easy Data Transform:

The advantages of visual programming are:

  • Intuitive. Humans are very visual creatures. A lot of our brain is given over to visual processing and our visual processing bandwidth is high. Look at pretty much any whiteboard, at any company, and there is a good chance you will see boxes and arrows. Even in non-techie companies.
  • Quicker to get started. Drag and drop tools can allow you to start solving problems in minutes.
  • Higher level abstractions. Which means you can work faster (assuming they are the right abstractions).
  • Less hidden state. The connections between nodes are shown on screen, rather than you having to build an internal model in your own memory.
  • Less configuration. The system components work together without modification.
  • No syntax to remember. Which means it is less arcane for people who aren’t experienced programmers.
  • Fewer run-time errors, because the system generally won’t let you do anything invalid. You don’t have to worry about getting function names or parameter ordering and types right.
  • Immediate feedback on every action. No need to compile and run.

The advantages of text programming are:

  • Denser representation of information.
  • Greater flexibility. Easier to do things like looping and recursion.
  • Better tooling. There is a vast ecosystem of tools for manipulating text, such as editors and version control systems.
  • Less lock-in. You can generally move your C++ or Python code from one IDE to another without much problem.
  • More opportunities for optimization. Because you have lower-level access there is more scope to optimize speed and/or memory as required.

The advantages and disadvantages of each are two sides of the same coin. A higher level of abstraction makes things simpler, but also reduces the expressiveness and flexibility. The explicit showing of connections can make things clearer, but can also increase on-screen clutter.

The typical complaints you hear online about visual programming systems are:


It makes 95% of things easy and 5% of things impossible

Visual programming systems are not as flexible. However, many visual programming systems will let you drop down into text programming, when required, to implement that additional 5%.

Jokes aside, I think this hybrid approach does a lot to combine the strengths of both approaches.

It doesn’t scale to complex systems

Managing complex systems has been much improved over the years in text programming, using techniques such as hierarchy and encapsulation. But there is no reason these same techniques can’t also be applied to visual programming.

It isn’t high enough performance

The creators of a visual programming system are making a lot of design decisions for you. If you need to tune a system for high performance on a particular problem, then you probably need the low level control that text based programming allows. But with most problems you probably don’t care if it takes a few extra seconds to run, if you can do the programming in a fraction of the time. Also, a lot of visual programming systems are pretty fast. Easy Data Transform can join two 1 million row datasets on a laptop in ~5 seconds, which is faster than base R.

It ends up as spaghetti

Labview spaghetti from DailyWTF
Unreal Blueprint spaghetti from reddit.com/r/ProgrammerHumor/

I’m sure we’ve all seen examples of spaghetti diagrams. But you can also create horrible spaghetti code with text programming. Also, being able to immediately see that a visual program has been sloppily constructed might serve as a useful cue.

If you are careful to lay out your nodes, you can keep things manageable (ravioli, rather than spaghetti). But it starts to become tricky when you have 50+ nodes with a moderate to high degree of connectivity, especially if there is no support for hierarchy (nodes within nodes).

Automatic layout of graphs for easier comprehension (e.g. to minimize line crossings) is hard (NP-complete, in the same class of problems as the ‘travelling salesman’).

No support for versioning

It is possible to version visual programs if the tool stores the information in a text based file (e.g. XML). Trying to diff raw XML isn’t ideal, but some visual programming tools do have built-in diff and merge tools.

It isn’t searchable

There is no reason why visual programming tools should not be searchable.

Too much mousing

Professional programmers love their keyboard shortcuts. But there is no reason why visual programming tools can’t also make good use of keyboard shortcuts.

Vendor lock-in

Many visual programming tools are proprietary, which means the cost can be high for switching from one to another. So, if you are going to invest time and/or money heavily in a visual programming tool, take time to make a good choice and consider how you could move away from it if you need to. If you are doing quick and dirty one-offs to solve a particular problem that you don’t need to solve again, then this doesn’t really matter.

‘No code’ just means ‘someone else’s code’

If you are using Python+Pandas or R instead of Easy Data Transform, then you are also balancing on top of an enormous pile of someone else’s code.

We are experts, we don’t need no stinkin drag and drop

If you are an experienced text programmer, then you aren’t really the target market for these tools. Easy Data Transform is aimed at the analyst or business guy trying to wrangle a motley collection of Excel and CSV files, not the professional data scientist who dreams in R or Pandas. However even a professional code jockey might find visual tools faster for some jobs.


Both visual and text programming have their places. Visual programming is excellent for exploratory work and prototyping. Text based programming is almost always a better choice for experts creating production systems where performance is important. When I want to analyse some sales data, I use Easy Data Transform. But when I work on Easy Data Transform itself, I use C++.

Text programming is more mature than visual programming. FORTRAN appeared in the 1950s. Applications with graphical user interfaces only started becoming mainstream in the 1980s. Some of the shortcomings with visual programming reflect its relative lack of maturity and I think we can expect to see continued improvements in the tooling associated with visual programming.

Visual programming works best in specific domains, such as:

  • 3D graphics and animations
  • image processing
  • audio processing
  • game design
  • data wrangling

These domains tend to have:

  • A single, well defined data type. Such as a table of data (dataframe) for data wrangling.
  • Well defined abstractions. Such as join, to merge 2 tables of data using a common key column.
  • A relatively straightforward control flow. Typically a step-by-step pipeline, without loops, recursion or complex control flow.

My teenage son has been able to do some (I think) pretty impressive 3D modelling and animations just with Blender’s visual tools.

Visual programming has been much less successful when applied to generic programming, where you need lots of different data types, a wide range of abstractions and potentially complex control flow.

I’ve been a professional software developer since 1987. People (mostly in marketing) have talked about replacing code and programmers with point and click tools for much of that time. That is clearly not going to happen. Text programming is the best approach for some kinds of problems and will remain so for the foreseeable future. But domain-specific visual programming can be very powerful and has a much lower barrier to entry. Visual programming empowers people to do things that might be out of their reach with text programming and might never get done if they have to wait for the IT department to do it.

So, unsurprisingly, the answer to ‘which is better?’ is very much ‘it depends’. Both have their place and neither is going away.

Further reading:

Hacker News folk wisdom on visual programming

Visual Programming Codex

The life and times of Yahoo Pipes

The ‘No Code’ Delusion and HN discussion

‘Visual programming doesn’t suck’ HN discussion (original article seems to have disappeared)

Visual Programming Languages – Snapshots

A Personal History of Visual Programming Environments

Is the future of data science drag and drop?

Rethinking Visual Programming with Go

Responses to this post on Reddit:

reddit.com/r/Programminglanguages

reddit.com/r/nocode

reddit.com/r/datascience

Winterfest 2023

Winterfest 2023 is on. Loads of quality software for Mac and Windows from independent vendors, at a discount. This includes my own Easy Data Transform and Hyper Plan, which are on sale with a 25% discount.

Find out more at artisanalsoftwarefestival.com.

What is the index of an empty string in an empty string?

This sounds like a question a programmer might ask after one medicinal cigarette too many. The computer science equivalent of “what is the sound of one hand clapping?”. But it is a question I have to decide the answer to.

I am adding indexOf() and lastIndexOf() operations to the Calculate transform of my data wrangling (ETL) software (Easy Data Transform). This will allow users to find the offset of one string inside another, counting from the start or the end of the string. Easy Data Transform is written in C++ and uses the Qt QString class for strings. There are indexOf() and lastIndexOf() methods for QString, so I thought this would be an easy job to wrap that functionality. Maybe 15 minutes to program it, write a test case and document it.

Obviously it wasn’t that easy, otherwise I couldn’t be writing this blog post.

First of all, what is the index of “a” in “abc”? 0, obviously. QString( “abc” ).indexOf( “a” ) returns 0. Duh. Well only if you are a (non-Fortran) programmer. Ask a non-programmer (such as my wife) and they will say: 1, obviously. It is the first character. Duh. Excel FIND( “a”, “abc” ) returns 1.

Ok, most of my customers aren’t programmers. I can use 1-based indexing.

But then things get more tricky.

What is the index of an empty string in “abc”? 1 maybe, using 1-based indexing or maybe empty is not a valid value to pass.

What is the index of an empty string in an empty string? Hmm. I guess the empty string does contain an empty string, but at what index? 1 maybe, using 1-based indexing, except there isn’t a first position in the string. Again, maybe empty is not a valid value to pass.

I looked at the Qt C++ QString, Javascript string and Excel FIND() function for answers. But they each give different answers and some of them aren’t even internally consistent. This is a simple comparison of the first index or last index of text v1 in text v2 in each (Excel doesn’t have an equivalent of lastIndexOf() that I am aware of):

Changing these to make all the valid results 1-based and setting invalid results to -1, for easy comparison:

So:

  • Javascript disagrees with C++ QString and Excel on whether the first index of an empty string in an empty string is valid.
  • Javascript disagrees with C++ QString on whether the last index of an empty string in a non-empty string is the index of the last character or 1 after the last character.
  • C++ QString thinks the first index of an empty string in an empty string is the first character, but the last index of an empty string in an empty string is invalid.

It seems surprisingly difficult to come up with something intuitive and consistent! I think I am probably going to return an error message if either or both values are empty. This seems to me to be the only unambiguous and consistent approach.

I could return a 0 for a non-match or when one or both values are empty, but I think it is important to return different results in these 2 different cases. Also, not found and invalid feel qualitatively different to a calculated index to me, so shouldn’t be just another number. What do you think?

*** Update 14-Dec-2023 ***

I’ve been around the houses a bit more following feedback on this blog, the Easy Data Transform forum and Hacker News, and this is what I have decided:

IndexOf() v1 in v2:

v1        v2               IndexOf(v1,v2)
""        ""               1
"aba"     ""               
""        "aba"            1
"a"       "a"              1
"a"       "aba"            1
"x"       "y"              
"world"   "hello world"    7

This is the same as Excel FIND() and differs from Javascript indexOf() (ignoring the difference in 0 or 1 based indexing) only for “”.indexOf(“”) which returns -1 in Javascript.

LastIndexOf() v1 in v2:

v1        v2               LastIndexOf(v1,v2)
""        ""               1
"aba"     ""               
""        "aba"            4
"a"       "a"              1
"a"       "aba"            3
"x"       "y"              
"world"   "hello world"    7

This differs from Javascript lastIndexOf() (ignoring the difference in 0 or 1 based indexing) only for “”.lastIndexOf(“”), which returns -1 in Javascript.

Conceptually the index is the 1-based index of the first (IndexOf) or last (LastIndexOf) position where, if V1 was removed from that position, it would have to be re-inserted in order to revert to V2. Thanks to layer8 on Hacker News for clarifying this.

Javascript and C++ QString return an integer and both use -1 as a placeholder value. But Easy Data Transform is returning a string (that can be interpreted as a number, depending on the transform) so we aren’t bound to using a numeric value. So I have left it blank where there is no valid result.
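
For illustration, here is a minimal sketch of the chosen semantics on top of QString. This is not the actual Easy Data Transform code, and the function names are made up for this sketch:

#include <QString>

// 1-based IndexOf: position of the first occurrence of v1 in v2,
// returning a blank (empty) result where there is no valid index.
QString indexOf1Based( const QString &v1, const QString &v2 )
{
    int i = v2.indexOf( v1 );  // QString already gives 0 for "" in ""
    return ( i < 0 ) ? QString() : QString::number( i + 1 );
}

// 1-based LastIndexOf: position of the last occurrence of v1 in v2,
// returning a blank (empty) result where there is no valid index.
QString lastIndexOf1Based( const QString &v1, const QString &v2 )
{
    if ( v1.isEmpty() )
        return QString::number( v2.length() + 1 );  // "" in "" -> 1, "" in "aba" -> 4
    int i = v2.lastIndexOf( v1 );
    return ( i < 0 ) ? QString() : QString::number( i + 1 );
}

The special case for an empty v1 in LastIndexOf reflects the ‘1 after the last character’ convention chosen above.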

Now I’ve spent enough time down this rabbit hole and need to get on with something else! If you don’t like it you can always add an If with Calculate or use a Javascript transform to get the result you prefer.

*** Update 15-Dec-2023 ***

Quite a bit of debate on this topic on Hacker News.

Summerfest 2023

Summerfest 2023 is on. Loads of quality software for Mac and Windows from independent vendors, at a discount. This includes my own Easy Data Transform and Hyper Plan, which are on sale with a 25% discount.

Find out more at artisanalsoftwarefestival.com.

Winterfest 2022

Easy Data Transform and Hyper Plan Professional edition are both on sale for 25% off at Winterfest 2022. So now might be a good time to give them a try (both have free trials). There are also some other great products from other small vendors on sale, including Tinderbox, Scrivener and Devonthink. Some of the software is Mac only, but Easy Data Transform and Hyper Plan are available for both Mac and Windows (one license covers both OSs).

Easy Data Transform progress

I have been gradually improving my data wrangling tool, Easy Data Transform, putting out 70 public releases since 2019. While the product’s emphasis is on ease of use, rather than pure performance, I have been trying to make it fast as well, so it can cope with the multi-million row datasets customers like to throw at it. To see how I was doing, I did a simple benchmark of the most recent version of Easy Data Transform (v1.37.0) against several other desktop data wrangling tools. The benchmark did a read, sort, join and write of a 1 million row CSV file. I did the benchmarking on my Windows development PC and my Mac M1 laptop.

Easy Data Transform screenshot

Here is an overview of the results:

Time by task (seconds), on Windows without Power Query (smaller is better):

data wrangling/ETL benchmark Windows

I have left Excel Power Query off this graph, as it is so slow you can hardly see the other bars when it is included!

Time by task (seconds) on Mac (smaller is better):

data wrangling/ETL benchmark M1 Mac

Memory usage (MB), Windows vs Mac (smaller is better):

data wrangling/ETL benchmark memory Windows vs Mac

So Easy Data Transform is nearly as fast as its nearest competitor, Knime, on Windows and a fair bit faster on an M1 Mac. It also uses a lot less memory than Knime. However, we have got some way to go to catch up with the Pandas library for Python and the data.table package for R, when it comes to raw performance. Hopefully I can get nearer to their performance in time. I was forbidden from including benchmarks for Tableau Prep and Alteryx by their licensing terms, which seems unnecessarily restrictive.

Looking at just the Easy Data Transform results, it is interesting to notice that a newish MacBook Air M1 laptop is significantly faster than an AMD Ryzen 7 desktop PC from a few years ago.

Windows vs Mac M1 benchmark

See the full comparison:

Comparison of data wrangling/ETL tools : R, Pandas, Knime, Power Query, Tableau Prep, Alteryx and Easy Data Transform, with benchmarks

Got some data to clean, merge, reshape or analyze? Why not download a free trial of Easy Data Transform? No sign up required.

Creating a Mac Universal binary for Intel and ARM M1/M2 with Qt

Apple has transitioned Macs from Intel to ARM (M1/M2) chips. In the process it has provided an emulation layer (Rosetta2) to ensure that the new ARM Macs can still run applications created for Intel Macs. The emulation works very well, but is quoted to be some 20% slower than running native ARM binaries. That may not seem like a lot, but it is significant on processor intensive applications such as my own data wrangling software, which often processes datasets with millions of rows through complex sequences of merging, splitting, reformatting, filtering and reshaping. Also people who have just spent a small fortune on a shiny new ARM Mac can get grumpy about not having a native ARM binary to run on it. So I have been investigating moving Easy Data Transform from an Intel binary to a Universal (‘fat'[1]) binary containing both Intel and ARM binaries. This is a process familiar from moving my seating planner software for Mac from PowerPC to Intel chips some years ago. Hopefully I will have retired before the next chip change on the Mac.

My software is built on top of the excellent Qt cross-platform framework. Qt announced support for Mac Universal binaries in Qt 6.2 and Qt 5.15.9. I am sticking with Qt 5 for now, because it better supports multiple text encodings and because I don’t see any particular advantage to switching to Qt 6 yet. But, there is a wrinkle. Qt 5.15.3 and later are only available to Qt customers with commercial licenses. I want to use the QtCharts component in Easy Data Transform v2, and QtCharts requires a commercial license (or GPL, which is a no-go for me). I also want access to all the latest bug fixes for Qt 5. So I decided to switch from the free LGPL license and buy a commercial Qt license. Thankfully I was eligible for the Qt small business license which is currently $499 per year. The push towards commercial licensing is controversial with Qt developers, but I really appreciate Qt and all the work that goes into it, so I am happy to support the business (not enough to pay the eye-watering fee for a full enterprise license though!).

Moving from producing an Intel binary using LGPL Qt to producing a Universal binary using commercial Qt involved several major stumbling points that took me hours and a lot of googling to sort out. I’m going to spell them out here to save you that pain. You’re welcome.

  • The latest Qt 5 LTS releases are not available via the Qt maintenance tool if you have open source Qt installed. After you buy your commercial licence you need to delete your open source installation and all the associated license files. Here is the information I got from Qt support:
I assume that you were previously using open source version, is that correct?

Qt 5.15.10 should be available through the maintenance tool but it is required to remove the old open source installation completely and also remove the open source license files from your system.

So, first step is to remove the old Qt installation completely. Then remove the old open source licenses which might exist. Instructions for removing the license files:

****************************
Unified installer/maintenancetool/qtcreator will save all licenses (downloaded from the used Qt Account) inside the new qtlicenses.ini file. You need to remove the following files to fully reset the license information.

Windows
"C:/Users/%USERNAME%/AppData/Roaming/Qt/qtlicenses.ini"
"C:/Users/%USERNAME%/AppData/Roaming/Qt/qtaccount.ini"

Linux
"/home/$USERNAME/.local/share/Qt/qtlicenses.ini"
"/home/$USERNAME/.local/share/Qt/qtaccount.ini"

OS X
"/Users/$USERNAME/Library/Application Support/Qt/qtlicenses.ini"
"/Users/$USERNAME/Library/Application Support/Qt/qtaccount.ini"

As a side note: If the files above cannot be found $HOME/.qt-license(Linux/macOS) or %USERPROFILE%\.qt-license(Windows) file is used as a fallback. .qt-license file can be downloaded from Qt Account. https://account.qt.io/licenses
Be sure to name the Qt license file as ".qt-license" and not for example ".qt-license.txt".

***********************************************************************

After removing the old installation and the license files, please download the new online installer via your commercial Qt Account.
You can login there at:
https://login.qt.io/login

After installing Qt with commercial license, it should be able to find the Qt 5.15.10 also through the maintenance tool in addition to online installer.
  • Then you need to download the commercial installer from your online Qt account and reinstall all the Qt versions you need. Gigabytes of it. Time to drink some coffee. A lot of coffee.
  • In your .pro file you need to add:
macx {
    QMAKE_APPLE_DEVICE_ARCHS = x86_64 arm64
}
  • Note that the above doubles the build time of your application, so you probably don’t want it set for day to day development.
  • You can use macdeployqt to create your deployable Universal .app but, and this is the critical step that took me hours to work out, you need to use <QtDir>/macos/bin/macdeployqt not <QtDir>/clang_64/bin/macdeployqt . Doh!
  • You can check the .app is Universal using the lipo command, e.g.:
lipo -detailed_info EasyDataTransform.app/Contents/MacOS/EasyDataTransform
  • I was able to use my existing practice of copying extra files (third party libraries, help etc) into the .app file and then digitally signing everything using codesign --deep [2]. Thankfully the only third party library I use apart from Qt (the excellent libXL library for Excel) is available as a Universal framework.
  • I notarize the application, as before.

I did all the above on an Intel iMac using the latest Qt 5 LTS release (Qt 5.15.10) and XCode 13.4 on macOS 12. I then tested it on an ARM MacBook Air. No doubt you can also build Universal binaries on an ARM Mac.

Unsurprisingly the Universal app is substantially larger than the Intel-only version. My Easy Data Transform .dmg file (which also includes a lot of help documentation) went from ~56 MB to ~69 MB. However that is still positively anorexic compared to many bloated modern apps (looking at you Electron).

A couple of tests I did on an ARM MacBook Air showed ~16% improvement in performance. For example joining two 500,000 row x 10 column tables went from 4.5 seconds to 3.8 seconds. Obviously the performance improvement depends on the task and the system. One customer reported batch processing 3,541 JSON Files and writing the results to CSV went from 12.8 to 8.1 seconds, a 37% improvement.

[1] I’m not judging.

[2] Apparently the use of --deep is frowned on by Apple. But it works (for now anyway). Bite me, Apple.

Summerfest 2022

Easy Data Transform and Hyper Plan Professional edition are both on sale for 25% off at Summerfest 2022. So now might be a good time to give them a try (both have free trials). There are also some other great products from other small vendors on sale, including Tinderbox, Scrivener and Devonthink. Some of the software is Mac only, but Easy Data Transform and Hyper Plan are available for both Mac and Windows (one license covers both). Sale ends 12th July.

Why isn’t there a decent file format for tabular data?

Tabular data is everywhere. I support reading and writing tabular data in various formats in all 3 of my software applications. It is an important part of my data transformation software. But all the tabular data formats suck. There doesn’t seem to be anything that is reasonably space efficient, simple and quick to parse, and text based (not binary), so you can view and edit it with a standard editor.

Most tabular data currently gets exchanged as: CSV, Tab separated, XML, JSON or Excel. And they are all highly sub-optimal for the job.

CSV is a mess. One quote in the wrong place and the file is invalid. It is difficult to parse efficiently using multiple cores, due to the quoting (you can’t start parsing from part way through a file). Different quoting schemes are in use. You don’t know what encoding it is in. Use of separators and line endings are inconsistent (sometimes comma, sometimes semicolon). Writing a parser to handle all the different dialects is not at all trivial. Microsoft Excel and Apple Numbers don’t even agree on how to interpret some edge cases for CSV.

Tab separated is a bit better than CSV. But it can’t store tabs and still has issues with line endings, encodings etc.

XML and JSON are tree structures and not suitable for efficiently storing tabular data (plus other issues).

There is Parquet. It is very efficient with its columnar storage and compression. But it is binary, so can’t be viewed or edited with standard tools, which is a pain.

Don’t even get me started on Excel’s proprietary, ghastly binary format.

Why can’t we have a format where:

  • Encoding is always UTF-8
  • Values stored in row major order (row 1, row 2 etc)
  • Columns are separated by \u001F (ASCII unit separator)
  • Rows are separated by \u001E (ASCII record separator)
  • Er, that’s the entire specification.

No escaping. If you want to put \u001F or \u001E in your data – tough, you can’t. Use a different format.

It would be reasonably compact, efficient to parse and easy to manually edit (Notepad++ shows the unit separator as a ‘US’ symbol). You could write a fast parser for it in minutes. Typing \u001F or \u001E in some editors might be a faff, but it is hardly a showstopper.
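
As a rough illustration, here is a minimal C++ sketch of such a parser (just a sketch of the idea, not production code; the function and variable names are made up):

#include <iostream>
#include <string>
#include <vector>

// Minimal sketch of a parser for the format described above:
// fields separated by \u001F (unit separator), records separated by \u001E (record separator).
// No escaping, so parsing is a single forward pass over the characters.
std::vector<std::vector<std::string>> parseUsv( const std::string &text )
{
    std::vector<std::vector<std::string>> rows( 1 );
    rows.back().emplace_back();
    for ( char c : text ) {
        if ( c == '\x1F' )
            rows.back().emplace_back();            // start a new field
        else if ( c == '\x1E' ) {
            rows.emplace_back();                   // start a new record
            rows.back().emplace_back();
        } else
            rows.back().back() += c;               // part of the current field
    }
    return rows;
}

int main()
{
    const std::string usv = "name\x1F" "age\x1E" "Alice\x1F" "42";
    for ( const auto &row : parseUsv( usv ) ) {
        for ( const auto &field : row )
            std::cout << "[" << field << "] ";
        std::cout << "\n";
    }
    return 0;
}

Because there is no quoting or escaping, parsing could also be split across multiple cores by cutting the input at record separators, which is exactly what the quoting in CSV prevents.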

It could be called something like “unicode separated value” (hat tip to @fakeunicode on Twitter for the name) or “unit separated value” with file extension .usv. Maybe a different extension could be used when values are stored in column major order (column 1, column 2 etc).

Is there nothing like this already? Maybe there is and I just haven’t heard of it. If not, shouldn’t there be?

And yes I am aware of the relevant XKCD cartoon ( https://xkcd.com/927/ ).

** Edit 4-May-2022 **

“Javascript” -> “JSON” in para 5.

It has been pointed out that the above will give you a single line of text in an editor, which is not great for human readability. A quick fix for this would be to make the record delimiter a \u001E character followed by an LF character. Any LF that comes immediately after an \u001E would be ignored when parsing. Any LF not immediately after an \u001E is part of the data. I don’t know about other editors, but it is easy to view and edit in Notepad++.

Making explainer videos for your software

If you want to find out how to do something, such as do a mail merge in Word or fix a leaky valve on a radiator, where do you look first? Probably Youtube. Videos are an excellent way to explain something. More bandwidth than text and more scalable than a 1-to-1 demo.

I’ve done explainer videos for all 3 of my products. But I found it a real struggle. I would write a script and then try to read the script and do the screencast at the same time and do it all in one take. I would stutter and stumble and it would take multiple attempts. It took ages and results were passable at best. I got some better software to edit the stumbles out, so I didn’t have to do it in one go. But it still took me a fair few attempts and quite a bit of editing. It became one of my least favourite things to do and so I did less and less of it.

Recently, I came across these slides on video by Christian Genco. These and subsequent Twitter exchanges with Christian convinced me that I should stop being a perfectionist about video and just start cranking them out on the grounds that a ‘good enough’ video is better than no video at all (‘the perfect is the enemy of the good’) and I would get better at it over time. As Stalin supposedly said “Quantity has a quality all of its own”.

So I have ditched the scripts and the perfectionism and I’ve managed to create 13 short Easy Data Transform explainer videos in the last week or so. And I am getting faster at it and (hopefully) a bit more polished. I’m definitely not an expert on this (and probably never will be) but here are some tips I have picked up along the way:

  • Get some decent software. I use Camtasia on Windows and it seems pretty good.
  • Try to talk slower.
  • Try to sound upbeat (not easy if you are British and could voice double for Eeyore).
  • Try not to move the mouse and talk at the same time. This makes editing a lot easier. Some people like to do the audio and the visual separately, but that seems like too much hassle.
  • If you stumble, just take a deep breath, say it again and then edit the stumble out later.
  • Get a reasonable mic. I have a snowball mic on a cantilevered stand. I covered it with a thin cloth to try to reduce pops.
  • The occasional ‘um’ is fine.
  • Have a checklist of things to do for each video, so you don’t forget anything (such as disabling your phrase expander software or muting the phone).
My setup. Note the high tech use of rubber bands.

I’m lucky to have a very quiet office, so I don’t have much background noise to contend with.

Using Camtasia I can easily add intros and outros, edit out stumbles and add various effects, such as mouse position highlighting and movement smoothing. I just File>Save as the previous project so that I don’t have to re-add the intro and outro. Unsurprisingly, Camtasia have lots of explainer videos. I wish there was a way to automatically ‘ripple delete’ any sections where there is no audio and no mouse movement (if there is, I haven’t found it). Some people recommend descript.com. It looks interesting, but I haven’t tried it.

I did an A/B test of recordings with my Sennheiser headset mic against my Snowball mic and the consensus was that the headset was ok but the Snowball mic sound quality was better.

Some people prefer to use synthetic voices, instead of their own voice. While these synthetic voices have improved a lot, they never sound quite right to me. Also it must be time consuming to type out all the text. Or you can pay to have a professional voiceover done, but this is surprisingly expensive (around $100 per minute, last time I checked) and almost certainly more time consuming than doing it yourself.

Some people aren’t confident about speaking on videos because they are not native speakers of that language and have an accent. Personally accents don’t bother me at all. In fact I like hearing English spoken with a foreign accent, as long as I can understand it. Also I think there is an authenticity to hearing a creator talk about their product in their own voice.

I’m not a big fan of music on explainer videos, so I don’t add any.

I let Youtube generate automatic captions for people that want them (which could be people in busy offices and on trains and planes, as well as the hearing impaired). They aren’t perfect, but they are good enough.

My videos are aimed at least as much at finding new users as helping existing users. So I make sure I research keyword terms (mostly in Google Adwords) before I decide which videos to make and what to title them. Currently I am targeting very specific keyword searches, such as How to convert CSV to Markdown. Easy Data Transform can do a lot more than just format conversion, but from an SEO point of view it is better to target the phrases that people are actually searching for.

I upload the videos as 1080P (1920 x 1080 pixels) on to the Easy Data Transform Youtube channel and onto my screencast.com account (which I pay a yearly fee for). I then embed the screencast.com videos on relevant easydatatransform.com pages using IFRAME embed codes created by screencast.com. I don’t use the Youtube videos on my website, because I don’t want people to be distracted by Youtube ads and ‘you may also like’ recommendations. They might be showing a competitor! I don’t host the videos on the website itself as I worry that might slow down the website. I also link to the videos in screencast.com from my help documentation, as appropriate.

Some people like to embed video of themselves in screencasts, in the hope of making it more engaging. But personally I want people to concentrate on my software, rather than being distracted by the horror of my face. And not having to comb my hair or look smart was part of what got me into running my own software business.

In the next few months I will be checking my analytics to see how many views these videos get and whether they increase the time on page and reduce the bounce rate.

If you can spare a few seconds to go to my Youtube page and ‘like’ a video or two or subscribe, that would be a big help!

Note that some of the above doesn’t apply when you are creating a demo video for your home page, rather than an explainer video. Your main demo video should be slick and polished.