Category Archives: data transformation

Why you can’t parse CSV with a regular expression

Regular expressions are a very useful tool in a programmer’s toolbox. But they can’t do everything. And one of the things they can’t do is to reliably parse CSV (comma separated value) files. This is because a regular expression doesn’t store state. You need a state machine (or something equivalent) to parse a CSV file.

For example, consider this (very short) CSV file (3 double quotes + 1 comma + 3 double quotes):

""","""

This is correctly interpreted as:

quote to start the data value + escaped quote + comma + escaped quote + quote to end the data value

E.g. a single value of:

","

How each character is interpreted depends on what characters come before and after it. E.g. the first quote puts you into an ‘inside data’ state. The second quote puts you into a ‘might be an escape for a following quote, or might be the end of the data’ state. The third quote resolves that ambiguity and puts you back into an ‘inside data’ state.
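
To make this concrete, here is a minimal sketch (my illustration, not production code) of such a state machine in Python. It splits a single CSV line into fields, tracking an ‘inside quotes’ state and treating "" as an escaped quote. A real parser also has to handle line breaks inside quoted values, so it would work on the whole stream rather than line by line.

def parse_csv_line(line):
    # Minimal state machine: one pass over the characters, with a flag that
    # records whether we are currently inside a quoted value.
    fields, value = [], []
    in_quotes = False
    i = 0
    while i < len(line):
        c = line[i]
        if in_quotes:
            if c == '"':
                # A quote inside a quoted value: either an escaped quote ("")
                # or the end of the value, depending on the NEXT character.
                if i + 1 < len(line) and line[i + 1] == '"':
                    value.append('"')
                    i += 1
                else:
                    in_quotes = False
            else:
                value.append(c)
        else:
            if c == '"':
                in_quotes = True
            elif c == ',':
                fields.append(''.join(value))
                value = []
            else:
                value.append(c)
        i += 1
    fields.append(''.join(value))
    return fields

print(parse_csv_line('""","""'))   # ['","'] - a single value of ","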

No matter how complicated a regex you come up with, it will always be possible to create a CSV file that your regex can’t correctly parse. And once the parsing goes wrong, everything after that point is probably garbage.

You can write a regex that can handle a CSV file where you are guaranteed there are no commas, quotes or carriage returns in the data values. But commas, quotes and carriage returns in data values are perfectly valid in CSV files. So a regex is only ever going to handle a subset of all the possible well-formed CSV files.

Note that you can parse a TSV (tab separated value) file with a regex, as TSV files are (generally!) not allowed to contain tabs or carriage returns in data and therefore don’t need escaping.

See also on Stack Overflow:

Using regular expressions to parse HTML: why not?

Winterfest 2022

Easy Data Transform and Hyper Plan Professional edition are both on sale for 25% off at Winterfest 2022. So now might be a good time to give them a try (both have free trials). There are also some other great products from other small vendors on sale, including Tinderbox, Scrivener and Devonthink. Some of the software is Mac only, but Easy Data Transform and Hyper Plan are available for both Mac and Windows (one license covers both OSs).

Easy Data Transform progress

I have been gradually improving my data wrangling tool, Easy Data Transform, putting out 70 public releases since 2019. While the product’s emphasis is on ease of use, rather than pure performance, I have been trying to make it fast as well, so it can cope with the multi-million row datasets customers like to throw at it. To see how I was doing, I did a simple benchmark of the most recent version of Easy Data Transform (v1.37.0) against several other desktop data wrangling tools. The benchmark did a read, sort, join and write of a 1 million row CSV file. I did the benchmarking on my Windows development PC and my Mac M1 laptop.
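
For context, the four benchmark steps correspond roughly to the following Pandas pipeline (a sketch for illustration only; the file and column names are invented, not the actual benchmark schema):

import pandas as pd

main = pd.read_csv("main_1m_rows.csv")                      # read a 1 million row CSV file
main = main.sort_values("sort_column")                      # sort
lookup = pd.read_csv("lookup.csv")
joined = main.merge(lookup, on="key_column", how="inner")   # join on a key column
joined.to_csv("output.csv", index=False)                    # write the result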

[Screenshot: Easy Data Transform]

Here is an overview of the results:

Time by task (seconds), on Windows without Power Query (smaller is better):

[Chart: data wrangling/ETL benchmark, Windows]

I have left Excel Power Query off this graph, as it is so slow you can hardly see the other bars when it is included!

Time by task (seconds) on Mac (smaller is better):

[Chart: data wrangling/ETL benchmark, M1 Mac]

Memory usage (MB), Windows vs Mac (smaller is better):

[Chart: data wrangling/ETL benchmark memory usage, Windows vs Mac]

So Easy Data Transform is nearly as fast as its nearest competitor, Knime, on Windows and a fair bit faster on an M1 Mac. It also uses a lot less memory than Knime. However, we have some way to go to catch up with the Pandas library for Python and the data.table package for R when it comes to raw performance. Hopefully I can get nearer to their performance in time. I was forbidden from including benchmarks for Tableau Prep and Alteryx by their licensing terms, which seems unnecessarily restrictive.

Looking at just the Easy Data Transform results, it is interesting to notice that a newish MacBook Air M1 laptop is significantly faster than an AMD Ryzen 7 desktop PC from a few years ago.

[Chart: Windows vs Mac M1 benchmark, Easy Data Transform only]

See the full comparison:

Comparison of data wrangling/ETL tools : R, Pandas, Knime, Power Query, Tableau Prep, Alteryx and Easy Data Transform, with benchmarks

Got some data to clean, merge, reshape or analyze? Why not download a free trial of Easy Data Transform? No sign up required.

Why isn’t there a decent file format for tabular data?

Tabular data is everywhere. I support reading and writing tabular data in various formats in all 3 of my software applications. It is an important part of my data transformation software. But all the tabular data formats suck. There doesn’t seem to be anything that is reasonably space efficient, simple and quick to parse, and text based (not binary), so you can view and edit it with a standard editor.

Most tabular data currently gets exchanged as: CSV, Tab separated, XML, JSON or Excel. And they are all highly sub-optimal for the job.

CSV is a mess. One quote in the wrong place and the file is invalid. It is difficult to parse efficiently using multiple cores, due to the quoting (you can’t start parsing from part way through a file). Different quoting schemes are in use. You don’t know what encoding it is in. Use of separators and line endings is inconsistent (sometimes comma, sometimes semicolon). Writing a parser to handle all the different dialects is not at all trivial. Microsoft Excel and Apple Numbers don’t even agree on how to interpret some edge cases for CSV.

Tab separated is a bit better than CSV. But it can’t store tabs and still has issues with line endings, encodings etc.

XML and JSON are tree structures and not suitable for efficiently storing tabular data (plus other issues).

There is Parquet. It is very efficient with its columnar storage and compression. But it is binary, so it can’t be viewed or edited with standard tools, which is a pain.

Don’t even get me started on Excel’s proprietary, ghastly binary format.

Why can’t we have a format where:

  • Encoding is always UTF-8
  • Values stored in row major order (row 1, row 2 etc)
  • Columns are separated by \u001F (ASCII unit separator)
  • Rows are separated by \u001E (ASCII record separator)
  • Er, that’s the entire specification.

No escaping. If you want to put \u001F or \u001E in your data – tough, you can’t. Use a different format.

It would be reasonably compact, efficient to parse and easy to manually edit (Notepad++ shows the unit separator as a ‘US’ symbol). You could write a fast parser for it in minutes. Typing \u001F or \u001E in some editors might be a faff, but it is hardly a showstopper.
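
As a rough illustration, a toy reader and writer for such a format fits in a few lines of Python (this is just my sketch of the idea above, not an existing library):

US, RS = "\u001f", "\u001e"   # ASCII unit separator and record separator

def write_usv(rows, path):
    # rows is a list of lists of strings; no escaping, per the specification above.
    with open(path, "w", encoding="utf-8", newline="") as f:
        f.write(RS.join(US.join(row) for row in rows))

def read_usv(path):
    with open(path, encoding="utf-8", newline="") as f:
        text = f.read()
    return [record.split(US) for record in text.split(RS)] if text else []

write_usv([["name", "notes"], ["Ann", 'said "hi", left early']], "example.usv")
print(read_usv("example.usv"))   # [['name', 'notes'], ['Ann', 'said "hi", left early']]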

It could be called something like “unicode separated value” (hat tip to @fakeunicode on Twitter for the name) or “unit separated value” with file extension .usv. Maybe a different extension could be used when values are stored in column major order (column 1, column 2 etc).

Is there nothing like this already? Maybe there is and I just haven’t heard of it. If not, shouldn’t there be?

And yes I am aware of the relevant XKCD cartoon ( https://xkcd.com/927/ ).

** Edit 4-May-2022 **

“Javascript” -> “JSON” in para 5.

It has been pointed out that the above will give you a single line of text in an editor, which is not great for human readability. A quick fix for this would be to make the record delimiter a \u001E character followed by an LF character. Any LF that comes immediately after an \u001E would be ignored when parsing. Any LF not immediately after an \u001E is part of the data. I don’t know about other editors, but it is easy to view and edit in Notepad++.
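
Continuing the toy Python sketch from earlier, honouring that rule when reading is a one-line tweak (again, just an illustration):

US, RS = "\u001f", "\u001e"

def read_usv_readable(path):
    with open(path, encoding="utf-8", newline="") as f:
        records = f.read().split(RS)
    # An LF immediately after a record separator is cosmetic, not data, so drop it.
    records = records[:1] + [r[1:] if r.startswith("\n") else r for r in records[1:]]
    return [record.split(US) for record in records]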

Adding colour schemes to Easy Data Transform

[Screenshot: Easy Data Transform user interface]

The colours used in Easy Data Transform make no difference to the output. But the colours are an important part of a user interface, especially when you are using a tool for significant amounts of time. First impressions of the user interface are also important from a commercial point of view.

But colour is a very personal thing. Some people are colour-blind. Some people prefer light palettes and others dark palettes. Some people like lots of contrast and others don’t. So I am going to allow the user to fully customize the Center pane colours in Easy Data Transform.

I also want to include some standard colour schemes, to get people started. Looking around at other software it seems that the ‘modern’ trend is for pastel colours, invisible borders and subtle shadows. This looks lovely, but it is a bit low contrast for my tired old eyes. So I have tried to create a range of designs in the hope that everyone will like at least one. Below are the standard schemes I have come up with so far. They all stick with the convention pink=input, blue=transform, green=output.

Which is your favourite? (Click the images to enlarge.)

Is there a tool that you use day to day that has a particularly nice colour scheme?

I hope to also add an optional dark theme for the rest of the UI in due course (Qt allowing).

Easy Data Transform v1.6.0

I have been working hard on Easy Data Transform. The last few releases have included:

  • a 64 bit version for Windows
  • JSON, XML and vCard format input
  • output of nested JSON and XML
  • a batch processing mode
  • command line arguments
  • keyboard shortcuts
  • various improvements to transforms

Plus lots of other improvements.

The installer now includes 32 and 64 bit versions of Easy Data Transform for Windows and installs the one appropriate to your operating system. This is pretty easy to do with the Inno Setup installer. You just need to use Check: Is64BitInstallMode, for example:

[Files]
; 32 bit build, installed only when NOT running in 64 bit install mode
Source: "..\binaries\windows\program\{#MyAppExeName}"; DestDir: "{app}"; Flags: ignoreversion; Check: not Is64BitInstallMode
; 64 bit build, installed only in 64 bit install mode
Source: "..\binaries\windows\program64\{#MyAppExeName}"; DestDir: "{app}"; Flags: ignoreversion; Check: Is64BitInstallMode

But it does pretty much double the size of the installer (from 25MB to 47MB in my case).

The 32 bit version is restricted to addressing 4GB of memory. In practice, this means you may run out of memory if you get much above a million data values. The 64 bit version is able to address as much memory as your system can provide. So datasets with tens or hundreds of millions of values are now within reach.

I have kept the 32 bit version for compatibility reasons. But data on the percentage of customers still using 32 bit Windows is surprisingly hard to come by. Some figures I have seen suggest <5%. So I will probably drop 32 bit Windows support at some point. Apple, being Apple, made the transition to a 64 bit OS much more aggressively and so the Mac version of Easy Data Transform has always been 64 bit only.

I have also been doing some benchmarking and Easy Data Transform is fast. On my development PC it can perform an inner join of 2 datasets of 900 thousand rows x 14 columns in 5 seconds. The same operation on the same machine in Excel Power Query took 4 minutes 20 seconds. So Easy Data Transform is some 52 times faster than Excel Power Query. Easy Data Transform is written in C++ with reference counting and hashing for performance, but I am surprised by just how much faster it is.

The Excel Power Query user interface also seems very clunky by comparison. The process to join two CSV files is:

  • Start Excel.
  • Start Power Query.
  • Load the two data files into Excel Power Query.
  • Choose the key columns.
  • Choose merge.
  • Choose inner join. Wait 4 minutes 20 seconds.
  • Load the results back into Excel. Wait several more minutes.
  • Save to a .csv file.
  • Total time: ~600 seconds

Whereas in Easy Data Transform you just:

  • Start Easy Data Transform.
  • Drag the 2 files onto the center pane.
  • Click ‘join’.
  • Select the key columns. Wait 5 seconds.
  • Click ‘To file’ and save to a .csv file.
  • Total time: ~30 seconds

[Screenshot: join operation in Easy Data Transform]

If you have some data to transform, clean or analyze, please give Easy Data Transform a try. There is a fully functional free trial. Email me if you have any questions.

 

Analyzing COVID19 data with Easy Data Transform

I have continued to make lots of improvements to Easy Data Transform.

Here is a video of me using Easy Data Transform to analyze the data.europa.eu COVID19 dataset. Hopefully it gives an idea of what the software is capable of.

[Video: COVID19 data wrangling with Easy Data Transform]

Easy Data Transform v1.0.0 released

[Screenshot: Easy Data Transform v1.0.0]

I finally released a paid version of Easy Data Transform today, for both Windows and Mac. I am very pleased with how it has turned out. Obviously it is only v1.0.0, so there are plenty of additional features I could add, including:

  • Batch processing
  • Support for JSON, XML, SQLite input/output
  • More transforms
  • A 64 bit version for Windows
  • A Linux version

But I need to listen carefully to prospective customers to decide which additional features to prioritize in future releases. It might be something I haven’t even thought of.

But v1.0.0 already has a really useful core of features. And, if you aren’t embarrassed by v1.0, you didn’t release it early enough. That said, I haven’t cut corners on quality. It has proper documentation and has been through extended beta testing, dogfooding and several rounds of usability and third party testing.

The product has a fully-functional 7 (non-consecutive) day free trial. I think that is enough for prospective customers to decide if it does what they need. I also have a 60 day money-back guarantee.

I have decided to go with a subscription model: $99 / €90 / £75 + tax per person per year. Which covers up to 3 computers. At this price point I can afford some paid promotion and to provide a decent level of support. I am not offering a monthly subscription, as I don’t really want people who are going to pay for 1 month (to do their annual TPS reports) and then cancel.

Have you got some data you need to merge, clean, reformat or de-dupe? Give it a try. You can get a 25% discount if you buy a subscription by the 27th December 2019 using this link.

 

Eating my own dogfood

There is a story that a president of a pet food company ate some of his own dog food, to show how good it was. I’m not sure how tasty dog food really needs to be, given that dogs are happy to lick their own backsides. But his commitment is admirable. The least we can do as software developers is to use our own software as much as possible. After all, if you don’t use it, how can you expect anyone else to?

In that spirit I have been using my new Easy Data Transform product as much as possible. The biggest project so far has been merging two databases, for a charity that I volunteer at. I created an Airtable database for the charity. But volunteer information was already in a separate CRM. I imported relevant CRM data into Airtable, but the CRM system remained in use for emailing volunteers for a couple of years while I concentrated on Airtable and other tasks. In that time the Airtable database has become a roaring success for the charity. So we eventually decided to retire the CRM system and also use Airtable as our CRM.

Consequently I had to merge the latest CRM data into Airtable. I exported the relevant data from each as a CSV and then proceeded to merge the mailing list tags from the CRM into a new column in Airtable. I also created tables of discrepancies for the charity staff to work through. For example, where the telephone numbers or emails had been added or updated in one database, but not the other.

When I had initially imported the CRM data into Airtable, I had imported the CRM record ID. So those records were easy to match between Airtable and the CRM using a simple join on the ID. However, any records added subsequently to Airtable or the CRM did not have matching IDs. So I had to match those by first name + last name or by email address. The data was quite ‘dirty’, as is invariably the case with real world data. A phone number may be “0123 456 789” on one system and “01 23456789” on another. A volunteer might be “Chris” in one database and “Christopher” in another. Also some contacts had multiple entries in the CRM system. So this was not a trivial problem.
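
To give a flavour of the kind of clean-up involved, one common approach (shown here as an illustrative Python sketch, not the actual transforms used) is to normalise values before comparing them, e.g. stripping phone formatting and lower-casing names and emails:

import re

def normalise_phone(phone):
    # "0123 456 789" and "01 23456789" both become "0123456789"
    return re.sub(r"\D", "", phone or "")

def match_key(first_name, last_name, email):
    # Prefer the email address; fall back to "first last" if it is missing.
    email = (email or "").strip().lower()
    return email if email else f"{first_name.strip().lower()} {last_name.strip().lower()}"

print(normalise_phone("0123 456 789") == normalise_phone("01 23456789"))   # True
print(match_key("Chris", "", "chris@example.org"))                         # chris@example.org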

[Screenshot: the database merge project in Easy Data Transform]

You can get an idea of what was involved from the screenshot above. The two pink input nodes are the 2 databases exported as CSV files, the blue nodes are various transforms (joining, filtering, removing spaces etc) and the green nodes are the outputs (e.g. lists of telephone and email differences, lists of people in one database, but not the other etc). Quite a lot of the transforms are just column renames (in future I should probably support renaming multiple columns in one transform).

I think this would have been a horrific task using Excel, SQL, Beyond Compare or any of the other tools I had to hand, amazing tools as they may be for other tasks. But Easy Data Transform performed brilliantly, even if I do say so myself. It was particularly helpful that you could see the whole process step-by-step and backtrack or branch at any point without losing previous changes.

While eating my own dogfood, I found one bug (related to carriage returns inside CSV records) and quite a few minor annoyances. These have now been fixed in the latest release. I also added a new ‘Compare Columns’ transform, which was really useful for this sort of work. So it was a very useful experience and I really recommend ‘eating your own dogfood’ as much as you can, along with usability testing.

Have you got some data that needs cleaning, merging, de-duping or filtering? Analytics, log files, emailing lists, databases? Of course you do! Why not give Easy Data Transform a try. It is free while it is in beta. Let me know how you get on.