Easy Data Transform v1.6.0

I have been working hard on Easy Data Transform. The last few releases have included:

  • a 64 bit version for Windows
  • JSON, XML and vCard format input
  • output of nested JSON and XML
  • a batch processing mode
  • command line arguments
  • keyboard shortcuts
  • various improvements to transforms

Plus lots of other improvements.

The installer now includes 32 and 64 bit versions of Easy Data Transform for Windows and installs the one appropriate to your operating system. This is pretty easy to do with the Inno Setup installer. You just need to enable 64 bit install mode in the [Setup] section and then use Check: Is64BitInstallMode, for example:

[Setup]
ArchitecturesInstallIn64BitMode=x64

[Files]
Source: "..\binaries\windows\program\{#MyAppExeName}"; DestDir: "{app}"; Flags: ignoreversion; Check: not Is64BitInstallMode
Source: "..\binaries\windows\program64\{#MyAppExeName}"; DestDir: "{app}"; Flags: ignoreversion; Check: Is64BitInstallMode

But it does pretty much double the size of the installer (from 25MB to 47MB in my case).

The 32 bit version is restricted to addressing 4GB of memory. In practice, this means you may run out of memory if you get much above a million data values. The 64 bit version is able to address as much memory as your system can provide. So datasets with tens or hundreds of millions of values are now within reach.

I have kept the 32 bit version for compatibility reasons. But data on the percentage of customers still using 32 bit Windows is surprisingly hard to come by. Some figures I have seen suggest <5%. So I will probably drop 32 bit Windows support at some point. Apple, being Apple, made the transition to a 64 bit OS much more aggressively and so the Mac version of Easy Data Transform has always been 64 bit only.

I have also been doing some benchmarking and Easy Data Transform is fast. On my development PC it can perform an inner join of 2 datasets of 900 thousand rows x 14 columns in 5 seconds. The same operation on the same machine in Excel Power Query took 4 minutes 20 seconds. So Easy Data Transform is some 52 times faster than Excel Power Query. Easy Data Transform is written in C++ with reference counting and hashing for performance, but I am surprised by just how much faster it is.
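
The core technique behind a fast join is nothing exotic. Here is a minimal C++ sketch of a hash join (illustrative only, with a hypothetical Row type and innerJoin function, not Easy Data Transform's actual code): index one dataset by its key column, then probe that index once per row of the other dataset, giving roughly O(n+m) behaviour instead of the O(n×m) of a naive row-by-row comparison.

#include <string>
#include <unordered_map>
#include <vector>

using Row = std::vector<std::string>;

// Inner join two tables on the given key columns.
// Build phase: index the right-hand table by key value, O(m).
// Probe phase: look up each left-hand row's key, O(n).
std::vector<Row> innerJoin(const std::vector<Row>& left, size_t leftKey,
                           const std::vector<Row>& right, size_t rightKey)
{
    std::unordered_multimap<std::string, const Row*> index;
    index.reserve(right.size());
    for (const Row& r : right)
        index.emplace(r[rightKey], &r);

    std::vector<Row> result;
    for (const Row& l : left) {
        auto range = index.equal_range(l[leftKey]);
        for (auto it = range.first; it != range.second; ++it) {
            Row joined = l;  // copy the left-hand columns...
            // ...then append the matching right-hand columns
            joined.insert(joined.end(), it->second->begin(), it->second->end());
            result.push_back(std::move(joined));
        }
    }
    return result;
}

Each row is hashed once rather than compared against every row of the other dataset, which is why the gap over a naive approach grows with dataset size.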

The Excel Power Query user interface also seems very clunky by comparison. The process to join two CSV files is:

  • Start Excel.
  • Start Power Query.
  • Load the two data files into Excel Power Query.
  • Choose the key columns.
  • Choose merge.
  • Choose inner join. Wait 4 minutes 20 seconds.
  • Load the results back into Excel. Wait several more minutes.
  • Save to a .csv file.
  • Total time: ~600 seconds

Whereas in Easy Data Transform you just:

  • Start Easy Data Transform.
  • Drag the 2 files onto the center pane.
  • Click ‘join’.
  • Select the key columns. Wait 5 seconds.
  • Click ‘To file’ and save to a .csv file.
  • Total time: ~30 seconds

[Screenshot: joining two datasets in Easy Data Transform]

If you have some data to transform, clean or analyze, please give Easy Data Transform a try. There is a fully functional free trial. Email me if you have any questions.


Analyzing COVID19 data with Easy Data Transform

I have continued to make lots of improvements to Easy Data Transform.

Here is a video of me using Easy Data Transform to analyze the data.europa.eu COVID19 dataset. Hopefully it gives an idea of what the software is capable of.

[Video: analyzing the data.europa.eu COVID19 dataset with Easy Data Transform]


Easy Data Transform v1.1.0

I released v1.1.0 of Easy Data Transform this week. It is a big upgrade, with some major new features.

[Screenshot: Easy Data Transform v1.1.0]

There is a new Javascript transform. This allows you to create custom transforms for anything that is too specialist to do with the other 37 built-in transforms. I’m not a fan of Javascript, with its horrible scoping and typing, and I would have preferred Python or Lua. But there is a Javascript engine built into Qt, so this was the easiest way to add scripting. Now if you want to multiply two columns of your data together in Easy Data Transform, you can just do this:

[Screenshot: a Javascript transform multiplying two columns]

You can also access Javascript maths, date and string functions. So you can do some pretty complicated stuff. Hopefully the built-in transforms are enough to cover 95% of data transformations. But the new Javascript transform adds some serious flexibility for the remainder. The Qt Javascript engine is also pretty fast. In testing I was able to multiply values from 2 columns together across 10,000 rows in less than 0.03 seconds.
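
For anyone curious how scripting on top of Qt works, here is a rough sketch of the mechanism (my own illustration of QJSEngine, with made-up column names and expression syntax, not the product's actual code): expose the current row's values to the engine as global properties, then evaluate the user's expression once per row.

#include <QCoreApplication>
#include <QDebug>
#include <QJSEngine>
#include <vector>

struct Row { double price; double quantity; };

int main(int argc, char** argv)
{
    QCoreApplication app(argc, argv);
    QJSEngine engine;  // requires the Qt Qml module

    // Hypothetical dataset and user-supplied transform expression.
    const std::vector<Row> rows = {{2, 3}, {4, 5}, {6, 7}};
    const QString script = "price * quantity";

    for (const Row& row : rows) {
        // Expose the current row's values to the script as globals.
        engine.globalObject().setProperty("price", row.price);
        engine.globalObject().setProperty("quantity", row.quantity);

        const QJSValue result = engine.evaluate(script);
        if (result.isError())
            qWarning() << "Script error:" << result.toString();
        else
            qDebug() << result.toNumber();  // 6, 20, 42
    }
    return 0;
}

A production version would presumably compile the script once and call it as a function rather than re-evaluating the string per row, but the principle is the same.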

There is a new Lookup transform. This allows you to look up values from one dataset in another dataset. For example, if you have a dataset with a column for country code and another dataset with columns for country code and tax rate, you can look up the tax rate by country code.

Previously you could only output your data in Excel and delimited text formats (including CSV and TSV). The new release also adds output to JSON, HTML, Markdown, vCard, YAML and XML formats.

I have improved the speed of the Join transform significantly using hashing. This makes a big difference with large datasets.

To save time, Easy Data Transform guesses the likely columns you want to use as keys when you Join, Intersect, Lookup or Subtract two datasets. For example, if 2 datasets both have columns called ‘ID’ with lots of unique values that are common to both columns, it will choose those two columns as the default key columns. I have improved the heuristic used to set the default columns.
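
I won't describe the exact heuristic, but a plausible sketch of the idea (illustrative C++ with a hypothetical keyScore function): score each candidate pair of columns by how unique their values are and how much the two sets of values overlap, then default to the highest-scoring pair.

#include <algorithm>
#include <string>
#include <unordered_set>
#include <vector>

using Column = std::vector<std::string>;

// Score how well a pair of columns would work as join keys:
// favour columns whose values are mostly unique and mostly shared.
double keyScore(const Column& a, const Column& b)
{
    if (a.empty() || b.empty())
        return 0.0;

    const std::unordered_set<std::string> setA(a.begin(), a.end());
    const std::unordered_set<std::string> setB(b.begin(), b.end());

    size_t common = 0;
    for (const std::string& v : setA)
        common += setB.count(v);  // distinct values present in both

    const double uniqueness = 0.5 * (double(setA.size()) / a.size() +
                                     double(setB.size()) / b.size());
    const double overlap =
        double(common) / double(std::max(setA.size(), setB.size()));
    return uniqueness * overlap;  // 1.0 = a perfect shared unique key
}

Two ‘ID’ columns full of unique, shared values score close to 1.0; matching column names could then act as a tie-breaker.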

You can now add comments to input, transform and output nodes as a note to a colleague or your future self.

You can now snap your input, transform and output nodes to a grid, so you can keep your layout all lovely and neat.

I have also made some bug fixes and minor improvements.

Haven’t tried Easy Data Transform yet? Got some table or list data that you need to wrangle into a more useful form? Take the free trial for a spin.


Easy Data Transform v1.0.0 released

[Screenshot: Easy Data Transform v1.0.0]

I finally released a paid version of Easy Data Transform today, for both Windows and Mac. I am very pleased with how it has turned out. Obviously it is only v1.0.0, so there are plenty of additional features I could add, including:

  • Batch processing
  • Support for JSON, XML, SQLite input/output
  • More transforms
  • A 64 bit version for Windows
  • A Linux version

But I need to listen carefully to prospective customers to decide which additional features to prioritize in future releases. It might be something I haven’t even thought of.

But v1.0.0 already has a really useful core of features. And, if you aren’t embarrassed by v1.0, you didn’t release it early enough. That said, I haven’t cut corners on quality. It has proper documentation and has been through extended beta testing, dogfooding and several rounds of usability and third party testing.

The product has a fully-functional 7 (non-consecutive) day free trial. I think that is enough for prospective customers to decide if it does what they need. I also have a 60 day money-back guarantee.

I have decided to go with a subscription model: $99 / €90 / £75 + tax per person per year. Which covers up to 3 computers. At this price point I can afford some paid promotion and to provide a decent level of support. I am not offering a monthly subscription, as I don’t really want people who are going to pay for 1 month (to do their annual TPS reports) and then cancel.

Have you got some data you need to merge, clean, reformat or de-dupe? Give it a try. You can get a 25% discount if you buy a subscription by the 27th December 2019 using this link.


Eating my own dogfood

There is a story that a president of a pet food company ate some of his own dog food, to show how good it was. I’m not sure how tasty dog food really needs to be, given that dogs are happy to lick their own backsides. But his commitment is admirable. The least we can do as software developers is to use our own software as much as possible. After all, if you don’t use it, how can you expect anyone else to?

In that spirit I have been using my new Easy Data Transform product as much as possible. The biggest project so far has been merging two databases, for a charity that I volunteer at. I created an Airtable database for the charity. But volunteer information was already in a separate CRM. I imported relevant CRM data into Airtable, but the CRM system remained in use for emailing volunteers for a couple of years while I concentrated on Airtable and other tasks. In that time the Airtable database has become a roaring success for the charity. So we eventually decided to retire the CRM system and also use Airtable as our CRM.

Consequently I had to merge the latest CRM data into Airtable. I exported the relevant data from each as a CSV and then proceeded to merge the mailing list tags from the CRM into a new column in Airtable. I also created tables of discrepancies for the charity staff to work through. For example, where the telephone numbers or emails had been added or updated in one database, but not the other.

When I initially imported the CRM data into Airtable, I had included the CRM record ID. So those records were easy to match between Airtable and the CRM using a simple join on the ID. However, any records added subsequently to Airtable or the CRM did not have matching IDs. So I had to match those by first name + last name or email address. The data was quite ‘dirty’, as is invariably the case with real world data. A phone number might be “0123 456 789” in one system and “01 23456789” in another. A volunteer might be “Chris” in one database and “Christopher” in another. Also some contacts had multiple entries in the CRM system. So this was not a trivial problem.
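
The fixes for this sort of dirt are mostly normalization before comparison. A small sketch of the kind of thing involved (illustrative helper functions, not the exact transforms I used): reduce phone numbers to their digits and emails to trimmed lower case, so superficially different entries compare equal.

#include <cctype>
#include <string>

// Normalize a phone number so "0123 456 789" and "01 23456789"
// compare equal: keep only the digits.
std::string normalizePhone(const std::string& phone)
{
    std::string digits;
    for (unsigned char c : phone)
        if (std::isdigit(c))
            digits += c;
    return digits;
}

// Normalize an email for comparison: trim whitespace and lowercase.
std::string normalizeEmail(std::string email)
{
    const auto first = email.find_first_not_of(" \t");
    const auto last = email.find_last_not_of(" \t");
    if (first == std::string::npos)
        return "";
    email = email.substr(first, last - first + 1);
    for (char& c : email)
        c = std::tolower(static_cast<unsigned char>(c));
    return email;
}

After normalization both phone numbers above become “0123456789” and match; name variants like “Chris” vs “Christopher” still need human review.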

[Screenshot: the database merge project in Easy Data Transform]

You can get an idea of what was involved from the screenshot above. The two pink input nodes are the 2 databases exported as CSV files, the blue nodes are various transforms (joining, filtering, removing spaces etc) and the green nodes are the outputs (e.g. lists of telephone and email differences, lists of people in one database, but not the other etc). Quite a lot of the transforms are just column renames (in future I should probably support renaming multiple columns in one transform).

I think this would have been a horrific task using Excel, SQL, Beyond Compare or any of the other tools I had to hand, amazing tools as they may be for other tasks. But Easy Data Transform performed brilliantly, even if I do say so myself. It was particularly helpful that you could see the whole process step-by-step and backtrack or branch at any point without losing previous changes.

While eating my own dogfood, I found one bug (related to carriage returns inside CSV records) and quite a few minor annoyances. These have now been fixed in the latest release. I also added a new ‘Compare Columns’ transform, which proved really useful for this sort of work. So it was a valuable exercise and I really recommend ‘eating your own dogfood’ as much as you can, along with usability testing.
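
Carriage returns inside CSV records are a classic trap: a quoted field can legitimately contain commas, doubled quotes and even line breaks, so you can’t just split the file on newlines. A minimal sketch of a parser that copes with this (illustrative only, with a hypothetical parseCsv function, not Easy Data Transform’s actual parser):

#include <string>
#include <vector>

// Split CSV text into records, honouring quoted fields that may
// contain commas, doubled quotes ("") and embedded line breaks.
std::vector<std::vector<std::string>> parseCsv(const std::string& text)
{
    std::vector<std::vector<std::string>> records;
    std::vector<std::string> record;
    std::string field;
    bool inQuotes = false;

    for (size_t i = 0; i < text.size(); ++i) {
        const char c = text[i];
        if (inQuotes) {
            if (c == '"' && i + 1 < text.size() && text[i + 1] == '"') {
                field += '"'; ++i;          // escaped quote
            } else if (c == '"') {
                inQuotes = false;           // closing quote
            } else {
                field += c;                 // keeps embedded \r and \n
            }
        } else if (c == '"') {
            inQuotes = true;
        } else if (c == ',') {
            record.push_back(field); field.clear();
        } else if (c == '\n' || c == '\r') {
            if (c == '\r' && i + 1 < text.size() && text[i + 1] == '\n')
                ++i;                        // swallow \r\n as one break
            record.push_back(field); field.clear();
            records.push_back(record); record.clear();
        } else {
            field += c;
        }
    }
    if (!field.empty() || !record.empty()) {  // final record, no newline
        record.push_back(field);
        records.push_back(record);
    }
    return records;
}

The key point is the inQuotes flag: a line break only ends a record when you are outside a quoted field.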

Have you got some data that needs cleaning, merging, de-duping or filtering? Analytics, log files, mailing lists, databases? Of course you do! Why not give Easy Data Transform a try? It is free while it is in beta. Let me know how you get on.