August 1, 2016

QViewer v3: Qlik-style filtering and full table phonetic search

New major version of QViewer is out. You can download it here:

Here is what's new and exciting about it:

Qlik-style filtering



The headline feature of the new version is the ability to filter tables in the Qlik fashion using listboxes. Applied filters can be seen in the filter bar that appears above the table (see screenshot above).

When a selection is made, value counters update automatically. The green bar charts behind the counters hint at the ratio of the current selection count to the total count.

Note that nulls are also selectable.

Full table search

It is now possible to search for a value across the entire table. The current selection becomes limited to only those rows where the searched value is found. Full table search is basically another kind of filter: when applied, it shows up in the filter bar.

Phonetic search mode

The full-table search and the listboxes allow looking up values by matching them phonetically. This is helpful when you aren't sure of the exact spelling of the word in question. For instance, if you search for "Acord", the values "Accord", "Akord" and "Akkort" will match. Phonetic search works for whole words only. Currently, only English phonetic matching is supported.
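QViewer's exact phonetic algorithm isn't stated here, but classic English phonetic matching is typically done with something like Soundex, which collapses similar-sounding consonants into one short code. A rough, illustrative sketch (not QViewer's actual implementation):

```python
def soundex(word: str) -> str:
    """Classic (simplified) Soundex: first letter + up to 3 consonant digits."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    result = word[0].upper()
    prev = codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:   # skip repeats of the same sound
            result += code
        if ch not in "hw":          # h and w don't separate adjacent codes
            prev = code
    return (result + "000")[:4]     # pad/truncate to 4 characters

# "Acord", "Accord", "Akord" and "Akkort" all collapse to the same code
print({w: soundex(w) for w in ("Acord", "Accord", "Akord", "Akkort")})
```

Since all four spellings map to the code A263, a search for any of them matches the others.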

Cell metadata


You can view additional metadata of a value in question using the Cell Metadata floating window. This is particularly helpful for easily detecting leading and trailing spaces in text values.

New license key format

The new version requires upgrading license keys. All license keys purchased after 1st of August, 2015 will be upgraded for free. License keys purchased prior to that date are upgraded at 50% of the current regular price. To upgrade a license key please send me your old key and indicate the email address it is linked to.

July 9, 2016

The three most important habits of a data analyst

I've been doing data analysis for almost 15 years -- mostly using Excel and Business Intelligence tools. From the very first year I have believed that accuracy is the biggest challenge for a data analyst. Accuracy is fundamental because if a calculation result is incorrect, then everything based on it -- visualizations, judgements and conclusions -- becomes irrelevant and worthless. Even performance is not as important, because sometimes you can solve a performance problem by throwing in more hardware, but that will never fix incorrect calculation logic.

Ensuring accuracy is probably the most important skill a data analyst should master. To me, striving for accuracy is a mental discipline developed as a result of constant self-training, rather than something that can be learned overnight. There are three practical habits to develop this skill:

1) Sanity checks. These are quick litmus tests that allow detecting grave errors at early stages. After you get a calculation result for the first time, ask yourself -- does it make sense? Does the order of magnitude look sane? If it's a share (percentage) of something else -- is it reasonably big or small? If it's a list of items -- is it reasonably long or short? This sounds like a no-brainer, but people skip sanity checks surprisingly often.
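In code form, a few of these litmus tests can be simple assertions run right after a calculation. A sketch (the specific checks and messages are illustrative assumptions):

```python
def sanity_check(share_of_total: float, n_items: int) -> list:
    """Return a list of warnings for results that fail basic sanity tests."""
    warnings = []
    if not 0.0 <= share_of_total <= 1.0:
        warnings.append("a share must lie between 0 and 1")
    if n_items == 0:
        warnings.append("empty result: did a filter or join drop everything?")
    return warnings

# A share of 1.7 (i.e. 170%) and an empty item list should never pass unnoticed
print(sanity_check(1.7, 0))
```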

2) Full assumption testing. In my experience this habit is the most overlooked by beginner analysts. Assumptions should not be opinions; they must be verified facts. "We're told that field A has unique keys" -- verify it by trying to find duplicate values in it. "Field B has no nulls" -- again, verify it by counting nulls or checking data model constraints (where applicable). "They said that Gender is encoded with M and F" -- verify it by counting distinct values in field Gender. Whatever assumptions are used for filtering, joining or calculation -- absolutely all of them must be tested and confirmed before doing anything else. Once you develop this habit you will be surprised how often assumptions turn out to be wrong. A good data analyst can spend a few days verifying assumptions before even starting to analyze the data itself. Sometimes assumptions are implicit -- e.g. when we compare two text fields we usually implicitly assume that neither of them has special symbols or trailing spaces. A good data samurai is able to recognize these invisible, implicit assumptions and test them explicitly.
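For illustration, the checks above could be scripted in a few lines before any real analysis starts (the sample rows and field names are invented):

```python
from collections import Counter

rows = [
    {"A": 1, "B": "x",  "Gender": "M"},
    {"A": 2, "B": "y ", "Gender": "F"},
    {"A": 2, "B": None, "Gender": "f"},   # duplicate key, null, odd encoding
]

# "Field A has unique keys" -- verify instead of trusting it
duplicates = [k for k, n in Counter(r["A"] for r in rows).items() if n > 1]

# "Field B has no nulls" -- count them
null_count = sum(1 for r in rows if r["B"] is None)

# "Gender is encoded with M and F" -- list the distinct values actually present
genders = sorted({r["Gender"] for r in rows})

# The implicit assumption: no trailing spaces in text values
trailing = [r["B"] for r in rows
            if isinstance(r["B"], str) and r["B"] != r["B"].rstrip()]
```

On this tiny sample every single assumption fails -- which is exactly the point of testing them.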

3) Double calculation. This habit is sometimes overlooked even by experienced analysts, probably because it can require rather tedious effort. It is about building an alternative calculation, often in a different tool -- typically Excel. The point is to test the core logic, so the alternative calculation can include only a subset of the original data and need not cover minor cases. The results of the alternative calculation should be equal to the results of the main calculation logic, regardless of whether the latter is done in SQL or in some BI/ETL tool.
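A minimal illustration of the idea: compute the same aggregate through two independent code paths and require that they agree (plain Python standing in for, say, SQL vs Excel; the sales figures are made up):

```python
from itertools import groupby

sales = [("West", 100), ("East", 50), ("West", 30), ("East", 20)]

# Main calculation: one pass with a running total per region
main = {}
for region, amount in sales:
    main[region] = main.get(region, 0) + amount

# Alternative calculation: sort, group, then sum each group independently
alt = {region: sum(a for _, a in grp)
       for region, grp in groupby(sorted(sales), key=lambda t: t[0])}

assert main == alt, "the two calculation paths disagree"
```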

Let the Accuracy be with you.

July 5, 2016

Columnar in-memory ETL

Columnar databases are not exotic anymore. They're quite widespread and their benefits are well-known: data compression, high performance on analytical workloads, lower storage I/O throughput requirements. At the same time, ETL tools are still exclusively row-based, just as they were 10 or 20 years ago. Below, I describe the working principle of a columnar in-memory ETL [1], its differences from row-based ETL/ELT tools, and its area of applicability (spoiler alert -- it's not just data transformation).

Incremental column-based transformations
Row-based ETL tools transform data row by row, when possible. For complex transformations, like aggregations, they build temporary data structures either in memory or on disk. This approach dictates a particular pattern -- row-based ETL tools strive to create as few temporary data structures as possible in order to reduce data redundancy and to minimize the number of full table scans, which are very costly.

A column-based ETL tool, similarly to a columnar database, operates not with rows of uncompressed data, but with compressed columns (typically using vocabulary compression). Unlike a row-based ETL system, it can re-use datasets and transform compressed data directly (i.e. without decompression) and incrementally. Let me illustrate it with two examples: calculating a new column, and filtering a table:

1) Calculating a new column is simply adding a new column to an existing table. Since a table is a set of columns, the result is the original (untransformed) table plus the newly calculated column. Note that in this case data is not moved -- we simply attach a new column to the existing table, which is possible because our data is organized by columns, not by rows. In a row-based ETL tool, we would have to update every row.

2) Filtering compressed columns can be done in two steps. Let's assume that we're using vocabulary compression. In this case a column is represented as a combination of a vocabulary of unique entries and a vector of pointers to the vocabulary entries (one pointer per row). Filtering can be done by marking the selected vocabulary entries first, and then rebuilding the vector by removing pointers to unselected entries. Here the benefit is twofold. First, we don't evaluate the filtering condition (a heavy operation) for every row, but only for the vocabulary, which is typically much shorter; rebuilding the vector is fast since it doesn't require evaluating the condition at all. Second, we don't have to rebuild the vocabulary -- we can keep using the old vocabulary with the new vector, thus greatly reducing data redundancy.
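Both examples can be sketched in a few lines of Python, with a table modeled as a dictionary of columns and vocabulary compression modeled explicitly (a toy illustration of the principle, not any tool's actual internals):

```python
# A table is a set of named columns; attaching a calculated column
# touches no existing data
table = {"qty": [2, 5, 2, 7]}
table["qty_doubled"] = [q * 2 for q in table["qty"]]

# A vocabulary-compressed column: unique values plus one pointer per row
vocab = ["red", "green", "blue"]
vector = [0, 2, 1, 0, 2]          # 5 rows

# Filtering: evaluate the condition once per vocabulary entry...
selected = {i for i, v in enumerate(vocab) if v != "red"}
# ...then rebuild only the cheap pointer vector; vocab is re-used as-is
new_vector = [p for p in vector if p in selected]
```

With a million rows but only a few hundred distinct values, the condition runs a few hundred times instead of a million.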

In the examples above transformations are incremental -- i.e. new tables are obtained by incrementally modifying and re-using existing data.

Transformations like aggregation, sorting, unpivoting and some others can also be done by directly utilizing the compressed data structures, to a greater or lesser extent.

Incremental column-based transformation greatly reduces data redundancy, which brings us to the next topic:

In-memory transformations
Because of reduced redundancy and data compression, column-based ETL is a good candidate for in-memory processing. The obvious downside is the RAM limitation (which will be addressed below). The upsides of keeping all data in memory are:
  • Increased performance due to elimination of slow disk I/O operations.
  • The ability to instantly view the results of literally every transformation step without re-running transformations from the beginning. A columnar ETL effectively stores all transformation results with relatively little memory overhead, thanks to data compression and incremental logic.

Storing all data in memory has quite an interesting and useful consequence, one that is barely possible for row-based ETL tools:

Reactive transformations
Having all intermediate transformation results in memory lets us re-calculate transformations starting from any point, instead of running everything from the beginning as is the case with traditional ETL tools. For instance, in a chain of 20 transformations we can modify a formula in the 19th transformation and recalculate only the last two transformations. Or the last 5, if we decide so. Or the last 7. If the transformations form a non-linear, graph-like workflow, we can intelligently recalculate only the necessary transformations, respecting dependencies.
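The idea of recalculating only the affected part of a transformation chain can be sketched with a toy dependency graph (the step names and logic are invented; this shows the principle, not any real engine):

```python
# step -> (inputs, function); results are cached, so a re-run recomputes
# only the steps whose cached result was invalidated
steps = {
    "load":   ([], lambda: list(range(10))),
    "double": (["load"], lambda xs: [x * 2 for x in xs]),
    "total":  (["double"], lambda xs: sum(xs)),
}
cache = {}

def run(name):
    if name not in cache:
        deps, fn = steps[name]
        cache[name] = fn(*(run(d) for d in deps))
    return cache[name]

def invalidate(name):
    """Drop a step's cached result and, recursively, its dependents'."""
    cache.pop(name, None)
    for step, (deps, _) in steps.items():
        if name in deps:
            invalidate(step)

print(run("total"))            # 90 -- runs all three steps
steps["double"] = (["load"], lambda xs: [x * 3 for x in xs])
invalidate("double")           # "load" stays cached
print(run("total"))            # 135 -- recomputes only "double" and "total"
```

This is the Excel-like behavior described below in miniature: editing one formula re-evaluates only what depends on it.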


Effectively, it enables an Excel-like experience, where transformations are recalculated automatically when one of them changes, similarly to Excel formulas that are re-evaluated when another formula or a cell value changes.

This creates a whole new experience of working with data -- reactive, exploratory, self-service data manipulation.

Summary: a new kind of tool for data
The columnar representation allows incremental transformation of compressed data, which in turn makes it possible to greatly reduce redundancy (typical for row-based ETL tools), and keep entire datasets in memory. This, in turn, speeds up calculations and enables reactive, interactive data exploration and data analysis capabilities.

Columnar in-memory ETL is basically a new kind of hybrid technology in which there is no distinct borderline between data transformation and data analysis. Instead of slicing and dicing a pre-calculated data model like OLAP tools do, we get the ability to explore data by transforming it on the fly. At the same time it does the traditional ETL job, typical for row-based ETL utilities.

The RAM limitation still remains, though. It can be partially mitigated with a data partitioning strategy, where a big dataset is sliced into parts which are then processed in parallel in a map/reduce fashion. In the long term, Moore's law still holds for RAM prices, which benefits in-memory data processing in general.

Row-based vs Columnar In-memory ETL

All in all, while processing billions of rows is still more appropriate for row-based ETL tools than for columnar ones, the latter represent a new paradigm of mixed data transformation and analysis, which makes them especially relevant now, when public interest in self-service data transformation is growing.

[1] You can take a look at an example of a columnar in-memory ETL tool here:

June 13, 2016

EasyMorph 3.0: A combine for your data kitchen

EasyMorph v3.0 is about to be released; its beta version is already available for download. As the tool matures, its product concept solidifies. Version 3.0 is a major milestone in this regard, because the long process of product re-positioning started last year is now complete, and a long-term vision has been formed. In this post I would like to explain a bit more what EasyMorph has morphed into (pun intended).

To put it simply, EasyMorph has become a "combine for the data kitchen" (if you've never heard about the data kitchen concept, check out this post). The analogy with a kitchen combine is not a coincidence -- just like a real-life kitchen combine, EasyMorph has several distinct functions that all utilize the same engine. In our case it's a reactive, transformation-based calculation engine with four major functions built on top of it:
  • Data transformation
  • Data profiling and analysis
  • File conversion and mass operations with files
  • Reporting

Data transformation
This is probably the most obvious function, as people usually know EasyMorph as an ETL tool. In this role everything is more or less typical -- tabular data from databases and files is transformed using a set of configurable transformations. What's less typical is support for numbers and text in one column, non-relational transformations (e.g. creating column names from the first N rows, or filling down empty cells), and the concept of iterations inspired by functional programming.

Data profiling and analysis
This function is usually less obvious, because typically data profiling tools don't do transformations, but rather show some hard-coded statistics on data -- counts, uniqueness, distribution histograms, etc. QViewer is a typical example of such a tool.

Data profiling with EasyMorph is different, because instead of using a fixed set of pre-defined, hard-coded metrics you can calculate such metrics on the fly and visualize them using drag-and-drop charts. While this approach sacrifices some simplicity (you might need, say, 3 clicks instead of 1 to calculate a metric), it enables much broader analysis and more precisely selected subsets of data, thus providing way more flexibility than typical data profiling tools.

I can say that in my work I use EasyMorph for data analysis and profiling much more often than for ETL, simply because new data transformations need to be designed only once in a while (then they're just scheduled), but I do data analysis every day.

File conversion and mass operations with files
While file conversion is a rather obvious function (read a file in one format, write in another), the ability to conveniently perform mass manipulations with files (copying, renaming, archiving, uploading, etc.) is a surprising and underestimated function of EasyMorph. Really, who would expect that what supposedly is an ETL tool can be used for things like that? But since EasyMorph is a "data kitchen combine" rather than a typical ETL tool, this is exactly what it can be used for.

Since creating a list of files in a folder is as simple as dragging the folder into EasyMorph (recursive subfolder scanning is supported from version 3.0) you can get a list of, say, 40'000 files in 1000 folders in literally 5 seconds, sorted by size, creation time, folder and whatnot. Finding the biggest file? Just two clicks. Filter only spreadsheets? One more click. Exclude read-only? Another click.
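The same folder-scanning workflow is, of course, what people otherwise script by hand; a plain Python equivalent of the listing, sorting and filtering steps might look like this (the file names and sizes are invented for the demo):

```python
import tempfile
from pathlib import Path

# A tiny sandbox folder tree standing in for a real folder
root = Path(tempfile.mkdtemp())
(root / "sub").mkdir()
(root / "report.xlsx").write_bytes(b"x" * 300)
(root / "data.csv").write_bytes(b"x" * 200)
(root / "sub" / "notes.txt").write_bytes(b"x" * 100)

# Recursive listing, sorted by size (biggest first)
files = sorted((p for p in root.rglob("*") if p.is_file()),
               key=lambda p: p.stat().st_size, reverse=True)
biggest = files[0]                                  # "find the biggest file"
spreadsheets = [p for p in files                    # "filter only spreadsheets"
                if p.suffix in (".xlsx", ".xls", ".csv")]
```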

Now add the capability of running an external application (or a Windows shell command) for each file (using iterations) with command line composed using a formula, and you get a perfect replacement for batch scripts to do mass renaming, copying, archiving, sending e-mails or anything else that can be done from the command line. And, just like batch scripts, EasyMorph projects themselves can be executed from the command line, so they can be triggered by a 3rd party application (e.g. scheduler).

EasyMorph has a number of workflow transformations that don't actually transform anything but instead perform various actions, like launching an external program or even pausing for N seconds. Therefore, it's basically a visual workflow design tool with the capability of designing parallelized (e.g. parallel mass file conversion) and asynchronous processes.

Reporting
PDF reporting is the headline feature of version 3.0 and a new function of EasyMorph. The idea behind it is simple: sometimes a result of data analysis has to be shared, and PDF is the most universally used format for sharing documents. At this point, PDF reporting in EasyMorph is not meant to be pixel-perfect, and its customization capabilities are rather limited. Instead, the emphasis was placed on quick report creation in order to make it less time-consuming. We're testing the waters with this release, and the direction of future development will depend on feedback received from users.

While it might not count as a distinct function, it's sometimes convenient to keep EasyMorph open just for ad hoc calculations -- e.g. to paste a list and find duplicates in it, or even to dynamically generate parts of some script, such as comma-separated lists of fields or values. As a "data kitchen combine", EasyMorph is not an application for a broad audience, but rather a professional tool for data analysts who work with data every day. And as with real-life kitchen combines, some people use one function more often, some another. Pick yours.

May 15, 2016

A peek into future Business Intelligence with AI and stuff

What can future Business Intelligence look like? Usually I'm skeptical about "disruptive" ideas like natural language queries or automatically generated analytical narrations (although I respect the research effort), but recently I saw something that, for the first time, looked really interesting when applied to data analysis. I'll tell you what it is shortly, but first I have to explain my skepticism.

Typing in natural language queries won't work because it's no better than writing SQL queries. The syntax is surely different, but it still has to be learned. It doesn't provide the expected freedom, just like SQL didn't. Besides unexpected syntax restrictions (which have to be learned by the user), queries quickly become long and complicated. I played a bit with NLP (natural language processing) queries in Prolog in my school years, so I have some understanding of the complexities related to NLP.

This can be somewhat mitigated by voice input, however virtual assistants like Siri/Alexa/Cortana are built around canned responses so it won't work either, because analytical ad hoc queries tend to be very different, and they always have a context.

Now, here is the promising technology. It's called Viv and I highly recommend watching its demo (it's about 30 minutes):

Two things make Viv different: self-generating queries and the ability to use context. This can potentially make voice-based interactive data analysis finally possible. Not only can a service like Viv answer queries, e.g. "How many new customers did we get since January", you should also be able to make it actionable. How about setting up alerts, like this: "Let me know next time monthly sales in the West region drop below 1mln. Do it until the end of this year"? Or sharing: "Send this report to Peter and Jane in the Corporate Finance department". Such a virtual data analyst could participate in meetings, answer spontaneous questions, send out meeting results -- all by voice. Quite attractive, isn't it?

Data analysis is a favorable area for artificial intelligence because it has a relatively small "universe" where entities (customers, transactions, products, etc.) are not so numerous, and their relationships are well understood. If you ever tried to design or analyze a conceptual data warehouse model, then most probably you have a good picture of that "universe".

And it seems like the right technology to operate with this "universe" might arrive soon.

May 8, 2016

Hints and tips on using QViewer for inspecting resident tables in QlikView

Three years ago I wrote "How to look inside resident tables at any point of the loading script". This technique has proved quite successful and efficient, and has been praised by many prominent QlikView developers since then.

This post is a round-up of some best practices of using QViewer for inspecting resident tables in QlikView, collected over the last 3 years:

Here is the most recent variant of the INSPECT subroutine:

SUB Inspect (T)
    // let's add some fault tolerance
    LET NR = NoOfRows('$(T)');
    IF len('$(NR)')>0 THEN
        // Table exists, let's view it
        STORE $(T) into [$(QvWorkPath)\~$(T).qvd] (qvd);
        EXECUTE "C:\<pathToQViewer>\QViewer.exe" "$(QvWorkPath)\~$(T).qvd";
        EXECUTE cmd.exe /c del /q "$(QvWorkPath)\~$(T).qvd";
    ELSE
        // Table doesn't exist. Let's display a messagebox with a warning
        _MsgBox:
        LOAD MsgBox('Table $(T) doesn' & chr(39) & 't exist. Nothing to inspect.', 'Warning', 'OK', 'ICONEXCLAMATION') as X AutoGenerate 1;
        DROP TABLE _MsgBox;
    END IF
    // Namespace cleanup
    SET NR=;
END SUB

The installer of the next version of QViewer will create a registry key with the path to QViewer, so the subroutine will be able to read the location of qviewer.exe from the registry instead of relying on a hardcoded file path (kudos to Matthew Fryer for the suggestion).

INSPECT is quite helpful for verifying the correctness of joins. For this, insert CALL Inspect twice -- once before a join, and once after it. This lets you see whether the resulting table has more rows after the join than before, and check whether the join actually appended anything, i.e. whether the appended columns actually contain data.

To find duplicates in a column -- double-click the column header to open a listbox with the unique values in that column, and then click Count in that list. On the first click QViewer sorts values in descending order, showing duplicate entries (those with counts > 1) at the top of the list. Checking a primary key for duplicates after a join can help detect wrong joins.

To find duplicate rows in a table -- click "Morph It" to open the table in EasyMorph, and then apply the "Keep Duplicates" transformation. You can also filter rows by applying the "Filter" or "Filter by expression" transformation.

When you deal with wide tables that have many columns, you might need to find a specific column. Press F5 to open Table Metadata, and then sort field names in alphabetical order. Another common use case for Table Metadata is checking whether columns have values of the expected type. For instance, if a column is expected to have only numeric values, its "Text count" should be 0.

To find a value in a column -- double-click the column header to open a list of unique values, then use the search field above the list. To locate the searched value in the main table, simply double-click the value in the list. Press F3 to find next match in the main table.

Currently, the search feature is somewhat obscured (as rightfully pointed out by some users). We will be introducing a more convenient full table search in QViewer v2.3, coming out in June. Subscribe to our mailing list to get a notification when it happens.

May 1, 2016

Why I prototype Qlik apps in EasyMorph before creating them

If you want to create a Qlik app, why not just create it -- why would anyone build a prototype in another tool first? Isn't it a waste of time? For simple cases -- probably yes. But for complex apps, prototyping them first allows designing them faster and more reliably. Here is why:

When developing Qlik apps with a complex transformation logic one of the main challenges is to deal with data quality and data structure of poorly documented source systems. Therefore the most time-consuming phase is figuring out how to process data correctly and what can potentially go wrong. There are many questions to answer during this phase, for instance:
  • How do we merge data -- what are the link fields, what fields are appended? 
  • Does any combination of the link fields have duplicates in one or the other table? 
  • Do the link fields have nulls?
  • Are nulls actually nulls or empty text strings?
  • Are numbers actually numbers, not text?
  • Do text values have trailing spaces?
  • After we join tables, does the result pass a sanity check?
  • How can we detect it if the join goes wrong on another set of data (e.g. for another time period)?
  • Are dates and amounts within expected ranges?
  • Do dimensions have a complete set of values, or is anything missing?
  • When dealing with data in spreadsheets
    • Are text and numbers mixed in the same column? If yes, what is the rule to clean things up?
    • Are column names and their positions consistent across spreadsheets? If not, how do we handle the inconsistency?
    • Are sheet names consistent across spreadsheets?
These are the kinds of questions you would want answered before believing that a transformation works correctly and reliably. Unfortunately, scripting in Qlik, like any other scripting, is poorly suited for interactive data profiling. And that's where you start wasting your time. To answer questions like those above in Qlik, you would need to modify and re-run the script many times, and every time Qlik will reload the same data again and again. Even if it's a small test dataset in a QVD, it still takes time -- or you risk profiling too little data. Oh, and if you mistakenly omitted a comma, your app is dropped and you need to re-open it and re-run again. Then after each reload you would have to create some one-off layout objects and write some one-off expressions, just to get one question answered. Tricks like inspecting resident tables make life a bit easier, but just a bit. Overall, it's an inconvenient and cumbersome process. It effectively discourages data exploration, which sooner or later leads to errors discovered at late stages, where the cost of redesigning and fixing them is much higher than at earlier stages.

This is where EasyMorph comes in handy. First, it loads data once and then keeps it in memory, so the data doesn't have to be reloaded every time. And if you load sample data only once, why not use a bigger dataset, which is usually better for data profiling? Not only does EasyMorph load data only once, it also keeps the results of all successful transformations in memory. So if an error occurs, you continue from where it stopped, not from the beginning -- another time-saving feature.

Second, EasyMorph re-runs transformations automatically in the background after any change. It's as if, while you were writing a Qlik script, Qlik ran the script proactively after every change or new statement, without requiring you to press Reload. Except Qlik doesn't do that. Basically, transformations are to EasyMorph what formulas are to Excel -- you change one and immediately see the new result of the calculations, regardless of how many formulas/transformations it took.

Third, designing a transformation process visually is much faster than scripting. Some Qlik developers are exceptionally good at writing scripts, but even they can't compete when a whole transformation like an aggregation is created in literally two clicks. If one knew exactly from the beginning what a script should do, writing it quickly would not be a problem. It's the numerous iterative edits, corrections and reloads that make writing Qlik scripts slow. Once I have designed and debugged the transformation logic in EasyMorph, scripting it in Qlik is usually a matter of a couple of hours, and it typically works reliably and as expected from the first run.

Another important advantage of prototyping Qlik apps in EasyMorph is that it allows creating a reference result. When you design a Qlik application off an existing Excel or BI report it usually makes the task easier because numbers in the old report serve as a reference you can compare against. However, if you design a brand new report there might be no reference at all. How can you be sure that your Qlik script, expressions and sets work correctly? There is a whole lot of things that can go wrong. Building a prototype in EasyMorph gives you that reference point and not just for the script, but also for expressions, including set analysis. In airplanes, measuring crucial indicators like altitude and velocity must be done using at least two probes (for each metric) that utilize different principles of physics so that pilots can be sure it's measured correctly. The same principle here -- "get another reference point".

I also found that designing apps in close cooperation with business users is more productive when the users have a good understanding of how the transformation logic works. This is better achieved by letting them explore a visual process in EasyMorph than by showing them totally cryptic (for them) Qlik scripts.

Summary: EasyMorph is a professional tool that QlikView / Qlik Sense developers can use to create robust and reliable applications faster by prototyping them first. I do it myself, and so far it works pretty well.