Qlik Compose is essentially a data integration tool that Qlik acquired from an Israeli IT company so that it could move into, and become a leader in, the integration space.
So, there are two tools. One is Qlik Replicate, which handles the replication of the data itself; Qlik Compose is designed for what comes after replication. For example, if users have ERP data stored in an entity-relationship model, they can offload it by replicating it to another system instead of building ETL jobs on top of the source ERP, which takes that load off the operational system. But replication alone does not give them a data warehouse.
So, Compose comes into the picture from the Qlik stack point of view to help users automate that quickly. Users define the relationships between the different tables that exist in the OLTP system, and based on that, Compose automatically designs the dimensions, the facts, and the relationships between them, and creates the tables, much like what users do in Erwin. In Erwin, users define the relationships and the SQL is generated. Here it is roughly 60% automation: users define the relationships, and Compose then identifies the dimensions, determines which attributes belong in them, and generates the code for them. That is the data modeling part, so the advantage is that data modeling is automated.
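To make that concrete, here is a minimal sketch of the general idea of inferring a star schema from declared relationships: many-to-one relationships point at dimension candidates, the table on the "many" side becomes the fact candidate, and table code can then be generated. This only illustrates the pattern, not Qlik Compose's actual internals, and every table and column name here is hypothetical.

```python
# Rough sketch of star-schema inference from declared relationships.
# Not Qlik Compose's actual logic or output; all names are hypothetical.

relationships = [
    # (child table, child column) -> (parent table, parent key)
    (("order_lines", "customer_id"), ("customers", "customer_id")),
    (("order_lines", "product_id"), ("products", "product_id")),
]

def infer_star_schema(rels):
    parents, children = set(), set()
    for (child, _), (parent, _) in rels:
        parents.add(parent)
        children.add(child)
    # Tables only on the "one" side are dimension candidates;
    # tables only on the "many" side are fact candidates.
    return sorted(parents - children), sorted(children - parents)

def dimension_ddl(table, business_key, attributes):
    # Illustrative generated DDL: surrogate key plus the source attributes.
    cols = ",\n  ".join(
        [f"{table}_sk BIGINT PRIMARY KEY", f"{business_key} VARCHAR(50)"]
        + [f"{col} VARCHAR(200)" for col in attributes]
    )
    return f"CREATE TABLE dim_{table} (\n  {cols}\n);"

dims, facts = infer_star_schema(relationships)
print("dimension candidates:", dims)   # ['customers', 'products']
print("fact candidates:", facts)       # ['order_lines']
print(dimension_ddl("customers", "customer_id", ["customer_name", "segment"]))
```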
Then there are things like slowly changing dimensions, late-arriving dimensions, and similar patterns in the dimension and fact ETL processes that users would normally have to develop themselves. That, again, is time-consuming, so even that is usually automated in Compose.
So overall, the claim on the Qlik Compose side is that around 60% of the process can be automated. There is still manual effort in data modeling, where users define the relationships, and in the ETL process, where users define some connections, map the attributes together, and specify what they need; after that, the rest is automated. That is where the 60% saving comes from.
Qlik Compose plays a key role after users have purchased Qlik Replicate. Once replication is done from one system to another but the data warehouse is not yet in place, that is when users start using Qlik Compose. It pulls in tables from the source system and lets users define relationships between those tables. Once this is done, Compose automatically creates the dimensions, which is part of the data modeling process.
After the tables are created, the next step is data integration. Data integration can involve developing jobs, and even data modeling is considered a part of data integration. Essentially, data integration is done from the replicated ERP system, which is an entity-relationship (ER) model. Once the data is replicated, users can decide how to compose the dimensions.
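As a small illustration of what "composing" a dimension from the replicated ER tables can look like, the sketch below denormalizes a few hypothetical normalized tables into one dimension and prints the kind of SELECT a generated load job might effectively run. The mapping structure, table names, and columns are all assumptions made for illustration; this is not a Qlik Compose artifact or API.

```python
# Illustrative only: expressing how several replicated ER tables might be
# composed into one denormalized dimension. Names are hypothetical and this
# is not a Qlik Compose artifact.

dim_customer_mapping = {
    "target": "dim_customer",
    "sources": ["customers", "addresses", "regions"],   # normalized ER tables
    "joins": [
        ("customers.address_id", "addresses.address_id"),
        ("addresses.region_id", "regions.region_id"),
    ],
    "attributes": {
        "customer_key": "customers.customer_id",
        "customer_name": "customers.name",
        "city": "addresses.city",
        "region_name": "regions.name",
    },
}

def mapping_to_select(mapping):
    """Build the SELECT a generated dimension-load job might effectively run."""
    cols = ",\n  ".join(f"{src} AS {tgt}" for tgt, src in mapping["attributes"].items())
    joins = "\n".join(
        f"JOIN {right.split('.')[0]} ON {left} = {right}"
        for left, right in mapping["joins"]
    )
    return f"SELECT\n  {cols}\nFROM {mapping['sources'][0]}\n{joins};"

print(mapping_to_select(dim_customer_mapping))
```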
From there, users define the relationships, specify the details, and set up the ETL jobs. Users might be dealing with data-driven dimensions or slowly changing dimensions (SCD) of type 1, 2, or 3. Users simply drag and drop the source and target tables, and Qlik Compose automatically generates the required code and completes the integration process.
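For context on what that generated code has to handle, here is a minimal sketch of the standard SCD Type 2 pattern: when a tracked attribute changes, the current dimension row is expired and a new version is inserted. The column names (valid_from, valid_to, is_current) and tables are assumptions for illustration; this is not code produced by Qlik Compose.

```python
from datetime import date

# Minimal sketch of the SCD Type 2 pattern that such generated ETL jobs
# implement. Column and table names are assumptions for illustration;
# this is not code produced by Qlik Compose.

def apply_scd2(dim_rows, incoming, business_key="customer_id",
               tracked=("city", "segment"), today=None):
    today = today or date.today()
    current = {r[business_key]: r for r in dim_rows if r["is_current"]}
    for row in incoming:
        existing = current.get(row[business_key])
        if existing is None:
            # New business key: insert the first version.
            dim_rows.append({**row, "valid_from": today,
                             "valid_to": None, "is_current": True})
        elif any(existing[c] != row[c] for c in tracked):
            # A tracked attribute changed: expire the old version (Type 2)
            # and insert a new current row. Type 1 would overwrite in place.
            existing["valid_to"] = today
            existing["is_current"] = False
            dim_rows.append({**row, "valid_from": today,
                             "valid_to": None, "is_current": True})
    return dim_rows

dim = [{"customer_id": 1, "city": "Pune", "segment": "SMB",
        "valid_from": date(2023, 1, 1), "valid_to": None, "is_current": True}]
stage = [{"customer_id": 1, "city": "Mumbai", "segment": "SMB"}]
apply_scd2(dim, stage, today=date(2024, 6, 1))  # dim now holds two versions of customer 1
```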
I haven't implemented this fully myself, but I’ve learned it from a pre-sales perspective and demonstrated it once or twice. If I had implemented it over a year, I would have hands-on experience with everything, but I understand the automation of data modeling and ETL jobs well enough to explain it to customers.