What is our primary use case?
We're using it for data warehousing. Typically, we collect data from numerous source systems, structure it, and then make it available to drive business intelligence, dashboard reporting, and things like that. That's the main use of it.
We also do a little bit of moving data from one system to another, where the data doesn't go into the warehouse. For instance, we sync data from one of our line-of-business systems into our support help desk system so that it has extra information available. So, we do a few point-to-point transfers, but mainly it is for centralizing data for data warehousing.
We use it purely as a data integration tool, and we haven't found any problems. When we have big data processing to do, we use Amazon Redshift. We use Pentaho to load the data into Redshift and then use Redshift for that big data processing. We use Tableau as our reporting platform; we've got quite a number of users who are experienced in it, so it is our chosen reporting platform. So, we use Pentaho for the data collection and data modeling aspects, such as developing facts and dimensions, but we then export that data to Redshift as the database platform and use Tableau on top as the reporting platform.
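For context, a common way to land data in Redshift (not necessarily exactly how our Pentaho jobs are wired) is to stage files in S3 and issue a COPY. The sketch below shows that pattern in Python; the cluster endpoint, bucket, table, and IAM role are all placeholders.

```python
import psycopg2  # assumes the psycopg2 driver is installed

# Placeholder connection details for the Redshift cluster.
conn = psycopg2.connect(
    host="example-cluster.abc123.eu-west-1.redshift.amazonaws.com",
    port=5439,
    dbname="warehouse",
    user="etl_user",
    password="********",
)

# COPY pulls the staged files straight from S3 into the target table.
copy_sql = """
    COPY staging.sales
    FROM 's3://example-staging-bucket/sales/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy-role'
    FORMAT AS CSV
    IGNOREHEADER 1;
"""

with conn, conn.cursor() as cur:
    cur.execute(copy_sql)

conn.close()
```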
I am using version 8.3, which was the latest long-term support version when I last checked. Because this is something we use in production, and it is quite core to our operations, we've been advised to stick with the long-term support versions of the product.
It is in the cloud on AWS. It is running on an EC2 instance in AWS Cloud.
How has it helped my organization?
It enables us to create low-code pipelines without custom coding efforts. A lot of transformations are quite straightforward because there are a lot of built-in connectors, which is really good. It has connectors to Salesforce, which makes it very easy for us to wire up a connection to Salesforce and scrape all of that data into another table. Those flows have absolutely no code in them. It also has a Python integrator, and if you do want to go into a coding environment, you have your choice of writing in Java or Python.
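To give a sense of what the built-in connector replaces, here is a hedged sketch of the equivalent hand-coded Salesforce pull using the simple_salesforce Python library; the credentials and SOQL query are placeholders, and our actual flows use the graphical Salesforce input step with no code at all.

```python
# Hedged illustration only: the hand-coded equivalent of what the graphical
# Salesforce input step gives us with no code. Credentials and the SOQL
# query are placeholders.
from simple_salesforce import Salesforce

sf = Salesforce(
    username="etl.user@example.com",
    password="********",
    security_token="********",
)

# Pull the records we would normally scrape into a staging table.
accounts = sf.query_all("SELECT Id, Name, Industry FROM Account")["records"]
for account in accounts:
    print(account["Id"], account["Name"])
```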
The creation of low-code pipelines is quite important. We have around 200 external data sets that we query and pull data from on a daily basis. The low-code environment makes it easier for our support function to maintain because they can open up a transformation and very easily see what it is doing, rather than having to trawl through reams and reams of code. ETLs written purely in code become very difficult to trace very quickly; you spend a lot of time trying to unpick them, and they never get commented as well as you'd expect. With a low-code environment, the transformation is right there and is almost self-documenting, so it is much easier for somebody who didn't write the original transformation to pick it up later on.
We reuse various components. For instance, we might develop a transformation that does a lookup based on a domain name to match a record to a consumer, and we can then reuse that component in multiple transformations.
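Purely as an illustration of the kind of reusable lookup we wrap into a sub-transformation, here is a hedged Python sketch; the field names and identifiers are invented for the example.

```python
# Illustrative only: a reusable lookup that matches a record to a consumer
# by the domain of its email address. Names are invented for the example.
from typing import Optional


def lookup_consumer_by_domain(email: str, domain_to_consumer: dict) -> Optional[str]:
    """Return the consumer id registered against the email's domain, if any."""
    domain = email.split("@")[-1].strip().lower()
    return domain_to_consumer.get(domain)


domain_map = {"examplecorp.com": "CONS-0042", "acme.example": "CONS-0107"}
print(lookup_consumer_by_domain("jane.doe@ExampleCorp.com", domain_map))  # CONS-0042
```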
We have a metadata-driven framework. Most of what we do is metadata-driven, which is quite important because it allows us to describe all of our data flows. For example, table one moves to table two, table two moves to table three, and so on. Because we've got metadata that explains all of those steps, it helps people investigate where the data comes from and allows us to publish reports that show, "You've got this end metric here, and this is where the data that drives that metric came from." The variable substitution that Pentaho provides to enable metadata-driven frameworks is definitely a key feature.
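As a hedged sketch of the metadata-driven idea, the snippet below drives one generic transformation from a small metadata list by passing parameters on the command line, which Pentaho then substitutes as ${SOURCE_TABLE}-style variables inside the transformation. The install path, .ktr file, and table names are placeholders, and a real framework would hold the metadata in database tables rather than in code.

```python
# Hedged sketch: a metadata list drives one generic transformation by passing
# PDI parameters on the command line; inside the .ktr they are referenced as
# ${SOURCE_TABLE} and ${TARGET_TABLE}. Paths and names are placeholders.
import subprocess

metadata = [
    {"source_table": "crm_contacts_raw", "target_table": "stg_contacts"},
    {"source_table": "sales_orders_raw", "target_table": "stg_orders"},
]

for row in metadata:
    subprocess.run(
        [
            "/opt/pentaho/data-integration/pan.sh",       # placeholder install path
            "-file=/etl/generic_stage_load.ktr",          # one generic transformation reused for every row
            f"-param:SOURCE_TABLE={row['source_table']}",
            f"-param:TARGET_TABLE={row['target_table']}",
        ],
        check=True,
    )
```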
The ability to automate data pipeline templates affects our productivity and costs. We run a lot of processes, and if the tool weren't reliable, it would take a lot more effort; we would need a much bigger team to support the 200 integrations that we run every day. Because it is a low-code environment, support incidents don't have to be escalated to third-line support to be investigated, which affects the cost. Very often, our support analysts or more junior team members are able to look into an issue and fix it themselves without having to escalate it to a more senior developer.
The automation of data pipeline templates affects our ability to scale the onboarding of data because, once we've established a few standard approaches, new requirements tend to fit into one of them. It gives us the ability to scale through reuse, which also ties in with the metadata aspect of things. A lot of our intermediate stages of processing data are configured purely in metadata, so implementing a new transformation requires no custom coding. It is really just a matter of writing a few lines of metadata to drive the process, and that gives us quite a big efficiency.
It has certainly reduced our ETL development time. I've worked at other places that had a similar-sized team managing far fewer integrations. We've certainly managed to scale Pentaho not just for the number of things we do but also for the type of things we do.
We do the obvious direct database connections, but there is a whole raft of different types of integrations that we've developed over time. We have REST APIs, and we download data from Excel files that are hosted in SharePoint. We collect data from S3 buckets in Amazon, and we collect data from Google Analytics and other Google services. We've not come across anything that we've not been able to do with Pentaho. It has proved to be a very flexible way of getting data from anywhere.
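For illustration only, here is a hedged Python sketch of two of those collection patterns done outside Pentaho, reading an object from S3 with boto3 and calling a REST API with requests; the bucket, key, URL, and token are placeholders.

```python
# Illustration only: reading a landed file from S3 and calling a REST API,
# outside Pentaho. Bucket, key, URL, and token are placeholders.
import boto3
import requests

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="example-landing-bucket", Key="exports/daily/contacts.csv")
contacts_csv = obj["Body"].read().decode("utf-8")

response = requests.get(
    "https://api.example.com/v1/orders",
    headers={"Authorization": "Bearer <token>"},
    params={"updated_since": "2023-01-01"},
    timeout=30,
)
response.raise_for_status()
orders = response.json()
```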
Our time savings are probably quite significant. By using some of the components that we've already got written, our developers are able to, for instance, put a transformation from a staging area to its modeled data area in place within an hour or two. If they were starting from a blank piece of paper, that would be several days' worth of work.
What is most valuable?
The graphical nature of the development interface is most useful because we've got people with quite mixed skills in the team. We've got some very junior, apprentice-level people, and we've got support analysts who don't have an IT background. It allows us to have quite complicated data flows and embed logic in them. Rather than having to trawl through lines and lines of code and try to work out what it's doing, you get a visual representation, which makes it quite easy for people with mixed skills to support and maintain the product. That's one side of it.
The other side is that it is quite a modular program. I've worked with other ETL tools, and it is quite difficult to achieve component reuse with them. With tools like SSIS, you can develop your packages for moving data from one place to another, but it is really difficult to reuse much of that work, so you have to implement the same code again. Pentaho seems quite adaptable to having reusable components or sections of code that you can use in different transformations, and that has helped us quite a lot.
One of the things that Pentaho does is provide the ability to expose a transformation as a virtual data service, as if it were a database connection; for instance, when you have a REST API that you want to be read by something like Tableau, which needs a JDBC connection. Pentaho was really helpful in getting that driver enabled for us to do some proof-of-concept work on that approach.
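As a heavily hedged sketch of what consuming such a data service over JDBC might look like from code (in practice it is Tableau that holds the connection), the snippet below uses the jaydebeapi Python library; the driver class name, JDBC URL, and jar path are assumptions and placeholders to be checked against the Pentaho documentation for your version.

```python
# Heavily hedged sketch: querying a transformation exposed as a Pentaho data
# service over JDBC. The driver class, URL, and jar path are assumptions to
# verify against the Pentaho documentation for your version.
import jaydebeapi

conn = jaydebeapi.connect(
    "org.pentaho.di.trans.dataservice.jdbc.ThinDriver",  # assumed driver class name
    "<pdi-data-service-jdbc-url>",                       # placeholder JDBC URL for the data service
    ["etl_user", "********"],
    "/path/to/pdi-dataservice-client.jar",               # placeholder client jar location
)

cursor = conn.cursor()
cursor.execute("SELECT * FROM rest_api_view LIMIT 10")   # the data service appears as a virtual table
print(cursor.fetchall())

cursor.close()
conn.close()
```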
What needs improvement?
Although it is a low-code solution with a graphical interface, the error messages that you get are often the kind only a developer would be happy with: a big stack of red text and Java errors displayed on the screen. Less technical people can be intimidated by that wall of red error messages. Other graphical tools that are aimed at the power-user level provide a much more user-friendly experience in dealing with exceptions and guiding the user to where they've made the mistake.
Some of the components also have a great many options. Guidance embedded in the interface about when to use certain options would be good, so that people know what setting an option will do and when they should use it. The product is quite light on that aspect.
For how long have I used the solution?
I have been using this solution since the beginning of 2016. It has been about seven years.
What do I think about the stability of the solution?
We haven't had any problems in particular that I can think of. It is quite a workhorse; it just sits there running reliably, and it has got a lot to do every day. We have occasional memory issues if some transformations haven't been written in the best way possible, and we obviously introduce our own bugs into transformations, but generally, we don't have any problems with the product.
What do I think about the scalability of the solution?
It meets our purposes. It does have horizontal scaling capability, but that is not something we have needed to use. We have lots of small and medium-sized data sets; we don't have to deal with super large data sets. Where we do have some requirements for that, it works quite well because we can push some of that processing down onto our cloud provider. We've dealt with some of those requirements by using S3, Athena, and Redshift; you can offload some of the big data processing to those platforms.
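As a hedged example of the kind of push-down we mean, the sketch below hands an aggregation to Athena over data already sitting in S3 and collects only the small result set; the database, table, and bucket names are placeholders.

```python
# Hedged sketch: push a heavy aggregation down to Athena over data already in
# S3 and collect only the small result. Database, table, and bucket names are
# placeholders.
import time

import boto3

athena = boto3.client("athena", region_name="eu-west-1")

query_id = athena.start_query_execution(
    QueryString="SELECT event_date, count(*) AS events FROM web_logs GROUP BY event_date",
    QueryExecutionContext={"Database": "analytics_lake"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes, then fetch the aggregated rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(len(rows) - 1, "result rows")  # the first row holds the column headers
```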
How are customer service and support?
I've contacted them a few times. In terms of Lumada's ability to quickly and effectively solve issues that we brought up, we get a very good response rate. They provide very prompt responses and are quite engaging. You don't have to wait long, and you can get into a dialogue with the support team with back and forth emails in just an hour or so. You don't have to wait a week for each response cycle, which is something I've seen with some of the other support functions.
I would rate them an eight out of 10. We've got quite a complicated framework, so it is not possible for us to send the whole thing over for them to look into, but they certainly help with tweaks to server settings and memory configurations to try to get things going. We run a codebase that is quite big and complicated, so sometimes it is difficult to put together something we can send over to show what the errors are. They won't log in and look at your actual environment; it has to be based on log files, so it is a bit abstract. If something is occurring only in a very specific transformation that you've got, it might be difficult for them to drill in and see why it is causing a problem on our system.
Which solution did I use previously and why did I switch?
I have a little bit of experience with AWS Glue. Its advantage is that it ties natively into AWS's PySpark processing. Its disadvantage is that it generates some really difficult-to-maintain code for all of its transformations, which might be fine if you have just a dozen or so transformations, but if you have a lot of transformations going on, it can be quite difficult to maintain.
We've also got quite a lot of experience working with SSIS, and I much prefer Pentaho. SSIS ties you rigidly to the data flow structure that exists at design time, whereas Pentaho is very flexible. If, for instance, you wanted to move 15 columns to another table, in SSIS you'd have to configure the flow with those 15 columns, and if a 16th column appeared, it would break that flow. With Pentaho, without amending your ETL, you can just amend your end data set to accept the 16th column, and it will flow through. This, and the fact that the transformation isn't tied down at design time, makes Pentaho much more flexible than SSIS.
In terms of component reuse, other ETL tools are not nearly as good at being able to just pick up a transformation or a sub-transformation and drop it into your pipelines. You do tend to keep rewriting things again and again to get the same functionality.
What about the implementation team?
I was here during the initial setup, but I wasn't involved in it. We used an external company, and they do our upgrades and so on. The reason is that we tend to stick with just the long-term support versions of the product; apart from service packs, we don't do upgrades very often. We never build deep experience with upgrades ourselves, so it is more efficient to bring in the external company that we work with to do that.
What was our ROI?
It is always difficult to quantify a return on investment for data warehousing and business intelligence projects. It is a cost center rather than a profit center, but if you take the starting point as this being something that needs to be done, you could pick other tools to do it, and in the long run, you wouldn't necessarily find that they are much cheaper. If you went for more of a coded approach, it might be cheaper in terms of licensing, but then you might have higher costs of maintaining it.
What's my experience with pricing, setup cost, and licensing?
It does seem a bit expensive compared to serverless product offerings. Tools such as SQL Server Integration Services are "almost" free with a database engine. It is comparable to products like Alteryx, which is also very expensive.
It would be great if we could use our enterprise license and distribute the tool to analysts and people around the business to use in place of Tableau Prep and the like, but its UI is probably a bit too confusing for that level of user. So, it doesn't allow us to get the tool distributed across the organization to non-technical users as widely as we would like.
What other advice do I have?
I would advise taking advantage of metadata to drive your transformations, and of the very nice and easy way in which variable substitution works in a lot of components. If you use a metadata-driven framework in Pentaho, it will allow you to self-document your process flows. At some point, that always becomes a critical aspect of a project. Often, it doesn't crop up until a year or so later, but somebody always comes asking for proof or documentation of exactly what is happening: how something is getting to here and how something is driving a metric. If you start off from the beginning with a metadata framework that self-documents that, you'll be 90% of the way to answering those questions when you need to.
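To illustrate the self-documentation payoff, here is a hedged sketch of how a lineage trail for an end metric can be walked straight out of that metadata; the table names and the flat list structure are invented for the example, and a real framework would read this from its metadata tables.

```python
# Hedged sketch: because every table-to-table step lives in metadata, a
# lineage trail for an end metric is just a walk back through that metadata.
# The table names and flat list are invented for the example.
flows = [
    {"source": "crm_contacts_raw", "target": "stg_contacts"},
    {"source": "stg_contacts", "target": "dim_customer"},
    {"source": "dim_customer", "target": "rpt_active_customers"},
]


def lineage(target, flow_metadata):
    """Trace a target table back through its upstream sources."""
    chain = [target]
    while True:
        upstream = next((f["source"] for f in flow_metadata if f["target"] == chain[-1]), None)
        if upstream is None:
            return list(reversed(chain))
        chain.append(upstream)


print(" -> ".join(lineage("rpt_active_customers", flows)))
# crm_contacts_raw -> stg_contacts -> dim_customer -> rpt_active_customers
```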
We are satisfied with our decision to purchase Hitachi's products, services, or solutions. In the low-code space, they're probably reasonably priced. There is some competition from serverless architectures, and you can do things differently with a serverless architecture at an overall lower running cost. However, the fact that we run so many transformations, and that those transformations can be maintained by a team who aren't Python or Java developers (our apprentices can use the tool quite easily), is an advantage.
I'm not too familiar with the overall roadmap for Hitachi Vantara. We're just using the Pentaho data integration products. We don't use the metadata injection aspects of Pentaho, mainly because we haven't had a need for them, but we know they're there.
I would rate it a seven out of 10. Its UI is a bit techy and more confusing than some of the other graphical ETL tools, and that's where improvements could be made.
Which deployment model are you using for this solution?
Public Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Amazon Web Services (AWS)
*Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.