Hitachi Lumada Data Integration Overview

Hitachi Lumada Data Integration is the #7 ranked solution among top Data Integration Tools. PeerSpot users give Hitachi Lumada Data Integration an average rating of 7.8 out of 10. Hitachi Lumada Data Integration is most commonly compared to SSIS. Hitachi Lumada Data Integration is popular among the large enterprise segment, which accounts for 65% of users researching this solution on PeerSpot. The top industry researching this solution is computer software, accounting for 15% of all views.

What is Hitachi Lumada Data Integration?

Hitachi Lumada Data Integration is a top-ranking data integration tool that aims to deliver accurate data from various sources to end users. It is a complete data integration platform that uses visual tools to deliver analytics-ready data. The product eliminates coding and complexity so that its services are equally accessible to IT users and to business users who do not specialize in the field.

The solution offers powerful data integration, which is achieved through:

  • Accelerated data onboarding
  • Flexible data self-service
  • Robust data flow orchestration

Users of Hitachi Lumada Data Integration can collaborate to build, deploy, and monitor dataflows in order to streamline data delivery. The visual tools of the product reduce the time of operation and lower complexity, allowing even beginners to operate the platform seamlessly. The onboarding process is initiated through broad connectivity to a wide variety of data sources and applications.

A drag-and-drop interface allows users to easily create data pipelines, and ready-made templates can be executed from edge to cloud. The product gives users the opportunity to blend data on premises or in the cloud, including Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). The tool allows for a seamless switch between the native engine and Apache Spark, and it operationalizes Python, Scala, and Weka machine-learning models.

The tool offers features for extensive business analytics through:

  • Ad-hoc analysis
  • Flexible interface
  • Enterprise reporting

Hitachi Lumada Data Integration offers its clients modern data architectures for data analytics. Through interactive visualizations and easy integration, users are able to increase data integrity for their organizations. The product offers a web-based drag-and-drop dashboard for a flexible experience, collaboration with other applications, and advanced multitenancy. There is dedicated enterprise reporting, which consists of operational self-service reporting, security with content permissions, and additional high-level protection achieved through locking and expirations.

Hitachi Lumada Data Integration Features

The tool offers its clients various features which can be used to achieve efficient data integration and further analysis. These features include:

  • Data access: The tool allows users to access data sources at the edge, core, and cloud. This reduces the time and complexity of the process while blending sources to deliver data in a format ready to be analyzed.

  • Machine learning: The solution offers a feature to orchestrate machine learning. R, Python, Scala, and Weka models are provided to users of this product.

  • Enterprise reporting: Hitachi Lumada Data Integration provides its clients with detailed visualized reporting. This feature is highly secure, providing additional protection for clients' data.

  • Connect and move: This feature offers users the option to connect to sources on premises or in the cloud and move data of any size and format.

  • Flexibility: The product allows users broad connectivity and flexibility with no vendor lock-in to on-premises or cloud services.

  • Cluster to container: The tool offers the option to create scalable pipelines with Kubernetes clusters. This is possible across multiple clouds.

  • Dataflow studio: This feature allows users to build and manage data pipelines, view run metrics, analyze activities, and resume paused ones.

  • Authoring: The tool has an editor feature, which allows for the transformation of activities while the dataflow is in progress.

Hitachi Lumada Data Integration Benefits

The tool offers increased work productivity through efficient data integration. A number of the benefits include:

  • Ability to increase productivity in the work process due to effective automation.

  • Production deployment time can be sped up while saving costs for the company.

  • The no-code functionality improves pipeline quality in comparison to hand-coding.

  • The tool offers high-quality reports which reduce implementation time.

  • Employees can save time and resources by embedding reporting into applications through this solution.

  • The utilization of this tool can increase business user adoption by improving data accuracy.

Reviews from Real Users

Philip R., a senior engineer at a comms service provider, says this product "Saves time and makes it easy for our mixed-skilled team to support the product".

Ryan F., a senior data engineer at Burgiss, appreciates Hitachi Lumada Data Integration because low-code makes development faster than with Python.

Hitachi Lumada Data Integration was previously known as Kettle, Pentaho Data Integration.

Hitachi Lumada Data Integration Customers

66Controls, Provincial Revenue Agency of Río Negro, NOAA Information Systems, Swiss Real Estate Institute


Hitachi Lumada Data Integration Pricing Advice

What users are saying about Hitachi Lumada Data Integration pricing:
  • "It does seem a bit expensive compared to the serverless product offering. Tools, such as Server Integration Services, are "almost" free with a database engine. It is comparable to products like Alteryx, which is also very expensive."
  • "I think Lumada's price is fair compared to some of the others, like BusinessObjects, which is was the other thing that I used at my previous job. BusinessObject's price was more reasonable before SAP acquired it. They jacked the price up significantly. Oracle's OBIEE tool was also prohibitively expensive."
  • "If a company is looking for an ETL solution and wants to integrate it with their tech stack but doesn't want to spend a bunch of money, Pentaho is a good solution"
  • "When we first started with it, it was much cheaper. It has gone up drastically, especially since Hitachi bought out Pentaho."
  • "The pricing has been pretty good. I'm used to using everything open-source or freeware-based. I understand that organizations need to make sure that the solutions are secure, and that's basically where I hit a roadblock in my current organization. They needed to ensure that we had a license and we had a secure way of accessing it so that no outside parties could get access to our data, but in terms of pricing, considering how much other teams are spending on cloud solutions or even their existing solutions, its price point is pretty good. At this time, there are no additional costs. We just have the licensing fees."
  • "We are using the Community Edition. We have been trying to use and sell the Enterprise version, but that hasn't been possible due to the budget required for it."
  • "For most development tasks, the Enterprise edition should be sufficient. It depends on the type of support that you require for your production environment."
Hitachi Lumada Data Integration Reviews

    Senior Engineer at a comms service provider with 501-1,000 employees
    Real User
    Top 20
    Saves time and makes it easy for our mixed-skilled team to support the product, but more guidance and better error messages are required in the UI
    Pros and Cons
    • "The graphical nature of the development interface is most useful because we've got people with quite mixed skills in the team. We've got some very junior, apprentice-level people, and we've got support analysts who don't have an IT background. It allows us to have quite complicated data flows and embed logic in them. Rather than having to troll through lines and lines of code and try and work out what it's doing, you get a visual representation, which makes it quite easy for people with mixed skills to support and maintain the product. That's one side of it."
    • "Although it is a low-code solution with a graphical interface, often the error messages that you get are of the type that a developer would be happy with. You get a big stack of red text and Java errors displayed on the screen, and less technical people can get intimidated by that. It can be a bit intimidating to get a wall of red error messages displayed. Other graphical tools that are focused at the power user level provide a much more user-friendly experience in dealing with your exceptions and guiding the user into where they've made the mistake."

    What is our primary use case?

    We're using it for data warehousing. Typically, we collect data from numerous source systems, structure it, and then make it available to drive business intelligence, dashboard reporting, and things like that. That's the main use of it. 

    We also do a little bit of moving of data from one system to another, but the data doesn't go into the warehouse. For instance, we sync the data from one of our line of business systems into our support help desk system so that it has extra information there. So, we do a few point-to-point transfers, but mainly, it is for centralizing data for data warehousing.

    We use it just as a data integration tool, and we haven't found any problems. When we have big data processing, we use Amazon Redshift. We use Pentaho to load the data into Redshift and then use that for big data processing. We use Tableau for our reporting platform. We've got quite a number of users who are experienced in it, so it is our chosen reporting platform. So, we use Pentaho for the data collection and data modeling aspect of things, such as developing facts and dimensions, but we then export that data to Redshift as a database platform, and then we use Tableau as our reporting platform.

    I am using version 8.3, which was the latest long-term support version when I looked at it the last time. Because this is something we use in production, and it is quite core to our operations, we've been advised that we just stick with the long-term support versions of the product.

    It is in the cloud on AWS. It is running on an EC2 instance in AWS Cloud.

    How has it helped my organization?

    It enables us to create low-code pipelines without custom coding efforts. A lot of transformations are quite straightforward because there are a lot of built-in connectors, which is really good. It has got connectors to Salesforce, which makes it very easy for us to wire up a connection to Salesforce and scrape all of that data into another table. Their flows have got absolutely no code in them. It has a Python integrator, and if you want to go into a coding environment, you've got your choice of writing in Java or Python.

    The creation of low-code pipelines is quite important. We have around 200 external data sets that we query and pull the data from on a daily basis. The low-code environment makes it easier for our support function to maintain it because they can open up a transformation and very easily see what that transformation is doing, rather than having to troll through reams and reams of code. ETLs written purely in code become very difficult to trace very quickly. You spend a lot of time trying to unpick it. They never get commented on as well as you'd expect, whereas, with a low-code environment, you have your transformation there, and it almost self documents itself. So, it is much easier for somebody who didn't write the original transformation to pick that up later on.

    We reuse various components. For instance, we might develop a transformation that does a lookup based on the domain name to match to a consumer record, and then we can repeat that bit of code in multiple transformations. 

    We have a metadata-driven framework. Most of what we do is metadata-driven, which is quite important because that allows us to describe all of our data flows. For example, Table one moves to Table two, Table two moves to Table three, etc. Because we've got metadata that explains all of those steps, it helps people investigate where the data comes from and allows us to publish reports that show, "You've got this end metric here, and this is where the data that drives that metric came from." The variable substitution that Pentaho has to allow metadata-driven frameworks is definitely a key feature that Pentaho offers.

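    As a rough illustration of that kind of metadata-driven orchestration (a minimal sketch, not the reviewer's actual framework; the metadata file layout, column names, and Pan install path are assumptions), a table of source-to-target hops can drive parameterized transformation runs:

        import csv
        import subprocess

        # Hypothetical metadata file: one row per table-to-table movement.
        # Columns: source_table, target_table, transformation (path to a .ktr file).
        METADATA_FILE = "dataflow_metadata.csv"
        PAN = "/opt/pentaho/data-integration/pan.sh"  # assumed install location

        def run_step(row):
            """Run one parameterized transformation with values taken from metadata."""
            cmd = [
                PAN,
                f"-file={row['transformation']}",
                f"-param:SOURCE_TABLE={row['source_table']}",
                f"-param:TARGET_TABLE={row['target_table']}",
                "-level=Basic",
            ]
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode != 0:
                # Pan exits non-zero when the transformation fails.
                raise RuntimeError(f"{row['transformation']} failed:\n{result.stdout}")

        with open(METADATA_FILE, newline="") as f:
            for row in csv.DictReader(f):
                run_step(row)

    Because each transformation only reads ${SOURCE_TABLE} and ${TARGET_TABLE} through variable substitution, adding a new hop is a new metadata row rather than a new ETL, which is what makes the self-documenting lineage reports described above possible.
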
    The ability to automate data pipeline templates affects our productivity and costs. We run a lot of processes, and if it wasn't reliable, it would take a lot more effort. We would need a lot bigger team to support the 200 integrations that we run every day. Because it is a low-code environment, we don't have to have support instances escalated to the third line support to be investigated, which affects the cost. Very often our support analysts or more junior members are able to look into what an issue is and fix it themselves without having to escalate it to a more senior developer.

    The automation of data pipeline templates affects our ability to scale the onboarding of data because after we've done a few different approaches and we get new requirements, they fit into a standard approach. It gives us the ability to scale with code and reuse, which also ties in with the metadata aspect of things. A lot of our intermediate stages of processing data are purely configured in metadata, so in order to implement transformation, no custom coding is required. It is really just writing a few lines of metadata to drive the process, and that gives us quite a big efficiency.

    It has certainly reduced our ETL development time. I've worked at other places that had a similar-sized team to manage a system with a much lesser number of integrations. We've certainly managed to scale Pentaho not just for the number of things we do but also for the type of things we do.

    We do the obvious direct database connections, but there is a whole raft of different types of integrations that we've developed over time. We have REST APIs, and we download data from Excel files that are hosted in SharePoint. We collect data from S3 buckets in Amazon, and we collect data from Google Analytics and other Google services. We've not come across anything that we've not been able to do with Pentaho. It has proved to be a very flexible way of getting data from anywhere.

    Our time savings are probably quite significant. By using some of the components that we've already got written, our developers are able to, for instance, put in a transformation from a staging area to its model data area. They are probably able to put something in place in an hour or a couple of hours. If they were starting from a blank piece of paper, that would be several days worth of work.

    What is most valuable?

    The graphical nature of the development interface is most useful because we've got people with quite mixed skills in the team. We've got some very junior, apprentice-level people, and we've got support analysts who don't have an IT background. It allows us to have quite complicated data flows and embed logic in them. Rather than having to troll through lines and lines of code and try and work out what it's doing, you get a visual representation, which makes it quite easy for people with mixed skills to support and maintain the product. That's one side of it. 

    The other side is that it is quite a modular program. I've worked with other ETL tools, and it is quite difficult to get component reuse by using them. With tools like SSIS, you can develop your packages for moving data from one place to another, but it is really difficult to reuse a lot of it, so you have to implement the same code again. Pentaho seems quite adaptable to have reusable components or sections of code that you can use in different transformations, and that has helped us quite a lot.

    One of the things that Pentaho does is that it has the virtual web services ability to expose a transformation as if it was a database connection; for instance, when you have a REST API that you want to be read by something like Tableau that needs a JDBC connection. Pentaho was really helpful in getting that driver enabled for us to do some proof of concept work on that approach.

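    For reference, a data service exposed that way can be queried like an ordinary JDBC source. The sketch below is hedged: the thin-driver class name, URL format, jar location, credentials, and the service and column names are all assumptions that vary by PDI version, so treat them as placeholders rather than the exact strings.

        import jaydebeapi  # pip install jaydebeapi (requires a local JVM)

        # Assumed values for a PDI "thin" JDBC connection to a data service.
        DRIVER = "org.pentaho.di.trans.dataservice.jdbc.ThinDriver"
        URL = "jdbc:pdi://di-server.example.com:9080/kettle?webappname=pentaho-di"
        JAR = "/opt/pentaho/pdi-dataservice-client.jar"

        conn = jaydebeapi.connect(DRIVER, URL, ["admin", "password"], JAR)
        try:
            cur = conn.cursor()
            # The data service name behaves like a table, so plain SQL works against it.
            cur.execute("SELECT domain_name, consumer_id FROM consumer_lookup_service")
            for row in cur.fetchall():
                print(row)
        finally:
            conn.close()

    A tool such as Tableau would use the same driver and URL directly instead of going through Python.
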
    What needs improvement?

    Although it is a low-code solution with a graphical interface, often the error messages that you get are of the type that a developer would be happy with. You get a big stack of red text and Java errors displayed on the screen, and less technical people can get intimidated by that. It can be a bit intimidating to get a wall of red error messages displayed. Other graphical tools that are focused at the power user level provide a much more user-friendly experience in dealing with your exceptions and guiding the user into where they've made the mistake.

    Sometimes, there are so many options in some of the components. Some guidance embedded into the interface about when to use certain options would be good, so that people know what setting an option will do and when they should use it. It is quite light on that aspect.


    For how long have I used the solution?

    I have been using this solution since the beginning of 2016. It has been about seven years.

    What do I think about the stability of the solution?

    We haven't had any problems in particular that I can think of. It is quite a workhorse. It just sits there running reliably. It has got a lot to do every day. We have occasional issues of memory if some transformations haven't been written in the best way possible, and we obviously get our own bugs that we introduce into transformations, but generally, we don't have any problems with the product.

    What do I think about the scalability of the solution?

    It meets our purposes. It does have horizontal scaling capability, but it is not something that we needed to use. We have lots of small-sized and medium-sized data sets. We don't have to deal with super large data sets. Where we do have some requirements for that, it works quite well. We can push some of that processing down onto our cloud provider. We've dealt with some of those issues by using S3, Athena, and Redshift. You can almost offload some of the big data processing to those platforms.

    How are customer service and support?

    I've contacted them a few times. In terms of Lumada's ability to quickly and effectively solve issues that we brought up, we get a very good response rate. They provide very prompt responses and are quite engaging. You don't have to wait long, and you can get into a dialogue with the support team with back and forth emails in just an hour or so. You don't have to wait a week for each response cycle, which is something I've seen with some of the other support functions. 

    I would rate them an eight out of 10. We've got quite a complicated framework, so it is not possible for us to send the whole thing over for them to look into it, but they certainly give help in terms of tweaks to server settings and some memory configurations to try and get things going. We run a codebase that is quite big and quite complicated, so sometimes, it might be difficult to do something that you can send over to show what the errors are. They wouldn't log in and look at your actual environment. It has to be based on the log files. So, it is a bit abstract. If you have something that's occurring just on a very specific transformation that you've got, it might be difficult for them to drill into to see why it is causing a problem on our system.

    Which solution did I use previously and why did I switch?

    I have a little bit of experience with AWS Glue. Its advantage is that it is tied natively into the AWS PySpark processing. Its disadvantage is that it writes some really difficult-to-maintain lines of code for all of its transformations, which might work fine if you have just a dozen or so transformations, but if you have a lot of transformations going on, it can be quite difficult to maintain.

    We've also got quite a lot of experience working with SSIS. I much prefer Pentaho to SSIS. SSIS ties you rigidly to the data flow structure that exists at design time, whereas Pentaho is very flexible. If, for instance, you wanted to move 15 columns to another table, in SSIS, you'd have to configure that with your 15 columns. If a 16th column appears, it would break that flow. With Pentaho, without amending your ETL, you can just amend your end data set to accept the 16th column, and it would just allow it to flow through. This and the fact that the transformation isn't tied down at design time make it much more flexible than SSIS.

    In terms of component reuse, other ETL tools are not nearly as good at being able to just pick up a transformation or a sub-transformation and drop it into your pipelines. You do tend to keep rewriting things again and again to get the same functionality.

    What about the implementation team?

    I was here during the initial setup, but I wasn't involved in it. We used an external company. They do our upgrades, etc. The reason for that is that we tend to stick with just the long-term support versions of the product. Apart from service packs, we don't do upgrades very often. We never get a deep experience of that, so it is more efficient for us to bring in this external company that we work with to do that.

    What was our ROI?

    It is always difficult to quantify a return on investment for data warehousing and business intelligence projects. It is a cost center rather than a profit center, but if you take the starting point as this is something that needs to be done, you could pick other tools to do it, and in the long run, you wouldn't necessarily find that they are much cheaper. If you went for more of a coded approach, it might be cheaper in terms of licensing, but then you might have higher costs of maintaining that.

    What's my experience with pricing, setup cost, and licensing?

    It does seem a bit expensive compared to the serverless product offering. Tools, such as Server Integration Services, are "almost" free with a database engine. It is comparable to products like Alteryx, which is also very expensive.

    It would be great if we could use our enterprise license and distribute that to analysts and people around the business to use in place of Tableau Prep, etc, but its UI is probably a bit too confusing for that level of user. So, it doesn't allow us to get the tool as widely distributed across the organization to non-technical users as much as we would like.

    What other advice do I have?

    I would advise taking advantage of using metadata to drive your transformations. You should take advantage of the very nice and easy way in which variable substitution works in a lot of components. If you use a metadata-driven framework in Pentaho, it will allow you to self-document your process flows. At some point, it always becomes a critical aspect of a project. Often, it doesn't crop up until a year or so later, but somebody always comes asking for proof or documentation of exactly what is happening in terms of how something is getting to here and how something is driving a metric. So, if you start off from the beginning by using a metadata framework that self documents that, you'll be 90% of the way in answering those questions when you need to.

    We are satisfied with our decision to purchase Hitachi's products, services, or solutions. In the low-code space, they're probably reasonably priced. With the serverless architectures out there, there is some competition, and you can do things differently using serverless architecture, which would have an overall lower cost of running. However, the fact that we have so many transformations that we run, and those transformations can be maintained by a team of people who aren't Python developers or Java developers, and our apprentices can use this tool quite easily, is an advantage of it.

    I'm not too familiar with the overall roadmap for Hitachi Vantara. We're just using the Pentaho data integration products. We don't use the metadata injection aspects of Pentaho, mainly because we haven't had a need for them, but we know they're there.

    I would rate it a seven out of 10. Its UI is a bit techy and more confusing than some of the other graphical ETL tools, and that's where improvements could be made.

    Which deployment model are you using for this solution?

    Public Cloud

    If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

    Amazon Web Services (AWS)
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    PeerSpot user
    Enterprise Data Architect at a manufacturing company with 201-500 employees
    Real User
    Top 10
    It's flexible and can do almost anything I want it to do
    Pros and Cons
    • "Lumada has allowed us to interact with our employees more effectively and compensate them properly. One of the cool things is that we use it to generate commissions for our salespeople and bonuses for our warehouse people. It allows us to get information out to them in a timely fashion. We can also see where they're at and how they're doing."
    • "Some of the scheduling features about Lumada drive me buggy. The one issue that always drives me up the wall is when Daylight Savings Time changes. It doesn't take that into account elegantly. Every time it changes, I have to do something. It's not a big deal, but it's annoying."

    What is our primary use case?

    We mainly use Lumada to load our operational systems into our data warehouse, but we also use it for monthly reporting out of the data warehouse, so it's to and from. We use some of Lumada's other features within the business to move data around. It's become quite the Swiss army knife.

    We're primarily doing batch-type reports that go out. Not many people want to sift through data and pick it to join it in other things. There are a few, but again, I usually wind up doing it. The self-serve feature is not as big a seller to me because of our user base. Most of the people looking at it are salespeople.

    Lumada has allowed us to interact with our employees more effectively and compensate them properly. One of the cool aspects is that we use it to generate commissions for our salespeople and bonuses for our warehouse people. It allows us to get information out to them in a timely fashion. We can also see where they're at and how they're doing. 

    The process that Lumada replaced was arcane. The sentiment among our employees, particularly the warehouse personnel, was that it was punitive. They would say, "I didn't get a bonus this month because the warehouse manager didn't like me." Now we can show them the numbers and say, "You didn't get a bonus because you were slacking off compared to everybody else." It's allowed us to be very transparent in how we're doing these tasks. Previously, that was all done behind the vest. I want people to trust the numbers, and these tools allow me to do that because I can instantly show that the information is correct.

    That is a huge win for us. When we first rolled it out, I spent a third of my time justifying the numbers. Now, I rarely have to do that. It's all there, and they can see it, so they trust what the information is. If something is wrong, it's not a case of "Why is this being computed wrong?" It's more like: "What didn't report?"

    We have 200 stores that communicate to our central hub each night. If one of them doesn't send any data, somebody notices now. That wasn't the case in the past. They're saying, "Was there something wrong with the store?" instead of, "There's something wrong with the data."

    With Lumada's single end-to-end data management, we no longer need some of the other tools that we developed in-house. Before that, everything was in-house. We had a build-versus-buy mentality. It simplified many aspects that we were already doing and made that process quicker. It has made a world of difference. 

    This is primarily anecdotal, but there were times where I'd get an IM from one of the managers saying, "I'm looking at this in the sales meeting and calling out what somebody is saying. I want to make sure that this is what I'm seeing." I made a couple of people mad. Let's say they're no longer working for us, and we'll leave it at that. If you're not making somebody mad, you're not doing BI right. You're not asking the right questions.

    Having a single platform for data management experience is crucial for me. It lets me know when something goes wrong from a data standpoint. I know when a load fails due to bad data and don't need to hunt for it. I've got a status board, so I can say, "Everything looks good this morning." I don't have to dig into it, and that has made my job easier. 

    What's more, I don't waste time arguing about why the numbers on this report don't match the ones on another because it's all coming from the same place. Before, they were coming from various places, and they wouldn't match for whatever reason. Maybe there's some piece of code in one report that isn't being accounted for in the other. Now, they're all coming from the same place. So everything is on the same level.

    What is most valuable?

    I'm a database guy, not a programmer, so Lumada's ability to create low-code pipelines without custom coding is crucial for me. I don't need to do any Java customization. I've had to write SQL scripts and occasionally a Javascript within it, but those are few and far between. I can do everything else within the tool itself. I got into databases because I was sick and tired of getting errors when I compiled something. 

    What needs improvement?

    Some of the scheduling features about Lumada drive me buggy. The one issue that always drives me up the wall is when Daylight Savings Time changes. It doesn't take that into account elegantly. Every time it changes, I have to do something. It's not a big deal, but it's annoying. That's the one issue, but I see the limitation, and it might not be easily solvable. 

    For how long have I used the solution?

    I started working with Lumada long before it was acquired by Hitachi. It's been about 11 years now. I'm the primary person in the company who works with it. A few people know the solution tangentially. Aside from very basic elements, most tasks related to Lumada usually fall in my lap.

    What do I think about the stability of the solution?

    Lumada's stability and performance are pretty good. The limitations I run into are usually with the database that I'm trying to write to rather than read from. The only time I have a real issue is when an incredibly complex query takes 20 minutes to start returning data. It's sitting there going, "All right. Give me something to do." But then again, I've got it running on a machine that's got 64 gigs of memory.

    What do I think about the scalability of the solution?

    Scaling out our processes hasn't been a big deal. We're a relatively small shop with only a couple of production databases. We're more of a regional enterprise, and I haven't had any issues with performance yet. It's always been some other product or solution that has gotten in the way. Lumada can handle anything we throw at it. Every night I run reports on our part ledger. That includes 200 million records, and Lumada can chew through it in about an hour and a half. 

    I know we can extend processing into the Spark realm if we need to. We've thought about that but never really needed it. It's something we keep in our back pocket. Someone suggested trying it out, but it never really got off the ground because other more pressing needs came up. From what I've seen, it'll scale out to whatever I need it to do. Any limitations are in the backend rather than the software. I've done some metrics on it. It's the database that I have to wait on more than the software. It's not doing a whole lot CPU-wise. My limitations are elsewhere, usually.

    Right now, we have about 100 users working with Lumada. About 100 people log in to the system, but probably 200 people get reports from it. Only about 50 use the analysis tools, including the top sales managers and all of the buying group. There are also some analysts from various groups who use it constantly. 

    How are customer service and support?

    I'd give Lumada support a nine out of 10. It has been exceptional historically, but there was a rough patch about a year and a half ago shortly after Hitachi took over. They were in a transition period, but it has been very responsive since. I usually don't need help. When I do, I get a response the same day, and somebody's working on it. I'm not too worried about things going wrong, like an outage. I've never had that happen.

    Sometimes when we do upgrades, and I'm in my test environment, I'll contact them and say, "I ran into this weird issue, and it's not doing what it should. What do you make of it?" They'll tell me, "You got to do this, that, and the other thing." They've been good about it.

    Which solution did I use previously and why did I switch?

    Before Lumada, we had a variety of homegrown solutions. Most of it was centered on our warehouse management system because that was our primary focus. There were also reports within the point of sale system, and the two never crossed paths. Now they're integrated. There was also an analysis tool they had before I came on board. I can't remember the name of it. The company had something, but it didn't do what they thought it would do, and the project fizzled.

    Part of the problem was that they didn't have somebody in-house who understood business intelligence until they brought me on. They were very operationally focused before that. The management was like, "We need more insight into what we're doing and how we're doing it." That was phase two of the big data warehouse push. The management here is relatively conservative in that regard, so they're somewhat slow to say, "Hey. We need to do something along these lines." But when they decide to go, get out of the way because here we come.

    I used a different tool at my previous job called Informatica. Lumada has less of a learning curve for deployment. Lumada was similar enough to Informatica that it's like, "Okay. This makes sense," but there were a few differences. Once I figured out the difference, it made a lot of sense to me. The entire chain of steps Lumada allows you to do is intuitive.

    Informatica was a lot more tedious to use. You had to hook every column up from its source to its target. With Lumada, it's the name that matters and its position. It made aspects a whole lot easier and less tedious. Every so often, it bites me in the butt. If I get a column out of order, it'll let me know I did something wrong. But it's much less error-prone because I don't have to hook every column up from its source to its target anymore. With Informatica, there were times where I spent 20 minutes just sitting there trying not to drool on myself. It was terrible. 

    How was the initial setup?

    Setting up Lumada was pretty straightforward. We just rolled it out and went from proof of concept to live in about a year. I was relatively new to the organization at the time and was still getting a feel for it — knowing where data was and what all these things mean. My experience at a shoe company didn't exactly translate to an auto parts business. I went to classes down in Orlando to learn the product, then we went from there and just tried it. We had a few faux pas here and there, but we knew.

    What was our ROI?

    Lumada has also significantly reduced our ETL development time. It depends on the project, but if someone comes to me with a new data source, I can typically integrate it within a week, whereas it used to take a month. It's a 4-to-1 reduction. It's allowed our IT department to stay lean. I worked at another company with 70 IT people, 50 of which were programmers. My current workplace has 12 people, and six are programmers. The others are UI-type developers, and there are about six database people, including me. We save the equivalent of a full-time employee, so that's anywhere from $50,000 to $75,000 a year.

    What's my experience with pricing, setup cost, and licensing?

    I think Lumada's price is fair compared to some of the others, like BusinessObjects, which was the other solution that I used at my previous job. BusinessObject's price was more reasonable before SAP acquired it. They jacked the price up significantly. Oracle's OBIEE tool was also prohibitively expensive. We felt the value was much greater than the cost, and the value for the money was much better than if we had gone with other solutions.

    Which other solutions did I evaluate?

    We didn't consider other options besides Lumada because we are members of an auto parts trade association, and they were using the Pentaho tool before it was Hitachi to do some ETL tasks. They recommended it, so we started using it. I evaluated a couple of other ones, but they cost more than we were willing to spend to try out this type of solution. Once we figured out what it could do for us, then it's like, "Okay. Now, we can do some real work here."

    What other advice do I have?

    I rate Lumada nine out of 10. The aspect I like about Lumada is its flexibility. I can make it do pretty much whatever I want. It's not perfect, but I haven't run into a tool that is yet. I haven't used every aspect of it, but there's very little that I can't make it do. I haven't run into a scenario where it couldn't handle a challenge we put in front of it. It's been a solid performer for us. I rarely have a problem that is due to Lumada. The issues I have with my loads are never because of the software.

    If you plan to implement Lumada, I recommend going to the classes. Don't be afraid to ask dumb questions of support because many of them used to be consultants. They've all been there, done that. One of the guys I talk to regularly lives about 80 miles to the north of me. I have a rapport with him. They're willing to go above and beyond to make you successful.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    PeerSpot user
    Ryan Ferdon - PeerSpot reviewer
    Senior Data Engineer at Burgiss
    Real User
    Top 10
    Low-code makes development faster than with Python, but there were caching issues
    Pros and Cons
    • "The fact that it's a low-code solution is valuable. It's good for more junior people who may not be as experienced with programming."
    • "If you're working with a larger data set, I'm not so sure it would be the best solution. The larger things got the slower it was."

    What is our primary use case?

    We used it for ETL to transform data from flat files, CSV files, and databases. We used PostgreSQL for the connections, and then we would either import the data into our database if it was coming in from clients, or we would export it to files if clients wanted files or if a vendor needed to import the files into their database.

    How has it helped my organization?

    The biggest benefit is that it's a low-code solution. When you hire junior ETL developers or engineers, who may have a schooling background but no real experience with ETL or coding for ETL, it's a UI-based, low-code solution in which they can make something happen within weeks instead of, potentially, months.

    Because it's low-code, while I could technically have done everything in Python alone, that would definitely have taken longer than using Pentaho. In addition, by being able to standardize pipelines to handle the onboarding process for new clients, development costs were significantly reduced. To put it in perspective, prior to my leading the effort to standardize things, it would typically take about a week to build a feed from start to finish, and sometimes more depending on how complicated it was. With this solution, instead of it taking a week, it was reduced to an afternoon, or about three hours. That was a significant difference.

    Instead of paying a developer a full week's worth of work, which could be $2,500 or more, it cut it down to three hours or about $300. That's a big difference.

    What is most valuable?

    The fact that it's a low-code solution is valuable. It's good for more junior people who may not be as experienced with programming. In our case, we didn't have a huge data set. We had small and medium-sized data sets, so it worked fine.

    The fact that it's open source is also helpful in that, if a junior engineer knows they are going to use it in a job, they can download it themselves, locally, for free, and use test data to learn it.

    My role was to use it to write one feed that could facilitate multiple clients. Given that it was an open-source, free solution, it was pretty robust in what it could do. I could make lookup tables and databases and map different clients, and I could use the same feed for 30 clients or 50 clients. It got the job done for our use case.

    In addition, you can install it wherever you need it. We had installed versions in the cloud and I also had local versions.

    What needs improvement?

    If you're working with a larger data set, I'm not so sure it would be the best solution. The larger things got, the slower it was.

    It was kind of buggy sometimes. And when we ran the flow, it didn't go from a perceived start to end, node by node. Everything kicked off at once. That meant there were times when it would get ahead of itself and a job would fail. That was not because the job was wrong, but because Pentaho decided to go at everything at once, and something would process before it was supposed to. There were nodes you could add to make sure that, before this node kicks off, all these others have processed, but it was a bit tedious. 

    There were also caching issues, and we had to write code to clear the cache every time we opened the program, because the cache would fill up and it wouldn't run. I don't know how hard that would be for them to fix, or if it was fixed in version 10.

    Also, the UI is a bit outdated, but I'm more of a fan of function over how something looks.

    One other thing that would have helped with Pentaho was documentation and support on the internet: how to do things, how to set up. I think there are some sites on how to install it, and Pentaho does have a help repository, but it wasn't always the most useful.

    For how long have I used the solution?

    I used Hitachi Lumada Data Integration (Pentaho) for three years.

    What do I think about the stability of the solution?

    In terms of the stability of the solution, as I noted, I wouldn't use it for large data sets. But for small to midsize companies that are looking for a low-code solution that isn't going to break the budget, it's a great tool for them to use.

    It worked and it was stable enough, once we figured out the little quirks and how to get around them. It mostly handled our production workflows without issue.

    What do I think about the scalability of the solution?

    I think it could scale, but only up to a point. I didn't test it on larger datasets. But after talking to people who have worked on larger datasets, they wouldn't recommend using it, but that is hearsay.

    In my former company, there were about five people in the data engineering department who were using the solution in their roles as ETL and data integration specialists.

    In that company, it's their go-to solution and I think it will work for everything that they need. When I was there, I tried opening pathways to different things, but there were so many feeds already on it, and it worked for what they need, and it's low-code and open source, so I think they'll stick with it. As they gain more clients they'll increase their usage of it.

    How was the initial setup?

    The initial setup wasn't that complicated. You have to set the job environment variables and that was probably the most complicated part, and would be especially so if you're not familiar with it. Otherwise, it was just a matter of downloading the version needed, installing it, and learning how to use the different components. Overall, it was pretty easy and straightforward.

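    As a minimal sketch of that environment setup (the paths, variable names, and use of KETTLE_HOME here are assumptions to check against your PDI version, not the reviewer's actual configuration), a small wrapper can make sure the properties Pentaho expects are in place before anything runs:

        import os
        from pathlib import Path

        # PDI reads kettle.properties from $KETTLE_HOME/.kettle (or ~/.kettle if unset).
        KETTLE_HOME = Path("/etc/pentaho")                      # assumed shared config location
        PROPERTIES = KETTLE_HOME / ".kettle" / "kettle.properties"

        # Hypothetical variables that transformations reference as ${DB_HOST}, ${OUT_DIR}.
        REQUIRED = {"DB_HOST": "db.example.com", "OUT_DIR": "/data/out"}

        os.environ["KETTLE_HOME"] = str(KETTLE_HOME)
        PROPERTIES.parent.mkdir(parents=True, exist_ok=True)

        existing = PROPERTIES.read_text().splitlines() if PROPERTIES.exists() else []
        defined = {line.split("=", 1)[0].strip() for line in existing if "=" in line}

        # Append any missing variables so jobs don't fail on unresolved ${...} references.
        with PROPERTIES.open("a") as f:
            for key, value in REQUIRED.items():
                if key not in defined:
                    f.write(f"{key}={value}\n")
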
    The first time we deployed it, not knowing what we were doing, it took a couple of days, but that was mainly troubleshooting and figuring out what we were doing wrong because we hadn't used it before. After that, it would take maybe 30 minutes or an hour.

    In terms of maintenance for Pentaho, one developer per feed is what is typically assigned. It will depend on the workflow of the company and how many feeds are needed. In our case there were five people involved.

    What was our ROI?

    It saved us a lot of money. Given that it's open source, the amount of time over the three years that I used it, and the fact that they were using it several years prior, a lot of money was definitely saved by using Pentaho versus something else.

    What's my experience with pricing, setup cost, and licensing?

    If a company is looking for an ETL solution and wants to integrate it with their tech stack but doesn't want to spend a bunch of money, Pentaho is a good solution. SSIS cores were $10,000 a piece. Although I don't know what they cost nowadays, they're expensive. 

    Pentaho is a nice option without having to pay an arm and a leg. We even had a complicated data set and Pentaho was able to handle pretty much every type of scenario, if we thought about it creatively enough. I would recommend it for a company in that position.

    Which other solutions did I evaluate?

    While the capabilities of Pentaho are good enough for light work, I've started using Alteryx Designer, and it is so much more robust in everything that you can do in real time. I've also used SSIS.

    When you run something in Pentaho, you can click on it to see the output of each one, but it's hard to really change anything. For example, if I were to query data from a database and put it into a "select," if I wanted to reorganize within the select based on something like the first initial of someone's name, it provided that option. But when I would do it, sometimes it would throw an error and I'd have to run the feed again to see it.

    The nodes, or the components, in Pentaho can probably do about 70 percent of what you can do in Alteryx. Don't get me wrong, Pentaho worked for what we needed it for, with just a few quirks. But as a data engineer, I'm always interested in and excited to work with new technologies that may offer different benefits. In this case, one of the benefits is that each node in Alteryx has many more capabilities in real time. I can look at the data that's coming into the node and the data that's going out. There was a way to do that in Pentaho, if you right-clicked and looked, but it would tell you the fields that were coming in and out and not necessarily the data. It's nice to be able to troubleshoot, on the spot, node-by-node, if you're having an issue. You can do that easily with Alteryx.

    In addition to being able to look at data coming in and out of the node, you can also sort it easily and filter it within each data node in Alteryx, and that is something you can't do in Pentaho.

    Another cool thing with Alteryx, although it's a very small difference, is that you don't have to save the workflow before you run it. Pentaho forces you to do that. Of course, it's always good to save.

    What other advice do I have?

    A good thing about Pentaho is that it's not that hard to learn, from an ETL perspective. The way Pentaho lays things out is pretty intuitive: your input (flat file, CSV, or database) in the panel, and then the transformation nodes.

    It was a good baseline and a good open-source tool to use to learn ETL. It's good to have exposure to multiple tools because every company has different needs and, depending on their needs, it would be a different recommendation.

    The lessons I learned using it: Make sure you clear the cache when you open the program. Also, if there are any critical points in your flow that are dependent upon previous nodes, make sure that you put blocking steps in. Make sure you also set up the job environment variables correctly, so that Pentaho runs.

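    For the cache point, a hedged sketch of one way to do it: this assumes the cache the reviewer means is PDI's Karaf cache under system/karaf/caches, and the install path is hypothetical, so adjust both to whatever your installation actually accumulates.

        import shutil
        import subprocess
        from pathlib import Path

        PDI_HOME = Path("/opt/pentaho/data-integration")         # assumed install path
        CACHE_DIR = PDI_HOME / "system" / "karaf" / "caches"     # assumed cache location

        # Remove the accumulated cache so the designer starts cleanly.
        if CACHE_DIR.exists():
            shutil.rmtree(CACHE_DIR)

        # Launch the Spoon designer (spoon.sh on Linux/macOS, Spoon.bat on Windows).
        subprocess.run([str(PDI_HOME / "spoon.sh")], check=False)
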
    It worked for what we did but, personally, I wouldn't use it. In the new company I'm working for, we are using large financial data sets and I'm not so sure it could handle that. I know there's an Enterprise version, but I didn't use that.

    The solution can handle ingestion through to export, but you still have to have a batch or Python script to run it with an automation process. I don't know if the Lumada version has something different, but with what I was using, you were simply building the pipeline, but the pipeline outside of the program had to be scheduled and run, and we had other tools to check that the output was as expected.

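    As an illustration of that external scheduling and checking (a sketch only; the kitchen.sh path, job file, and output location are hypothetical), a wrapper script run from cron or Task Scheduler can launch the job and verify its output:

        import subprocess
        import sys
        from pathlib import Path

        KITCHEN = "/opt/pentaho/data-integration/kitchen.sh"   # assumed install path
        JOB = "/etl/jobs/client_export.kjb"                    # hypothetical job file
        EXPECTED_OUTPUT = Path("/data/out/client_export.csv")  # hypothetical export

        # Run the job; Kitchen exits non-zero if any step in the job fails.
        result = subprocess.run([KITCHEN, f"-file={JOB}", "-level=Basic"])
        if result.returncode != 0:
            sys.exit(f"Job failed with exit code {result.returncode}")

        # Minimal sanity check on the output, standing in for a separate monitoring tool.
        if not EXPECTED_OUTPUT.exists() or EXPECTED_OUTPUT.stat().st_size == 0:
            sys.exit("Job reported success but the export file is missing or empty")

        print("Export completed and verified")
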
    We used version 7 for a while and we were reluctant to upgrade to version 9 because we had an 834 configuration, meaning a government standardized feed that our developer spent two years building. There was an issue whenever we tried to run those feeds on version 9, so we were reluctant to upgrade because things were working on 7. We ended up finding out that it didn't take much work for us to fix the problem that we were having with version 9 and, eventually, we moved to it. With every version upgrade of anything, there are going to be pros and cons.

    Depending on what someone needs it for, if it's a small project and they don't want to pay for an enterprise solution, I would recommend it and give it a nine out of 10. The finicky things were a little frustrating, but the fact that it's free, can be deployed easily, and that it can fulfill a lot of things on a small scale, are plusses. If it were for a larger company that needed an enterprise solution, I wouldn't recommend it. In that case, it would be one out of 10.

    For a smaller company or one with a smaller budget, a company that doesn't have highly complex ETL needs, Pentaho is definitely a great option. If a company has the budget and has really specific needs and large data sets, I would suggest looking elsewhere.

    Disclosure: I am a real user, and this review is based on my own experience and opinions.
    PeerSpot user
    Systems Analyst at a university with 5,001-10,000 employees
    Real User
    Top 20
    Reuse of ETLs with metadata injection saves us development time, but the reporting side needs notable work
    Pros and Cons
    • "The fact that it enables us to leverage metadata to automate data pipeline templates and reuse them is definitely one of the features that we like the best. The metadata injection is helpful because it reduces the need to create and maintain additional ETLs. If we didn't have that feature, we would have lots of duplicated ETLs that we would have to create and maintain. The data pipeline templates have definitely been helpful when looking at productivity and costs."
    • "The reporting definitely needs improvement. There are a lot of general, basic features that it doesn't have. A simple feature you would expect a reporting tool to have is the ability to search the repository for a report. It doesn't even have that capability. That's been a feature that we've been asking for since the beginning and it hasn't been implemented yet."

    What is our primary use case?

    We use it as a data warehouse between our HR system and our student system, because we don't have an application that sits in between them. It's a data warehouse that we do our reporting from.

    We also have integrations to other, isolated apps within the university that we gather data from. We use it to bring that into our data warehouse as well.

    How has it helped my organization?

    Lumada Data Integration definitely helps with decision-making for our deans and upper executives. They are the ones who use the product the most to make their decisions. The data warehouse is the only source of information that's available for them to use, and to create that data warehouse we had to use this product.

    And it has absolutely reduced our ETL development time. The fact that we're able to reuse some of the ETLs with the metadata injection saves us time and costs. It also makes it a pretty quick process for our developers to learn and pick up ETLs from each other. It's definitely easy for us to transition ETLs from one developer to another. The ETL functionality satisfies 95 percent of all our needs. 

    What is most valuable?

    The ETL is definitely an awesome feature of the product. It's very easy and quick to use. Once you understand the way it works it's pretty robust.

    Lumada Data Integration requires minimal coding. You can do more complex coding if you want to, because it has a scripts option that you can add as a feature, but we haven't found a need to do that yet. We just use what's available, the steps that they have, and that is sufficient for our needs at this point. It makes it easier for other developers to look at the things that we have developed and to understand them quicker, whereas if you have complex coding it's harder to hand off to other people. Being able to transition something to another developer, and having that person pick it up quicker than if there were custom scripting, is an advantage.

    In addition, the solution's ability to quickly and effectively solve issues we've brought up has been great. We've been able to use all the available features.

    Among them is the ability to develop and deploy data pipeline templates once and reuse them. The fact that it enables us to leverage metadata to automate data pipeline templates and reuse them is definitely one of the features that we like the best. The metadata injection is helpful because it reduces the need to create and maintain additional ETLs. If we didn't have that feature, we would have lots of duplicated ETLs that we would have to create and maintain. The data pipeline templates have definitely been helpful when looking at productivity and costs. The automation of data pipeline templates has also been helpful in scaling the onboarding of data.

    What needs improvement?

    The transition to the web-based solution has taken a little longer and been more tedious than we would like, and it has taken development effort away from the reporting side of the tool. They have a reporting tool called Pentaho Business Analytics that does all the report creation based on the data integration tool. There are a lot of features missing in that product because they've allocated a lot of their resources to fixing the data integration to make it more web-based. We would like them to focus more on the user interface for the reporting.

    The reporting definitely needs improvement. There are a lot of general, basic features that it doesn't have. A simple feature you would expect a reporting tool to have is the ability to search the repository for a report. It doesn't even have that capability. That's been a feature that we've been asking for since the beginning and it hasn't been implemented yet. We have between 500 and 800 reports in our system now. We've had to maintain an external spreadsheet with IDs to identify the location of all of those reports, instead of having that built into the system. It's been frustrating for us that they can't just build a simple search feature into the product to search for report names. It needs to be more in line with other reporting tools, like Tableau. Tableau has a lot more features and functions.
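    For what it's worth, our manual workaround amounts to searching that external spreadsheet ourselves; exported to CSV, it is about as involved as this hypothetical Python snippet (the file name and column headers are made up for illustration):

```python
# Hypothetical stand-in for the missing repository search: scan the external
# report index (exported to CSV) for a report name.
import csv
import sys

def find_reports(index_file, needle):
    with open(index_file, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Assumed columns: report_id, report_name, folder_path
            if needle.lower() in row["report_name"].lower():
                yield row

if __name__ == "__main__":
    for hit in find_reports("report_index.csv", sys.argv[1]):
        print(hit["report_id"], hit["folder_path"], hit["report_name"])
```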

    Because the reporting is lacking, only the deans and above are using it. It could be used more, and we'd like it to be used more.

    Also, while the solution provides us with a single, end-to-end data management experience from ingestion to insights, it doesn't give us a full history of where the data is coming from. If we change a field, we can't trace it through from the reporting back to the ETL field. Unfortunately, it's a manual process for us. Hitachi has a new product to do that, which searches all the fields, documents, and files to get your pipeline mapped, but we haven't bought that product yet.

    For how long have I used the solution?

    I've been using Lumada Data Integration since version 4.2. We're now on version 9.1.

    What do I think about the stability of the solution?

    The stability has been great. Other than for upgrades, it has been pretty stable.

    What do I think about the scalability of the solution?

    The scalability is great too. We've been able to expand the current system and add a lot of customizations to it.

    For maintenance, surprisingly, I'm the only one in our organization who handles it.

    How are customer service and support?

    The only issue that we've had is that it takes a little longer than we would like for support to resolve something, although things do eventually get incorporated. They're very quick to respond to an issue, but the fixing of the issue is not as quick.

    For example, a few versions ago, when we upgraded it, we found that the upgrade caused a whole bunch of issues with the Oracle data types and the way the ETL was working with them. It wasn't transforming to the data types properly, the way we were expecting it to. In the previous version that we were using it was working fine, but the upgrade caused the issue, and it took them a while to fix that.

    How would you rate customer service and support?

    Neutral

    Which solution did I use previously and why did I switch?

    We didn't have another tool. This is the only tool we have used to create the data warehouse between the two systems. When we started looking at solutions, this one was great because it was open source and Java-based, and it had a Community Edition. But we actually purchased the Enterprise Edition.

    How was the initial setup?

    I came in after it was purchased and after the first deployment.

    What's my experience with pricing, setup cost, and licensing?

    We renew our license every two years. When I spoke to the project manager, he indicated that the pricing has been going up every two years. It's going to reach a point where, eventually, we're going to have to look at alternative solutions because of the price.

    When we first started with it, it was much cheaper. It has gone up drastically, especially since Hitachi bought out Pentaho. When they bought it, the price shot up. They said the increase is because of all the improvements they put into the product and the support that they're providing. From our point of view, their improvements are mostly on the data integration part of it, instead of the reporting part of it, and we aren't particularly happy with that.

    Which other solutions did I evaluate?

    I've used Tableau and other reporting tools, but Tableau sticks out because the reporting tool is much nicer. Tableau has its drawbacks with the ETL, because you can only use Tableau datasets. You have to get data into a Tableau file dataset and then the ETL part of it is stuck in Tableau forever.

    If we could use the Pentaho ETL and the Tableau reporting we'd be happy campers.

    What other advice do I have?

    It's a great product. The ETL part of the product is really easy to pick up and use. It has a graphical interface with the ability to be more complex via scripting and features that you can add.

    When looking at Hitachi Vantara's roadmap, the ability to upgrade more easily is one element of it that is important to us. Also, they're going more towards web-based solutions, instead of having local client development tools. If it does go on the web, and it works the same way it works on the client, that would be a nice feature. Currently, because we have these local client development tools, we have to have a VM client for our developers to use, and that makes it a little more tricky. Whereas if they put it on the web, then all our developers would be able to use any desktop and access the web for development.

    When it comes to the query performance of the solution on large datasets, we haven't had any issues with it. We have one table in our data warehouse that has about 120 million rows and we haven't had any performance issues.

    The solution gives you the flexibility to deploy it in any environment, whether on-prem or in the cloud. With our particular implementation, we've done a lot of customizations. We have special things that we bolted onto the product, so it's not as easy to put it onto the cloud for us. All of our customizations and bolt-ons end up costing us more because they make upgrades more difficult and time-consuming. We don't use an automated upgrade process. It's manual. We have to do a full reinstall and then apply all our bolt-ons and make sure it still works. If we could automate that process it would certainly reduce our costs.
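    If we ever do automate it, I expect it would look something like the hypothetical Python sketch below: run a fresh install, re-apply each bolt-on from a manifest, then run a smoke test. All paths, commands, and URLs in it are placeholder assumptions, not our actual environment.

```python
# Hypothetical sketch of automating our manual upgrade: fresh install, then
# re-apply bolt-on customizations, then a basic smoke test.
import shutil
import subprocess
from pathlib import Path

INSTALL_DIR = Path("/opt/pentaho")           # placeholder install location
BOLTON_DIR = Path("/opt/upgrade/boltons")    # placeholder bundle of bolt-ons

def install_new_version(installer: Path) -> None:
    # Placeholder: run the vendor installer against the install directory.
    subprocess.run([str(installer), "--target", str(INSTALL_DIR)], check=True)

def apply_boltons() -> None:
    # Each bolt-on is kept as a directory tree mirrored over the install.
    for bolton in sorted(BOLTON_DIR.iterdir()):
        shutil.copytree(bolton, INSTALL_DIR, dirs_exist_ok=True)

def smoke_test() -> None:
    # Placeholder check that the server is up and answering.
    subprocess.run(["curl", "-sf", "http://localhost:8080/pentaho/"], check=True)

if __name__ == "__main__":
    install_new_version(Path("/opt/upgrade/installer.sh"))
    apply_boltons()
    smoke_test()
```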

    In terms of updating to version 9.2, which is the latest version, we're going to look into it next year and see what level of effort is required and determine how it impacts our current system. They release a new update about every six months, and there is a major release every year or two, so it's quite a fast schedule for updates.

    Overall, I would rate our satisfaction with our decision to purchase Hitachi products as a seven out of 10. I would definitely recommend the data integration tool but I wouldn't recommend the reporting tool.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    PeerSpot user
    Dale Bloom - PeerSpot reviewer
    Credit Risk Analytics Manager at MarketAxess
    Real User
    Top 10
    Integrates easily, significantly reduces our development time, and allows us to put as much code as we want
    Pros and Cons
    • "I absolutely love Hitachi. I'm one of the forefront supporters of Hitachi for my firm. It's so easy to integrate within our environments. In terms of being able to quickly build ETL jobs, transform, and then automate them, it's really easy to integrate throughout for data analytics."
    • "In the Community edition, it would be nice to have more modules that allow you to code directly within the application. It could have R or Python completely integrated into it, but this could also be because I'm using an older version."

    What is our primary use case?

    The use case is for data ETL on our various data repositories. We use it to aggregate and transform data for visualization purposes for our upper management.

    Currently, I am using PDI locally on my laptop, but we are in the process of moving it off my machine. We have purchased the Enterprise edition and have licenses, and we are just working with our infrastructure team to get that set up on a server.

    We haven't yet launched the Enterprise edition, so I've had very minimal touch with Lumada, but I did have an overview with one of the engineers as to how to use the customer portal in terms of learning documentation. So, the documentation and support are basically the two main areas that I've been using it for. I haven't piped any data or anything through it. I've logged in a couple of times to the customer portal, and I've pretty much been using it as support functionality. I have been submitting requests to understand more about how to get everything to be working for the Enterprise edition. So, I have been using the Lumada customer portal mostly for Pentaho Data Integration.

    How has it helped my organization?

    When we get a question from our CEO that needs a response and that requires a little bit of legwork of pulling in from various market data, our own in-house repositories, and everything else, it allows me to arrive at the solutions much faster than having to do it through scripting in Python, coding, or anything else. I use multiple tools within my toolkit. I'm pretty heavy on Python, but I find that I can do quite a bit of pre-transformation of the data within the PDI Spoon application itself rather than having to do everything through coding in Python.

    It has significantly reduced our ETL development time. I can't really quantify the hours, but it's a no-brainer for me for just pumping in things. If I have a simple question to ascertain, I can pull up and create any type of job or transform to easily get the solution within minutes, as opposed to however many hours of coding it would take. My estimate is that per week, I would be spending about 75% of my time in coding external to the application, whereas, with the application itself, I can do things within a fraction of that. So, it has reduced my time from 75% to about 5%. In terms of the cost of full-time employee coding and everything, the savings would also roughly be the same, which is from 75% to 5% per week. There is also a broader impact on other colleagues within my team. Currently, their processes are fairly manual, such as Excel-based, so the time savings are carried over to them as well.

    What is most valuable?

    I'm at the early stages with Lumada, and I have been using the documentation quite a bit. The support has definitely been critical right now in terms of trying to find out more about the architectural elements that need to go in for pushing the Enterprise edition.

    I absolutely love Hitachi. I'm one of the forefront supporters of Hitachi for my firm. It's so easy to integrate within our environments. In terms of being able to quickly build ETL jobs, transform, and then automate them, it's really easy to integrate throughout for data analytics. 

    I also appreciate the fact that it's not one of the low-code/no-code solutions. You can put as much JavaScript or other code into it as you want, and that makes it a really powerful tool.

    What needs improvement?

    I haven't been able to broach all the functionality of the Enterprise edition because it hasn't been integrated into our server. We're still building out the server, app server, and repository to support it.

    In the Community edition, it would be nice to have more modules that allow you to code directly within the application. It could have R or Python completely integrated into it, but this could also be because I'm using an older version.

    For how long have I used the solution?

    I have been using it here for about two months. 

    What do I think about the stability of the solution?

    I haven't had any problems with stability. Right now, for the implementation of the Enterprise edition, we're trying to make sure that it's highly available in case anything goes down, and we have proper safety nets in place, but personally, I haven't found any issues.

    What do I think about the scalability of the solution?

    It seems highly scalable. I've used the product in other firms, and we've managed to work pretty coherently pushing our changes for code, revisions, and everything else to Git and things like that.

    In terms of users, currently, in my firm, I'm the only user, but the intention is to push it globally for all of our users to be able to use it. 

    We would like to be able to support other teams and other departments within the organization. Currently, this is being used only for our credit risk team, but in general, within risk, we have many departments such as operational risk, enterprise risk, market risk, and credit risk. I'm bridging all of them right now. However, with other teams that have expressed an interest, it also will include our settlements team and potentially even our research team and FP&A.

    How are customer service and support?

    So far, it's been pretty good. I would rate them an eight out of 10. 

    People are fairly responsive initially to saying, "Okay, yes, we have this on our radar. Coming back." Sometimes, it might take a little bit longer for some responses, but it's still very good, and the quality is a 10 out of 10.

    How would you rate customer service and support?

    Positive

    Which solution did I use previously and why did I switch?

    At my current firm, we weren't using anything in this team. I just came in, and I knew I wanted to use this product. I had used it quite heavily at my previous firm, and it was just very easy. Even the folks who did not have prior coding experience or data ETL experience could fairly quickly learn its semantics or the ways to work with it. So, I figured that it would be a great product to push forward.

    Other teams in my firm were using low-code or no-code solutions, but I just can't stand their interfaces. It's rather limited in terms of even viewing what's on the screen and what you have. I appreciate the way you can debug very quickly within PDI.

    How was the initial setup?

    It was pretty straightforward for me. I had no problem with configuring it. For my personal use of the product, it took an hour of my time to get it onto my machine. For the Enterprise edition, the deployment is still going on, but it's mainly because we don't have many people on our infrastructure team to help. They have multiple ongoing projects. 

    The implementation strategy for my personal use case was fairly straightforward. It involved getting the Community edition and configuring it so that I can set up the pipelines for connecting to my data sources and databases and then output to a file share drive for now. All our databases are fairly read-only on our side. In terms of the implementation strategy for the Enterprise edition, we haven't gotten to the stage of completing it, but it'll work somewhat similarly. It's just that the repositories, instead of them being folder repositories, are going to be database-driven, and any code is going to be pushed to the database repository.

    What about the implementation team?

    We are not using any integrator or consultant for this. For its deployment and maintenance, we're rather limited in terms of the staff. We have one infrastructure person and me. I'm going to be in charge of maintaining it for the time being until I can increase my team.

    What was our ROI?

    When you can get things done much faster and free up people's time, it's a no-brainer.

    When I came into the firm, I was using the Community edition, which is the freeware version. Because the Enterprise edition costs something, it has actually increased our costs, but as a whole, in terms of operational ability and time savings for the rest of my team, the output from PDI and everything else has only increased the value of using this product.

    What's my experience with pricing, setup cost, and licensing?

    The pricing has been pretty good. I'm used to using everything open-source or freeware-based. I understand that organizations need to make sure that the solutions are secure, and that's basically where I hit a roadblock in my current organization. They needed to ensure that we had a license and we had a secure way of accessing it so that no outside parties could get access to our data, but in terms of pricing, considering how much other teams are spending on cloud solutions or even their existing solutions, its price point is pretty good.

    At this time, there are no additional costs. We just have the licensing fees.

    What other advice do I have?

    If you don't have the comfort level for the architectural build-out, you can definitely opt for the white-glove treatment, at an additional cost of about 50,000, to help with the integration and implementation effort. We chose not to go that route. Therefore, we're using support for any of the fine-tuning questions about making it highly available and other things.

    I have not used Lumada for creating pipelines. I'm using PDI to help with our data pipelines. Similarly, I am not using its ability to develop and deploy data pipeline templates at this time, and I also haven't used it for single end-to-end data management from ingestion to insight.

    The biggest lesson that I have learned from using this solution is that the order of operations is critical. Other than that, it has been an absolute treat to use.

    I've been espousing this product to everybody. I would rate it a 10 out of 10.

    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    PeerSpot user
    System Engineer at a tech services company with 11-50 employees
    Real User
    Top 20
    Leaderboard
    Enterprise Edition pricing and reduced Community Edition functionality are making us look elsewhere
    Pros and Cons
    • "We also haven't had to create any custom Java code. Almost everywhere it's SQL, so it's done in the pipeline and the configuration. That means you can offload the work to people who, while they are not less experienced, are less technical when it comes to logic."
    • "The support for the Enterprise Edition is okay, but what they have done in the last three or four years is move more and more things to that edition. The result is that they are breaking the Community Edition. That's what our impression is."

    What is our primary use case?

    We use it for two major purposes. Most of the time it is for ETL of data. And based on the loaded and converted data, we are generating reports out of it. A small part of that, the pivot tables and the like, are also on the web interface, which is the more interactive part. But about 80 percent of our developers' work is on the background processes for running and transforming and changing data.

    How has it helped my organization?

    Before, a lot of manual work had to be done, work that isn't done anymore. We have also given additional reports to the end-users and, based upon them, they have to take some action. Based on the feedback of the users, some of the data cleaning tasks that were done manually have been automated. It has also given us a fast response to new data that is introduced into the organization.

    Using the solution we were able to reduce our ETL deployment time by between 10 and 20 percent. And when it comes to personnel costs, we have gained 10 percent.

    What is most valuable?

    The graphical user interface is quite okay. That's the most important feature. In addition, the different types of stores and data formats that can be accessed and transferred are an important component.

    We also haven't had to create any custom Java code. Almost everywhere it's SQL, so it's done in the pipeline and the configuration. That means you can offload the work to people who, while they are not less experienced, are less technical when it comes to logic. It's more about the business logic and less about the programming logic and that's really important.

    Another important feature is that you can deploy it in any environment, whether it's on-premises or cloud, because you can reuse your steps. When it comes to adding to your data processing capacity dynamically that's key because when you have new workflows you have to test them. When you have to do it on a different environment, like your production environment, it's really important.

    What needs improvement?

    I would like to see better support from one version to the next, and all the more so if there are third-party elements that you are using. That's one of the differences between the Community Edition and the Enterprise Edition. 

    In addition to better integration with third-party tools, what we have seen is that some of the tools just break from one version to the next and aren't supported anymore in the Community Edition. What is behind that is not really clear to us, but the result is that we can't migrate, or we have to migrate to other parts. That's the most inconvenient part of the tool.

    We need to test to see if all our third-party plugins are still available in a new version. That's one of the reasons we decided we would move from the tool to the completely open-source version for the ETL part. That's one of the results of the migration hassle we have had every time.

    The support for the Enterprise Edition is okay, but what they have done in the last three or four years is move more and more things to that edition. The result is that they are breaking the Community Edition. That's what our impression is.

    The Enterprise Edition is okay, and there is a clear path for it. You will not use a lot of external plugins with it because, with every new version, a lot of the most popular plugins are transferred to the Enterprise Edition. But the Community Edition is almost not supported anymore. You shouldn't start in the Community Edition because, really early on, you will have to move to the Enterprise Edition. Before, you could live with and use the Community Edition for a longer time.

    For how long have I used the solution?

    I have been working with Hitachi Lumada Data Integration for seven or eight years.

    What do I think about the stability of the solution?

    The stability is okay. The transition to Hitachi ownership gave us two years of hell, but now it's better.

    What do I think about the scalability of the solution?

    At the scale we are using it, the solution is sufficient. The scalability is good, but we don't have that big of a data set. We have a couple of billion data records involved in the integration. 

    We have it in one location across different departments with an outside disaster recovery location. It's on a cluster of VMs and running on Linux. The backend data store is PostgreSQL.

    Maybe our design wasn't quite optimal for reloading the billions of records every night, but that's probably not due to the product but to the migration. The migration should have been done in a bit of a different way.

    How are customer service and support?

    I had contact with their commercial side and with the technical side for the setup and demos, but not after we implemented it. That is due to the fact that the documentation and the external consultant gave us a lot of information about it.

    Which solution did I use previously and why did I switch?

    We came from the Microsoft environment to Hitachi, but that was 10 years back. We switched due to the licensing costs and because there wasn't really good support for the PostgreSQL database.

    Now, I think the Microsoft environment isn't that bad, and there is also better support for open-source databases.

    How was the initial setup?

    I was involved in the initial migration from Microsoft to Hitachi. It was rather straightforward, not too complex. Granted, it was a new toolset, but that is the same with every new toolset. The learning curve wasn't too steep.

    The maintenance effort is not significant. From time to time we have an error that just pops up without our having any idea where it comes from. And then, the next day, it's gone. We get that error something like three times a year. Nobody cares about it or is looking into the details of it. 

    The migrations from one version to the next that we did were all rather simple. During that process, users don't have it available for a day, but they can live with that. The migration was done over a weekend and by the following Monday, everything was up and running again.

    What about the implementation team?

    We had some external help from someone who knows the product and had already had some experience with implementing the tool.

    What was our ROI?

    In terms of ROI, over the years it was a good step to make the move to Hitachi. Now, I don't think it would be. Now, it would be a different story.

    What's my experience with pricing, setup cost, and licensing?

    We are using the Community Edition. We have been trying to use and sell the Enterprise version, but that hasn't been possible due to the budget required for it.

    Which other solutions did I evaluate?

    When we made the choice, it was between Microsoft, Hitachi, and Cognos. The deciding factor in going with Hitachi was its better support for open-source databases and data stores. Also, the functionality of the Community version was what was needed by most of our customers.

    What other advice do I have?

    Our experience with the query performance of Lumada on large data sets is that Lumada is not what determines performance. Most of the time, the performance comes from the database or the data store underneath Lumada. Depending on how big your data set is, you have to change or optimize your data store and then you can work with large data sets.

    The fine-tuning of the database that is done outside of Lumada is okay because a tool can't provide every insight into every type of data store or dataset. If you are looking into optimization, you have to use your data store optimization tools. Hitachi isn't designed for that, and we were not expecting to have that.
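    As a small, hypothetical example of what that database-side tuning looks like for us on PostgreSQL (the table and column names are invented), it is usually just adding the index the nightly load actually needs:

```python
# Hypothetical example of tuning done outside of Lumada: create the index
# that the nightly load filters and joins on, directly in PostgreSQL.
import psycopg2  # assumes the psycopg2 driver is available

DDL = """
CREATE INDEX IF NOT EXISTS idx_fact_records_load_date
    ON warehouse.fact_records (load_date, source_system);
"""

with psycopg2.connect("dbname=warehouse user=etl password=secret host=dbhost") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)
    conn.commit()
```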

    I'm not really that impressed with Hitachi's ability to quickly and effectively solve issues we have brought up, but it's not that bad either. It's halfway, not that good and not that bad.

    Overall, our Hitachi solution was quite good, but over the last couple of years, we have been trying to move away from the product due to a number of things. One of them is the price. It's really expensive. And the other is that more and more of what used to be part of the Community Edition functionality is moving to the Enterprise Edition. The latter is okay and its functions are okay, but then we are back to the price. Some of our customers don't have the deeper pockets that Hitachi is aiming for.

    Before, it was more likely that I would recommend Hitachi Vantara to a colleague. But now, if you are starting out in a new environment, you should move to other solutions. If you have the money for the Enterprise Edition, then my likelihood of recommending it, on a scale of one to 10, would be a seven. Otherwise, it would be a one out of 10.

    If you are going with Hitachi, go for the Enterprise version or stay away from Hitachi.

    It's also really important to think in great detail about your loading process at the start. Make sure that is designed correctly. That's not directly related to the tool itself, but it's more about using the tool and how the loads are transferred.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: I am a real user, and this review is based on my own experience and opinions.
    PeerSpot user
    Solution Integration Consultant II at a tech vendor with 201-500 employees
    Consultant
    Top 20
    Reduces the effort required to build sophisticated ETLs
    Pros and Cons
    • "We use Lumada’s ability to develop and deploy data pipeline templates once and reuse them. This is very important. When the entire pipeline is automated, we do not have any issues in respect to deployment of code or with code working in one environment but not working in another environment. We have saved a lot of time and effort from that perspective because it is easy to build ETL pipelines."
    • "It could be better integrated with programming languages, like Python and R. Right now, if I want to run a Python code on one of my ETLs, it is a bit difficult to do. It would be great if we have some modules where we could code directly in a Python language. We don't really have a way to run Python code natively."

    What is our primary use case?

    My work primarily revolves around data migration and data integration for different products. I have used it at different companies, but for most of our use cases, we use it to integrate all the data that needs to flow into our product. We also have outbound flows from our product when we need to send data to various integration points. We use this product extensively to build ETLs for those use cases.

    We are developing ETLs for the inbound data into the product as well as outbound to various integration points. Also, we have a number of core ETLs written on this platform to enhance our product.

    We have two different modes that we offer: one is on-premises and the other is on the cloud. On the cloud, we have an EC2 instance on AWS; we have installed the product on that EC2 instance and use it as the ETL server. We also have another server for the application where the product is installed.

    We use version 8.3 in the production environment, but in the dev environment, we use version 9 and onwards.

    How has it helped my organization?

    We have been able to reduce the effort required to build sophisticated ETLs. Also, we now are in the migration phase from an on-prem product to a cloud-native application. 

    We use Lumada’s ability to develop and deploy data pipeline templates once and reuse them. This is very important. When the entire pipeline is automated, we do not have any issues in respect to deployment of code or with code working in one environment but not working in another environment. We have saved a lot of time and effort from that perspective because it is easy to build ETL pipelines.

    What is most valuable?

    The metadata injection feature is the most valuable because we have used it extensively to build frameworks, where we have used it to dynamically generate code based on different configurations. If you want to make a change at all, you do not need to touch the actual code. You just need to make some configuration changes and the framework will dynamically generate code for that as per your configuration. 
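    One way to see what that buys us: the same template transformation can be launched over and over with nothing but different parameters. The sketch below drives PDI's Pan launcher from Python; the install path, .ktr file, and parameter names are assumptions for illustration, not our real framework.

```python
# Hypothetical sketch: run one reusable .ktr with different parameter sets,
# so a configuration change (not a code change) drives each load.
import subprocess

PAN = "/opt/pentaho/data-integration/pan.sh"   # placeholder install path
TEMPLATE = "/etl/templates/generic_load.ktr"   # placeholder template transform

RUNS = [
    {"SOURCE_TABLE": "crm.accounts", "TARGET_TABLE": "stg.accounts"},
    {"SOURCE_TABLE": "crm.contacts", "TARGET_TABLE": "stg.contacts"},
]

for params in RUNS:
    cmd = [PAN, f"-file={TEMPLATE}", "-level=Basic"]
    cmd += [f"-param:{name}={value}" for name, value in params.items()]
    subprocess.run(cmd, check=True)  # same template, different configuration
```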

    We have a UI where we can create our ETL pipelines as needed, which is a key advantage for us. This is very important because it reduces the time to develop for a given project. When you need to build the whole thing using code, you need to do multiple rounds of testing. Therefore, it helps us to save some effort on the QA side.

    Hitachi Vantara's roadmap has a pretty good list of features that they have been releasing with every new version. For instance, in version 9, they have included metadata injection for some of the steps. The most important elements of this roadmap to our organization’s strategy are the data-driven approach that this product is taking and the fact that we have a very low-code platform. Combining these two is what gives us the flexibility to utilize this software to enhance our product.

    What needs improvement?

    It could be better integrated with programming languages, like Python and R. Right now, if I want to run a Python code on one of my ETLs, it is a bit difficult to do. It would be great if we have some modules where we could code directly in a Python language. We don't really have a way to run Python code natively. 

    For how long have I used the solution?

    I have been working with this tool for five to six years.

    What do I think about the stability of the solution?

    They are making it a lot more stable. Earlier, stability used to be an issue when it was not with Hitachi. Now, we don't see those kinds of issues or bugs within the platform because it has become far more stable. Also, we see a lot of new big data features, such as connecting to the cloud.

    What do I think about the scalability of the solution?

    Lumada is flexible to deploy in any environment, whether on-premises or the cloud, which is very important. When we are processing data in batches on certain days, e.g., at the end of the week or month, we might have more data and need more processing power or RAM. However, most times, there might be very minimal usage of that CPU power. In that way, the solution has helped us to dynamically scale up, then scale down when we see that we have more data that we need to process.

    The scalability is another key advantage of this product versus some of the others in the market since we can tweak and modify a number of parameters. We are really impressed with the scalability.

    We have close to 80 people who are using this product actively. Their roles go all the way from junior developers to support engineers. We also have people who have very little coding knowledge and are more into the management side of things utilizing this tool.

    How are customer service and support?

    I haven't been part of any technical support discussions with Hitachi.

    Which solution did I use previously and why did I switch?

    We are very satisfied with our decision to purchase Hitachi's product. Previously, we were using another ETL service that had a number of limitations. It was not a modern ETL service at all. For anything, we had to rely on another third-party software. Then, with Hitachi Lumada, we don't have to do that. In that way, we are really satisfied with the orchestration or cloud-native steps that they offer. We are really happy on those fronts.

    We were using something called Actian Services, which had fewer features and ended up costing more than the Enterprise edition of Pentaho.

    We could not do a number of things on Actian. For instance, we were unable to call other APIs or connect to an S3 bucket. It was not a very modern solution. Whereas, with Pentaho, we could do all these things as well as have great marketplaces where we could find various modules and third-party plugins. Those features were simply not there in the other tool.

    How was the initial setup?

    The initial setup was pretty straightforward. 

    What about the implementation team?

    We did not have any issues configuring it, even in my local machine. For the enterprise edition, we have a separate infrastructure team doing that. However, for at least the community edition, the deployment is pretty straightforward.

    What was our ROI?

    We have seen at least 30% savings in terms of effort. That has helped us to price our service and products more aggressively in the market, helping us to win more clients.

    It has reduced our ETL development time. Per project, it has reduced by around 30% to 35%.

    We can price more aggressively. We were actually able to win projects because we had great reusability of ETLs. A code that was used for one client can be reused with very minimal changes. We didn't have any upfront cost for kick-starting projects using the Community edition. It is only the Enterprise edition that has a cost. 

    What's my experience with pricing, setup cost, and licensing?

    For most development tasks, the Enterprise edition should be sufficient. It depends on the type of support that you require for your production environment.

    Which other solutions did I evaluate?

    We did evaluate SSIS since our database is based on Microsoft SQL server. SSIS comes with any purchase of an SQL Server license. However, even with SSIS, there were some limitations. For example, if you want to build a package and reuse it, SSIS doesn't provide the same kinds of abilities that Pentaho does. The amount of reusability reduces when we try to build the same thing using SSIS. Whereas, in Pentaho, we could literally reuse the same code by using some of its features.

    SSIS comes with the SQL Server and is easier to maintain, given that there are far more people who have knowledge of SSIS. However, if I want to do PGP encryption or make an API connection, it is difficult. Creating a reusable package is not that easy either, which is the con for SSIS.

    What other advice do I have?

    The query performance depends on the database. It is more likely to be good if you have a good database server with all the indexes and bells and whistles of a database. However, from a data integration tool perspective, I am not seeing any issues with respect to query performance.

    We do not build visualization features that much with Hitachi. For reporting purposes, we have been using one of the tools from the product and then preparing the data accordingly.

    We use this for all the projects that we are currently running. Going forward, we will be sticking only to using this ETL tool.

    We haven't had any roadblocks using Lumada Data Integration.

    On a scale of one to 10, I would recommend Hitachi Vantara to a friend or colleague as a nine.

    If you need to build ETLs quickly in a low-code environment, where you don't want to spend a lot of time on the development side of things but it is a little difficult to find experienced resources, then train the people you have in this product. It is always worth that effort because it ends up saving a lot of time and resources on the development side of projects.

    Overall, I would rate the product as a nine out of 10.

    Which deployment model are you using for this solution?

    Hybrid Cloud

    If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

    Amazon Web Services (AWS)
    Disclosure: I am a real user, and this review is based on my own experience and opinions.
    PeerSpot user
    Ridwan Saeful Rohman - PeerSpot reviewer
    Data Engineering Associate Manager at Zalora Group
    Real User
    Top 10
    Good abstraction and useful drag-and-drop functionality but can't handle very large data amounts
    Pros and Cons
    • "The abstraction is quite good."
    • "If you develop it on MacBook, it'll be quite a hassle."

    What is our primary use case?

    I still use this tool on a daily basis. Compared to my experience with other ETL tools, the system that I created using this tool was quite simple. It is just as simple as extracting the data from MySQL, exporting it to CSV, putting it on S3, and then pushing it into Redshift.

    The PDI Kettle Job and Kettle Transformation are bundled by the shell script, then scheduled, and orchestrated by Jenkins.
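    In outline, the flow those jobs implement is no more than the following hedged Python sketch; the hosts, credentials, bucket, and table names are placeholders, and in reality the steps live inside the Kettle transformations rather than a script.

```python
# Hypothetical outline of the same flow the Kettle job performs:
# MySQL -> CSV -> S3 -> Redshift COPY. All names and credentials are placeholders.
import csv
import boto3
import psycopg2
import pymysql

# 1. Extract from MySQL into a local CSV file.
conn = pymysql.connect(host="mysql-host", user="etl", password="secret", database="shop")
with conn.cursor() as cur, open("orders.csv", "w", newline="") as f:
    cur.execute("SELECT order_id, customer_id, amount FROM orders")
    writer = csv.writer(f)
    writer.writerow(["order_id", "customer_id", "amount"])
    writer.writerows(cur.fetchall())
conn.close()

# 2. Stage the file on S3.
boto3.client("s3").upload_file("orders.csv", "my-etl-bucket", "staging/orders.csv")

# 3. Load into Redshift with COPY.
with psycopg2.connect("dbname=dw user=etl password=secret host=redshift-host port=5439") as rs:
    with rs.cursor() as cur:
        cur.execute("""
            COPY analytics.orders
            FROM 's3://my-etl-bucket/staging/orders.csv'
            IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
            CSV IGNOREHEADER 1;
        """)
    rs.commit()
```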

    We still use this tool due to the fact that there are a lot of old systems that still use it. The new solution that we use is mostly Airflow. We are still in the transition phase. To be clear, Airflow is a data orchestration tool that mainly uses Python. Everything from the ETL all the way to the scheduling and the monitoring of any issues sits in one system, entirely on Airflow.

    How has it helped my organization?

    In my current company, it does not have any major impact. We use it for old and simple ETLs only.

    For the ETL jobs we have set up in it, it's quite useful. However, the functionality we currently rely on it for could easily be replaced by other tools on the market. It's time to change it entirely to Airflow. We'll likely change in the next six months.

    What is most valuable?

    This solution offers drag-and-drop tools, and the scripting required is quite minimal. Even if you do not come from IT or your background is not in software engineering, it is possible to use. It is quite intuitive; you can drag and drop many functions.

    The abstraction is quite good.

    Also, if you're familiar with the product itself, it has transformation abstractions and job abstractions. We can create the smaller transformations as Kettle transformations and then the bigger ones as Kettle jobs. For someone who has familiarity with Python, or someone who has no scripting background at all, the product is useful.

    For larger data, we are using Spark.

    The solution enables us to create pipelines with minimal manual, or custom coding efforts. Even if you have no advanced experience in scripting, it is possible to create ETL tools. I have a recent graduate coming from a management major who has no experience with SQL. I trained him for three months, and within that time he became quite fluent, with no prior experience using ETL tools.

    Whether or not it's important to handle the creation of pipelines with minimal coding depends on the team. If I change the solution to Airflow, then I will need more time to teach them to become fluent in the ETL tool. By using these kinds of abstractions in the product, I can compress the training time to just three months. With Airflow, it will take longer than six months to get new users to the same point.

    We use the solution's ability to develop and deploy data pipeline templates and reuse them.

    The old system was created by someone prior to me in my organization and we still use it. It was developed by him a long time ago. We also use the solution for some ad hoc reporting.

    The ability to develop and deploy data pipeline templates once and reuse them is really important to us. There are some requests to create the pipelines. I create them and then deploy them on our server. It then has to be as robust as when we do the scheduling so that it does not fail.

    We like the automation. I cannot imagine how the data teams would work if everything were done on an ad hoc basis. Everything should be automated. Using my organization as an example, I can say with confidence that 95% of our data distributions are automated and only 5% are ad hoc. Without that automation, we would be querying the data manually, processing it in spreadsheets, and then distributing it to the organization by hand. It's important to be robust and be able to automate.

    So far, we can deploy the solution easily on the cloud, which for us is AWS. I haven't really tried it on another server. We deploy it on our AWS EC2 instance; however, we develop it on our local computers. Most of the team uses Windows, and some people use MacBooks.

    I personally have used it on both, as I have to develop on both Windows and MacBook. I can say that Windows is easier to navigate. On the MacBook, the display becomes quite messed up if you enable dark mode.

    The solution did reduce our ETL development time if you compare it to the scripting. However, this will really depend on your experience.

    What needs improvement?

    Five years ago, when I had less experience with scripting, I would have definitely used this product over Airflow, as it will be easier for me with the abstraction being quite intuitive. Five years ago, I would choose the product over the other tools using pure scripting as it would reduce most of my time in terms of developing ETL tools. This isn't the case anymore as I have more familiarity with scripting.

    When I first joined my organization, I was still using Windows. It is quite straightforward to develop the ETL system on Windows. However, when I changed my laptop to MacBook, it was quite a hassle. When we tried to open the application, we had to open the terminal first, go to the solution's directory, and then run the executable file. The display also becomes quite messed up when we enable dark mode on MacBook.

    Therefore, if you develop it on MacBook, it'll be quite a hassle, however, when you develop it on Windows, it's not really different from other ETL tools on the market, like SQL Server Integration Services, Informatica, et cetera.

    For how long have I used the solution?

    I have been using this tool since I moved to my current company, which is about one year ago.

    What do I think about the stability of the solution?

    The performance is good. I have not pushed the product to its limits; we only do simple jobs. In terms of data, we extract it from MySQL and export it to CSV. There were only millions of data points, not billions. So far, it has met our expectations. It's quite good for a smaller number of data points.

    What do I think about the scalability of the solution?

    I'm not sure that the product could keep up with the data growth. It can be useful for millions of data points; however, I haven't explored the option of billions of data points. I think there are better solutions on the market. The same applies to the other drag-and-drop ETL tools, like SQL Server Integration Services, Informatica, et cetera.

    How are customer service and support?

    We don't really use technical support. The current version that we are using is no longer supported by their representatives. We didn't update it yet to the newer version. 

    How would you rate customer service and support?

    Neutral

    Which solution did I use previously and why did I switch?

    We're moving to Airflow. The reason for the switch was mostly a problem with debugging. If you're familiar with SQL Server Integration Services, the ETL tool from Microsoft, its debugging function is quite intuitive. You can spot exactly which transformation has failed or which transformation has an error. However, in this solution, from what my colleagues told me, it is hard to do that. When there is an error, we cannot directly spot where the error is coming from.

    Airflow is quite customizable and not as rigid as this product. We can deploy everything from simple ETL tools all the way to machine learning systems on Airflow. Airflow mainly uses Python, which our team is quite familiar with. This solution is still handled by only two people out of the 27 on our team; not enough people know it.

    How was the initial setup?

    There is no separation between the deployment team and other teams. Each of us acts as an individual contributor. We handle the implementation process all the way from face-to-face business meetings, setting timelines, developing the tools, and defining the requirements, to the production deployment.

    The initial setup is straightforward. Currently, the use of version control in our organization is quite loose; we are not using any version control software. The way we deploy it is just as simple as putting the Kettle transformation file onto our EC2 server and overwriting the old file; that's it.
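    That deployment step, in hypothetical form (the host, key, and paths are invented), is essentially just this:

```python
# Hypothetical sketch of our deployment: copy the new .ktr over the old one
# on the EC2 server.
import os
import subprocess

KTR = "daily_load.ktr"                          # transformation built locally
TARGET = "etl@ec2-host:/opt/etl/jobs/"          # placeholder host and directory
KEY = os.path.expanduser("~/.ssh/etl_key.pem")  # placeholder SSH key

subprocess.run(["scp", "-i", KEY, KTR, TARGET], check=True)
```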

    What's my experience with pricing, setup cost, and licensing?

    I'm not really sure what the price for the product is. I don't handle the purchasing or the commissioning.

    What other advice do I have?

    We run it on our AWS EC2 server; however, development is done on our local machines. We deploy it onto the EC2 server, bundle it with our shell scripts, and the shell scripts are run by Jenkins.

    I'd rate the solution a seven out of ten. 

    Disclosure: I am a real user, and this review is based on my own experience and opinions.
    PeerSpot user