The primary use of this product at my company is data warehousing. We use it as a data integration tool and it works fine, as long as you do not overload it; if you do, you may have to resort to another product. But Hitachi Lumada Data Integration is great at what it was made for. We collect data from various sources through it and structure it. Through the tool, we access reporting on the data and improve our business intelligence. In my opinion, it does a good job for the value. We have used competitors of this solution before, and this one ranks at the top according to both my colleagues and me.
My company has used this product to transform data from databases, CSV files, and flat files, and it really does a good job. We were most satisfied with how many people could use it. We have ETL developers, but the rest of our team varies in computer background. This tool is low-code and has many visualization features that make it extremely easy to use! Everything we used it for was easy for most of my teammates to understand and carry out, and the company was very satisfied that everyone could access the data this tool integrated.
In my opinion, the reporting side of this tool needs serious improvement. At my previous company, we worked with Hitachi Lumada Data Integration, and while it does a good job overall, it could definitely improve its reporting features. While I used it, there were plenty of basic components I felt were missing. For example, it did not provide the option to search the repository for a report. I feel this is a major omission, since it would save a lot of manual searching. If they upped their game there, I would be among the first to recommend it.
I do not see any big problems with this solution. It works very well for my company, a medium-sized one. We have been using it for a while now, and the only issue I have encountered during that time is with caching. We once had to integrate a larger data set and ran into a slight problem with the tool. We were used to its high speed across all operations, but with the larger data set, things got slower than usual. I am not sure whether it was a one-time thing or a common occurrence, so you may want to look into that if you are a big data company.