"It is really easy to set up and the interface is easy to use."
"StreamSets’ data drift resilience has reduced the time it takes us to fix data drift breakages. For example, in our previous Hadoop scenario, when we were creating the Sqoop-based processes to move data from source to destinations, we were getting the job done. That took approximately an hour to an hour and a half when we did it with Hadoop. However, with the StreamSets, since it works on a data collector-based mechanism, it completes the same process in 15 minutes of time. Therefore, it has saved us around 45 minutes per data pipeline or table that we migrate. Thus, it reduced the data transfer, including the drift part, by 45 minutes."
"StreamSets data drift feature gives us an alert upfront so we know that the data can be ingested. Whatever the schema or data type changes, it lands automatically into the data lake without any intervention from us, but then that information is crucial to fix for downstream pipelines, which process the data into models, like Tableau and Power BI models. This is actually very useful for us. We are already seeing benefits. Our pipelines used to break when there were data drift changes, then we needed to spend about a week fixing it. Right now, we are saving one to two weeks. Though, it depends on the complexity of the pipeline, we are definitely seeing a lot of time being saved."
"In StreamSets, everything is in one place."
"The data copy template is a valuable feature."
"The security of the agent that is installed on-premises is very good."
"I enjoy the ease of use for the backend JSON generator, the deployment solution, and the template management."
"The initial setup is very quick and easy."
"It is very modular. It works well. We've used Data Factory and then made calls to libraries outside of Data Factory to do things that it wasn't optimized to do, and it worked really well. It is obviously proprietary in regards to Microsoft created it, but it is pretty easy and direct to bring in outside capabilities into Data Factory."
"Allows more data between on-premises and cloud solutions"
"The most valuable feature is the copy activity."
"When it comes to our business requirements, this solution has worked well for us. However, we have not stretched it to the limit."
"Access to numerous forums and internet information."
"The most valuable features of Denodo are the extraction option for adapters, and there are many things for the views, that are cached. Denodo is not storing the data, it looks first to tune the query, and these things are for the agents."
"Overall, the product works quite well and has a good set of features."
"Denodo makes it easy to export data as a service or data link to other services."
"The most valuable features are data lineage and the concept of a semantic layer."
"The most valuable feature is Data Catalogs."
"The most valuable feature is the performance. Denodo is very useful, especially in this huge pharma environment. I've found that older SAP solutions were very tightly coupled to each other, which resulted in data restrictions. Getting data from different sources was tough and tedious. Compared to these old solutions, Denodo is very easy to work with for the analytical team. Now that we've implemented this virtualization layer, we are capable of getting the data very smoothly. We implemented a very small unit, but the performance and integration have been very good."
"We've seen a couple of cases where it appears to have a memory leak or a similar problem."
"Currently, we can only use the query to read data from SAP HANA. What we would like to see, as soon as possible, is the ability to read from multiple tables from SAP HANA. That would be a really good thing that we could use immediately. For example, if you have 100 tables in SQL Server or Oracle, then you could just point it to the schema or the 100 tables and ingestion information. However, you can't do that in SAP HANA since StreamSets currently is lacking in this. They do not have a multi-table feature for SAP HANA. Therefore, a multi-table origin for SAP HANA would be helpful."
"The logging mechanism could be improved. If I am working on a pipeline, then create a job out of it and it is running, it will generate constant logs. So, the logging mechanism could be simplified. Now, it is a bit difficult to understand and filter the logs. It takes some time."
"If you use JDBC Lookup, for example, it generally takes a long time to process data."
"One area for improvement is documentation. At present, there isn't enough documentation on how to use Azure Data Factory in certain conditions. It would be good to have documentation on the various use cases."
"Integration of data lineage would be a nice feature in terms of DevOps integration. It would make implementation for a company much easier. I'm not sure if that's already available or not. However, that would be a great feature to add if it isn't already there."
"The performance could be better. It would be better if Azure Data Factory could handle a higher load. I have heard that it can get overloaded, and it can't handle it."
"There is no built-in function for automatically adding notifications concerning the progress or outline of a pipeline run."
"Some of the optimization techniques are not scalable."
"Data Factory's cost is too high."
"Occasionally, there are problems within Microsoft itself that impacts the Data Factory and causes it to fail."
"Real-time replication is required, and this is not a simple task."
"The dropdown menus feel antiquated to me, and the administrative portals need improvement."
"There have been some issues when you are at a table. Currently, Denodo exports data sets for a tabular model. When you are finished modeling your database or data warehouse they export a link to be used in Tableau. They should support other tools like Power BI."
"The support is not the best and should be improved."
"Denodo can improve usage management-related aspects. If you deal with the mini views, it gets stuck. The performance is very slow when we go with a large number of views and high volume."
"Denodo's training documentation could be improved by providing more material. From an administrative standpoint, I've found that only Denodo's own websites provide the usual tutorials. It may be because it's a bit of a restricted tool, but this makes learning difficult. Normally, I can find help and solutions from other sources, but I haven't been able to find any for Denodo. Other than that, it's fine and it performs well. I only have six months of experience, so I can't accurately suggest improvements."
"Lacks integrations with AWS, GCP and the like."
"We occasionally have some integration issues that we need to work through."
StreamSets offers an end-to-end data integration platform to build, run, monitor and manage smart data pipelines that deliver continuous data for DataOps, and power the modern data ecosystem and hybrid integration.
Only StreamSets provides a single design experience for all design patterns for 10x greater developer productivity; smart data pipelines that are resilient to change for 80% fewer breakages; and a single pane of glass for managing and monitoring all pipelines across hybrid and cloud architectures to eliminate blind spots and control gaps.
With StreamSets, you can deliver the continuous data that drives the connected enterprise.
Create, schedule, and manage your data integration at scale with Azure Data Factory - a hybrid data integration (ETL) service. Work with data wherever it lives, in the cloud or on-premises, with enterprise-grade security.
Denodo operates in the data virtualization field, providing high performance, unified access to the broadest range of enterprise, big data, cloud and unstructured sources, and the most agile data services provisioning and governance, at less than half the cost of traditional data integration. Denodo’s customers have gained significant business agility and ROI by creating a unified virtual data layer that serves strategic enterprise-wide information needs for agile BI, big data analytics, web and cloud integration, single-view applications, and SOA data services across every major industry.
Denodo Platform offers the broadest access to structured and unstructured data residing in enterprise, big data, and cloud sources, in both batch and real-time, exceeding the performance needs of data-intensive organizations for both analytical and operational use cases, delivered in a much shorter timeframe than traditional data integration tools.
The Denodo Platform drives agility, faster time to market, increased customer engagement from single view of customer and operational efficiencies from real-time business intelligence and self-serviceability.
Founded in 1999, Denodo is privately held, with main offices in Palo Alto (CA), Madrid (Spain), Munich (Germany) and London (UK).
Azure Data Factory is ranked 2nd in Data Integration Tools with 32 reviews while Denodo is ranked 15th in Data Integration Tools with 7 reviews. Azure Data Factory is rated 7.8, while Denodo is rated 7.4. The top reviewer of Azure Data Factory writes "Easy to bring in outside capabilities, flexible, and works well". On the other hand, the top reviewer of Denodo writes "Good performance and integration, but needs more training documentation". Azure Data Factory is most compared with Informatica PowerCenter, Informatica Cloud Data Integration, Talend Open Studio, Alteryx Designer and Snowflake, whereas Denodo is most compared with Informatica PowerCenter, Informatica Enterprise Data Catalog, Delphix, Mule Anypoint Platform and Alteryx Designer. See our Azure Data Factory vs. Denodo report.
See our list of best Data Integration Tools vendors.
We monitor all Data Integration Tools reviews to prevent fraudulent reviews and keep review quality high. We do not post reviews by company employees or direct competitors. We validate each review for authenticity via cross-reference with LinkedIn, and personal follow-up with the reviewer when necessary.