StreamSets Valuable Features

Karthik Rajamani - PeerSpot reviewer
Principal Engineer at Tata Consultancy Services

I have used the Data Collector, Transformer, and Control Hub products from StreamSets. What I really like about these products is that they're very user-friendly. People who are not from a technological or core development background find it easy to get started, build data pipelines, and connect to databases. Within a couple of weeks, they become as comfortable as any technical person. I really like its user-friendliness; it is easy to use. There is a consistent view across the different products, which makes it very easy to learn and use whichever product fits your use case.

Its interface is very cool. If I'm doing a batch project or an ETL, I just have to configure the appropriate stages. The process is the same if you go with streaming; the only difference is that the stages change. For example, in batch you might connect to an Oracle database, while in streaming you may connect to Kafka or something like that. The process is the same, the look and feel is the same, and the interface is the same across different use cases.
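To make that point concrete, here is a rough pseudo-configuration sketch. StreamSets pipelines are built in a visual canvas rather than in code, and all stage names, hosts, and fields below are hypothetical; the sketch only illustrates that a batch and a streaming pipeline share the same shape, with only the origin stage swapped:

```python
# Illustrative only: StreamSets pipelines are designed visually, not in code.
# This pseudo-config mirrors the reviewer's point that batch and streaming
# pipelines look the same and only the origin stage changes.
batch_pipeline = {
    "origin": {"stage": "JDBC Query Consumer",   # e.g., an Oracle database
               "config": {"connection": "jdbc:oracle:thin:@db-host:1521/ORCL",
                          "query": "SELECT * FROM orders"}},
    "processors": [{"stage": "Field Masker"}, {"stage": "Expression Evaluator"}],
    "destination": {"stage": "Hadoop FS"},
}

streaming_pipeline = {
    "origin": {"stage": "Kafka Consumer",        # only the origin differs
               "config": {"brokers": "kafka-host:9092", "topic": "orders"}},
    "processors": batch_pipeline["processors"],  # same downstream stages
    "destination": batch_pipeline["destination"],
}
```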

It is a great product if you are looking to ramp up your teams while working with different databases or different transformations. Even if you don't have developers skilled in Spark, Python, Java, or any kind of database, you can still use this product to ramp up your team and scale up your data migration to the cloud or your data analytics. It is a fantastic product.

AbhishekKatara - PeerSpot reviewer
Technical Lead at Sopra Steria

It is a pretty easy tool to use. There is no coding required. StreamSets provides us with a canvas to design our pipelines. At the beginning of any project, it gives us a picture, which is an advantage. For example, if I want to do a data migration from on-premises to the cloud, I would sketch it out for easier understanding based on my target system, and StreamSets does exactly the same thing by giving us a canvas where I can design the pipeline.

There is a wide range of available stages: various relational and streaming sources, as well as various processors to transform the source data. It is not only for migrating data from source to destination; we can use different processors to transform the data along the way. When I was working on a healthcare project, there was personally identifiable information in the personal health information (PHI) data that we needed to mask. We couldn't simply move it from source to destination. StreamSets provides masking of that sensitive data.
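As a minimal standalone sketch of what such field masking does (illustrative Python, not StreamSets' actual implementation; the field names and mask format are hypothetical):

```python
# A minimal sketch of the kind of masking a masking stage performs in the
# pipeline: replace all but the last four characters of sensitive fields.
PHI_FIELDS = {"ssn", "phone", "email"}

def mask_record(record: dict) -> dict:
    """Return a copy of the record with PHI fields masked with '#'."""
    masked = dict(record)
    for field in PHI_FIELDS & masked.keys():
        value = str(masked[field])
        masked[field] = "#" * max(len(value) - 4, 0) + value[-4:]
    return masked

print(mask_record({"patient_id": 42, "ssn": "123-45-6789"}))
# -> {'patient_id': 42, 'ssn': '#######6789'}
```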

It provides us with a facility to generate schemas. There are different executors available, e.g., the Pipeline Finisher executor, which helps us finish the pipeline.

There is also a wide range of available destinations, such as S3, Azure Data Lake, Hive, Kafka, and Hadoop-based systems. It supports both batch and streaming.

Scheduling is quite easy in StreamSets. From a security perspective, there is integration with key vaults, e.g., for password fetching or secrets fetching.
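In practice, that means a pipeline can fetch credentials from a vault at runtime instead of hardcoding them in its configuration. A minimal sketch of the idea using Azure Key Vault, with a hypothetical vault URL and secret name (StreamSets itself exposes this through credential functions in stage properties rather than user code):

```python
# Sketch: fetch a database password from a key vault at runtime rather than
# storing it in the pipeline configuration. Vault URL and secret name are
# hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://example-vault.vault.azure.net",
    credential=DefaultAzureCredential(),
)
db_password = client.get_secret("warehouse-db-password").value
```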

It is pretty easy to connect to Hadoop using StreamSets. Someone just needs to be aware of the configuration details, such as which Hadoop cluster to connect to and what credentials are available. For example, if I am trying with my generic user, how do I connect to the Hadoop distributed file system? Once we have the details of our cluster and the credentials, we can load data into the Hadoop file system. In our use case, we collected data from our RDBMS sources using the JDBC Query Consumer: we queried the data from the source table, captured it, and then loaded it into the destination Hadoop distributed file system. Thus, configuration details are required. Once we have them, i.e., the required credentials, we can connect to Hadoop and Hive.
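For comparison, the same flow written by hand might look roughly like the following sketch, which queries a relational source and lands the result in HDFS. The connection string, table name, namenode host, and target path are all hypothetical placeholders:

```python
# Rough hand-written equivalent of the pipeline described above: query an
# RDBMS source and write the result into HDFS.
import pandas as pd
from hdfs import InsecureClient          # pip install hdfs
from sqlalchemy import create_engine     # plus an Oracle driver, e.g. oracledb

engine = create_engine("oracle+oracledb://generic_user:secret@db-host:1521/ORCL")
df = pd.read_sql("SELECT * FROM source_table", engine)

hdfs_client = InsecureClient("http://namenode-host:9870", user="generic_user")
with hdfs_client.write("/data/landing/source_table.csv", encoding="utf-8") as writer:
    df.to_csv(writer, index=False)
```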

It takes care of data drift. There are certain data rules, metric rules, and capabilities provided by StreamSets that we can set. So, if the source schema deviates somehow, StreamSets will automatically notify us or send alerts in an automated fashion about what is going wrong. StreamSets also provides Change Data Capture (CDC): as soon as the source data changes, it can capture that change and update the details in the required destination.
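A simplified sketch of the drift check being described, i.e., comparing incoming records against an expected schema and raising alerts on deviation. StreamSets configures this through rules and alerts rather than user code, and the schema below is hypothetical:

```python
# Simplified sketch of a schema-drift check: flag new fields, missing fields,
# and type changes against an expected schema.
EXPECTED_FIELDS = {"order_id": int, "amount": float, "created_at": str}

def check_drift(record: dict) -> list[str]:
    alerts = []
    for field in record.keys() - EXPECTED_FIELDS.keys():
        alerts.append(f"new field appeared: {field}")
    for field, expected_type in EXPECTED_FIELDS.items():
        if field not in record:
            alerts.append(f"expected field missing: {field}")
        elif not isinstance(record[field], expected_type):
            alerts.append(f"type changed for {field}: got {type(record[field]).__name__}")
    return alerts

print(check_drift({"order_id": 1, "amount": "9.99",
                   "region": "EU", "created_at": "2022-11-01"}))
# -> ['new field appeared: region', 'type changed for amount: got str']
```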

SS
Senior Data Engineer at an energy/utilities company with 1,001-5,000 employees

The types of source systems that it can work with are quite varied. It works with numerous source systems, e.g., a SQL Server database, an Oracle database, or a REST API. That is an advantage we are getting.

The most important feature is the Control Hub that comes with the DataOps Platform and does load balancing, so we do not have to worry about the infrastructure. That is a highlight of the DataOps Platform: Control Hub manages the data load across the various engines.

It is quite simple for anybody who has an ETL or BI background and has worked on any ETL technology, e.g., IBM DataStage, SAP BODS, Talend, or CloverETL. In terms of experience, the UI and concepts are very similar to how you develop an extraction pipeline elsewhere. Therefore, it is very simple for anybody who has already worked on an ETL tool set, whether for data ingestion, ETL pipelines, or data lake requirements.

We use StreamSets to load data into AWS S3 and Snowflake databases, from which it is then consumed by Power BI or Tableau. It is quite simple to move data into these platforms using StreamSets. There are a lot of destination stages within StreamSets for Snowflake, Amazon S3, any database, or an HTTP endpoint. It is just a drag-and-drop feature, which saves a lot of time compared to writing custom code in Python. StreamSets enables us to build data pipelines without knowing how to code, which is a big advantage.
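The custom code that such a drag-and-drop destination stage replaces would look roughly like this sketch, which uploads a file to S3 and loads it into Snowflake. The bucket, stage, table, and account names are all hypothetical:

```python
# Sketch of the hand-written load a destination stage replaces: push a file
# to S3, then COPY it into a Snowflake table via a (hypothetical) named stage.
import boto3
import snowflake.connector

s3 = boto3.client("s3")
s3.upload_file("daily_extract.csv", "example-bucket", "landing/daily_extract.csv")

conn = snowflake.connector.connect(
    account="example_account", user="loader", password="secret",
    warehouse="LOAD_WH", database="ANALYTICS", schema="RAW",
)
conn.cursor().execute(
    "COPY INTO raw.daily_extract FROM @landing_stage/daily_extract.csv "
    "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
)
```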

The data resilience feature is good enough for our ETL operations, even for our production pipelines at this stage. Therefore, we do not need to build our own custom framework for it since what is available out-of-the-box is good enough for a production pipeline.

StreamSets' data drift feature gives us an alert upfront, so we know about changes before the data is ingested. Whenever the schema or data types change, the data still lands automatically in the data lake without any intervention from us, but that information is crucial for fixing the downstream pipelines that process the data into models, like Tableau and Power BI models. This is actually very useful for us, and we are already seeing benefits. Our pipelines used to break when there were data drift changes, and then we needed to spend about a week fixing them. Right now, we are saving one to two weeks. Though it depends on the complexity of the pipeline, we are definitely seeing a lot of time being saved.
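A rough sketch of what such drift-tolerant loading involves under the hood: before inserting, any new source columns are added to the target table so schema changes land without breaking the pipeline. The table handling and column types here are hypothetical; drift-enabled StreamSets destinations do this automatically:

```python
# Sketch of drift-tolerant loading: evolve the target schema, then insert.
# Uses generic DB-API cursor calls; types and quoting are simplified.
def evolve_and_load(cursor, table: str, records: list[dict], known_columns: set[str]):
    incoming = set().union(*(r.keys() for r in records))
    for column in incoming - known_columns:
        # New source column: add it to the target instead of failing.
        cursor.execute(f'ALTER TABLE {table} ADD COLUMN "{column}" VARCHAR')
        known_columns.add(column)
    for r in records:
        cols = ", ".join(f'"{c}"' for c in r)
        placeholders = ", ".join(["%s"] * len(r))
        cursor.execute(f"INSERT INTO {table} ({cols}) VALUES ({placeholders})",
                       list(r.values()))
```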

BR
Data Engineer at a consultancy with 11-50 employees

It's very effective for project delivery. This month, at the end of June, I will deploy all the integrations I developed in StreamSets to production. The business users and customers are happy with the data flow optimizer from the SoPlex cloud. It all looks good.

There are not many challenges in terms of learning new technologies and using new tools. We try to do R&D analysis more often.

Everything is in place, and it comes as a package. They install everything; the package includes Python, Ruby, and others. I just need to configure the correct details in the pipeline and go ahead with my work.

The ease of the design experience when implementing batch, streaming, and ETL pipelines is very good. The streaming is very good. Currently, I'm using Data Collector and it's effective. If I were doing the same streaming in Java code, I would need to follow different processes to deploy the code and connect to the database. With StreamSets, there is not much code that I need to write.

In StreamSets, everything is in one place. If you want to connect a database and configure it, it is easy. If you want to connect to HTTP, it's simple. If I were to do the same with other tools, I would need many configurations and installations. StreamSets' ability to connect to enterprise data stores such as OLTP databases and Hadoop, or messaging systems such as Kafka, is good. I also send data to both a database and Kafka.
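A minimal sketch of that dual-destination setup, sending the same record to both a database and Kafka. The broker, topic, table, and connection details are hypothetical placeholders:

```python
# Sketch: deliver one record to both a Postgres table and a Kafka topic.
import json
from kafka import KafkaProducer        # pip install kafka-python
import psycopg2

producer = KafkaProducer(
    bootstrap_servers="kafka-host:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
conn = psycopg2.connect("dbname=analytics user=loader password=secret host=db-host")

record = {"event": "order_created", "order_id": 42}
producer.send("orders", record)                      # Kafka destination
with conn, conn.cursor() as cur:                     # database destination
    cur.execute("INSERT INTO order_events (payload) VALUES (%s)",
                [json.dumps(record)])
producer.flush()
```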

You will get all the drivers that you need to install with the database. If you use other databases, you're going to need a JDBC driver, which is not difficult to use.

I'm sending data to different CDP databases, cloud areas, and Azure areas.

StreamSets' built-in data drift resilience plays a part in our ETL operations. There are processors in StreamSets that tell us what data has changed and how the data needs to be sent.

It's an easy tool. If you're going to use it as a customer, it shouldn't take a long time to process data. I'm not sure if, in the future, it will take some time to process the billions of records that I'm working on; we generally process billions of records on a daily basis. I will need to see when I work on this new project with Snowflake. We might need to process billions of records from the source, and we'll see how long it takes and how the system handles it. At that point, I'll be able to say how effectively StreamSets processes it.

The Data Collector saves time. However, there are some issues with the DPL.

StreamSets helped us break down data silos within our organization.

One advantage is that everything happens in one place; if you want to develop or create something, you can get those details from StreamSets. The portal, however, takes time, but they are focusing on that.

StreamSets' reusable assets have helped to reduce workload by 32% to 40%.

StreamSets helped us to scale our data operations.

If you get a request to process data with other processing tools, it might take a long time, like two to three hours. With StreamSets, I can do it within half an hour, 20 or 30 minutes. It's more effective. I have everything in one place and I can configure everything. It saves me time because it is so central.

Prateek Agarwal - PeerSpot reviewer
Manager at NISG

It is a very powerful, modern data analytics solution in which you can integrate a large volume of data from different sources. It integrates all of the data, and you can design, create, and monitor pipelines according to your requirements. It is an all-in-one DataOps solution.

It is quite easy to implement batch, streaming, or ETL pipelines. You need some initial hands-on training to use it, but they provide very good training material and a user manual. They also provide some initial training to users so that they can easily run the application.

It has drag-and-drop features, so almost no code is required for creating ETL pipelines. You can easily create data pipelines according to your requirements. We have many team members who don't know how to code but are excellent at data analytics, and StreamSets enables them to build the data pipelines. Things are moving to almost-no-code or low-code platforms, like Azure Analytics and AWS; they all provide almost-no-code platforms for data integration activities.

Because we are working on a large data analytics project, our data volume is huge. We are integrating StreamSets with Kafka, Hadoop, and analytics tools like Power BI and Tableau for the visualization of the data. It is quite easy to connect to these systems because it supports all the data connectors, like Oracle, ERP, CRM, Azure, and AWS. It has the ability to connect to any of these systems.
