StreamSets Overview

StreamSets is the #11 ranked solution in top Data Integration Tools. PeerSpot users give StreamSets an average rating of 8.4 out of 10. StreamSets is most commonly compared to Informatica PowerCenter: StreamSets vs Informatica PowerCenter. StreamSets is popular among the large enterprise segment, accounting for 74% of users researching this solution on PeerSpot. The top industry researching this solution is financial services, accounting for 17% of all views.
StreamSets Buyer's Guide

Download the StreamSets Buyer's Guide including reviews and more. Updated: November 2022

What is StreamSets?

StreamSets offers an end-to-end data integration platform to build, run, monitor and manage smart data pipelines that deliver continuous data for DataOps, and power the modern data ecosystem and hybrid integration.

Only StreamSets provides a single design experience for all design patterns for 10x greater developer productivity; smart data pipelines that are resilient to change for 80% less breakages; and a single pane of glass for managing and monitoring all pipelines across hybrid and cloud architectures to eliminate blind spots and control gaps.

With StreamSets, you can deliver the continuous data that drives the connected enterprise.

StreamSets Customers

Availity, BT Group, Humana, Deluxe, GSK, RingCentral, IBM, Shell, SamTrans, State of Ohio, TalentFulfilled, TechBridge

StreamSets Pricing Advice

What users are saying about StreamSets pricing:
  • "There are different versions of the product. One is the corporate license version, and the other one is the open-source or free version. I have been using the corporate license version, but they have recently launched a new open-source version so that anybody can create an account and use it. The licensing cost varies from customer to customer. I don't have a lot of input on that. It is taken care of by PMO, and they seem fine with its pricing model. It is being used enterprise-wide. They seem to have got a good deal for StreamSets."
  • "StreamSets Data Collector is open source. One can utilize the StreamSets Data Collector, but the Control Hub is the main repository where all the jobs are present. Everything happens in Control Hub."
  • "It has a CPU core-based licensing, which works for us and is quite good."
  • "The pricing is good, but not the best. They have some customized plans you can opt for."
StreamSets Reviews
    Karthik Rajamani - PeerSpot reviewer
    Principal Engineer at Tata Consultancy Services
    Real User
    Top 10
    Integrates with different enterprise systems and enables us to easily build data pipelines without knowing how to code
    Pros and Cons
    • "I have used Data Collector, Transformer, and Control Hub products from StreamSets. What I really like about these products is that they're very user-friendly. People who are not from a technological or core development background find it easy to get started and build data pipelines and connect to the databases. They would be comfortable like any technical person within a couple of weeks."
    • "We create pipelines or jobs in StreamSets Control Hub. It is a great feature, but if there is a way to have a folder structure or organize the pipelines and jobs in Control Hub, it would be great. I submitted a ticket for this some time back."

    What is our primary use case?

I worked mostly on data ingestion use cases when I was using Data Collector. Later on, I got involved with some Spark-based transformations using Transformer.

    Currently, we are not using CI/CD. We are not using automated deployments. We are manually deploying in prod, but going forward, we are planning to use CI/CD to have automated deployments.

    I worked on on-prem and cloud deployments. The current implementation is on-prem, but in my previous project, we worked on AWS-based implementation. We did a small PoC with GCP as well.

    How has it helped my organization?

It is very easy to use when connecting to enterprise data stores such as OLTP databases or messaging systems such as Kafka. I have had integration with OLTP as well as Kafka. Until a few years ago, we didn't have a good way of connecting to the streaming databases or streaming products. This ability is important because most of our use cases in recent times are of streaming nature. We have to deliver certain messages or data as per our SLA, and the combination of Kafka and StreamSets helps us meet those timelines. I'm not sure what I would have used to achieve the same five years ago. The combination of Kafka and StreamSets has opened up a new world of opportunities to explore.

    I recently used orchestration wherein you can have multiple jobs, and you can orchestrate them. For example, you can specify to let Job A run first, then Job B, and then Job C in an automated fashion. You don't need any manual intervention. In one of my projects, I had a data hub from 10 different databases. It was all automated by using Kafka and StreamSets.
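The Job A → Job B → Job C chaining described above can be sketched in a few lines. This is a minimal illustration of the sequential-dependency idea, not StreamSets' actual orchestration API; the job names and the `run_job` stub are hypothetical.

```python
# Minimal sketch of sequential job orchestration: each job runs only after
# its predecessor succeeds, with no manual intervention.
# Job names and the run_job stub are hypothetical, for illustration only.

def run_job(name):
    """Stand-in for triggering a pipeline job and waiting for completion."""
    print(f"running {name}")
    return "FINISHED"  # a real runner would poll the job's status

def orchestrate(jobs):
    """Run jobs strictly in order; stop the chain if any job fails."""
    results = {}
    for name in jobs:
        status = run_job(name)
        results[name] = status
        if status != "FINISHED":
            break  # downstream jobs never start if a dependency fails
    return results

results = orchestrate(["job_a", "job_b", "job_c"])
```

A real orchestrator would also retry failed jobs and alert on the break, but the ordering guarantee is the core of the pattern.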

    It enables you to build data pipelines without knowing how to code. You can build data pipelines even if you don't know how to code. You can just drag and drop. If you know how to code, you can do some custom coding as well, but you don't need to know coding to work with StreamSets, which is important if somebody in your team is not familiar with coding. The nature of coding is changing, and the number of technologies is changing. The range is so wide right now. Even if I know Java or Oracle, it may not be enough in today's times because we might have databases in Teradata. We might have Snowflake or other different kinds of databases. StreamSets is a great solution because you don't need to know all different databases or all different coding mechanisms to work with StreamSets. Rather than learning each and every technology and building your data pipelines, you can just plug and play at a faster pace.

    StreamSets’ built-in data drift resilience plays a part in our ETL operations. It is a very helpful feature. Previously, we had a lot of jobs coming from different source systems, and whenever there was any change in columns, it was not informed. It required a lot of changes on our end, which would take from a couple of weeks to a month. Because of the data drift feature, which is embedded in StreamSets, we don't have to spend that much time taking care of the columns and making sure they are in sync. All this is taken care of. We don't have to worry about it. It is a very helpful feature to have.

    StreamSets' data drift resilience reduces the time to fix data drift breakages. It has definitely saved around two to three weeks of development time. Previously, any kind of changes in our jobs used to require changing our code or table structure and doing some testing. It required at least two to three weeks of effort, which is now taken care of because of StreamSets.

    StreamSets’ reusable assets helped to reduce workload. We can use pipeline fragments across multiple projects, which saves development time. The time saved varies from team to team.

    It saves us money by not having to hire people with specialized skills. Without StreamSets, for example, I would've had to hire someone to work on Teradata or Db2. We definitely save some money on creating a new position or hiring a new developer. StreamSets provides a lot of features from AWS, Azure, or Snowflake. So, we don't have to find specialized, skilled resources for each of these technologies to create data pipelines. We just need to have StreamSets and one or two DBAs from each team to get the right configuration items, and we can just use it. We don't have to find a specialized resource for each database or technology.

    It has helped us to scale our data operations. It saves the licensing costs on some legacy software, and we can reuse pipelines. Once we have a template for a certain use case, we can reuse the same template across different projects to move data to the cloud, which saves us money.

    What is most valuable?

    I have used Data Collector, Transformer, and Control Hub products from StreamSets. What I really like about these products is that they're very user-friendly. People who are not from a technological or core development background find it easy to get started and build data pipelines and connect to the databases. They would be comfortable like any technical person within a couple of weeks. I really like its user-friendliness. It is easy to use. They have a single snapshot across different products, which is very helpful to learn and use the product based on your use case.

    Its interface is very cool. If I'm using a batch project or an ETL, I just have to configure appropriate stages. It is the same process if you go with streaming. The only difference is that the stages will change. For example, in a batch, you might connect to Oracle Database, or in streaming, you may connect to Kafka or something like that. The process is the same, and the look-and-feel is the same. The interface is the same across different use cases.

    It is a great product if you are looking to ramp up your teams and you are working with different databases or different transformations. Even if you don't have any skilled developers in Spark, Python, Java, or any kind of database, you can still use this product to ramp up your team and scale up your data migration to cloud or data analytics. It is a fantastic product.

    What needs improvement?

    There are a few things that can be better. We create pipelines or jobs in StreamSets Control Hub. It is a great feature, but if there is a way to have a folder structure or organize the pipelines and jobs in Control Hub, it would be great. I submitted a ticket for this some time back.

    There are certain features that are only available at certain stages. For example, HTTP Client has some great features when it is used as a processor, but those features are not available in HTTP Client as a destination.

    There could be some improvements on the group side. Currently, if I want to know which users are a part of certain groups, it is not straightforward to see. You have to go to each and every user and check the groups he or she is a part of. They could improve it in that direction. Currently, we have to put in a manual effort. In case something goes wrong, we have to go to each and every user account to check whether he or she is a part of a certain group or not.


    For how long have I used the solution?

    I got exposed to StreamSets in late 2018. Initially, I worked on StreamSets Data Collector, and then, for a year or so, I got exposed to Transformer as well.

    What do I think about the stability of the solution?

    It is stable, and they're growing rapidly.

    What do I think about the scalability of the solution?

    It is pretty scalable, but it also depends on where it is installed, which is something a lot of developers misunderstand. Most of the time, the implementation is done on on-prem servers, which is not very scalable. If you install it on cloud-based servers, it is fast. So, the problem is not with StreamSets; the problem is with the underlying hardware. I have worked on both sides. Therefore, I'm aware of the scenarios, but if I were to work purely in the development team, I might not be aware that it is underlying hardware that is causing problems.

    In terms of its usage, it is available enterprise-wide. I don't know the exact number of users now because I am not a part of the platform or admin team, but at one time, we had more than 200 users working on this platform. We had one implementation on AWS Cloud and one on GCP. We had Dev, QA, and prod environments. Even now, we have about four environments. We have SIT and NFT, and in prod, we have two environments.

    We plan to increase its usage. We are rapidly increasing its usage in our projects. There is a lot of excitement around it. A lot of people want to explore this tool in our organization. A lot of people are trying to learn this technology or use it to migrate their data from legacy databases to the cloud. This will actually encourage more folks to join the data engineering or analytics team. There is a lot of curiosity around the product.

    How are customer service and support?

    Currently, I'm not involved with them on a daily basis. I'm no longer a part of the platform team, but when I was involved with them two years back, their support was good. Most of the interactions I have had with them were pretty good. They were responsive, and they responded within a day or two. I would rate them a nine out of ten. They were good most of the time, but it could be a challenge to get the right person. They are still a growing company. You need to be a little patient with them to get to the right person to help you with the issues you have.

    How would you rate customer service and support?

    Positive

    Which solution did I use previously and why did I switch?

About three or four years ago, I worked on Trifacta, which has since been acquired by Alteryx. The features were different, and the requirements were different.

    Talend is a good product. It seems quite close to StreamSets, but I have not worked on Talend. I just got a demo of Talend a couple of years ago, but I never worked on it. I felt that StreamSets had more features. Its UI was good, and functionality-wise, I found it a little bit more comfortable to use.

    How was the initial setup?

    I was involved with AWS deployment. At that time, I was a part of the platform team. Now, I work with the application development team, and I'm not involved in that. It was complex at that time. About four years ago, when StreamSets was new, we had a tough time deploying because the documentation was not very clear at that time. A lot of the documents were very good and available on the web, but the documentation wasn't exhaustive or elaborate. We also had our own learning curve. We had someone from StreamSets to help us with the deployment. So, it went well. Now, it is better, but when we did it, it was very complex.

    We implemented it in phases. We just implemented or installed the StreamSets platform in our company, and we let a couple of teams use it. We started with Data Collector, and we allowed teams to use and feel it. When they said that this is a good tool to use, we got the enterprise license, and we installed Control Hub and Data Collector. It was not implemented enterprise-wide at the same time. It was released to teams in phases.

    What about the implementation team?

    It was a mix of a consultant and reseller. It probably was Access Group that helped us with this implementation. At that time, I was in the US, and they were good. Our experience with them was fantastic. We had a couple of consultants from their team to help us with the installation. Now, we have a different vendor in the UK. We have a different partner to help us with that.

    We started with about three people, and now, we have more than 20 people on the team. It requires regular maintenance in terms of user management. It is not because of StreamSets; it is because of the underlying software. Data Collector can support a certain number of jobs in parallel. In case we have more tenants on board, we have to increase the Data Collector or Transformer instances to support the increased number of users. 

    What was our ROI?

    We have definitely seen an ROI. It has helped us in moving into the data analytics world at a faster pace than any other tool would've done. The traditional tools we had didn't provide the functionality that StreamSets offers.

    The time for realizing its benefits from deployment depends on the use case or the end requirement. For example, we deployed one project last year, and within a couple of months, we could see a lot of benefits for that team. For some use cases, it could be two months to six months or one year. You can build data pipelines, and you can move data to Snowflake or any cloud database using StreamSets in a matter of a few weeks.

    What's my experience with pricing, setup cost, and licensing?

    There are different versions of the product. One is the corporate license version, and the other one is the open-source or free version. I have been using the corporate license version, but they have recently launched a new open-source version so that anybody can create an account and use it.

    The licensing cost varies from customer to customer. I don't have a lot of input on that. It is taken care of by PMO, and they seem fine with its pricing model. It is being used enterprise-wide. They seem to have got a good deal for StreamSets.

    What other advice do I have?

    It is very user-friendly, and I promote it big time in my organization among my peers, my juniors, and across different departments. 

    They're growing rapidly. I can see them having a lot of growth based on the features they are bringing. They could capture a lot more market in coming times. They're providing a lot of new features.

    I love the way they are constantly upgrading and improving the product. They're working on the product, and they're upgrading it to close the gaps. They have developed a data portal recently, and they have made it free. Anyone who doesn't know StreamSets can just create an account and start using that portal. It is a great initiative. I learned directly on the corporate portal license, but if I were to train somebody in my team who doesn't yet have a license, I would just recommend them to go to the free portal, register, and learn how to use StreamSets. It is available for anyone who wants to learn how to work on the tool.

    We use StreamSets' ability to move data into modern analytics platforms. We use it for Tableau, and we use it for ThoughtSpot. It is quite easy to move data into these analytics platforms. It is not very complicated. The problems that we had were mostly outside of StreamSets. For example, most of our databases were on-prem, and StreamSets was installed on the cloud, such as AWS Cloud. There were some issues with that. It wasn't a drawback because of StreamSets. It was pretty straightforward to plug and play.

    I have used StreamSets Transformer, but I haven't yet used it with Snowflake. We are planning to use it. We have a couple of use cases we are trying to migrate to Snowflake. I've seen a couple of demos, and I found it to be very easy to use. I didn't see any complications there. It is a great product with the integration of StreamSets Transformer and Snowflake. When we move data from legacy databases to Snowflake, I anticipate there could be a lot of data drift. There could be some column mismatches or table mismatches, but what I saw in the demo was really fantastic because it was creating tables during runtime. It was creating or taking care of the missing columns at runtime. It is a great feature to have, and it will definitely be helpful because we will be migrating our databases to Snowflake on the cloud. It will definitely help us meet our customer goals at a faster pace. 

    I would rate it a nine out of ten. They're improving it a lot, and they need to improve a lot, but it is a great product to use.

    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    AbhishekKatara - PeerSpot reviewer
    Technical Lead at Sopra Steria
    Real User
    Top 10
    Easy-to-use tool with no coding required
    Pros and Cons
    • "StreamSets’ data drift resilience has reduced the time it takes us to fix data drift breakages. For example, in our previous Hadoop scenario, when we were creating the Sqoop-based processes to move data from source to destinations, we were getting the job done. That took approximately an hour to an hour and a half when we did it with Hadoop. However, with the StreamSets, since it works on a data collector-based mechanism, it completes the same process in 15 minutes of time. Therefore, it has saved us around 45 minutes per data pipeline or table that we migrate. Thus, it reduced the data transfer, including the drift part, by 45 minutes."
    • "The logging mechanism could be improved. If I am working on a pipeline, then create a job out of it and it is running, it will generate constant logs. So, the logging mechanism could be simplified. Now, it is a bit difficult to understand and filter the logs. It takes some time."

    What is our primary use case?

StreamSets is a wonderful data engineering and DataOps tool where we can design and create data pipelines, loading on-prem data to the cloud. One of our major projects was to move data from on-premises to Azure and GCP Cloud. From there, once data is loaded, the data scientist and data analyst teams use that data to generate patterns and insights.

For a US healthcare service provider company, we designed a StreamSets pipeline to connect to relational database sources. We generated schemas from the source data and loaded it into Azure Data Lake Storage (ADLS) or other cloud storage, such as S3 or GCP. This was one of our batch use cases.

With StreamSets, we have also tried to solve our real-time streaming use cases, where we were streaming data from a source Kafka topic to Azure Event Hubs. This was a trigger-based streaming pipeline, which moved data when it appeared in a Kafka topic. Since this was a streaming pipeline, it was continuously streaming data from Kafka to Azure for further analysis.
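The trigger-based pattern above — forward each record the moment it appears on the source topic — can be sketched without any real services. Here the Kafka topic and the Event Hubs sink are simulated with an in-memory queue and a list; the names are illustrative, not real endpoints or the StreamSets runtime.

```python
# Simulated trigger-based streaming: consume records as they arrive on a
# source "topic" and forward them to a destination "sink".
# queue.Queue stands in for the Kafka topic; a list stands in for Event Hubs.
import queue

source_topic = queue.Queue()   # stand-in for the Kafka topic
event_hub = []                 # stand-in for the Azure Event Hubs sink

def stream(source, sink, sentinel=None):
    """Move records from source to sink as they arrive, until a sentinel."""
    while True:
        record = source.get()   # blocks until a record appears (the "trigger")
        if record is sentinel:
            break               # a real streaming pipeline runs continuously
        sink.append(record)

for msg in ("event-1", "event-2", None):  # None ends the demo loop
    source_topic.put(msg)
stream(source_topic, event_hub)
```

The blocking `get` is what makes the pipeline event-driven: nothing runs until data appears, mirroring how the StreamSets job sat idle between Kafka messages.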

    How has it helped my organization?

    We can securely fetch the passwords and credentials stored in Azure Key Vault. This is a fundamentally very strong feature that has improved our day-to-day life.

    What is most valuable?

It is a pretty easy tool to use. There is no coding required. StreamSets provides us a canvas to design our pipeline. At the beginning of any project, it gives us a picture, which is an advantage. For example, if I want to do a data migration from on-premises to the cloud, I would draw the flow for easier understanding based on my target system, and StreamSets does exactly the same thing by giving us a canvas on which I can design our pipeline.

There is a wide range of available stages: various sources, relational sources, and streaming sources. There are various processors to transform the source data. It is not only for migrating data from source to destination; we can utilize different processors to transform the data. When I was working on the healthcare project, there was personally identifiable information in the personal health information (PHI) data that we needed to mask. We can't simply move it from source to destination. Therefore, StreamSets provides masking of that sensitive data.
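The masking step described above — replace sensitive PHI fields before records leave the pipeline — can be sketched as a small record processor. The field names and the keep-last-four rule are hypothetical examples for illustration, not StreamSets' actual Field Masker configuration.

```python
# Hedged sketch of a masking processor: sensitive fields in each record are
# replaced with masked values before the record moves downstream.
# SENSITIVE_FIELDS and the masking rule are assumed, for illustration only.

SENSITIVE_FIELDS = {"ssn", "phone"}  # hypothetical field names

def mask(value):
    """Keep the last 4 characters and mask the rest."""
    return "*" * max(len(value) - 4, 0) + value[-4:]

def mask_record(record):
    """Return a copy of the record with sensitive fields masked."""
    return {
        k: mask(v) if k in SENSITIVE_FIELDS else v
        for k, v in record.items()
    }

masked = mask_record({"name": "Jane", "ssn": "123-45-6789"})
```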

    It provides us a facility to generate schema. There are different executors available, e.g., Pipeline Finisher executor, which helps us in finishing the pipeline. 

There are different destinations, such as S3, Azure Data Lake, Hive, Kafka, and Hadoop-based systems. There is a wide range of available stages. It supports both batch and streaming.

Scheduling is quite easy in StreamSets. From a security perspective, there is integration with key vaults, e.g., for fetching passwords or secrets.

It is pretty easy to connect to Hadoop using StreamSets. Someone just needs to be aware of the configuration details, such as which Hadoop cluster to connect to and what credentials will be available. For example, if I am trying with my generic user, how do I connect with the Hadoop distributed file system? Once we have the details of our cluster and the credentials, we can load data to the Hadoop standalone file system. In our use case, we collected data from our RDBMS sources using JDBC Query Consumer. We queried the data from the source table, captured that data, and then loaded the data into the destination Hadoop distributed file system. Thus, configuration details are required. Once we have the configuration details, i.e., the required credentials, we can connect with Hadoop and Hive.
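The query-then-load flow above — pull rows from the RDBMS with a SQL query, then write them to the destination file system — can be sketched with stand-ins. An in-memory SQLite database replaces the real RDBMS and a string buffer replaces HDFS; the table and column names are illustrative only.

```python
# Stand-in sketch of the JDBC Query Consumer -> HDFS pattern:
# query the source table, capture the rows, write them to the destination.
# sqlite3 and an in-memory buffer replace the real RDBMS and HDFS.
import csv
import io
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE patients (id INTEGER, name TEXT)")
src.executemany("INSERT INTO patients VALUES (?, ?)", [(1, "a"), (2, "b")])

# "JDBC Query Consumer": pull rows with a plain SQL query
rows = src.execute("SELECT id, name FROM patients ORDER BY id").fetchall()

# "destination": serialize to a delimited buffer, as a pipeline would
# write delimited files into the Hadoop distributed file system
buf = io.StringIO()
csv.writer(buf).writerows(rows)
payload = buf.getvalue()
```

In the real pipeline the query, batch size, and target path are all configuration on the two stages; no code like this is written by hand, which is the point of the tool.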

It takes care of data drift. There are certain data rules, metric rules, and other capabilities provided by StreamSets that we can set. So, if the source schema deviates somehow, StreamSets will automatically notify us or send alerts in an automated fashion about what is going wrong. StreamSets also provides Change Data Capture (CDC). As soon as the source data is changed, it can capture that and update the details in the required destination.
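The drift check described above boils down to comparing the schema observed in the current batch against the expected one and alerting on any deviation. This is a minimal sketch of that idea under assumed field names, not StreamSets' rule engine.

```python
# Minimal sketch of schema-drift detection: compare the expected schema with
# what the current batch actually contains and report any deviation so an
# alert can be sent automatically. Field names are illustrative.

expected = {"id": "int", "name": "string"}

def detect_drift(expected, observed):
    """Return columns added to or removed from the source schema."""
    added = sorted(set(observed) - set(expected))
    removed = sorted(set(expected) - set(observed))
    return {"added": added, "removed": removed}

# a new "email" column has appeared in the source
drift = detect_drift(expected, {"id": "int", "name": "string", "email": "string"})
```

A drift-resilient pipeline goes one step further than alerting: it propagates the added column to the destination table at runtime instead of breaking.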

    What needs improvement?

    The logging mechanism could be improved. If I am working on a pipeline, then create a job out of it and it is running, it will generate constant logs. So, the logging mechanism could be simplified. Now, it is a bit difficult to understand and filter the logs. It takes some time. For example, if I am starting with StreamSets, everything is fine. However, if I want to dig into problems that my pipeline ran into, it initially takes some time to get familiar with it and understand it.

    I feel the visualization part can be simplified or enhanced a bit, so I can easily see what happened with my job seven days earlier and how many records it transmitted. 

    For how long have I used the solution?

    I have been using StreamSets for close to four and a half years when creating my data pipelines in our projects.

    What do I think about the stability of the solution?

    Stability-wise, it is wonderful and quite good. Mostly, since the solution is completely cloud-based in our project, we just need to hit a URL and then we are logged into StreamSets with our credentials. Everything is present there. Other than some rare occasions, StreamSets behaves pretty well. 

    There were certain memory leak issues for a few stages, like Azure Data Lake, but those were corrected with immediate solutions, like patches and version upgrades. 

    Stability-wise, I would rate it as eight and a half or nine out of 10.

    What do I think about the scalability of the solution?

I would like auto scaling for heavy load transfers. This applied particularly when we were working on our data migration project. The tables had more than 10 million records in them. When we utilized StreamSets, it took a huge amount of time. Though we were doing the schema generation and using ADLS as a destination, it hung for a good amount of time. So, we considered PySpark processes for our tables that have more than 10 million records. Usually, it works pretty well when the source table's data size is close to five to six million records, but when it is closer to 10 million, I personally feel the auto scaling feature could be improved.

    How are customer service and support?

    We have spent a good amount of time dealing with their technical support team. The first step is to check the documentation, then work with them. 

    I had a chance to work with StreamSets during our use case. They helped us out in a good manner with a memory leak issue that we were facing in our production pipeline. So, there was one issue where our pipelines were running fine in dev and the lower environment, i.e., dev and QA, but when we moved those pipelines into production, we were getting a memory leak issue where the JVM ran out of memory exception. 

    We tried reducing the number of threads and the batch size for the small table, but it was still creating issues. Then, we connected with StreamSets' support team. They gave us a customized patch, which our platform team installed in our production environment. With some collaborative effort of around a week, we were finally able to run our pipeline pretty well.

    I would rate the customer support and the technical support as quite good and knowledgeable (eight out of 10). They helped with issues that were occurring in our work. They accepted that there were some issues with the version, which StreamSets released and we were using. They accepted that the version particularly had some issues with the memory management. Therefore, the immediate solution that they provided was a patch, which our platform team installed. However, the long-term solution was to update or upgrade our StreamSets Data Collector platform from version 3.11 to 4.2, and that solved our problem.

    How would you rate customer service and support?

    Positive

    Which solution did I use previously and why did I switch?

We were using Cloudera distribution. All our projects were running utilizing Hadoop, and the distribution was Cloudera Hortonworks. We were utilizing Sqoop and Hive, as well as PySpark or Scala-based processes, to code. However, StreamSets helped us a lot in designing our data pipelines quickly.

    It has made our job pretty easy in terms of designing, managing, and running our data engineering pipeline. Previously, if I needed to transfer data from source to destination, I would need to use Sqoop, which is a Hadoop stack technology used to establish connectivity with the RDBMS, then load it to the Hadoop distributed file system. With Sqoop, I needed to have my coding skills ready. I needed to be very precise about the connection details and syntax. I needed to be very aware of them. StreamSets solved this problem. 

    Its greatest feature is that it provides an easy way to design your pipeline. I just need to drag and drop source JDBC Query Consumer to my canvas as well as drag and drop my destination to the canvas. I then need to connect both these stages and be ready with my configuration details. As soon as I am done with that, I will validate the pipeline. I can create a job out of it and schedule it, even the monitoring. All these things can be achieved by a single control panel. So, it not only solves the developer's basic problems, but it also has greatly improved the experience.

We were previously completely using the Hadoop technology stack. Slowly, we started converting our processes into data engineering pipelines designed in StreamSets. Earlier, the problem area was to write code in Sqoop or create Sqoop scripts to capture data from the source, then put it into HDFS. Once data was in HDFS, we would write another PySpark process, which did the optimization and faster loading of the data from the Hadoop Distributed File System to cloud-based storage or a data lake, like ADLS or S3. However, when StreamSets came into the picture, we didn't need an intermediary distributed file system like HDFS. We could simply create a pipeline that connects to the RDBMS and loads data directly to the cloud-based Azure Data Lake. So there is no requirement for an intermediary Hadoop Distributed File System (HDFS), which saves us a great amount of time and also helps us a lot in creating our data engineering pipelines.

Microsoft provided Change Data Capture tools, which one of our team members was using. Performance-wise, I personally feel StreamSets is way faster. A few of the support team members were using Informatica as well, but it does not provide powerful features that can handle large amounts of data.

    How was the initial setup?

    For our deployment model, we follow three environments: dev, QA, and prod. Our team's main responsibility is to hydrate Azure Data Lake and GCP from the source systems. Control Hub is hosted on GCP, and we hit its URL to log into StreamSets. All the Data Collector machines are created on Google Cloud Platform. Whenever we create something or do a PoC, we work in the dev environment. Once our pipelines and jobs are working fine, we move them to our QA environment via export and import, which is pretty easy to do in StreamSets Control Hub. We simply select a job and export it, then log into the QA environment and import it. When we import the job, we have the option to import the whole bundle: the pipeline, parameters, and instances. Once everything is working fine there as well, we promote to the final environment, production, where jobs run according to the source refresh frequencies.
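    The dev-to-QA promotion described above amounts to exporting a job together with its pipeline and parameters as one bundle, then importing it and repointing environment-specific values. The sketch below only mimics that idea with a plain JSON round trip; it is not the actual Control Hub export format, and all names and connection strings are invented.

    ```python
    import json

    # Illustrative only: the real export/import is done through the
    # Control Hub UI; this mimics moving a job bundle between environments.
    dev_bundle = {
        "job": {"name": "rdbms_to_adls_daily"},
        "pipeline": {"origin": "JDBC Query Consumer", "destination": "ADLS Gen2"},
        "parameters": {"JDBC_URL": "jdbc:sqlserver://dev-db:1433", "BATCH_SIZE": 1000},
    }

    exported = json.dumps(dev_bundle)        # "export" from dev

    qa_bundle = json.loads(exported)         # "import" into QA (a deep copy)
    # Only environment-specific parameters change; job and pipeline are reused.
    qa_bundle["parameters"]["JDBC_URL"] = "jdbc:sqlserver://qa-db:1433"
    ```

    Keeping environment differences confined to parameters is what makes the promotion a simple export/import rather than a redevelopment.
    
    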

    What about the implementation team?

    In our company, we have a good data engineering team. We have a separate administrator team that is mainly responsible for deploying the solution on the cloud and providing us libraries whenever required. Another separate team takes care of all the installations and platform-related activities. We are primarily data engineers who use the product to build solutions.

    What was our ROI?

    StreamSets’ data drift resilience has reduced the time it takes us to fix data drift breakages. For example, in our previous Hadoop scenario, we created Sqoop-based processes to move data from source to destination, and that took approximately an hour to an hour and a half. With StreamSets, since it works on a Data Collector-based mechanism, the same process completes in about 15 minutes. It has therefore saved us around 45 minutes per data pipeline or table that we migrate, including the drift handling.
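    The quoted saving can be sanity-checked with the low end of the figures above (times in minutes):

    ```python
    # Rough sanity check of the quoted per-pipeline saving, taking the
    # low end of the "hour to an hour and a half" Hadoop/Sqoop estimate.
    hadoop_minutes = 60
    streamsets_minutes = 15
    saving = hadoop_minutes - streamsets_minutes
    print(saving)  # 45 minutes saved per pipeline/table
    ```

    At the 90-minute end of the estimate the saving would be 75 minutes, so 45 minutes is the conservative figure.
    
    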

    What's my experience with pricing, setup cost, and licensing?

    StreamSets Data Collector is open source. One can utilize the StreamSets Data Collector, but the Control Hub is the main repository where all the jobs are present. Everything happens in Control Hub. 

    What other advice do I have?

    For people who are starting out, the simple advice is to first try out the StreamSets cloud login. It is freely available to everyone these days. StreamSets has released its online platform for practicing designing and creating pipelines. Simply go to cloud.login.streamsets.com, the official StreamSets site, where people who are starting out can log into StreamSets cloud and spin up their StreamSets Data Collector machines. Then, they can choose their execution mode. It is all Docker-containerized, so you don't need to set up anything.

    You simply need to have your laptop ready; step-by-step instructions are given. You spin up your Data Collector, choose the execution mode, and then you are ready with the canvas. You can design your pipeline, practice, and test there. So, if you want to evaluate StreamSets in basic mode, you can take a look online. This is the easiest way to evaluate StreamSets.

    It is a drag-and-drop, UI-based approach with a canvas where you design the pipeline. It is pretty easy to follow. Once your team feels confident, they can purchase the StreamSets add-ons, which will provide them with end-to-end solutions and vendor support. The best way to start is to log into the cloud practice platform and create some pipelines.

    In my current project, there is a requirement to integrate with Snowflake, but I don't have Snowflake experience. I have not integrated Snowflake with StreamSets yet.

    I personally love working on StreamSets. It is part of my day-to-day activities. I do a lot of work on StreamSets, so I would rate them pretty well as nine out of 10.

    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    Senior Data Engineer at an energy/utilities company with 1,001-5,000 employees
    Real User
    Top 20
    Quite simple to use for anybody who has an ETL or BI background
    Pros and Cons
    • "StreamSets data drift feature gives us an alert upfront so we know that the data can be ingested. Whatever the schema or data type changes, it lands automatically into the data lake without any intervention from us, but then that information is crucial to fix for downstream pipelines, which process the data into models, like Tableau and Power BI models. This is actually very useful for us. We are already seeing benefits. Our pipelines used to break when there were data drift changes, then we needed to spend about a week fixing it. Right now, we are saving one to two weeks. Though, it depends on the complexity of the pipeline, we are definitely seeing a lot of time being saved."
    • "Currently, we can only use the query to read data from SAP HANA. What we would like to see, as soon as possible, is the ability to read from multiple tables from SAP HANA. That would be a really good thing that we could use immediately. For example, if you have 100 tables in SQL Server or Oracle, then you could just point it to the schema or the 100 tables and ingestion information. However, you can't do that in SAP HANA since StreamSets currently is lacking in this. They do not have a multi-table feature for SAP HANA. Therefore, a multi-table origin for SAP HANA would be helpful."

    What is our primary use case?

    We are using the StreamSets DataOps platform to ingest data to a data lake.

    How has it helped my organization?

    Our time to value has improved because our development time has been considerably reduced. The major benefit we are getting out of the solution is the ability to easily upskill a person who already has an ETL or BI background. We don't need to specifically look for people who know programming or have worked on Python, DataOps, or DevOps-type functionality. In the market, it is easier to find people with ETL or BI skills than people with hardcore DevOps or programming skills. That is the major benefit of moving to a GUI-based tool like StreamSets. How quickly we deliver to our customers, as well as our ability to ingest into a data lake, have improved a lot by using this tool.

    What is most valuable?

    The types of source systems that it can work with are quite varied. It works with numerous source systems, e.g., a SQL Server database, an Oracle database, or a REST API. That is an advantage we are getting.

    The most important feature is the Control Hub that comes with the DataOps Platform and does load balancing. So, we do not worry about the infrastructure. That is a highlight of the DataOps platform: Control Hub manages the data load to various engines.

    It is quite simple for anybody who has an ETL or BI background and has worked on any ETL technology, e.g., IBM DataStage, SAP BODS, Talend, or CloverETL. The UI and concepts are very similar to how you develop an extraction pipeline in those tools. Therefore, it is very simple for anybody who has already worked on an ETL tool set, whether for data ingestion, ETL pipelines, or data lake requirements.

    We use StreamSets to load into AWS S3 and Snowflake databases, from which the data is then consumed by Power BI or Tableau. It is quite simple to move data into these platforms using StreamSets. There are a lot of destination stages within StreamSets, for Snowflake, Amazon S3, any database, or an HTTP endpoint. The drag-and-drop approach saves a lot of time compared with writing custom code in Python. StreamSets enables us to build data pipelines without knowing how to code, which is a big advantage.

    The data resilience feature is good enough for our ETL operations, even for our production pipelines at this stage. Therefore, we do not need to build our own custom framework for it since what is available out-of-the-box is good enough for a production pipeline.

    StreamSets data drift feature gives us an alert upfront so we know that the data can be ingested. Whatever the schema or data type changes, it lands automatically into the data lake without any intervention from us, but then that information is crucial to fix for downstream pipelines, which process the data into models, like Tableau and Power BI models. This is actually very useful for us. We are already seeing benefits. Our pipelines used to break when there were data drift changes, then we needed to spend about a week fixing it. Right now, we are saving one to two weeks. Though, it depends on the complexity of the pipeline, we are definitely seeing a lot of time being saved.
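    The behaviour described above, where drifted records still land in the lake automatically while an alert flags the change for downstream models, can be sketched in plain Python. This is not StreamSets code; the function and field names are invented to illustrate the idea.

    ```python
    # Toy illustration of drift-tolerant ingestion: new columns are
    # ingested without intervention, but an alert is raised so the
    # downstream Tableau/Power BI models can be updated.
    def ingest_with_drift_alerts(records, expected_schema):
        alerts, landed = [], []
        for rec in records:
            extra = set(rec) - set(expected_schema)
            if extra:
                alerts.append(f"data drift: new column(s) {sorted(extra)}")
            landed.append(rec)          # ingested regardless of drift
        return landed, alerts

    rows = [
        {"id": 1, "amount": 9.5},
        {"id": 2, "amount": 3.2, "currency": "EUR"},  # drifted record
    ]
    landed, alerts = ingest_with_drift_alerts(rows, {"id", "amount"})
    ```

    The key design point is that ingestion and alerting are decoupled: the pipeline never breaks on a schema change, which is where the one-to-two-week saving comes from.
    
    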

    What needs improvement?

    One area for improvement is probably the GUI. It is pretty basic, and a lot of improvement is required there.

    In terms of security, from an architecture perspective, when we want to implement something, and because our organization is very strict when it comes to cybersecurity, we have been struggling a bit because the platform has a few gaps. Those gaps are really gaps based on our organization's requirements. These are not gaps on StreamSets' side. The solution could improve a lot in terms of having more features added to the security model, which would help us.

    There are quite a few features that we wanted. One is SAP HANA. Currently, we can only use the query to read data from SAP HANA. What we would like to see, as soon as possible, is the ability to read from multiple tables from SAP HANA. That would be a really good thing that we could use immediately. For example, if you have 100 tables in SQL Server or Oracle, then you could just point it to the schema or the 100 tables and ingestion information. However, you can't do that in SAP HANA since StreamSets currently is lacking in this. They do not have a multi-table feature for SAP HANA. Therefore, a multi-table origin for SAP HANA would be helpful.

    For how long have I used the solution?

    I have been using it for the past 12 months.

    What do I think about the stability of the solution?

    I have no concerns in terms of the application's core stability. We haven't had any major outages as such, and even if we had one, those were internal and related to our network, proxy, or firewall. As someone who implemented it and has been working on it day in, day out, sometimes 24/7, I am quite confident with the stability of the solution.

    As with any application, it requires periodical maintenance, at least to do an upgrade. That maintenance is to simply upgrade the product, and nothing more than that.

    What do I think about the scalability of the solution?

    A core feature of the DataOps Platform is that you can easily scale through engines when you have more pipelines running and more data to process. If you need more capacity, you purchase more engines or cores, so it is quite scalable. That is a major advantage that we are getting.

    In the Control Hub Platform, the orchestration and load balancing are quite scalable. You don't need to fiddle with the existing solution. Everything is run on another engine that gets hooked up automatically to Control Hub, which makes it seamless.

    You can develop a template in StreamSets, so you have one template that you can point at any source system and just start ingesting. This has greatly reduced the time spent building new pipelines.

    How are customer service and support?

    They are quite good and responsive. We have a dedicated support portal for StreamSets. We have authorized members who can raise support tickets using the portal, including myself. They have a quick turnaround with good responses, so we are quite happy as of now. I would rate the technical support between 7.5 and 8 out of 10.

    How would you rate customer service and support?

    Positive

    Which solution did I use previously and why did I switch?

    We previously developed our own custom platform. We switched because maintaining a custom platform is difficult. We are not a product team; we are an energy company that serves business customers. Another issue was that the custom platform was written programmatically, so you need a lot of people with programming knowledge, both to maintain it and to use it.

    The time to value is quite a critical KPI. Before, when our business needed data quickly on the platform, our previous solutions struggled to get it. Thus, our time to value has improved a lot and our customers are happy because they are able to get the data quickly.

    How was the initial setup?

    I was there right from the start when they adopted an open-source version. Late last year, we moved to an enterprise version, i.e., the DataOps platform. So, I worked on the 3.2.2 version, and now I am working on the 5.0 version, which is the enterprise license version.

    The implementation is straightforward, except for a few hiccups with known network, process, and firewall issues. Other than that, it was a very simple, lean implementation.

    Because we had a lot of firewall issues and issues with our optimization, it took probably four weeks for us to get things running. However, if you exclude the issues, it took probably a week to a week and a half to get things up and running.

    As a separate piece of the project, we are migrating whatever runs on our existing custom platform to StreamSets. From a certain date, we started to work purely in StreamSets, and any future ingestion requirements use the StreamSets DataOps platform. The previous platform is frozen; we are only using it for existing pipelines, and the plan is to migrate them to the DataOps platform very soon this year.

    What about the implementation team?

    Two people were needed for the deployment of this solution: a cloud engineer and a senior data engineer.

    What was our ROI?

    First, it has saved us a lot of time because we do not need to come up with our own custom platform, which is a huge expenditure in building and maintaining the custom platform. Second, even if we go for other products in the market, there are lots of gaps with the other products. Even if we picked up another product, we would have to customize it. An off-the-shelf product is not enough to meet our needs. Therefore, StreamSets has definitely helped us in getting the information into our data lake very quickly, in terms of ingestion.

    The most important thing is it has helped us from a resourcing point of view. You can easily upskill a BI or ETL resource without any programming knowledge to work with this. That is a major advantage that we are getting since we have a lot of ETL people who do not have programming knowledge. They have vast ETL experience working with GUI-based tools, and StreamSets is really useful for them.

    It has drastically reduced the time that we are spending on workloads by 60% to 70% as well as reducing the time spent on ingestion by 30%. 

    What's my experience with pricing, setup cost, and licensing?

    It has a CPU core-based licensing, which works for us and is quite good.

    Which other solutions did I evaluate?

    We did evaluate other solutions. It was not a quick decision for us to take this product. We evaluated other products in the market, but they were either not close to StreamSets or not in the data integration space. One thing that caught our attention with StreamSets was the range of processors it could work with. Second, the Control Hub DataOps platform manages the load balancing, etc. We were quite interested in that since we would not need to maintain it ourselves. The third most important thing was that you can create job templates in StreamSets. You create a template for a particular type of ingestion and, going forward, you just change the parameters and point it at any source. This means less pipeline development, and we can quickly ingest data into the data lake. Those are the features we were interested in and why we switched to StreamSets.
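    The job-template idea mentioned above, one template instantiated with different parameters per source, can be sketched as follows. This is a hypothetical illustration, not StreamSets' template mechanism; all names and paths are made up.

    ```python
    # Hypothetical sketch: one reusable ingestion template, instantiated
    # per source by supplying parameters instead of building a new pipeline.
    TEMPLATE = {
        "pipeline": "generic_rdbms_to_lake",
        "parameters": {"JDBC_URL": None, "TABLE": None, "TARGET_PATH": None},
    }

    def instantiate(template, **params):
        """Create a job from a template, requiring every parameter to be set."""
        job = {
            "pipeline": template["pipeline"],
            "parameters": dict(template["parameters"]),  # copy, keep template pristine
        }
        job["parameters"].update(params)
        missing = [k for k, v in job["parameters"].items() if v is None]
        if missing:
            raise ValueError(f"unset parameters: {missing}")
        return job

    sales_job = instantiate(
        TEMPLATE,
        JDBC_URL="jdbc:oracle:thin:@//src:1521/ORCL",
        TABLE="SALES",
        TARGET_PATH="s3://lake/raw/sales",
    )
    ```

    Each new source becomes a parameter set rather than a new pipeline, which is why templates cut development time so sharply.
    
    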

    There is actually a gap in the entire data integration market at the moment, and StreamSets Data Collector is trying to fill that gap. The reason is because most data ingestion has to be done through programming languages, like Python or Java. We currently do not have a GUI-based tool set that is as robust as StreamSets. That is what I found out in the lab over the last 12 months. There are new products coming up, but it will still be a few more years until they are stabilized. Whereas, StreamSets is already there to solve your immediate data ingestion requirements. 

    What other advice do I have?

    Every tool in the market at the moment has some major gaps, especially for large enterprises. It could be the way that the data or pipeline is secured. At present, StreamSets looks like the market leader and is trying to fill that gap. For anyone going through a proof of concept for various tools, StreamSets is almost at the top. I don't think that they need to look any further.

    We are working only with API, a relational database management system, and our enterprise warehouses at the moment. We are not using any streaming sort of ingestion at the moment.

    We are not using the Snowflake Transformer yet; it was just released. We are using the traditional Snowflake destination stage because our enterprise is huge and we have our own Snowflake architecture. We load the data securely into our own databases using the destination stage, not the Transformer yet.

    I would rate the solution as 7.5 out of 10.

    If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

    Amazon Web Services (AWS)
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    Data Engineer at a consultancy with 11-50 employees
    Real User
    Effective, and helps scale data operations, but sometimes the support's response is slow
    Pros and Cons
    • "In StreamSets, everything is in one place."
    • "If you use JDBC Lookup, for example, it generally takes a long time to process data."

    What is our primary use case?

    The project I work on is developed in StreamSets, and I lead the team as the team leader and Solution Architect. I also train my juniors and my team.

    For the last year and a half, I've been using this tool, and it is very effective for processing data from source to destination. I have developed many integrations in it.

    How has it helped my organization?

    The solution is really effective.

    What is most valuable?

    It's very effective in project delivery. This month, at the end of June, I will deploy all the integrations I developed in StreamSets to production. The business users and customers are happy with the data flow optimizer from the SoPlex cloud. It all looks good.

    There are not many challenges in terms of learning new technologies and using new tools. We try to do R&D analysis more often.

    Everything is in place, and it comes as a package. They install everything; the package includes Python, Ruby, and others. I just need to configure the correct details in the pipeline and go ahead with my work.

    The ease of the design experience when implementing batch, streaming, and ETL pipelines is very good. The streaming is very good. Currently, I'm using Data Collector, and it's effective. If I were to do the same streaming in Java code, I would need to follow different processes to deploy the code and connect to the database. With StreamSets, there is not much code I need to write.

    In StreamSets, everything is in one place. If you want to connect a database and configure it, it is easy. If you want to connect to HTTP, it's simple. If I were to do the same with other tools, I would need many configurations and installations. StreamSets' ability to connect to enterprise data stores such as OLTP databases and Hadoop, or messaging systems such as Kafka, is good. I send data to both the database and Kafka as well.

    You get all the drivers that you need to install with the database. If you use other databases, you need a JDBC driver, which is not difficult to use.

    I'm sending data to different CDP URL databases, cloud areas, and Azure areas.

    StreamSets' built-in data drift resilience plays a part in our ETL operations. We have some processors in StreamSets, and it will tell us what data has been changed and how data needs to be sent.

    It's an easy tool, though for heavy customer workloads it can take a long time to process data. I'm not sure whether, in the future, it will take long to process the billions of records I'm working on; we generally process billions of records on a daily basis. I will find out when I work on this new project with Snowflake, where we might need to process billions of records from the source. We'll see how long it takes and how the system handles it. At that point, I'll be able to say how effectively StreamSets processes it.

    The Data Collector saves time. However, there are some issues with the DPL.

    StreamSets helped us break down data silos within our organizations.

    One advantage is that everything happens in one place; if you want to develop or create something, you can get those details from StreamSets. The portal, however, takes time, though they are working on this.

    StreamSets' reusable assets have helped to reduce workload by 32% to 40%.

    StreamSets helped us to scale our data operations.

    If you get a request to process data with other processing tools, it might take a long time, like two to three hours. With this, I can do it within 20 to 30 minutes. It's more effective. I have everything in one place and can configure everything, which saves me time.

    What needs improvement?

    If you use JDBC Lookup, for example, it generally takes a long time to process data.

    StreamSets enables us to build data pipelines without knowing how to code, but you still need to understand data flow. Without knowing anything, it's a bit difficult for new people; you need some technical skills to create a data pipeline. When building a pipeline, for example, you need the origin, processor, and destination. You have to know where you're going to read the data from and where to send it, and you have to configure both. If the destination requires a particular message format or data format, then you have to write your own code for that; StreamSets does not write that code for you.

    StreamSets data drift resilience has not exactly reduced the time it takes for us to fix data drift breakages. A lot of improvements are required from StreamSets. I'm not sure how they're planning to make it happen. There are some issues in the case of data processing, and other scenarios.

    If the data processing in StreamSets takes a long time as compared to the previous solution, then we will reconsider why we use StreamSets.

    For how long have I used the solution?

    I've been using StreamSets for the last two years.

    What do I think about the stability of the solution?

    In terms of stability, there have been one or two issues. Good people work on the solutions when we have issues. However, sometimes we don't get a good solution. 

    As a user, I expect more, and I expect solutions to come quicker rather than projects being kept on hold for a long time. If they do not have a solution, they should let us know quickly so that we can plan accordingly and use other processors.

    What do I think about the scalability of the solution?

    The scalability is good.

    We do plan to increase usage. 

    How are customer service and support?

    In terms of technical support, they generally do a detailed analysis on their end and always try to give a proper solution. Sometimes, however, they don't reach a proper solution; they come back and look into it, and it takes time. If they could speed up the process a little, that would be ideal. We are always sitting on the edge; if we don't get a proper response from them, it is very difficult for us to answer to higher management.

    How would you rate customer service and support?

    Neutral

    Which solution did I use previously and why did I switch?

    This is my first solution of this kind. Previously, I was working in open source systems, with scripting, et cetera. This is the first time I've worked in the data area. I've got full support. As a new data user, I'm still getting used to it.

    How was the initial setup?

    The setup is straightforward, it's not complex and it is simple. 

    We treat it like a pipeline. We are not writing code and pushing things in. With a pipeline, you can export it and import it, and it can be auto-deployed into the respective environment. That's what we did.

    We have different destinations we need to send to. We aren't using a single destination. In that sense, we do have multiple computations. We set up, send the data and do the deployments. 

    There is occasional maintenance needed. Sometimes, if something goes wrong, we'll have to correct the data. We just check here and there for the most part.

    What about the implementation team?

    We did not need an integrator or consultant to assist with the setup. 

    As a team, we do the deployment. We won't send it to others, whatever we develop, we will test and deploy. We already have the system in place and it is really helpful for the deployment of the solution.

    What was our ROI?

    I haven't seen an ROI. 

    It's not exactly saving us money, as it's a new tool. If I'm going to hire someone new, I will not hire based on the StreamSets tool or other specific tools, so I might save money right away. However, I'm spending time on my side. StreamSets is not yet widely adopted; in some places in Europe, few companies are using it. People should get to know StreamSets and build expertise in the area, the way they have with AWS and Azure. I'm spending a lot more time, and therefore I'm not saving money. That said, I'm also not losing money.

    What's my experience with pricing, setup cost, and licensing?

    Higher management handled the licensing. However, I can't say how much it costs. I'm more on the user side.

    Which other solutions did I evaluate?

    I did not evaluate other options. 

    What other advice do I have?

    I have not yet used StreamSets' Transformer for Snowflake functionality. I created one POC, not with Snowflake, however, I'm going to use Snowflake in my next project.

    I'd rate the solution seven out of ten. They are doing a good job. Using this solution, I can see the data and the user flows.

    If you are working on-premises and just copying data to a table, you don't see how much data has been copied. With this, I see how much data has been transferred and where the processor is. It gives a clear picture with metric details and notifications. That's the reason I have used this tool for the last two years.

    Which deployment model are you using for this solution?

    Hybrid Cloud

    If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

    Amazon Web Services (AWS)
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    Prateek Agarwal - PeerSpot reviewer
    Manager at NISG
    Real User
    Top 5Leaderboard
    Very effective data drift resilience and helped us break down data silos
    Pros and Cons
    • "It is a very powerful, modern data analytics solution, in which you can integrate a large volume of data from different sources. It integrates all of the data and you can design, create, and monitor pipelines according to your requirements. It is an all-in-one DataOps solution."
    • "Sometimes, when we have large amounts of data that is very efficiently stored in Hadoop or Kafka, it is not very efficient to run it through StreamSets, due to the lack of efficiency or the resources that StreamSets is using."

    What is our primary use case?

    We are working on a very large data analytics project, in which we are integrating large data sets to a platform from multiple sources. We need to create data pipelines. We are using StreamSets for all the data integration activities, for creating the pipelines, monitoring them, and running all the data processes smoothly.

    How has it helped my organization?

    Previously, our ETL tasks were done manually by our DevOps team. StreamSets gives us the flexibility to integrate data sets, create pipelines, design them any way we like, and monitor them at any time with a single click. We provide that to the other team members as well so that they can easily track and monitor data pipelines in progress. If any batch fails, it notifies us where it failed and whether there were any issues with the data, which is quite a benefit for our organization.

    The data drift resilience is also very effective. Sometimes we get data that is not in the proper format. It enables us to clear data ambiguity from our data sets so that all the data sets are in the proper format. We used to spend 70 to 80 percent of our time fixing data; StreamSets enables us to remove all the data exceptions. It is quite effective. We can't imagine working without the data drift capabilities. Before, our team spent 10 to 12 hours a week fixing data, but that has now been reduced to one to two hours. It has had a wonderful impact on our organization.

    In addition, the reusable assets have reduced our workload because if you are not spending too much time on fixing data, you have sufficient time to work on other activities within the whole solution.

    Before StreamSets, we had 40 to 45 people working on data engineering for data analytics. We have reduced that headcount to 25 to 30 and that has helped increase our budget for other activities.

    We have also been able to break down data silos in our company. Now the team can collaborate, through StreamSets, in a very unique way. They can own the data sets and work according to the data pipelines, anywhere around the world. We have a very large, diverse, geographically dispersed team. It enables them to work from different locations on the data pipelines and integration activities.

    Overall, the solution saves us 40 to 45 percent of our time because, manually, ETL jobs are very tedious.

    What is most valuable?

It is a very powerful, modern data analytics solution in which you can integrate large volumes of data from different sources. It integrates all of the data, and you can design, create, and monitor pipelines according to your requirements. It is an all-in-one DataOps solution.

    It is quite easy to implement batch, streaming, or ETL pipelines. You need some initial hands-on training to use it, but they provide very good training material and a user manual. They also provide some initial training to the user, so that they can easily run the application. 

It has drag-and-drop features, so almost no code is required for creating ETL pipelines. You can easily build data pipelines according to your requirements. We have many team members who don't know how to code but are excellent at data analytics, and StreamSets enables them to build the data pipelines. The industry is moving toward almost-no-code and low-code platforms; Azure Analytics and AWS likewise provide almost-no-code platforms for data integration activities.

Because we are working on a large data analytics project, our data volume is huge. We integrate StreamSets with Kafka and Hadoop, and with analytics tools like Power BI and Tableau for data visualization. It is quite easy to connect to these systems because StreamSets supports all the major data connectors, including Oracle, ERP and CRM systems, Azure, and AWS.
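The pipelines described above all follow the same origin → processor → destination shape. The sketch below is a hypothetical illustration of that shape in plain Python (the stage names and the `email` field are invented; this is not the StreamSets API), with generator stages standing in for a Kafka origin, a processor, and an analytics-facing sink.

```python
# Illustrative origin -> processor -> destination pipeline shape
# (hypothetical stages, not StreamSets code).

def origin(records):
    # Stand-in for a Kafka/Hadoop origin: yields raw records.
    yield from records

def mask_pii(records):
    # Example processor stage: redact a sensitive field if present.
    for r in records:
        if "email" in r:
            r = {**r, "email": "***"}
        yield r

def destination(records):
    # Stand-in for a sink feeding Power BI / Tableau: collect output.
    return list(records)

raw = [{"id": 1, "email": "a@b.com"}, {"id": 2}]
result = destination(mask_pii(origin(raw)))
```

Because each stage only consumes and emits records, stages can be swapped or reordered independently, which is essentially what the drag-and-drop designer lets non-programmers do.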

    What needs improvement?

Sometimes, when we have large amounts of data stored very efficiently in Hadoop or Kafka, it is not efficient to run it through StreamSets, due to the resources StreamSets consumes at that scale.

Also, the hierarchy of names within the dropdowns and the drag-and-drop features is not familiar to users without a technical or programming background. For those users, the naming conventions are a challenge.

    For how long have I used the solution?

    I have been using StreamSets for more than two years.

    What do I think about the stability of the solution?

It's a very smooth and reliable solution. The performance is good, and it is very efficient at running multiple data integration pipelines. It is reliable most of the time.

    What do I think about the scalability of the solution?

It is a scalable solution. It automatically scales with our data and analytics workloads. We have thousands of users concurrently using the data analytics software, and StreamSets has scaled perfectly.

    How are customer service and support?

The customer support team needs a better understanding of data analytics and data engineering. If a user is stuck while creating or designing pipelines, or needs any other technical support, the team needs to be fully equipped to resolve those problems. They need more dedicated technicians with deeper professional knowledge.

    How would you rate customer service and support?

    Positive

    Which solution did I use previously and why did I switch?

We previously used Azure Data Factory for data integration and ETL activities, but we switched because we are on AWS, and Azure Data Factory requires the Azure cloud platform. Azure Data Factory is also less efficient and has fewer data connectors than StreamSets. And StreamSets has a very attractive pricing plan for its services compared to Azure Data Factory.

    How was the initial setup?

The deployment is quite easy because it is cloud-based; no external software is required. You can start your work from day one, once deployment is done. The initial setup takes 15 to 21 days. We have it deployed at a single location, used by 50 to 60 people, mainly from our product, DevOps, and DataOps teams.

    Our data operations and DevOps teams checked and tested all the results for all the use cases that we have.

    It is a fully managed cloud-based solution, so everything is managed by the StreamSets team. There is no maintenance on our end.

    What about the implementation team?

    We did it in-house.

    What was our ROI?

Our return on investment comes from all of the positives I mentioned already, as well as from the pricing plan and the fact that it has all the features required for pipeline integration. The customer support also contributes: they sometimes lack professional knowledge about the solution, but on average they are good.

    What's my experience with pricing, setup cost, and licensing?

    The pricing is good, but not the best. They have some customized plans you can opt for. It is quite affordable for any organization. Pricing is not a concern, as compared to Informatica and other solutions.

    Which other solutions did I evaluate?

We evaluated Azure Data Factory, as well as other solutions such as Alteryx, Talend, and Informatica.

    After analyzing all the available features in StreamSets and the pricing plan, we made our decision to go with StreamSets because it met our needs for all of the data integration activities we have.

    What other advice do I have?

Go through your data integration requirements and compare the other solutions against them. I expect StreamSets will work well for most requirements, because it has all the features you are likely to need.

Sometimes we use StreamSets’ Transformer for Snowflake functionality when the data is huge and cannot be integrated through other processes. Transformer for Snowflake is quite good, useful, and easy to set up, though it requires some initial training. It is useful when you have a large volume of data coming in through API calls or from IoT devices.

    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.