
Share your experience using Perspectium DataSync

The easiest route: we'll conduct a 15-minute phone interview and write up the review for you.

Use our online form to submit your review. It's quick and you can post anonymously.

Your review helps others learn about this solution
The PeerSpot community is built upon trust and sharing with peers.
It's good for your career
In today's digital world, your review shows you have valuable expertise.
You can influence the market
Vendors read their reviews and make improvements based on your feedback.
Examples of the 102,000+ reviews on PeerSpot:

Anurag Pal - PeerSpot reviewer
Technical Lead at a consultancy with 10,001+ employees
Real User
Feb 11, 2026
Search and aggregations have transformed how I manage and visualize complex real estate data
Pros and Cons
  • "My favorite feature is always aggregations and aggregators; you do not have to do multiple queries and it is always optimized for me, and I always got the perfect results because I am using full text search with aliases and keyword search, everything I am performing it, and it always performs out of the box."
  • "According to me, as far as I have seen, people will start moving from Elastic Search sooner or later. Why? Because it is expensive."

What is our primary use case?

I am using Elasticsearch not only for search but also for rendering data on maps.

I have not used vector search so far, so I cannot comment on how it performs.

I was not using vectors in Elasticsearch because, as I mentioned, I use other databases for that. I have not explored it because, at that data volume, Elasticsearch becomes expensive. In that case, what I suggest to my clients is to go with PostgreSQL or another vector database, especially when the client is a startup and cost is the problem.

We are using streams.

What is most valuable?

My favorite feature is aggregations. You do not have to run multiple queries, and the results are always optimized for me.

I always get the results I need because I combine full-text search with aliases and keyword search, and everything I run performs well out of the box.
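
For illustration only, a minimal sketch of that single-request pattern in Go with the official go-elasticsearch v8 client might look like the following; the index name, field names, and query text are hypothetical and not taken from my actual project.

    package main

    import (
        "context"
        "encoding/json"
        "fmt"
        "log"
        "strings"

        "github.com/elastic/go-elasticsearch/v8"
    )

    func main() {
        // Connect with default settings (localhost:9200 or ELASTICSEARCH_URL).
        es, err := elasticsearch.NewDefaultClient()
        if err != nil {
            log.Fatalf("error creating client: %s", err)
        }

        // One request: a full-text match query plus two aggregations,
        // so no separate query per facet is needed.
        query := `{
          "size": 0,
          "query": { "match": { "description": "riverside apartment" } },
          "aggs": {
            "by_city":   { "terms": { "field": "city.keyword", "size": 10 } },
            "avg_price": { "avg":   { "field": "price" } }
          }
        }`

        res, err := es.Search(
            es.Search.WithContext(context.Background()),
            es.Search.WithIndex("listings"), // hypothetical index name
            es.Search.WithBody(strings.NewReader(query)),
        )
        if err != nil {
            log.Fatalf("search request failed: %s", err)
        }
        defer res.Body.Close()

        var out map[string]interface{}
        if err := json.NewDecoder(res.Body).Decode(&out); err != nil {
            log.Fatalf("error decoding response: %s", err)
        }
        fmt.Println(out["aggregations"])
    }

The match query and both aggregations come back in one response, so there is no need to run a separate query per facet.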

It is easy to configure because I have been doing it for years. The last version I remember using is 3.5 or 3.1. Since then, I have been following Elasticsearch and the changes they make, and I have never seen any problem with configuration.

What needs improvement?

Elasticsearch consumes a lot of memory. You have to allocate a large heap if you want the best out of it. The major problem is when a company wants to use Elasticsearch but is still at the startup stage, where funding is a real consideration, yet the use case involves a pretty significant amount of data. For that, it is very expensive. For example, if you take analytics-oriented alternatives in the current landscape, such as ClickHouse or Iceberg, you can run them on 4 GB of RAM as well. Elasticsearch is for analytical records; you have to do the analytics on it.

As far as I have seen, people will start moving away from Elasticsearch sooner or later. Why? Because it is expensive, and there are open-source alternatives available, such as ClickHouse. Around 2012 to 2014 there was only one competitor, which was Solr. Now, not only is Solr there, but you can take ClickHouse and you also have Iceberg. How is Elasticsearch going to compete with them? There is also a fork of Elasticsearch, OpenSearch.

From the many articles I have read, most users run it as the ELK stack for collecting and analyzing logs. That is not the only use case; it can do much more than that if used correctly. But because it involves a lot of cost, people are shifting from Elasticsearch to other options.

When I talk about pricing, it is not only the server pricing; it is the amount of memory Elasticsearch uses. The cost is basically the Java heap it consumes. That is the major problem here. If a client comes to me and says, "Anurag, we need to do a proof of concept; can we do it within a 4 GB or 16 GB budget?", how can I tell them that Elasticsearch needs a minimum of 16 GB just to prove the concept? In that case, I have to suggest from the beginning that they go with Cassandra or, at the initial stage, with PostgreSQL. The problem is the memory it takes. That is the only thing.

For how long have I used the solution?

I have been using Elasticsearch since around 2012.

What do I think about the stability of the solution?

I have never seen any instability, even in the early versions.

What do I think about the scalability of the solution?

I have tested it with a petabyte of records. It is scalable.

How are customer service and support?

One person can do it, but when it comes to DevOps, we always need a team. If we only have to manage Elasticsearch, one person is fine.

How would you rate customer service and support?

Neutral

Which solution did I use previously and why did I switch?

I have used Solr and MongoDB as direct alternatives. It depends on the situation and what the client wants; sometimes they want Cassandra in place of Elasticsearch, and our job is only to advise them. When it comes to server cost, they are always asking, "Can we move to another server?" For example, I was working on a legal application for attorneys and we implemented Elasticsearch. On AWS alone, we had to take two 32 GB instances for Elasticsearch. After only a few months, the client asked, "Anurag, is it possible to move to another source if it reduces the latency or the concurrency load?" In that case, we had to move to Cassandra. So yes, I do use alternatives.

What other advice do I have?

Elasticsearch works fine with streaming. I do not have any problem with it, because the client library works well for the solution I am providing in Go. The libraries there are healthy and have worked well, and I am satisfied with that. If there are some lags, I manage them. My review rating for Elasticsearch is 9.5 out of 10.

Which deployment model are you using for this solution?

On-premises

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Last updated: Feb 11, 2026
reviewer2801493 - PeerSpot reviewer
Architect at a transportation company with 1,001-5,000 employees
Real User
Top 20
Feb 6, 2026
Migration projects have accelerated data processing and now require better latency and support
Pros and Cons
  • "I would tell others looking into using SnapLogic to use it, take it, use it, and use it, because you can't go beyond SnapLogic and SnapLogic is not going to give you any kind of bad experience, definitely."
  • "From the HR point of view, or for HR tech, improvement is required. A couple of connectors are not working with all the relevant APIs, and there is always a restriction in terms of fetching the data."

What is our primary use case?

My main use case for SnapLogic is migrations from Boomi and from SQL databases to SnapLogic. Regarding the SQL database migration, I'm not going to disclose the customer name, but I can give you some highlights. It was financial data for a fintech organization. They wanted to move their P&L and GL data, along with the benefits data they have to send to a third-party system, from another iPaaS system to SnapLogic, because delays and latency were the biggest issues for them. So we built up a number of data lakes and migrated the data over to SnapLogic.

Similarly, for the SQL database, it wasn't only the latency; they were also facing issues where people were unnecessarily writing a lot of triggers. They didn't think that was the correct way, because there was no alignment or streamlining in terms of process design or reusable technical resources. They wanted to identify and streamline the entire process, as well as build some kind of reusable process. I have also found that SnapLogic, or perhaps Boomi or MuleSoft, whichever the customer chooses, has an edge in terms of processing data quickly and with reduced latency. I don't know what changes they are going to make with the help of agentic AI, all the pipelines, Databricks, and so on, or how they are going to put the data in perspective within the application. That is the use case I can tell you about.

What is most valuable?

The best features SnapLogic offers include the different integration patterns, which you can adapt to quickly. Data manipulation is also easy. When I say easy, I mean there are a lot of ways to get different data sources into one place, and then you can do a lot of customization, which is easy as well. A person who doesn't have that kind of exposure can adapt quickly; there is no room to say, "I don't know this system." If your basics are clear, you can adapt quickly, so adoption on SnapLogic's side is quick. The second thing is that the features they provide for the different integration patterns are really unique.

You can build a lot of quick JMS connections. There are built-in connectors for the likes of SuccessFactors, Salesforce, and Oracle, with which you can pull data from the ERPs. Then you can manipulate the data, map it to the destination system, and send it to SFTP, to an S3 bucket, or via an API. You can also build the API within that suite and manipulate the data there. Accordingly, you can make better use of the RBAC system.

SnapLogic comes with a lower latency rate when it comes to a huge number of pipelines and the terabytes of data that need to be read and then written, especially if you look toward Apache Iceberg, Databricks, or anything Spark-based and how those systems handle the data being sent. So there is a quick turnaround in processing the data for any of the downstream systems.

I believe SnapLogic has a real impact on the organizations I work with as a contractor. It is really useful for them, especially for new use cases, which are adopted quickly and with a quick turnaround. It is very useful in terms of testing and realization. In particular, among SnapLogic's unique features, the entire change-and-transport management for sending the whole configuration from development to quality and from quality to production is really fantastic. This also gives the customer a different perspective: they can quickly make changes in development and, in no time, move them from dev to production.

What needs improvement?

Latency is the biggest issue across iPaaS; that is the important part. I have worked not only with SnapLogic but also with MuleSoft, Dell Boomi, and Jitterbit. They are all fine, but they have different kinds of functionality, and they all have similar kinds of problems in different domains. As I mentioned, though, SnapLogic has a bit of an edge over the others because of its functionality and built-in functions for fintech. That I can say firmly.

They need to assess themselves. In this day and age, as I mentioned earlier, terabytes of data need to be read with a quick turnaround for downstream systems, especially for GenAI or any LLMs. They can definitely improve there, because right now most things, in fact the GenAI systems such as ChatGPT, Claude from Anthropic, and so many others, require data quality with perfection and precision. For all of that, we require a data pipeline that can be read without latency and without delay, for any reason. So if they can improve that over the cloud, it would be really fantastic and a really good achievement for them, and not only for them but for the customer as well. Then, no matter what, people will not leave SnapLogic; they will stay with the Snaps.

I don't know much about that, as I haven't referred to the documentation that much. But support is something that is pretty obviously required, rather than just providing videos; technical support is required. The roadmap also needs to be clearly stated and specific about which domain they are going to address and what they are going to do there, if they are coming out with a roadmap. Otherwise, overall, if they improve their entire system as I mentioned earlier, around the reusability concept and the data pipeline concept, then they will definitely do some magic in the future.

From the HR point of view, or for HR tech, improvement is required. A couple of connectors do not work with all the relevant APIs, and there is always a restriction in terms of fetching the data. That is why I chose six. From the fintech point of view, on a scale of one to ten, I would give it an eight out of ten. It is a huge product, and there is always a margin for improvement, which is why I chose eight. If you talk about HR, sales, or any other domain, a significant amount of improvement is required.

For how long have I used the solution?

I have been working in my current field for close to 18 years.

What do I think about the scalability of the solution?

SnapLogic's scalability is huge; there is a huge amount of scalability. Still, there is an area for improvement, which is why I put it at a six. Once that area of improvement is done, or about to be completed, it will definitely be a nine out of ten.

How are customer service and support?

Customer support for SnapLogic is neither bad nor good. It is okay, normal. Sometimes it is very good, sometimes there is no response.

How would you rate customer service and support?

Neutral

Which solution did I use previously and why did I switch?

I did a lot with Boomi; I love Boomi a lot, along with SnapLogic. Where we switched to SnapLogic, it wasn't from Boomi. The customer chose SnapLogic over the traditional way of working, such as a SQL database, an Oracle database, or some other iPaaS solution that was not Boomi. They switched because the traditional solutions did not fit their commitment or their approach to building an ecosystem on the cloud. That is why they switched to SnapLogic. Everywhere I have worked where SnapLogic is used has been in fintech. Fintech has huge, secured, and very cumbersome numeric logic. These companies don't have a huge headcount, but they require very complex calculations that they want to complete easily, and that keeps them connected to the platform over many years. So if they wanted to switch to or from SnapLogic, they would need the entire ecosystem as it is: process optimization, approach, documentation, and so on. That is why they chose SnapLogic.

I have evaluated other options. Honestly speaking, I suggested both Boomi and MuleSoft.

What about the implementation team?

I am a contractor, so I did the work myself; I know the platform. I did not work with a partner or reseller, and I don't have any such relationship with anyone.

What was our ROI?

Time was definitely saved with SnapLogic. As for money, I don't know; I'm not part of the pricing and setup, so I can't comment on that. It is not that fewer employees were needed, but time was definitely saved and our process was optimized with the help of SnapLogic. That I can say for sure.

Which other solutions did I evaluate?

Before using SnapLogic, I had a different opinion.

What other advice do I have?

I would rate SnapLogic a six on a scale of one to ten.

I don't really remember the exact metrics, but I can give you one example where we built around eighty-odd interfaces out of the three hundred interfaces that needed to be migrated from Oracle. It worked really well. It was fantastic in terms of processing, latency, data manipulation, and the downstream systems. One interface had to be sent to more than thirty destinations in different time zones with different scheduling. We heavily customized those interfaces, and they really work well.

I would tell others looking into using SnapLogic to take it and use it. You can't go beyond SnapLogic, and SnapLogic is definitely not going to give you any kind of bad experience. With a solution that has that much scalability, it is very unlikely that SnapLogic will fall short of any customer's expectations.

Good luck. Do great things and achieve great things. I would love to see SnapLogic's new roadmaps and move onto them myself. You will definitely do some wonders with SnapLogic.

Which deployment model are you using for this solution?

Hybrid Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Last updated: Feb 6, 2026