VMware Tanzu Data Solutions Room for Improvement
There is general room for improvement.
Ravichandra Reddy
Senior Director, Engineering at Tanla Solutions Ltd
Other tools besides RabbitMQ provide good TPS and HA. If RabbitMQ can match what its competitors provide, then we can probably give it a ten out of ten.
Implementing a circuit breaker scenario using RabbitMQ is complicated. This complexity arises because manual intervention is required to manage worker details and handle operations based on worker IP addresses.
The use of public and private ports, specifically HTTP 8082 and HTTPS 8092, introduces complexity.
So we have thought about moving away from RabbitMQ to Anypoint MQ, because there is a complex scenario where we need to implement a circuit breaker.
With RabbitMQ this is getting complicated, and we need to implement it manually. Anypoint MQ is a Mule-native broker that is well optimized for the Anypoint Platform itself, so if we were using Anypoint MQ we could implement this circuit breaker scenario very easily.
With RabbitMQ it is more complex: we need to get the details of the workers and stop them based on how many workers we have, by getting their IP addresses, and all of that gets quite complicated. On top of that, this approach only works over the public HTTP port 8082, and in general we do not use public ports; we use 8092, the private port, with an HTTPS configuration.
That helps us in several ways, particularly with the organization's security requirements. That is why this scenario is getting complicated to implement with RabbitMQ, and why we are looking at moving to Anypoint MQ.
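For readers weighing the same trade-off, here is a minimal, hypothetical sketch of the kind of manual circuit breaker described above, written against RabbitMQ with the Python pika client: the consumer stops pulling messages after repeated downstream failures and backs off for a cool-down period. The queue name, thresholds, and call_downstream() function are illustrative assumptions, not part of the review.

```python
import time
import pika

FAILURE_THRESHOLD = 5
COOLDOWN_SECONDS = 30

def call_downstream(body):
    """Placeholder for the real downstream call the breaker protects."""
    raise NotImplementedError

def consume_with_breaker(queue="orders"):
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue=queue, durable=True)
    failures = 0

    for method, properties, body in channel.consume(queue, inactivity_timeout=1):
        if method is None:
            continue  # no message arrived within the timeout window
        try:
            call_downstream(body)
            channel.basic_ack(method.delivery_tag)
            failures = 0
        except Exception:
            channel.basic_nack(method.delivery_tag, requeue=True)
            failures += 1
            if failures >= FAILURE_THRESHOLD:
                # "Open" the breaker: stop consuming, back off, let the caller retry later.
                channel.cancel()
                conn.close()
                time.sleep(COOLDOWN_SECONDS)
                return
    conn.close()
```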
When we had outages, we had timeouts. Sometimes the nodes would drop out of a cluster, and the applications would get timeouts. There is a newer version out that we're planning to upgrade to for more stability. It could also be caused by our design.
The product must improve its reliability. It would be nice if a node that drops out could notify the others in a timely manner, so that the applications would never notice there was a problem.
Once in a while, we have downtimes associated with RabbitMQ. However, the long-term solution is to architect your solution around a commercially supported messaging broker.
Abdullah Jan Farooqui
Director Consulting Services at M3tech
The solution is a fine product. However, to make it perfect, in some cases there is a need to traverse the queue. RabbitMQ currently lacks the capability to archive a queue, which would essentially turn it into a log.
For such requirements, you may need to explore other options like Kafka or custom drivers that allow traversing the entire queue. In RabbitMQ, while you can traverse the entire queue, you need to devise a workaround to handle the messages. For example, you can read a message from one queue, publish it to another queue or keep it in some other way to retain the desired entries, and then stop at that point.
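As a rough illustration of the workaround described above, here is a minimal sketch using the Python pika client that drains one queue with basic_get, keeps selected messages, and republishes them to another queue. The queue names and the keep() filter are illustrative assumptions.

```python
import pika

def traverse_and_keep(source="audit", target="audit.archive", keep=lambda body: True):
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.queue_declare(queue=source, durable=True)
    ch.queue_declare(queue=target, durable=True)

    while True:
        method, properties, body = ch.basic_get(queue=source)
        if method is None:
            break  # queue is empty, traversal complete
        if keep(body):
            ch.basic_publish(exchange="", routing_key=target, body=body,
                             properties=pika.BasicProperties(delivery_mode=2))
        ch.basic_ack(method.delivery_tag)

    conn.close()
```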
Additionally, the need for support may vary depending on the usage and potential heavy loads on the system. The support feature could benefit from some improvement in terms of accessibility and responsiveness.
I don't encounter significant challenges or areas that require improvement while using the solution. Everything works smoothly, and I find it well thought out. It has excellent compliance with AMQP 0-9-1. Overall, I have had a positive experience with the solution.
Maintenance is time-consuming. It takes time to VACUUM and ANALYZE the tables to remove fragmentation.
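For context, the maintenance the reviewer mentions typically looks something like the following sketch, which runs VACUUM ANALYZE over a list of tables with psycopg2. The connection string and table names are illustrative assumptions.

```python
import psycopg2

def vacuum_analyze(tables):
    conn = psycopg2.connect("dbname=warehouse user=gpadmin")
    conn.autocommit = True  # VACUUM cannot run inside a transaction block
    with conn.cursor() as cur:
        for table in tables:
            # Table names come from a trusted, hard-coded list here.
            cur.execute(f"VACUUM ANALYZE {table};")
    conn.close()

vacuum_analyze(["sales.orders", "sales.order_lines"])
```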
The product needs to focus on offering more use-case documentation, because searching the internet for it can be a struggle. It is difficult to find documentation that explains how to choose the right integrations for the product. The documentation should be improved.
I think the cloud version of the product is still lacking significantly, because we use Snowflake or Redshift in the cloud. We have a lot of on-premises use cases, and Greenplum is good for those, but we also have a lot of cloud use cases, and the solution is lacking in that sphere. A cloud-native version of Greenplum would simplify things by enabling a move from on-premises Greenplum to Greenplum in the cloud.
It would be helpful if they could include some inbuilt machine-learning functions for complex use cases. I'd also like to see some kind of integrated dashboarding within the product for visualization. For example, Power BI integration or Tableau integration so that we could create a quick dashboard without getting Power BI onboarded.
It doesn't have any GUI-based monitoring tools. Oracle has proprietary tools for monitoring all of its databases, but Postgres doesn't have any graphical capability for monitoring the database. We have to do it from the CLI and run various commands; then we can get the data from the cluster and database catalog tables.
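In the absence of a bundled GUI, the usual fallback is to query the system catalogs yourself. A minimal sketch along those lines; the connection details are illustrative assumptions.

```python
import psycopg2

conn = psycopg2.connect("dbname=postgres user=postgres")
with conn.cursor() as cur:
    # How many sessions are currently connected?
    cur.execute("SELECT count(*) FROM pg_stat_activity;")
    print("active connections:", cur.fetchone()[0])

    # Size of each non-template database.
    cur.execute("""SELECT datname, pg_size_pretty(pg_database_size(datname))
                   FROM pg_database WHERE NOT datistemplate;""")
    for name, size in cur.fetchall():
        print(f"database {name}: {size}")
conn.close()
```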
The initial setup is complex.
It would be ideal if they could provide an active cluster in Postgres. If one primary DB goes down, it should automatically fail over to the second database.
This solution struggled with multi-regional synchronization.
PraveenKumar28
Packaged App development Senior Analyst at a consultancy with 10,001+ employees
We needed to configure additional plugins. While it was relatively easy to do this on-premises, it became more challenging in the cloud.
Michael Grayson
Director of IT Operations at a financial services firm with 10,001+ employees
They should improve product performance. It does not share resources well whenever it has more than one process running. If one user consumes a huge amount of resources, it takes down the entire system.
They should work on resource pooling and sharing those resources properly. At present, it allows one user to completely take over everything.
The product is pretty hard to configure.
VMware RabbitMQ's configuration process could be easier to understand.
I'd like to see more support for structured data and features related to queries on NoSQL keys; extra filters would be helpful.
The user interface could be improved. The interface shows the consumption rate, the number of consumers, and their occupation rate. It should also have a column showing the estimated time until, at the current rate of consumption, all the messages in a specific queue will be consumed. That would be great. I wanted to be able to read that, but as it is right now, the JavaScript would have loaded the browser too much. Basically, I'd just like to see, for each queue, the estimated drain time at the current consumption rate, without too much fuss.
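The estimate the reviewer asks for can be approximated today from the RabbitMQ management HTTP API, which exposes queue depth and acknowledgement rates. A minimal sketch; the host, credentials, vhost, and queue name are illustrative assumptions.

```python
import requests

def estimated_drain_seconds(queue, vhost="%2F", host="http://localhost:15672"):
    resp = requests.get(f"{host}/api/queues/{vhost}/{queue}",
                        auth=("guest", "guest"))
    q = resp.json()
    depth = q.get("messages", 0)
    rate = q.get("message_stats", {}).get("ack_details", {}).get("rate", 0.0)
    if rate <= 0:
        return None  # nothing being consumed right now
    return depth / rate

print(estimated_drain_seconds("orders"))
```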
The solution could use more plugins that are integrated into the server installation. We had a plugin we used to delay messages that, from one version to the next, was integrated into the server setup; maybe it was more of an extension. However, more plugins could also be integrated into newer versions of Rabbit.
VMware RabbitMQ needs to create a new queue system.
Vivek Bajpai
Solutions Architect at a tech services company with 51-200 employees
The availability could be better. When something crashes, a queue gets deleted, and my data is lost. They need to improve this so that we don't lose data during issues like crashes.
We'd like to understand how many queues are running on RabbitMQ. I'm not sure how to get these details and how to verify the information.
We need other protocols.
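Two of the points above have partial answers in the broker as it stands, so a hedged sketch may help: declaring a queue as durable and publishing persistent messages protects data across a clean broker restart (though not every crash scenario), and the management HTTP API or `rabbitmqctl list_queues` reports how many queues exist. The names and credentials below are illustrative assumptions.

```python
import pika
import requests

# Durable queue + persistent messages: content survives a clean broker restart.
conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.queue_declare(queue="payments", durable=True)
ch.basic_publish(exchange="", routing_key="payments", body=b'{"id": 42}',
                 properties=pika.BasicProperties(delivery_mode=2))  # write to disk
conn.close()

# Counting queues: the management HTTP API lists every queue on the broker.
queues = requests.get("http://localhost:15672/api/queues",
                      auth=("guest", "guest")).json()
print(f"{len(queues)} queues defined")
```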
Tanzu Greenplum's compression for GPText could be made more efficient.
DustyPressley
Sr Technical Consultant at a tech services company with 1,001-5,000 employees
One of the issues is that as soon as you go outside of a switch, or outside the IP address range, the clustering no longer has all of its wonderful features, so clustering across network boundaries is a problem. I'd like to see stream processing as an additional feature; Kafka has a streaming API, and I'd like Rabbit to have that too.
Michael Twisdale
CTO, CIO, Chief Architect at a tech services company with 11-50 employees
RabbitMQ provides the ability to scale queues in a very simple and elegant way. If it had a “failure queue” with robust delivery and recovery built-in with the same power, that would be great. We use a completely different queuing system for failures. So there is a little more effort to take messages in a failure queue, analyze them, figure out what went wrong and then restart them in Rabbit. It is doable, and we do it, but if we had a round trip solution in Rabbit, that would be awesome.
For me, having a robust failure queue is high on the list of improvements needed in the near future. This is an important update because right now we are using Doctrine for our failure queue, and Doctrine does a great job.
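RabbitMQ does not ship the round-trip failure handling the reviewer describes, but dead-letter exchanges cover part of it: rejected or expired messages are rerouted to a dedicated failure queue that can be inspected and replayed. A minimal sketch; the exchange and queue names are illustrative assumptions.

```python
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

# Failed (rejected/expired) messages from "work" are rerouted to "work.failed".
ch.exchange_declare(exchange="dlx", exchange_type="direct", durable=True)
ch.queue_declare(queue="work.failed", durable=True)
ch.queue_bind(queue="work.failed", exchange="dlx", routing_key="work")

ch.queue_declare(queue="work", durable=True, arguments={
    "x-dead-letter-exchange": "dlx",
    "x-dead-letter-routing-key": "work",
})
conn.close()
```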
Gyula Bereczky
Senior Data Engineer at a financial services firm with 10,001+ employees
The initial setup is somewhat complex and the out-of-the-box configuration requires optimization.
- OS settings need to be tuned according to the Install Guide.
- Only group/spread mirroring is available through gpinitsystem; block mirroring is manual (see the Best Practices Guide).
- DB maintenance scripts are not supplied (some of them were added in the cloud edition) and need to be implemented based on the Admin Guide.
- It comes with two query optimizers; PQO (GPORCA) is the default, but some queries perform better with the legacy planner, which has to be set explicitly (see the sketch below).
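On the last point, the optimizer can be switched per session, so slow queries can be retried against the legacy planner without a cluster-wide change. A minimal sketch with psycopg2; the connection string and table are illustrative assumptions.

```python
import psycopg2

conn = psycopg2.connect("dbname=warehouse user=gpadmin")
with conn.cursor() as cur:
    cur.execute("SET optimizer = off;")   # fall back to the legacy planner (GPORCA off)
    cur.execute("EXPLAIN SELECT count(*) FROM sales.orders;")
    for (plan_line,) in cur.fetchall():
        print(plan_line)
conn.close()
```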
David-B
Chief Executive Officer at Couragium Solutions
When you have complex tasks, RabbitMQ is hard to use.
There are several things that you have to do manually, so there should be better tools for that.
There are some security concerns that have been raised with this product.
Configuration is done through a config file, where all of the controls, including administrator and user access, are stored. The security isn't very stringent or elaborate.
Jack Angoe
Technical Lead at Interface Fintech Ltd
Their implementation is quite tricky. It's not that easy to implement RabbitMQ as a cluster. It would be great if they could improve that.
I'd like to see a bridge between Greenplum and Hadoop.
TomaszSobota
Java Programmer at Netcompany
I was struggling with installing a few things. It would be good if it were somewhat similar to Red Hat. There should be more documentation regarding installation troubleshooting.
The setup is pretty straightforward, but it would be useful to know what to do if you face certain challenges. Right now, without more in-depth documentation, it's unclear.
The deployment process for this solution could be made easier.
I saw some limitations with respect to the column store, and removing these would be an improvement.
Some integration with other platforms, like design tools and ETL development tools, that would enable advanced functionality (such as full push-down processing) would be helpful in future releases. Also, if the solution could offer automated creation of DDL statements from PowerDesigner, for example, it would be very useful.
The installation is difficult and should be made easier. Maybe if the process were simpler, it would see quicker adoption by other developers. This could also be accomplished by providing training aids, such as videos to help with installation or with using certain features. There are resources currently available on their website, but you have to search through a lot of documentation.
Mark Ray-Smith
Head of Engineering at Contineo
- Difficult to integrate with automated testing and CI/CD.
- Moving beyond basic configurations can be challenging.
- Not clear how to implement durable subscriber connections.
- Not clear how a Rabbit service restart allows subscribers to auto-reconnect (a minimal reconnect sketch follows this review).
- Service cluster failover depends on shared disk infrastructure.
RabbitMQ is clearly better supported on Linux than it is on Windows. There are idiosyncrasies in the Windows version that are not there on Linux.
The documentation for the Windows version is also less plentiful and less accurate.
The online community clearly provides better Linux support, but this naturally follows from the smaller Windows installed base.
There are also some potential concerns about how we maintain high-availability whilst also scaling out.
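On the durable-subscriber and auto-reconnect questions raised in the list above, one common pattern is a durable queue bound to a durable exchange, wrapped in a reconnect loop on the client side. A minimal sketch with the Python pika client; the names and retry timing are illustrative assumptions.

```python
import time
import pika
from pika.exceptions import AMQPConnectionError

def handle(ch, method, properties, body):
    print("received:", body)
    ch.basic_ack(method.delivery_tag)

while True:
    try:
        conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        ch = conn.channel()
        ch.exchange_declare(exchange="events", exchange_type="topic", durable=True)
        ch.queue_declare(queue="events.audit", durable=True)   # survives broker restarts
        ch.queue_bind(queue="events.audit", exchange="events", routing_key="#")
        ch.basic_consume(queue="events.audit", on_message_callback=handle)
        ch.start_consuming()
    except AMQPConnectionError:
        time.sleep(5)  # broker down or restarting; retry until it comes back
```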
The implementation of an upgrade takes a long time. But maybe it's different from one instance to another, I'm not sure.
Also, one of the disadvantages, not with the product itself but overall, is the expertise available in the marketplace. It's not easy to find a Greenplum administrator in the market, compared to other products such as Oracle. We used to work with such products, but for Greenplum it's not easy to find resources with knowledge of administering the database.
The next release should include some of the flexibility and features that Kafka offers.
It would be very useful if we could communicate with other database types from Greenplum (using a database link).
The built-in monitoring interface could be improved.
We would like to see Greenplum maintain a closer relationship with, and parity to, the features implemented in PostgreSQL. The current version of Greenplum is based on a fork of PostgreSQL v8.2.15. That edition of PostgreSQL was EOL'd by the PostgreSQL project in December 2011. The current version of PostgreSQL is v9.5.
The debugging capabilities and testing flexibility need to be improved.
Andrew-Ferguson
Software Engineer at a tech vendor with 1,001-5,000 employees
After creating a RabbitMQ service, they provide you with a sort of web management dashboard.
The dashboard allows you to see things on your queues, purge/delete queues, etc. The dashboard is pseudo-real-time, refreshing every N seconds/minutes, specified with a drop-down.
I'd like this dashboard to use WebSockets, so it would actually be real-time. It would slightly improve debugging, etc.
View full review »- The product has to improve the crisis management, especially in memory issues.
- Its clustering feature also needs improvement.
- I would simplify the configuration. I would add default configuration that prevents the queue system from filling out the server storage.
- I would also decouple the queue from the RabbitMQ Management, so that the queues won't get stuck.
- Clustering and clustering crisis management: When the cluster falls, there needs to be a simple way to recover it. It currently suffers from a recover problem.
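On the point about queues filling server storage, there is no such default today, but an operator policy can cap queue size. A minimal sketch that applies a max-length-bytes/overflow policy through the management HTTP API; the host, credentials, queue-name pattern, and limit are illustrative assumptions.

```python
import requests

policy = {
    "pattern": "^work\\.",                   # queues the policy applies to
    "apply-to": "queues",
    "definition": {
        "max-length-bytes": 1_000_000_000,   # cap each matching queue at ~1 GB
        "overflow": "reject-publish",        # push back on publishers instead of growing
    },
}
requests.put("http://localhost:15672/api/policies/%2F/cap-work-queues",
             json=policy, auth=("guest", "guest"))
```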
The solution needs improvement on performance.
The biggest area we struggled with was operations troubleshooting. We were running a pretty big cluster and ended up with some random cluster failures that were difficult to troubleshoot. A good portion of these were self-inflicted, but occasionally the distributed database would end up corrupted.
Boris Levin
Head of Data & Infrastructure at a tech services company with 51-200 employees
- The product should have much better scaling and scalability capabilities. Currently, it is really falling behind some of the competitors, such as Kafka and NSQ.
- The installation of the HA version and clustering mechanism should be made much easier.
- The fact that a single queue can't be distributed across multiple instances/nodes is a major disadvantage.
Nikola Tzaprev
Head of Cloud Platform Development at a tech vendor with 501-1,000 employees
The web management tool.
The documentation needs to be improved. There's a learning curve in setting it up, and there are issues arising from slower networks that the documentation does not cover.
The High Availability feature is not really reliable. It also took a really long time to restart the box when there were a lot of messages in the queue.
As mentioned on its documentation page, it cannot tolerate network partitions well.
I suffered a network partition with a 3-node cluster and lost all data. With our cloud provider we can't rely on pause_minority, and it seems like autoheal is a better fit for us.
Apart from that, RabbitMQ doesn't seem to be stable when it has high RAM usage. Especially when you have millions of items in a queue and a node crashes, adding a new node to such a cluster is a pain, as the replication takes forever.
View full review »- You cannot edit shovels other than by recreating them.
- Routing of data could be more enhanced with a nice GUI. ("IF header.contains(this.thing) THEN data.goesTo(cluster_02)").
- In its current form, you have to recreate a shovel with the same parameters except for the one you want to change. You end up doing more or less a delete/create.
- There is no HTML form where you can click on a shovel and adjust the wrong parameter.
- If I click on a shovel, I get to a page that lists the shovel, but it is not editable. You have to create a new shovel with all the same parameters, except for the one you want to change, and then delete the old one.
- Temporarily stopping shovels is also not possible in the web interface. I do not know if the CLI version can do it, but if somebody wants to temporarily stop the incoming flow, he or she has to delete the shovel and then recreate it afterwards. This is annoying, to say the least (a scripted sketch of this delete/recreate follows this review).
- RabbitMQ has to be started before one can define exchanges, queues, and even users with rabbitmqctl. See https://www.rabbitmq.com/man/r...
- This is no problem if one lives in a monolithic server environment. However, if one wanted to make a RabbitMQ Docker container with a pre-defined set of exchanges, queues, users, and shovels, you have to literally jump-start the server during the Docker build phase. You would do it like this in the Dockerfile: RUN service rabbitmq-server start && sleep 30 && rabbitmqctl add_user mike mikespassword.
I want it to reorder messages in a queue, if possible. If you could reorder messages in a queue directly, then you would not need a sequencer to reorder messages outside of RabbitMQ.
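As referenced above, here is a minimal sketch of scripting the delete/recreate dance for a dynamic shovel through the management HTTP API: fetch the shovel's current parameter value, change the one field you care about, and PUT it back, which redeclares the shovel. The host, credentials, shovel name, and the changed field are illustrative assumptions.

```python
import requests

HOST = "http://localhost:15672"
AUTH = ("guest", "guest")
VHOST = "%2F"
NAME = "orders-to-dr"

# Read the dynamic shovel's current definition.
current = requests.get(f"{HOST}/api/parameters/shovel/{VHOST}/{NAME}", auth=AUTH).json()
value = current["value"]
value["dest-queue"] = "orders_dr_v2"   # the one parameter we actually want to change

# PUT it back: the shovel is redeclared with the updated value.
requests.put(f"{HOST}/api/parameters/shovel/{VHOST}/{NAME}",
             json={"value": value}, auth=AUTH)
```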
- RabbitMQ is great, but it depends on the Erlang VM.
- I understand that Erlang is the reason why RabbitMQ is what it is. However, having to install and maintain yet another VM product has been annoying.
- The configuration for RabbitMQ borders on the esoteric. Once we got all of the moving parts working, it’s been a dream. However, it was an effort just to get it going.
- Have more features such as being able to replay a sequence of what was received.
- Handle more messages per second.
- Consume fewer resources: NATS can handle millions of requests within a few minutes. RabbitMQ handles hundreds of requests with the same resources (RAM). Finding a way to be more efficient in this aspect would open them up to other markets, like IoT or embedded systems.
We found some issues with larger tables that have daily data appended, where after a while this seems to create lag in the query speed. This might just have to do with local knowledge rather than the product itself.
We have a table which currently contains 27.6M rows and grows by a daily delta of roughly 16.5K rows. While this isn't particularly large, we have noticed the table begins to perform poorly when queried, in spite of having set up a VACUUM process to be performed weekly. It may be that the VACUUM process needs to be performed more frequently (daily, for example), but we've not yet found the optimal way of maintaining this particular table.
It's worth saying that this is one table out of over 400 perfectly performant tables and views in the same database. Hope that helps.
I would love to see better documentation and demos for a few technologies. There is a need for better stability in the Windows environment.
RabbitMQ needs two additional features:
- It is lacking a good dashboard on the web interface; maybe they can develop a dashboard for monitoring.
- There is no alert mechanism. For example, sometimes consumers may be killed, or the messages arriving in a queue outnumber the messages being consumed. I would like to be able to define an alert rule; maybe they can develop an alert mechanism (a polling sketch follows this list).
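As referenced in the list above, the missing alert rule can be approximated by polling the management HTTP API. A minimal sketch that warns when a queue has messages but no consumers, or when its backlog crosses a threshold; the host, credentials, threshold, and notify() hook are illustrative assumptions.

```python
import requests

def notify(msg):
    print("ALERT:", msg)   # replace with mail/Slack/pager integration

BACKLOG_LIMIT = 10_000

queues = requests.get("http://localhost:15672/api/queues",
                      auth=("guest", "guest")).json()
for q in queues:
    if q.get("consumers", 0) == 0 and q.get("messages", 0) > 0:
        notify(f"queue {q['name']} has messages but no consumers")
    if q.get("messages", 0) > BACKLOG_LIMIT:
        notify(f"queue {q['name']} backlog is {q['messages']} messages")
```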
The solution can be improved in terms of how it handles data rolled off the queue. Currently, if the consumer does not consume a queue, the data in the queue will eventually overflow and be discarded.
Support for Windows systems needs to improve. This could move Microsoft shops away from it. We provisioned Linux servers specifically for our RabbitMQ servers.
RabbitMQ clusters run on two kinds of protocols: AMQP and HTTP. The one we were using was AMQP (this requires all your cluster nodes to be in the same network partition). With our Windows servers, every time we used to run Puppet, RabbitMQ used to think it got partitioned. This problem never occurred in our Linux cluster.
All this is subjective. Maybe we were doing something wrong. There are a few other things which they have listed here: https://www.rabbitmq.com/windows-quirks.html Overall, I don't think it's RabbitMQ's fault because Windows can be a problematic OS at times.
So, I would recommend using Linux servers instead of Windows servers for a RabbitMQ cluster.
I would like to see improvements in fluent configuration. I'd also like to see more support for code-first environment configuration. We do a lot of this stuff as part of our deployment process via command line scripts, but I'd rather have a specific API to target rabbitmq.config and rabbitmq-env.config so that configuration could scale with my environments more easily. If more of that was baked into the RabbitMQ management HTTP API, it would help.
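Part of the code-first story the reviewer wants already exists in the management HTTP API, which can export and import broker definitions (users, vhosts, queues, exchanges, policies) as JSON; it does not cover rabbitmq.config or rabbitmq-env.config settings. A minimal export/import sketch; the host and credentials are illustrative assumptions.

```python
import json
import requests

HOST = "http://localhost:15672"
AUTH = ("guest", "guest")

# Export the current topology from one environment...
definitions = requests.get(f"{HOST}/api/definitions", auth=AUTH).json()
with open("definitions.json", "w") as f:
    json.dump(definitions, f, indent=2)

# ...and load it into another broker as part of a deployment script.
with open("definitions.json") as f:
    requests.post(f"{HOST}/api/definitions", json=json.load(f), auth=AUTH)
```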
The product works pretty well, but one small improvement could be made to the monitoring site. It could be a little bit more modern, instead of postback refreshing, etc.
I would like to see better documentation on how to set up complex webs of RabbitMQ servers: master/slave, multi-master, etc.
View full review »Improve the ability to handle the large message load.
People usually use RabbitMQ as the lightweight messenger, if they have a large message load people are inclined to use Kafka. But at the beginning stage of most projects, the data is small, people do not need to use a Kafka type of messenger, they are more likely to use RabbitMQ. If RabbitMQ can handle the large message load and support ordered delivery, with the project growing, data bigger, people can still use RabbitMQ and wouldn't need to find another tool to use like Kafka which is much more convenient.
Barath Ravichander
Data Engineer at Broadridge Financial Solutions
I'd like to see better scaling, better performance from in-memory databases, and a higher compression rate. We have been facing some performance issues when doing batch loading with the optimizer, although the scaling does work fine. They are working on optimization techniques, which is why I list this as room for improvement.
Barath Ravichander
Data Engineer at Broadridge Financial Solutions
Scaling of the solution needs to be improved.
A Hadoop (HDFS) connection is available, but not to any other file system.
Connecting Greenplum with GemFire (in-memory) to load, sync, and reconcile data would be really valuable.
View full review »With the ORCA optimizer the earlier Append-Only feature has been upgraded to Append-Optimized where now we can update the data on earlier Append-Only tables just like any other heap tables. But I found this has increased the time taken for Vacuum Analyze operation on these tables like from 10 mins to 1 hr + (on large tables). In our case we don't need an update on our Append Only tables and hence this became a drawback. VA on Append-Optimized tables need to be improved.
Backup & Restore performance need to be improved.
ORCA optimizer when turned on is not showing consistency. Some workloads shows improved performance and some workloads became very slow. This need to be improved for consistency.
The fact that Greenplum uses an older version of Postgres means developers coming from other products will find many features missing, features which you would assume are standard.
Greenplum is based on Postgres 8.2.15, which was released in 2009. While the SQL syntax and functionality have continued to evolve in other platforms in the ensuing years, they appear to have stagnated in Greenplum.
EMC has already developed the DCA V3, but until the hardware is a little more stable, I prefer the DCA V2.
It doesn't work as efficiently as we'd like because it requires more segment node capacity (size, RAM, CPU) than we currently have.
View full review »- It needs a much more robust and user friendly monitoring and management front-end tool.
- More stability and auto-recovery with the segments.
- Report generations on system health and recommendations.
Stability and scalability for a large number of concurrent applications and their users need improvement. The results we got were very inconsistent, depending on the number of connections taken up by multiple applications and users.
When our application was first deployed using Greenplum, the number of users of the rack on which Greenplum was deployed was very limited. We got excellent query performance results at that time. But as more applications started getting deployed, we started getting very inconsistent performance results. Sometimes the queries would run in sub-seconds, and sometimes the same queries would run 10 times longer. The reason, we found, was that Greenplum limits the number of active concurrent connections. Once all connections are in use, any new query gets queued, and thus response time suffers.
The impression we got was that the EMC sales team that sold Greenplum to the organization did a great job. But later on, the ball was dropped when it came to educating us on which types of applications are suitable for Greenplum and how to configure it to get optimal performance. When Pivotal took over support of Greenplum, their consultant visited us to go over the issues we were having. He advised us that Greenplum is not the best environment for our application needs. We ended up migrating our application out of Greenplum, along with a few other applications.
Since we are upgrading to a new version at this time, it's hard to say. But we seem to be replacing a disk on the appliance every week.
It would be best if Greenplum supported array writes through ODBC drivers. Currently, through the DataDirect Greenplum drivers, we can only do single-row inserts into Greenplum. It would help if array loading were supported.
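ODBC array binds aside, bulk loading into Greenplum is usually done with COPY or gpfdist external tables; as a stop-gap from application code, batched inserts help too. A minimal sketch with psycopg2 (not ODBC); the connection string and table are illustrative assumptions.

```python
import psycopg2
from psycopg2.extras import execute_values

rows = [(1, "alpha"), (2, "beta"), (3, "gamma")]

conn = psycopg2.connect("dbname=warehouse user=gpadmin")
with conn.cursor() as cur:
    # Sends the whole batch in one statement instead of one INSERT per row.
    execute_values(cur,
                   "INSERT INTO staging.items (id, name) VALUES %s",
                   rows)
conn.commit()
conn.close()
```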
It acts like a mainstream product, not a novice one, anymore. There are a lot of areas that can be improved. The bug fixes come as many small patches, the way a startup would ship them, instead of scheduled releases with proper improvements.
- Better integration with the big data tech stack.
- Scalability: for example, the system schema (pg_catalog) is one bottleneck for scalability.
It should support more features that exist in Postgres, like the JSON datatype.
The performance needs to be improved.
The Greenplum appliance itself has had some reliability issues, so it would be great if that could be improved in the next version. More critical, though, is that the latest devices are not backward compatible, i.e., we have to replace our entire environment to upgrade. That's quite an expense. I would hope they could improve the upgrade roadmap in the future.
Session management for client tools needs work.
More stability is needed in terms of query results.
VMware Tanzu Greenplum needs improvement in the memory area and improved methods for quick access to the disk. One of Greenplum's near-term goals should be to enhance disk access by adding hints in the database.
Charlene Zaide
Solutions Engineer at VST-ECS Philippines Inc.
As of now, I honestly don't see any room for improvement. I am very happy with this solution; it's helped me a lot.
They should add more analytics. Their documentation could also be improved so that I don't have to bother my coworkers and tech support so often.
They should advertise more and make it easier for consumers to leave feedback. That way, customers could read the reviews and decide if it's the right solution for them and their needs.
Vlad Popa
Integration Consultant at a tech services company with 11-50 employees
I would like to see the performance of the administration portal improved, as well as support for additional messaging protocols.