The primary use of this solution is for horizontal scalability.
We have an on-premises deployment.
The most valuable feature for us is horizontal scaling.
The deployment process for this solution could be made easier.
I have seen some limitations with respect to the column store, and removing them would be an improvement.
This solution is stable enough.
We are not yet sure about scalability, but we will see in the future.
We have not been in touch with technical support for this solution.
We were using MySQL prior to this solution.
The initial setup of this solution was not simple, but it was not too complex, either. I would say that it was of medium complexity.
Our deployment took approximately two days, although this was only a partial deployment. We have yet to add another node.
We performed the deployment in-house.
We are using the open-source version of this solution.
Prior to choosing this solution, we evaluated MySQL and PostgreSQL.
I would rate this solution a seven out of ten.
It's a good core database. Scalability and performance are very good. I also like the fact the solution is open-source, so you can use it free of charge.
Some integration with other platforms, like design tools and ETL development tools, that would enable advanced functionality (such as push-down processing) would be helpful in future releases. Also, if the solution could offer automated creation of DDL statements from design tools such as PowerDesigner, for example, it would be very useful.
It's software, and like any software, it has some bugs. However, new features can be added to improve it. Overall, our customers, who are big telcos, have been very satisfied with the platform and with its stability and performance.
Scalability is simple because it's an MPP database. If you need more processing power or you need more storage, you just add a few more nodes in the cluster. It works on common commodity hardware. You can use any type of server. You don't need to have proprietary hardware. It's fairly flexible.
The solution requires a minimal amount of downtime when scaling. You can even add additional nodes without any downtime at all. I'm not 100% sure, but I think you can simply reconfigure it and Greenplum will redistribute the data in background processes.
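The redistribution step mentioned here can be illustrated with a small sketch (plain Python as a stand-in, not Greenplum code; the key names and segment counts are invented for illustration):

```python
# Illustration of why expanding an MPP cluster moves data: each row is
# placed on a segment by hashing its distribution key, so changing the
# segment count changes the placement of many rows.
import hashlib

def segment_for(key: str, n_segments: int) -> int:
    """Pick a segment for a row by hashing its distribution key."""
    digest = hashlib.md5(key.encode()).hexdigest()  # stable across runs
    return int(digest, 16) % n_segments

keys = [f"customer-{i}" for i in range(1000)]
before = {k: segment_for(k, 4) for k in keys}  # 4-segment cluster
after = {k: segment_for(k, 6) for k in keys}   # expanded to 6 segments

moved = sum(1 for k in keys if before[k] != after[k])
print(f"{moved} of {len(keys)} rows change segments after expansion")
```

With a plain modulo placement like this, roughly two-thirds of the rows land on a different segment after expansion, which is why the database has to run a background redistribution phase.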
Technical support is very good. The model they are using to fund the development of their open-source product is via revenue from support for enterprise customers, so they are very attentive when issues arise.
The solution is very straightforward to set up and is also easy to administer and develop using other open-source tools.
It's open-source, so it's free to use.
I'm a partner that works mainly with enterprises. Most of our clients are big telcos, and we deal with tens of terabytes of data.
MPP and columnar databases are the future of the analytical landscape. The era of appliances is over, so implementation of an MPP database on-premises or on the cloud is the way to go. Greenplum is definitely one of the leaders in this area.
I would rate the solution eight out of ten. If they improved the integration with other platforms in the landscape I would rate it higher.
We install this solution for our clients. At the moment we are in the middle of an installation for a data warehouse that will be used by a telecommunications company that is based in Lesotho. We have not gone into production yet, but we have used it in a test environment and it works very well.
We are a technology company, so we handle software development, software implementation, data warehousing, and business intelligence.
We are using the on-premises deployment model. In Africa, there isn't much adoption of cloud services, so most of our clients expect an on-premises implementation.
We chose Greenplum because of its architecture in terms of clustering databases and being able to fully utilize the resources sitting underneath the database.
The installation is difficult and should be made easier. Maybe if the process was simpler it would have a quicker adoption by other developers. This could also be accomplished by providing training aids, such as videos to help with installation or using certain features. There are resources currently available on their website, but you have to search through a lot of documentation.
Our expectation is that the scalability will be good, as it is one of the main reasons that we have invested in this solution.
To this point, I have referenced the material on the website but have not really interacted with technical support.
The initial setup of this solution is not very simple. You need to properly follow the steps in terms of getting the whole architecture put together. We have a team of five people who are working on different aspects of the implementation.
Currently, we are focusing on the data layer. Next will be the ETL layer.
We are using our in-house team to implement this solution for our client.
We have used Oracle and Microsoft SQL, but we haven't had much success. We found that Oracle was not as scalable and we were having some performance bottlenecks. Also, from a licensing perspective, Greenplum was a better choice. For all of these reasons, we have chosen to invest heavily in Greenplum.
I would recommend this solution specifically for the scalability. This solution has a more futuristic technology, as opposed to the old school kind of data warehousing. If people are interested in getting something that is more future-proof, then I would recommend this solution.
So far, we're comfortable with what we've seen. What we have configured is addressing our needs at the moment.
I would rate this solution an eight out of ten.
Asynchronous messaging; supporting data integrations between multiple applications on behalf of our many customers. RabbitMQ allows us to elegantly fan-out data to a variable number of subscribers, with almost zero effort.
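The fan-out behaviour described above can be sketched with an in-memory stand-in (plain Python, not RabbitMQ client code; the exchange class and queue names are invented for illustration):

```python
# Minimal in-memory stand-in for a fan-out exchange: every message
# published to the exchange is copied to all bound queues, so adding
# a subscriber is just one more binding, with no publisher changes.

class FanoutExchange:
    def __init__(self) -> None:
        self.queues = []

    def bind(self, queue: list) -> None:
        """Attach a subscriber queue to this exchange."""
        self.queues.append(queue)

    def publish(self, message: str) -> None:
        """Deliver a copy of the message to every bound queue."""
        for queue in self.queues:
            queue.append(message)

exchange = FanoutExchange()
billing, audit, analytics = [], [], []
for q in (billing, audit, analytics):
    exchange.bind(q)

exchange.publish("order-created")
print(billing, audit, analytics)  # each queue receives its own copy
```

In RabbitMQ the same idea is a fanout-type exchange with one binding per subscriber queue; the publisher never needs to know how many subscribers exist.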
We have been able to set up a messaging system that facilitates data integration between the software modules that we sell.
RabbitMQ allowed us to do this quickly so that we could focus on the business requirements, rather than divert our efforts to message broker implementations.
Once the architecture was proven, we were able to return to the RabbitMQ message layer in order to implement an HA cluster with a minimum of problems encountered.
Our business now has a fit-for-purpose information hub that we can apply across our systems. As the customer-base grows, we know that the hub can grow with it.
RabbitMQ is a solid, widely-used messaging system with a low cost-of-ownership. It is open, but with commercial support potentially available from Pivotal if required. (We have never needed it.) There is also a strong online user community.
One crucial feature was guaranteed messaging. We needed a solution that we could trust to not lose data.
Its built-in clustering capability allowed us to configure it as a highly available message broker, so that we can have confidence in the resilience of our architecture.
It can be scaled as well, although we have not tested this.
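The guaranteed-messaging idea above (hold on to a message until the broker acknowledges it, and retry otherwise) can be sketched as follows; the flaky broker below is a made-up stand-in, not actual RabbitMQ publisher-confirm code:

```python
# Simulated at-least-once publishing: the sender keeps retrying until
# the broker acknowledges the message, so data is never silently lost.

class FlakyBroker:
    """Made-up broker stand-in that drops the first two attempts."""
    def __init__(self) -> None:
        self.calls = 0

    def publish(self, message: str) -> bool:
        self.calls += 1
        return self.calls > 2  # acknowledge from the third attempt on

def publish_with_confirm(broker: FlakyBroker, message: str,
                         max_attempts: int = 10) -> int:
    """Retry until acknowledged; return the number of attempts used."""
    for attempt in range(1, max_attempts + 1):
        if broker.publish(message):
            return attempt
    raise RuntimeError("not confirmed; keep the message queued locally")

attempts = publish_with_confirm(FlakyBroker(), "invoice-42")
print(f"confirmed after {attempts} attempt(s)")  # 3
```

The trade-off is that retries can deliver the same message more than once, so consumers downstream must tolerate duplicates.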
After almost two years' usage in our production environment, I am impressed by how stable the platform is - even when running on Windows Server 2012. Sure, we have had to tweak our set-up here and there as we have learned a few operational lessons along the way but overall it is very good.
RabbitMQ is clearly better supported on Linux than it is on Windows. There are idiosyncrasies in the Windows version that are not there on Linux.
The documentation for the Windows version is also less plentiful and less accurate.
The online community clearly provides better Linux support, but this naturally follows from the smaller Windows installed base.
There are also some potential concerns about how we maintain high-availability whilst also scaling out.
We have had no stability issues.
We have not used the scalability features yet.
We have not used technical support.
No previous solution was used.
The initial setup was straightforward. The online documentation was adequate and there is minimal initial configuration required to get up and running.
After that, it is simply a matter of experimentation with the various features and learning as you go.
This is an open source solution.
We looked at MSMQ, NServiceBus, Azure Service Bus, and Apache Kafka.
I would recommend that anyone who intends to deploy RabbitMQ on Windows should first consider whether a Linux implementation is a viable option for their situation.
We use it for data warehousing.
For complex queries, which would normally take a long time, and for reporting, it is very efficient. It doesn't take a long time for the execution of any report for the end-user.
The implementation of an upgrade takes a long time. But maybe it's different from one instance to another, I'm not sure.
Also, one of the disadvantages, not a disadvantage with the product itself, but overall, is the expertise in the marketplace. It's not easy to find a Greenplum administrator in the market, compared to other products such as Oracle. We used to work with such products, but for Greenplum, it's not easy to find resources with the knowledge of administration of the database.
If we face any issues, they're normal ones, and we open tickets for them.
It's scalable. I would rate scalability seven out of 10.
We hired one DB admin for Greenplum. If he faces any issues he opens tickets with the vendor, but most of the issues, 90% of them, he is able to solve without support.
We used other products before, but when we worked with Greenplum and compared it to other products on the market, we found it's a good product.
Before Greenplum, we used Oracle but it was mostly obsolete. So we had to upgrade our tools. We needed to have a database with an API tool.
I'm not an expert in the setup, but the setup of the environment itself was managed by us. We manage development, testing, and production servers, and we are able to maintain it. I don't think it is complicated.
Most of the issues can be solved without referring back to support. A very small minority of issues required support from the vendor.
Pricing is good compared to other products. It's fine.
We did a comparison among some databases, one of them Greenplum. We assessed features, did a comparison in terms of the price, then we chose Greenplum. And we've retained it. We've found it's a good product, to date.
Oracle Exadata was part of the comparison, as was IBM Netezza. In terms of quality and price, compared to the other products, we chose Greenplum. Also, to be honest, at that time we got a good offer: use it for the first year at a minimal price. They opened a support contract with us later. That was one of the advantages.
I give it an eight out of 10. To bring it up to a 10, they need to interact more with customers. They need to explain the features, especially when there are new releases of Greenplum. I know, just from information I've found, that it has other features; it can be used for analytics and for integration with Big Data and Hadoop. They need to focus on this part with the customer.
Also, they need to enhance integration with other Big Data products. They need to adapt more and offer more features, because customers are looking for these things in the market now. They already have the product itself, but they need to integrate with Big Data platforms and open a bi-directional connection between Greenplum and Big Data. They need to focus more on these features.
But, from my perspective, for what I'm looking for, I can say it's a good product. Most of the features I'm looking for are available.
I am still comparing RabbitMQ and Kafka, but based upon the information I have gathered RabbitMQ is an awesome tool.
RabbitMQ will help to remove a lot of the complexities and create a loosely coupled codebase.
I like the high throughput of 20K messages/sec, and that it supports multiple protocols. The flexible routing is great as well.
The next release should include some of the flexibility and features that Kafka offers.
I have used IBM MQ software, but it was not applicable to this application.
I have evaluated and researched Axon, RabbitMQ, Kafka, and IBM MQ.
It is a very good appliance for data warehouse (DWH) usage.
Before, with Oracle Exadata, some queries would take more than 20 hours to execute. With Greenplum, they take a few minutes.
It would be very useful if we could communicate with other database types from Greenplum (using a database link).
Four years.
No issues.
No issues.
Good. I would give their technical support a seven out of 10.
Yes, Oracle Exadata. Performance was the main criteria for switching to Greenplum.
It was a simple setup.
It is the product that best fits our price/performance objectives.
We evaluated Oracle technology that we used before.
I encourage other customers to try Greenplum, specifically for DWH use. It is a very useful product.

Hi,
I am a real user too, and I would say that it really depends on the context. You can consider two generations of brokers: the old ones are pure brokers (RabbitMQ, ActiveMQ, ZeroMQ, etc.) and the new ones are stream-oriented (Kafka, Artemis, etc.). The performance difference is huge: around 4,000 msg/s for the old brokers, around 60,000 msg/s for the stream-based ones.
We used RabbitMQ for years and we are moving away right now, for many reasons:
- RabbitMQ is one of the leading implementations of the AMQP protocol. Therefore, it implements a broker architecture, meaning that messages are queued on a central node before being sent to clients. This approach makes RabbitMQ very easy to use and deploy, because advanced scenarios like routing, load balancing, or persistent message queuing are supported in just a few lines of code. However, it also makes it less scalable and “slower”, because the central node adds latency and message envelopes are quite big.
- Nevertheless, using standard AMQP 0.9.1, the only way to guarantee that a message isn't lost is by using transactions: make the channel transactional, publish the message, commit. In this case, transactions are unnecessarily heavyweight and decrease throughput by a factor of 250. To remedy this, you need to implement a confirmation mechanism (publisher confirms), which undermines much of the ease of implementation.
- Replication on RabbitMQ 3.6 (the last version supporting AMQP 0.9.1 that we used) caused deadlocks between nodes and created a lot of issues in production in our systems.
- Lastly, Erlang is a black box, and many times RabbitMQ crashed with Erlang errors that we were unable to diagnose quickly and efficiently.
So my recommendation: don't use RabbitMQ on a transactional path. It remains good for back-office messages, as long as you can implement your own transactions in an optimistic way (with retries and message-duplication detection on the application side).
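The application-side duplicate detection recommended here can be sketched in a few lines (plain Python; the message format with an "id" field is an assumption for illustration):

```python
# Sketch of application-side duplicate detection: retries can redeliver
# a message, so the consumer remembers processed message IDs and skips
# repeats. The "id"/"body" message shape is a made-up example.

processed_ids = set()
results = []

def handle(message: dict) -> None:
    """Process a message exactly once, even if it is redelivered."""
    if message["id"] in processed_ids:
        return  # duplicate redelivery: ignore
    processed_ids.add(message["id"])
    results.append(message["body"])

# A retry redelivers message 1; the consumer processes it only once.
for msg in [{"id": 1, "body": "a"},
            {"id": 1, "body": "a"},
            {"id": 2, "body": "b"}]:
    handle(msg)

print(results)  # ['a', 'b']
```

In a real system the set of processed IDs would live in durable storage (a database table or cache) rather than in memory, so deduplication survives consumer restarts.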
In my context, we are moving to Kafka, which is showing good performance, scalability, and stability.