
Message Queue (MQ) Software API Reviews

Showing reviews of the top-ranking products in Message Queue (MQ) Software that contain the term API
IBM MQ: API
AA
Unix/Linux Systems Administrator at a financial services firm with 10,001+ employees

We have clients spread all over Africa and they have to process different types of requests, such as credit requests and debit requests. We use the Queue Manager to handle these requests. Our MQ server will accept the request and send it on to our core banking application.

If you imagine the order from left to right, the application is on the left, then the enqueue server is in the middle, and the core banking is on the right. In between the queue server and the banking application, we have APIs and systems in place to understand the XML files.
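
To make that flow concrete, here is a minimal sketch of an application putting an XML credit request onto a queue with the IBM MQ classes for JMS. The host, channel, queue manager, queue name, credentials, and payload are all hypothetical placeholders, not values from this review.

```java
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import com.ibm.msg.client.jms.JmsConnectionFactory;
import com.ibm.msg.client.jms.JmsFactoryFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class CreditRequestSender {
    public static void main(String[] args) throws Exception {
        JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
        JmsConnectionFactory cf = ff.createConnectionFactory();
        // Hypothetical connection details for the MQ server sitting in front of core banking.
        cf.setStringProperty(WMQConstants.WMQ_HOST_NAME, "mq.example.internal");
        cf.setIntProperty(WMQConstants.WMQ_PORT, 1414);
        cf.setStringProperty(WMQConstants.WMQ_CHANNEL, "APP.SVRCONN");
        cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, "QM1");
        cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);

        try (Connection conn = cf.createConnection("appUser", "appPassword")) {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue requests = session.createQueue("queue:///CREDIT.REQUESTS");
            MessageProducer producer = session.createProducer(requests);

            // The downstream systems parse XML, so the payload is an XML document.
            TextMessage msg = session.createTextMessage(
                "<creditRequest><client>C-1001</client><amount>5000</amount></creditRequest>");
            producer.send(msg);
        }
    }
}
```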

View full review »
Manager at a financial services firm with 10,001+ employees

We are a bank whose core banking system is not so advanced. It is still running on an AS/400 system. The credit card system is deployed on IBM mainframes. About 70 to 80 percent of the bank's core systems rely on IBM AS/400 and mainframes. The enterprise service bus is used in conjunction with MQ to break synchronous web service/TCP calls into asynchronous MQ calls and expose them as web-service-based or API-based services for both internal and external customers.
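
As a rough illustration of that bridging pattern, here is a hedged sketch using the plain JMS 2.0 API: the synchronous caller puts a request on a queue with a temporary reply-to destination and blocks until the asynchronous MQ reply arrives. The queue name and timeout are invented for the example; the real ESB does this with its own components.

```java
import javax.jms.*;

public class SyncToAsyncBridge {
    // Called by the synchronous web service/API layer; blocks until the
    // back-end (e.g., an AS/400 or mainframe listener) replies over MQ, or times out.
    public String invoke(ConnectionFactory cf, String xmlRequest) throws JMSException {
        try (JMSContext ctx = cf.createContext()) {
            Queue requests = ctx.createQueue("CORE.BANKING.REQUESTS"); // hypothetical queue
            TemporaryQueue replyTo = ctx.createTemporaryQueue();

            TextMessage request = ctx.createTextMessage(xmlRequest);
            request.setJMSReplyTo(replyTo);
            ctx.createProducer().send(requests, request);

            // Hold the synchronous caller until the asynchronous reply comes back.
            try (JMSConsumer consumer = ctx.createConsumer(replyTo)) {
                Message reply = consumer.receive(30_000); // 30-second timeout
                if (reply == null) {
                    throw new JMSException("No reply from core banking within timeout");
                }
                return reply.getBody(String.class);
            }
        }
    }
}
```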

As part of our enterprise architecture principles, we have enforced that all connectivity be service/interface-based, using ESB, MFT, or API, to minimize point-to-point connectivity.

We are using dedicated IBM power/pure-app servers to run IBM Integration Bus, IBM MQ, and WebSphere Application Server. These are the three components being used for the bank's enterprise service bus.

View full review »
IS
Project Manager/System Architect/Senior Mainframe System Engineer/Integration Specialist at a tech services company with 51-200 employees

They could integrate monitoring into the solution, a bit more than they do now. Currently, they have opened the REST API so you can get statistics and accounting information and details from MQ and build your own monitoring, if you want. IBM can improve the solution in this direction.
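
For reference, a minimal sketch of querying that administrative REST API with Java's built-in HTTP client; the mqweb host, port, queue manager name, and credentials here are placeholders, not values from this review.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class MqRestStatus {
    public static void main(String[] args) throws Exception {
        // Hypothetical mqweb endpoint; MQ exposes administrative and status
        // information under the /ibmmq/rest/v1/admin paths.
        String auth = Base64.getEncoder().encodeToString("admin:passw0rd".getBytes());
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://mqweb.example.internal:9443/ibmmq/rest/v1/admin/qmgr/QM1"))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON describing the queue manager's state
    }
}
```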

View full review »
JL
Lead Architect at a retailer with 10,001+ employees

My advice would be to rethink the cloud strategy. Make sure to have certain components that you can put into the cloud. Think about cloud-first properly so that it scales automatically. It knows how to work with some of the container services that are out there so that it scales better. It has some cloud components that are good but you still have quite a strong on-prem infrastructure to support it.

It's quite a complete solution. They have modules that they acquire and add on as additional features, which makes for a very complete solution. But it's been expensive to keep going the way we're going, and the turnaround is a bit slow, slower than we want. The business is changing quite rapidly, being in retail, so we need to pivot quite quickly. That's why we're seriously looking at moving towards the cloud, where we can simplify some of our processes, our maintenance, and the way we operate.

I would rate IBM MQ a seven out of ten.

View full review »
DevOps Engineer at Integrity

IBM MQ can be used as an integration bus, accessed through an API, for message queuing.

View full review »
VMware RabbitMQ: API
DP
Sr Technical Consultant at a tech services company with 1,001-5,000 employees

One of the issues is that as soon as you go outside of a switch, or outside the IP address range, clustering no longer has all of its wonderful features, so clustering across network boundaries is a problem. I'd like to see stream processing as an additional feature. Kafka has a streaming API and I'd like Rabbit to have that too.

View full review »
Red Hat AMQ: API
Sr. Enterprise Architect at Teranet Inc.

Red Hat AMQ supports hybrid cloud deployment, and while we have an on-premises deployment at this time, this feature is important to us because we are planning to transition to the cloud. We expect that most, if not all, of our applications will be on the cloud within five years.

It is very important to us that Red Hat Integration includes transformation, routing, connectors, and a distribution of Apache Kafka, all built to run on Kubernetes. All of these features are supported by Red Hat, which means that the maintenance and currency of this solution are assured. We used to question why we would buy a wrapper over an open-source product, instead of just using the open-source directly. Now, we see that the distinguishing point that gives us value is maintainability. With the full support of Red Hat, it takes much less effort and resources.

Red Hat Integration enables developers to serve themselves what they need via APIs and event streams. It's a baseline technology that we build upon for our use cases. It includes plenty of mechanisms for interacting with other systems.

The toolchain is okay but it's not great. They have enough for developers to start using these technologies effectively. However, in some cases, they are not as mature as we would like. For example, Fuse uses Apicurio. It's a different open-source product that visualizes the flows of the API implementation, including the transformations and everything that's happening inside the component. This is a feature that can be much better.

From the support perspective, they are supporting it to the maximum of their ability, but they do not have any SLAs connected to it yet because it's not their product. They're just using it extensively alongside the whole bundle.

With regards to the toolchain, they do have Fuse plugins for major IDEs. All of the interaction with 3scale goes through the web interface. Configuration-wise, the versioning is okay.

In general, the other products are a bit more mature when it comes to the toolchain. The Red Hat product allows it but requires a bit more seniority from the first adopters to get used to it, and then transfer this knowledge to others. This is in contrast to MuleSoft, where you can take junior to intermediate developers and they will make their way through. Here, it's a little bit different. You definitely need to have good Java experience at the senior level to quickly grasp how to work with all of the tools and technologies.

Also, the graphical presentation of all of the tools and flows is not as mature as you would expect from this type of framework. Because of this, a good developer, intermediate to senior level, will need to get used to these tools, and then, it'll be much easier to transfer the knowledge. That said, it was all good enough for us to get started with.

The other products have much better visualization tools available that integrate well with the platform. This gives you an opportunity to build something visually and then it will convert the code. You see more or less what's going on. With AMQ, you have the same capabilities but you need to write plenty of code.

Using this solution helps us to deliver new services faster. Fuse has prebuilt components that communicate with AMQ, which gives us an upper hand from a productivity perspective.
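
Since Fuse is built on Apache Camel, a hedged sketch of what such a prebuilt component looks like in practice might be a short Camel route consuming from an AMQ queue. The queue names, priority rule, and broker wiring are invented for illustration.

```java
import org.apache.camel.builder.RouteBuilder;

// A Camel route of the kind Fuse provides out of the box: consume from an
// AMQ queue via the prebuilt AMQP component, inspect the XML payload, and
// hand high-priority orders to a second queue.
public class OrderRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("amqp:queue:orders.incoming")             // hypothetical source queue
            .log("Received order: ${body}")
            .filter(xpath("/order[@priority='high']")) // route only high-priority orders
            .to("amqp:queue:orders.priority");         // hypothetical target queue
    }
}
```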

AMQ has definitely reduced our developer dependency on the IT department. Our DevOps engineers take care of the application infrastructure and they work with developers to resolve whatever issues they have. As an example, consider the case where we use 3scale to configure throttling or other aspects. If you compare the level of effort to configure, deploy, and maintain the API against when we were not using 3scale, it is much less when we use the Red Hat solution. This is true not just for maintaining the API gateway and API management platform, but in general.

From the build perspective, when you know the product well, it will take less time to create API products that are simple to medium level of complexity. From the AMQ perspective, 3scale and Fuse help to eliminate the headache of maintaining the platform that enables your asynchronous messaging.

The event-driven architecture enables us to decouple services from each other, and our developers can do their own integration. This is a benefit to using the event-driven architecture patterns, which can be used with any product that enables asynchronous communication.

View full review »
PubSub+ Event Broker: API
CK
Manager, IT at a financial services firm with 501-1,000 employees
  • Everything is good in this solution. We only use the PubSub feature. We use a minimum of topics to publish and they are consumed through the Solace message broker.
  • We have a standard template for any new configuration, so it's very easy to manage.
  • The topic hierarchy is pretty flexible. Once you have the subject defined, just about anybody who knows Java can come on board. The APIs are all there; see the sketch after this list.
  • Topic filtering is easy to use and easy to maintain. Sometimes we go into a lot of detail on the content, and filtering can also be applied at a higher level. So it's very flexible.
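
As a hedged illustration of that hierarchy and its wildcard filtering, here is a minimal subscriber sketch using Solace's JCSMP Java API; the broker host, VPN, username, and topic string are assumptions for the example.

```java
import com.solacesystems.jcsmp.*;

public class TopicSubscriber {
    public static void main(String[] args) throws JCSMPException {
        // Hypothetical broker connection details.
        JCSMPProperties props = new JCSMPProperties();
        props.setProperty(JCSMPProperties.HOST, "tcps://broker.example.internal:55443");
        props.setProperty(JCSMPProperties.VPN_NAME, "default");
        props.setProperty(JCSMPProperties.USERNAME, "app-user");

        JCSMPSession session = JCSMPFactory.onlyInstance().createSession(props);
        session.connect();

        // Filtering rides on the topic hierarchy: '*' matches one level and
        // '>' matches everything below, so a filter can sit at a higher level.
        Topic orders = JCSMPFactory.onlyInstance().createTopic("orders/emea/>");
        session.addSubscription(orders);

        XMLMessageConsumer consumer = session.getMessageConsumer(new XMLMessageListener() {
            @Override public void onReceive(BytesXMLMessage msg) {
                System.out.println("Received on " + msg.getDestination());
            }
            @Override public void onException(JCSMPException e) {
                e.printStackTrace();
            }
        });
        consumer.start();
    }
}
```
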
View full review »
DN
Enterprise Automation Architect at CIBC

The storytelling about the benefits needs improvement. We have four major lines of business in our company: retail, capital markets, and the internal corporate center, along with technology operations, which is more of a cost center. Technology operations is not an innovator, but more of a keep-the-lights-on arm of the business. One area of improvement would be telling the story a bit better about what an event mesh does, or why an event mesh is foundational to a large enterprise with a wide diversity of homegrown applications and a small number off the shelf. I wish we were better able to tell the story in a cohesive way to multiple lines of business, but that's more a statement of our own internal structure and how we absorb or adopt new technology than it is about Solace or the product itself.

It has been a bit of a tough slog to try and get everybody to see that event meshes are foundational in a multi-data center, multicloud landscape, when we're not there yet. Our company has most of our applications in two data centers that are close to each other. There is no real geo-redundancy, and everything we've ever done has been on-prem, with only a small handful of Azure adoptions. Therefore, having folks see the benefit of an event mesh has been tough. I wish we could improve our storytelling a little bit.

We have struggled in a sort of perpetual PoC mode internally. This is no fault of Solace's. It's just that the only executive looking to benefit here is our technology operations team, and they have no money for investments. They're a cost center internally, so they have to be able to make the case that we're going to improve efficiency by leveraging this tech. Thus, the adoption has been slow.

View full review »
SA
Technology Lead at a pharma/biotech company with 10,001+ employees

Another product that I use very much in my current portfolio is MuleSoft. It's an API management platform and also an iPaaS, and it is a Salesforce company now. Both of these products have to work together to give an assured-delivery type of middleware platform. We felt that having a connectivity layer, a connector, or an adapter already pre-built in Solace for platforms like MuleSoft and Dell Boomi, middleware especially, would be pretty interesting. It would make it a more authentic and credible connector as well.

Today, we have to rely on JMS or a REST-based protocol but we have raised this request with Solace. While connectivity is definitely easier, at the same time, Solace needs to work on some of the connectors for industry-leading applications like Salesforce, Workday — multiple typical distributed applications that we might have. It is pretty good at this point but they can do better on that.

Also, a challenge we currently have is Solace's ability to integrate with single sign-on in our Active Directory and the other single sign-on tools and platforms that any company would have. It's important for these platforms to work together. Typically, they support only LDAP-based connectivity to our servers.

We have one critical step, from an IT security point of view. If there are any SaaS applications or cloud applications which are hosted out of our cloud platform, then the only way that we can do SSO is through a SAML-based or another specific protocol. Solace doesn't support them at this point in time and we have raised this as a platform request. I think it is on their roadmap. But currently, it supports only LDAP. That is an improvement area for them.

View full review »
JC
Managing Director at a financial services firm with 5,001-10,000 employees

We do a lot of pricing data through here, market data from the street that we feed onto the event bus and distribute out using permissioning and controls. Some of that external data has to have controls on top of it so we can give access to it. We also have internal pricing information that we generate ourselves and distribute out. So we have both server-based clients connecting and end-user clients from PCs. We have about 25,000 to 30,000 connections to the different appliances globally, from either servers or end-users, including desktop applications or a back-end trading service. These two use cases are direct messaging; fire-and-forget types of scenarios.

We also have what we call post-trade information, which is the guaranteed messaging piece for us. Once we book a trade, for example, that data, obviously, cannot be lost. It's a regulatory obligation to record that information, send it back out to the street, report it to regulators, etc. Those messages are all guaranteed.

We also have app-to-app messaging where, within an application team, they want to be able to send messages from the application servers, sharing data within their application stack. 

Those are the four big use cases that make up a large majority of the data.
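
To illustrate the direct-versus-guaranteed split running through those use cases, here is a hedged sketch using Solace's JCSMP Java API: a market-data tick sent as a fire-and-forget direct message, and a post-trade record sent persistently so it cannot be lost. Topic names and payloads are invented.

```java
import com.solacesystems.jcsmp.*;

public class PricingPublisher {
    public static void publish(JCSMPSession session) throws JCSMPException {
        XMLMessageProducer producer = session.getMessageProducer(
            new JCSMPStreamingPublishEventHandler() {
                @Override public void responseReceived(String messageId) {
                    // Broker acknowledged a guaranteed message.
                }
                @Override public void handleError(String messageId, JCSMPException e, long ts) {
                    e.printStackTrace();
                }
            });

        // Fire-and-forget market data: direct messaging, no persistence.
        TextMessage tick = JCSMPFactory.onlyInstance().createMessage(TextMessage.class);
        tick.setText("EURUSD 1.0842");
        tick.setDeliveryMode(DeliveryMode.DIRECT);
        producer.send(tick, JCSMPFactory.onlyInstance().createTopic("md/fx/EURUSD"));

        // Post-trade data: guaranteed (persistent), since the trade record
        // carries regulatory obligations and cannot be lost.
        TextMessage trade = JCSMPFactory.onlyInstance().createMessage(TextMessage.class);
        trade.setText("<trade id=\"T-42\"/>");
        trade.setDeliveryMode(DeliveryMode.PERSISTENT);
        producer.send(trade, JCSMPFactory.onlyInstance().createTopic("posttrade/bookings"));
    }
}
```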

But we have about 400 application teams using it. There are varied use cases and, from an API perspective, we're using Java, .NET, C, and we're using WebSockets and their JavaScript. We have quite a variety of connections to the different appliances, using it for slightly different use cases.

It's all on-prem across physical appliances. We have some that live in our DMZ, so external clients can connect to those. But the majority, 95 percent of the stuff, is on-prem and for internal clients. It's deployed across Sydney, Hong Kong, Tokyo, London, New York, and Toronto, all connected together.

View full review »
MO
Senior Project Manager at a financial services firm with 5,001-10,000 employees

We're a capital markets organization, so we primarily use it for our trading algos' order management, streaming market data, and general application messaging. Those are our key use cases.

Our other use cases are for guaranteed messaging, where we absolutely need the resiliency of every message, and for high-performance streaming of market data, meaning millisecond, latency-sensitive algorithm operations that are running as well.

We also use it for general messaging and to displace some of our legacy messaging applications such as MQ, EMS, and things of that sort. We are standardized on Solace PubSub+; it's an architectural standard at our company.

View full review »
NK
Head of Enterprise Architecture & Digital Innovation Lab at a tech vendor with 10,001+ employees

We use Apache Kafka, which is more of an API gateway. For us, events are a new concept; we do more request/reply, API-based integration patterns. We also have some typical event-driven architecture, but this is still a new concept for us that we are trying to evolve.

View full review »
Head of Infrastructure at Grasshopper

The ease of management could be improved. The GUI is very good, but configuring and managing these devices programmatically in the software version is not easy. For example, if I would like to spin up a new software broker, I could in theory use the API, but it would require a considerable amount of development effort to do so. There should be a tool, or something that Solace supports, that we could use for this, e.g., a platform like Terraform, where we could use infrastructure as code to configure our Solace appliances.
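
For context, Solace does expose a management REST API (SEMP v2) that can be scripted. Below is a minimal sketch of provisioning a queue through it with Java's HTTP client; the host, credentials, VPN, and queue attributes are placeholders, and this is exactly the kind of call a Terraform-style infrastructure-as-code tool would wrap.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class ProvisionQueue {
    public static void main(String[] args) throws Exception {
        // Hypothetical management endpoint of a Solace software broker.
        String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes());
        String body = "{\"queueName\":\"orders.q\",\"accessType\":\"exclusive\","
                    + "\"ingressEnabled\":true,\"egressEnabled\":true}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://broker.example.internal:8080/SEMP/v2/config/msgVpns/default/queues"))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```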

Monitoring needs improvement. There is no way to get useful statistics out of the machine without having to implement our own monitoring solution.

I would like to see improvement in the message promotion rate for software-based brokers.

View full review »
Apache Kafka: API
JJ
Technology Lead at a computer software company with 10,001+ employees

Our company provides services and we use Apache Kafka as part of the solution that we provide to clients.

One of the use cases is to collect all of the data from multiple endpoints and provide it to the users. Our application integrates with Kafka as a consumer using the API, and then sends information to the users who connect. 
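
A minimal sketch of that consumer-side integration with Kafka's Java client API; the bootstrap address, group id, topic name, and hand-off method are placeholders for whatever the application actually uses.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class EndpointDataConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka.example.internal:9092"); // placeholder
        props.put("group.id", "endpoint-data-consumers");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("endpoint-data")); // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    forwardToUsers(record.value()); // hand off to connected users
                }
            }
        }
    }

    private static void forwardToUsers(String payload) { /* application-specific */ }
}
```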

View full review »
Senior Big Data Developer | Cloudera at Dilisim

Kafka has a good storage layer on its side. I can store the data as it streams and, if we encounter any error, for example on the network or a server, we can later go back to the data retained on the Kafka server and do some analytics on it.

Kafka provides us with a way to store the data used for analytics. That's the big selling point. There's very good log management. 

Kafka provides many APIs that are flexible and can be extended over the development life cycle. For example, using Java, I can customize the API according to our customers' demands, and I can expand the functionality as those demands grow. It's also possible to create some models. It allows for more flexibility than much of the competition.
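
As a hedged example of re-reading retained data for analytics, the Java consumer API can rewind a partition to the beginning and replay whatever the broker has stored; the topic, partition, broker address, and analytics hook below are invented for the sketch.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ReplayForAnalytics {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka.example.internal:9092"); // placeholder
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // After an error elsewhere, rewind and re-read what Kafka retained.
            TopicPartition partition = new TopicPartition("sensor-events", 0);
            consumer.assign(List.of(partition));
            consumer.seekToBeginning(List.of(partition));

            for (ConsumerRecord<String, String> record :
                    consumer.poll(Duration.ofSeconds(5))) {
                analyze(record.value()); // customer-specific analytics hook
            }
        }
    }

    private static void analyze(String event) { /* customer-specific logic */ }
}
```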

View full review »
KS
Solution Architect at a manufacturing company with 10,001+ employees

MQ messaging systems are not my core strength but for any integration platform where we have a large number of APIs and events, to integrate with an IoT platform, for example, I found Kafka is better than ActiveMQ.

I'm not getting into MQTT or other things but, comparatively, when you compare ActiveMQ and Kafka, Kafka has done better.

View full review »
DP
Sr Technical Consultant at a tech services company with 1,001-5,000 employees

The most valuable features are the Streams API, consumer groups, and the way that scaling takes place.
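
A minimal Kafka Streams sketch showing both features together: the Streams API defines the processing topology, while scaling rides on the consumer-group mechanism underneath it. The topic names, broker address, and application id are placeholders.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class UppercaseStream {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "demo-stream"); // doubles as the consumer group id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka.example.internal:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> source = builder.stream("input-events");
        source.mapValues(value -> value.toUpperCase()) // per-record transformation
              .to("output-events");

        // Scaling: each additional instance of this app joins the same group
        // and takes over a share of the input topic's partitions.
        new KafkaStreams(builder.build(), props).start();
    }
}
```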

View full review »
Aurea CX Messenger: API
BG
Enterprise Architect at a government with 5,001-10,000 employees

The solution needs to improve support for more recent protocols in the API.

View full review »
MC
Integration Engineer Leader at a retailer with 10,001+ employees

Aurea CX Messenger could improve by making better use of the new APIs.

View full review »