A primary benefit is high reliability. They have very good price performance and configuration options. Being able to configure them in different ways, for different node types, was something we needed.
Senior Director of Research at a consultancy with 51-200 employees
Has the flexibility to run dual-CPU nodes or add GPUs to other nodes.
Pros and Cons
- "Absolutely being able to mount into Omni-Path architecture, HFIs on those nodes, because we were the very first site in the world"
- "What's coming out in Gen 10 is very strong in terms of additional security."
What is most valuable?
In referring to the Apollos, what we liked about them was:
- A combination of the density
- The flexibility to run dual-CPU nodes or add GPUs to other nodes
- Being able to mount Omni-Path Architecture HFIs in those nodes; we were the very first site in the world to do so
- Being able to connect those in large quantities
- In Bridges, we have 800 Apollo 2000 nodes, and they have been running extremely well for us
What needs improvement?
I think it's on a good track. What's coming out in Gen10 is very strong in terms of additional security. Overall, I think these are well-architected systems and a very flexible form factor for scale-out. Assuming ongoing support for the latest-generation CPUs and accelerators, this is something we'll keep following for the foreseeable future.
In Bridges, we combine the different node types to create an ideal heterogeneous system. Rather than wishing we had more features in a given node type, we integrate different types, choosing different products from the spectrum of HPE offerings to serve those needs optimally, rather than trying to push any given node in a direction it doesn't belong.
What do I think about the stability of the solution?
Stability has been extremely good. Jobs run for days to many weeks at a time. We recently supported a campaign for a research group in Oklahoma that was forecasting severe storms; they ran on 205 nodes for 34 days.
The example we're featuring was a breakthrough in artificial intelligence, where an AI beat the world's best poker players for the first time. For that one, we ran continuously for 20 days on 600 Apollo nodes, and of course the nodes had to stay up because the players were in the middle of games. It was just as seamless, and it was a resounding victory. I think that's the strongest win for the Apollos in our system so far.
What do I think about the scalability of the solution?
Scalability for us is limited only by budget. Using Omni-Path, we can scale our topology out with great flexibility, and scaling out workloads across the Apollos has been seamless. We're running various protocols across them: a lot of MPI, and also Spark workloads. Scalability has been limited only by the size of our system.
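To make that MPI scale-out concrete, here is a minimal sketch of the kind of job that spans many nodes this way. It is illustrative only: mpi4py, the launch command, and the rank counts are assumptions on our part, not details from the review.

```python
# Minimal MPI scale-out sketch (illustrative; not the reviewer's code).
# Assumes mpi4py and an MPI runtime, launched with something like:
#   mpirun -n 800 python allreduce_demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the job
size = comm.Get_size()   # total ranks, e.g. spread across many Apollo nodes

# Each rank computes a partial result; allreduce combines them across the
# fabric (Omni-Path, in this case) with no single bottleneck node.
partial = rank + 1
total = comm.allreduce(partial, op=MPI.SUM)

if rank == 0:
    print(f"{size} ranks, sum of partials = {total}")
```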
How are customer service and support?
We have an arrangement with HPE technical support, and our system does call on them on occasion, but stability has been very high. Over the year and four months that we've been running Bridges, I think we have had fewer than 70 support calls on the whole system.
Which solution did I use previously and why did I switch?
We knew we had to invest in a new solution, as we were designing a system to serve the national research community. We knew what their application needs were and what their scientific goals would be, so we imagined what that system would have to deliver to meet those needs. That told us the kinds of servers we needed in the system. We have the Apollos; we have DL580s with three terabytes of RAM; we have an Integrity Superdome X with 12 terabytes of RAM; and we have a number of DL360 and other service nodes.
But it was really looking at the users' requirements, and at where high-performance computing, high-performance data analytics, and artificial intelligence are going through about 2019, that caused us to select the kinds of servers we did, the ratios we did, and the topology we chose to connect them in.
How was the initial setup?
It was the first Omni-Path installation in the world, so people were very careful. With that caveat, I think it was straightforward.
Which other solutions did I evaluate?
We always look at all vendors before reaching a conclusion. I don't want to name them here, but we're always aware of what's in the market, and we evaluate these options for each procurement. We pick the solution that's best. The competitive edge for HPE involves several things, in no specific order, as they are hard to rank:
- HPE's strategic position in the marketplace. Being a close partner with Intel means that when there's a new CPU, we trust we can get it in an HPE server very early on.
- When something new comes out, like Omni-Path, which was brand new then, we trusted that HPE would be able to deliver it in a validated product very, very early.
- We are always pushing the time envelope. Their strategic alliances with other strong partners gave us confidence that we would be able to deliver on time, and we did. That's unusual in this field.
- They uniquely had very large memory servers, the Superdomes, and the bandwidth in those servers was extremely good compared to anything else on the market. We wanted that especially for large-scale genomics, so putting that in the solution was a big plus. I'd say these items together were the strongest determining factors, from a technical perspective.
What other advice do I have?
I think the advice is to look at the workload very closely, understand what you want it to do, look at the product spectrum that's available, and mix and match like we did. Build them together. There are software frameworks now that make it easier than it was when we did it to stand up this sort of collection of resources, and to just go with what the workload needs.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Head of Industrial Automation & Modeling at a mining and metals company with 1,001-5,000 employees
Stable solution for management and monitoring.
What is most valuable?
It's a stable product; very reliable. It is a good basis upon which to build further. You see some evolution, but not too much. If you go to their events every year, you see an incremental evolution, which is normal on that roadmap.
How has it helped my organization?
I'm just a general manager and I'm not really technical. However, it gives you a better feel for the monitoring. I have heard that it provides better management, and you can see the possibilities.
What needs improvement?
OneView is a new product which does not support older versions of the hardware. This is an issue, and it's why we cannot switch to the newer one. We continue using the older product, and that's working fine. I would like to see a bit more integration; that is the major topic.
What do I think about the scalability of the solution?
It is stable and scalable. The new product has some advantages which we like, but we cannot switch because of the issue with non-supported devices.
What other advice do I have?
When choosing another vendor, we look at the overall product and then the software product on top of that. Switching to another vendor is always a big step; we normally don't do it because it presents issues. Every solution migrates toward the same functionality; there is not a great difference between solutions, only an incremental one.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Network Administrator at a tech services company with 1,001-5,000 employees
The storage density is the best thing about them. Outside connectivity needs to keep pace with network improvements.
What is most valuable?
We are running Apollo with SL-series servers, and the best thing about them is the density of the storage available. In terms of TCO (total cost of ownership) per terabyte, they are now the best on the market.
What needs improvement?
Connectivity to the outside of the server needs to be improved at the same pace the network is improving; this would give us more IO. There is also a firmware lifecycle management issue, and there is work to do there: vendors should test firmware before it is delivered to customers.
What do I think about the stability of the solution?
Stability is good enough.
What do I think about the scalability of the solution?
Scalability is fine because with this kind of server we can easily scale horizontally. We are more or less satisfied.
How are customer service and technical support?
The technical support in Finland is fine.
Which solution did I use previously and why did I switch?
We made a transformation from enterprise storage to an open-source distributed storage architecture. We switched because the pricing is better.
How was the initial setup?
The initial setup was business as usual. It's not so complicated, but of course it takes time.
What's my experience with pricing, setup cost, and licensing?
The price is not significantly lower than the competition, but it's lower than the standard price.
Which other solutions did I evaluate?
We looked at Dell and Super Micro. They are both on the market in Finland.
What other advice do I have?
You should run the stable firmware on a test platform for about a month before you roll it out. This is something we have to do right now.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Development Manager at a tech company with 10,001+ employees
It supports our network requirements for network captures at high data rates. We're looking for faster disk-write capability.
What is most valuable?
We're using the Apollo 4200 as a data capture system. The most important things for us are the amount of storage on there, the ability to configure it, and to change the configuration so we can do the network captures we need at very high data rates. It meets our network requirement of being able to capture up to 40-gig in a small form factor.
How has it helped my organization?
We are moving from existing 10-gig environments to a 40-gig environment. The ability to capture those high data rates is really important to us. We need to know what's going on in the network. We need to be able to explain to our customers any issues or problems, and where they might have occurred.
What needs improvement?
We're looking for faster capability to write to the drives. We're fully loaded, with all of the small-form-factor drive bays in the system populated, and we are practically at the limit of what the architecture supports. So we need new solutions: new types of drive capabilities and faster bus speeds.
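To put rough numbers on that limit, here is a back-of-the-envelope calculation of what line-rate 40-gig capture demands from the drives. The drive count and speeds below are our own illustrative assumptions; the review does not give them.

```python
# Write-budget sketch for line-rate packet capture (illustrative figures only).
LINK_GBPS = 40                        # capture link speed in gigabits per second
BYTES_PER_SEC = LINK_GBPS / 8 * 1e9   # = 5.0 GB/s of sustained writes at line rate

N_DRIVES = 24                         # hypothetical SFF drive count, striped
per_drive = BYTES_PER_SEC / N_DRIVES

print(f"Aggregate write rate needed: {BYTES_PER_SEC / 1e9:.1f} GB/s")
print(f"Per-drive, striped over {N_DRIVES}: {per_drive / 1e6:.0f} MB/s sustained")
# ~208 MB/s sustained per drive leaves little headroom on SAS spindles, which
# is consistent with wanting faster drive technology and faster bus speeds.
```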
What do I think about the stability of the solution?
It is good in terms of stability. We are struggling a little with some of the configuration we need to do, particularly with write capability to the drives. That's the only area where we struggled in getting the solution going, but we've had significant conversations with HPE and worked through a load of issues. We are now getting the solution to the capability we need.
What do I think about the scalability of the solution?
We tend to use only a single rack-mount server for what we're trying to do. The ability to keep it small, reduce the footprint, and reduce costs are the most important things the Apollo 4200 gives us.
How are customer service and technical support?
Technical support has been very good. We've been given access to senior HP personnel in America. They've given us lots of guidance and help in actually configuring the system.
Which solution did I use previously and why did I switch?
We were previously using the older DL380s with MSA drives. We knew their limitations, for example the transfer rates we could get out of Fibre Channel, but we needed something that would work with the move to a 40-gig network environment.
How was the initial setup?
The initial setup was fairly straightforward. What we're trying to do with the solution added to the complexity, so we needed some guidance, mainly on configuring how the drives were allocated to enable us to do the captures. Getting from the initial build to where we are now took a little while, but it is a fairly complex system.
Which other solutions did I evaluate?
We looked at four or five different vendors. Some of them were talking about very expensive solutions; the HPE solution cost about one-third less. Taking cost into consideration, HPE gave us the ability to do what we wanted to do. The relationship, and being able to talk to them, was also important in our decision; getting access to their technical people is very important to us. We've been an HP user for many years.
What other advice do I have?
Not many companies will have requirements similar to ours. But if you need a low-cost solution with a small footprint, then the Apollo 4200 is an ideal system for that.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Manager of IT Infrastructure at a computer software company with 501-1,000 employees
Using it with Scality, we migrated away from traditional NAS.
What is most valuable?
We install Scality on the Apollo servers, so we have a Scality ring where we store our customers' documents. That allowed us to migrate away from traditional NAS with a cost-effective solution whose architecture is both scalable for the future and able to handle the petabyte scale of document content that we deal with.
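As a concrete picture of what replacing NAS with an object ring looks like from the application side, here is a hedged sketch of storing a document over the S3-compatible interface a Scality ring can expose. The endpoint, bucket, key, and credentials are placeholders, not details from this review.

```python
# Hypothetical example: writing a customer document to an S3-compatible
# object store such as a Scality ring's S3 interface. All names below are
# placeholders, not the reviewer's configuration.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ring.example.internal",  # placeholder ring endpoint
    aws_access_key_id="ACCESS_KEY",                # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

# Objects replace volumes/aggregates/LUNs: there is nothing to provision,
# just a bucket and a key.
with open("contract-0001.pdf", "rb") as f:
    s3.put_object(
        Bucket="customer-documents",
        Key="tenant-42/2017/contract-0001.pdf",
        Body=f,
    )
```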
How has it helped my organization?
Just not having to manage traditional NAS has made a big difference: no traditional volumes, aggregates, LUNs, and things like that to manage. Apollo has made it possible to be flexible in that regard.
What needs improvement?
We're pretty happy with the Apollo line of servers. It would be interesting to see the thinking behind the new hyper-converged DL380s applied to the Apollo line as well: not on the storage-dense Apollo models, but on the compute-dense models, a hyper-converged solution running in those chassis with multiple compute nodes all in one. If HPE could do something like that, it would make a compelling argument for them in the hyper-converged space, and it would really complement the DL380 hyper-converged solution they're providing now. I think it would be a good choice for a lot of people who are looking at hyper-converged.
What do I think about the stability of the solution?
We deployed our first ring on Apollo servers toward the end of last year, so it's been running for eight or ten months, and it has had zero downtime.
What do I think about the scalability of the solution?
With the Apollo systems, we expected from the start to be at petabyte scale. The great thing about running Scality on the Apollo servers is that if we need to add more disks to the existing systems, those disks are instantly usable by the ring. If we need to add more servers, for more compute power and more storage, we can do that as well.
How are customer service and technical support?
We've only contacted them to help replace drives when drives go bad as they do, but nothing beyond that.
Which solution did I use previously and why did I switch?
For a long time, we stored our documents on a traditional NAS from NetApp, and it got to the point where NetApp couldn't handle petabyte scale affordably. We were talking about tens of millions of dollars to buy a NetApp that could do petabyte scale at the number of IOPS we needed. On top of that, it was cost-prohibitive to scale out on traditional NAS, so the Apollo line became the clear choice. The decision to go to something like object storage was made long before we decided on Apollo; it turned out that Apollo fit that decision.
How was the initial setup?
The initial setup was pretty straightforward. The Apollo series we use is basically the guts of a ProLiant DL380, which we've used many, many times in the past, but it lets us fit double the disk capacity of a traditional DL380. Setting it up was pretty easy because of that familiarity, the iLO functionality made it straightforward, and we had no problems getting things deployed.
Which other solutions did I evaluate?
We spent a long time looking at running the Scality ring on commodity hardware from someone like Supermicro, and we found that, in terms of reliability, supportability, and ease of management, having all our servers under the same contract through HPE made the decision to use Apollo clear. Even though it was marginally more expensive up front, the total cost of ownership of managing that many servers was lower. That made the decision really easy.
What other advice do I have?
If someone came to us with a similar storage need, the Apollo servers make a lot of sense, especially for scale-out, object-storage-type implementations. The Apollo line makes perfect sense from my perspective, and I would recommend it.
Our first batch of Apollo servers was so new that it was hard to know what to expect from HPE and what they wanted to deliver to us. That first batch was missing iLO modules, which may have been confusion between what we ordered and what we thought we ordered. In any case, it was resolved quickly, the iLO modules were shipped out, and there was no problem there. There were just some speed bumps because the product was so new when we first ordered it. Otherwise, they're very solid servers.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Senior Unix Performance Analyst with 10,001+ employees
It allows us to use as few nodes as possible for storing log-file data, so that we have as much direct space capacity as possible.
What is most valuable?
Apollo's most valuable features for us are its density and storage capabilities.
How has it helped my organization?
We're trying to keep all log files in our Hadoop cluster, which amounts to several terabytes a day of log data that we need to analyze. Apollo allows us to use as few nodes as possible for this, so that we have as much direct space capacity as possible. It gives us much more space per server.
What needs improvement?
It's a very good system when you need a lot of disk capacity. But it's unclear whether the IO performance will be sufficient; calculating the theoretical amount of time to read all the disk space makes the concern clear. If the workload is not purely sequential, IO performance is less than optimal, because the system is optimized for streaming processing.
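That "time to read all the disk space" check can be worked as a rough calculation. All figures below are illustrative assumptions of ours, not numbers from the review; the point is how far random-ish IO falls behind a sequential scan on a capacity-dense node.

```python
# Rough full-scan time for one dense storage node (illustrative figures only).
CAPACITY_TB = 200      # hypothetical usable capacity of the node
N_DRIVES = 50          # hypothetical drive count
SEQ_MBPS = 180         # sustained sequential read per spindle
RANDOM_MBPS = 20       # effective per-spindle rate for non-sequential workloads

def full_read_hours(rate_mbps_per_drive: float) -> float:
    aggregate = rate_mbps_per_drive * 1e6 * N_DRIVES   # bytes/s across all drives
    return CAPACITY_TB * 1e12 / aggregate / 3600       # hours to read everything

print(f"Sequential scan: {full_read_hours(SEQ_MBPS):5.1f} h")     # ~ 6.2 h
print(f"Random-ish scan: {full_read_hours(RANDOM_MBPS):5.1f} h")  # ~55.6 h
```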
What was my experience with deployment of the solution?
We have no issues with deployment.
What do I think about the stability of the solution?
We installed it about a week ago, and it has been running without problems.
What do I think about the scalability of the solution?
We probably have some 6,000 or 7,000 physical cores already and are planning more.
How are customer service and technical support?
We have technical account managers who work with us. It's pretty much a direct line to HP without having to dial the general support number.
Which solution did I use previously and why did I switch?
We previously used the DL380s. Compared to those, Apollo has roughly four times the amount of space per server, which means we can really do a lot. We technically could have used four DL380s instead, but the licensing cost would have been significantly higher.
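As a rough illustration of that licensing math (all figures are hypothetical; the review only gives the roughly four-times density ratio):

```python
# Hypothetical per-node licensing comparison at equal storage capacity.
APOLLO_NODES = 10          # assumed Apollo node count for a given capacity
DENSITY_RATIO = 4          # DL380s needed per Apollo for the same space (per review)
LICENSE_PER_NODE = 5_000   # assumed annual per-node software license fee

apollo_cost = APOLLO_NODES * LICENSE_PER_NODE
dl380_cost = APOLLO_NODES * DENSITY_RATIO * LICENSE_PER_NODE

print(f"Apollo licensing: ${apollo_cost:,}")   # $50,000
print(f"DL380 licensing:  ${dl380_cost:,}")    # $200,000 for the same capacity
```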
How was the initial setup?
The initial setup was straightforward, and we've been happy about it.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Research Support at a university with 1,001-5,000 employees
It's a dense product, meaning we can fit several servers into our rack space.
Valuable Features
For us, the most valuable features are the price and density. We have very limited space, and we're able to fit four servers into our data center's rack space. I think a lot of servers from different vendors are going to be very similar because they all use Intel chips, making them essentially the same; it's the HP management software that makes this one better than the competition.
Improvements to My Organization
The biggest benefit for us is a physical benefit in that we can save our very limited space. Again, it's a dense product, meaning we can fit several servers into our rack space.
Room for Improvement
The licensing could be greatly improved. We have a very hard time tracking it because we have to get a license for every server and machine: we have to click in our email, go to the site, log in to HP, download the license, and then do it all again for each server and machine, knowing which server or machine each license is for and giving it to the installer. It's inefficient and overly complicated; it should be simpler and pain-free.
Deployment Issues
We haven't had any issues with deploying it.
Stability Issues
It's been stable so far, but we've only had it a few weeks.
Scalability Issues
We have six racks and can fit another. At the moment we have sixteen Apollo servers, and we're going to put in 40, as we have the space for that.
Customer Service and Technical Support
We've signed up with a third-party management service. They've been really good so far.
Initial Setup
The initial setup was simple for us. HP came in and racked and stacked it, then the software guys came in; that took a day or two, and they were done with the image. The whole process, including the hardware and software stack, took about two weeks.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Executive Vice President with 501-1,000 employees
It gives us the density of a blade without the issue of shared IO, but it needs direct integration with software.
What is most valuable?
It gives us the density of a blade without the issue of shared IO, and a good price point for object storage.
How has it helped my organization?
It's allowed us to compete with cloud storage providers like AWS by putting together a scalable on-premises solution of more than 20PB at a similar price point.
What needs improvement?
I'd like to see direct integration with software (Cleversafe, Scality, Ceph) for a purpose-built object store appliance. They should also stay closer to the current rev of processors. I know it is a heating/cooling issue, but being a couple of revs back is problematic when comparing consolidation of workloads with standard Intel servers running the latest chips.
For how long have I used the solution?
We have implemented this for a few clients over the past three years.
What do I think about the stability of the solution?
There were big stability issues with the CPU on the first generation, which made those units virtually unusable. HP has done a better job of regression testing against software (hypervisors and big data platforms specifically) in recent generations.
How are customer service and technical support?
It's gotten better in the past year and is in line with other major manufacturers (Cisco, EMC).
Which solution did I use previously and why did I switch?
Standard ProLiant servers (DL380s) with internal storage. We also looked at SAN and NAS solutions, as well as VSAN technologies from VMware, HP, and Citrix. None could hit the price point to compete with AWS S3.
How was the initial setup?
Standard server technology. There were some initial issues with flashing firmware, but the rest was straightforward.
What about the implementation team?
We were the vendor.
What other advice do I have?
Great solution for object stores. Consolidation ratio on compute doesn’t make it a great alternative for virtualization hosts, but could be a decent hyperconverged platform. HP is utilizing SL technology for their CS-250 Hyperconverged appliance.
Disclosure: My company has a business relationship with this vendor other than being a customer. HP Platinum Partner.
Popular Comparisons
Dell PowerEdge R-Series
HPE ProLiant DL Servers
Lenovo ThinkSystem Rack Servers
IBM Power Systems
Cisco UCS C-Series Rack Servers
Intel Server System
Oracle SPARC Servers
Dell PowerEdge XE-Series
Dell PowerEdge C-Series
HPE Moonshot
Dell PowerEdge XR-Series
Huawei FusionServer X Series
Lenovo High-Density Servers
HPE ProLiant Compute
HPE Cray Supercomputing