
HPE Apollo Overview

HPE Apollo is the #2 ranked solution among top Density Optimized Servers and the #6 ranked solution among top Rack Servers. PeerSpot users give HPE Apollo an average rating of 8.6 out of 10. HPE Apollo is most commonly compared to HPE ProLiant DL Servers: HPE Apollo vs HPE ProLiant DL Servers. HPE Apollo is popular among the large enterprise segment, accounting for 61% of users researching this solution on PeerSpot. The top industry researching this solution is computer software; professionals from computer software companies account for 26% of all views.
HPE Apollo Buyer's Guide

Download the HPE Apollo Buyer's Guide including reviews and more. Updated: July 2022

What is HPE Apollo?

The HPE Apollo high-density server family is built for the highest levels of performance and efficiency. They are rack-scale compute, storage, networking, power and cooling – massively scale-up and scale-out – solutions for your big data analytics, object storage and high-performance computing (HPC) workloads. From water-cooling that’s 1,000X more efficient than air, to “right-sized scaling” with 2X the compute density for workgroup and private cloud workloads, the HPE Apollo line is a dense, high-performance, tiered approach for organisations of all sizes.

HPE Apollo was previously known as HP Apollo Systems, HP ProLiant SL, HP Apollo.

HPE Apollo Customers

Weta Digital

HPE Apollo Video

Archived HPE Apollo Reviews (more than two years old)

Principal Engineer at a manufacturing company with 10,001+ employees
Real User
Powerful enough that we may only need half the number of GPUs in our next unit
Pros and Cons
  • "We're going to buy another Apollo 6500. We may configure it with half the number of GPUs because that may be all we need. In a sense, we can see the Apollo 6500 being so powerful that we only need half the GPU capability that we have now."
  • "I would want to see the flexibility of being able to run various network protocols, including InfiniBand, Fibre Channel, as well as iSCSI, with iSCSI going up to 100 gigabytes per second; that would be outstanding."
  • "We could, perhaps, use more GPUs in the future, go from eight to 16 GPUs per instance. That could run head-to-head against the DGX-1, the DGX-2 that NVIDIA has developed in their own chassis. That would be interesting to see."

What is our primary use case?

We've only been using it for about a month so far. This is a system that's on loan to us from HPE. It's a Gen10 version with eight NVIDIA V100 GPUs and four nodes. We have already purchased the unit. This is on loan to us until we receive the Apollo 6500 that we ordered.

For storage we're using a Seagate all-flash SSD array, as well as the EL4000. The Apollo 6500 is for machine learning, specifically wafer generation and wafer analysis, at one of our operations sites in Minnesota.

What needs improvement?

I would want to see the flexibility of being able to run various network protocols, including InfiniBand, Fibre Channel, as well as iSCSI, with iSCSI going up to 100 gigabytes per second; that would be outstanding. That, in conjunction with what Mellanox offers, would provide us with a very high-speed networking interface.

The other thing is that we could, perhaps, use more GPUs in the future, and go from eight to 16 GPUs per instance. That could run head-to-head against the DGX-1 and DGX-2 that NVIDIA has developed in their own chassis. That would be interesting to see.

For how long have I used the solution?

Less than one year.

What do I think about the stability of the solution?

It's an excellent product, and extremely reliable so far. The loaner model we have is excellent. We have had no problems with it.


What do I think about the scalability of the solution?

In some ways, we think it may go beyond what we need moving forward. We don't know yet. We're going to buy another Apollo 6500. We may configure it with half the number of GPUs because that may be all we need. In a sense, we can see the Apollo 6500 being so powerful that we only need half the GPU capability that we have now. But that's what we think we're going to end up seeing as we continue to go through this process of machine-learning.

How are customer service and support?

Tech support has been outstanding. In fact, HPE is helping us develop the software stack so that we can move forward with this whole approach. Our intent is to develop a machine-learning and inference capability within all of Seagate operations, which include eight sites around the world.

My expectation is that this is going to be a rather huge improvement in our operations process. It takes about six months for us to build a single hard drive, and we sell millions of them per year. So you can imagine how important the analytics capability that HPE is offering us is. It isn't just the Apollo 6500; it's also the software stack that runs on top of it.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
it_user784050 - PeerSpot reviewer
System Engineer at Mr Green
Real User
When we moved to the Apollo and all flash drives, we gained a lot of performance
Pros and Cons
  • "When we moved to the Apollo and all flash drives, we gained a lot of performance."
  • "We have tried to standardize on Ubuntu Linux and it's been hard. We had some difficulties getting the RAID configuration up and running because there are no drivers for it. It's not supported by HPE."

What is our primary use case?

We use three Apollo 2600 enclosures with a total of 12 servers as a Splunk cluster for all our log handling. 
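A 12-indexer Splunk cluster like the one described here is typically wired together through `server.conf` on each node. The fragment below is only a hypothetical sketch of one indexer (cluster peer); the host name, port, and key are placeholders, and on newer Splunk versions the `master_uri` setting is named `manager_uri`:

```ini
# server.conf on each indexer (cluster peer) -- hypothetical sketch;
# manager host, replication port, and shared key are placeholders
[replication_port://9887]

[clustering]
mode = peer
master_uri = https://cluster-manager.example.com:8089
pass4SymmKey = changeme-shared-secret
```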

How has it helped my organization?

In the beginning we used Splunk in a virtual environment, and it was quite hard on that system in terms of performance. So when we moved to the Apollo and all-flash drives, we gained a lot of performance.

What is most valuable?

It is quite simple once you get it going. I like the blade concept, which makes it so much easier to handle the servers.

What needs improvement?

Unfortunately, we have tried to standardize on Ubuntu Linux and it's been hard. We had some difficulties getting the RAID configuration up and running because there are no drivers for it. It's not supported by HPE.

For how long have I used the solution?

One to three years.

What do I think about the stability of the solution?

Very much a stable solution. No downtime yet. We have, however, burned through quite a lot of the NVMe system drives; I think that's a configuration issue on our end. The system does some swapping somewhere, and that has caused some issues.

What do I think about the scalability of the solution?

It will meet our needs, definitely, going forward.

How are customer service and technical support?

They have been very responsive and knowledgeable. As I say, we have mostly had trouble with the drives, and we have received the help and the replacement parts that we need.

What other advice do I have?

From my end, I like that we get everything from HPE. So it's quite easy to point at HPE if something breaks. We have the switches from HPE, we have the storage from HPE, the service from HPE. So it's quite easy to get their help when something breaks, because they are responsible for all the parts in our datacenter.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user784038 - PeerSpot reviewer
IT Architect
Real User
We integrated it once and can use it for several technologies: Hadoop, Ceph, and more
Pros and Cons
  • "It's pretty flexible. You can choose how much storage you put on the server. You can have one to three nodes, depending on whether you want more CPU or storage."
  • "We can use the same platform for several use cases: Hadoop, Ceph, and we are considering the server for another use case right now. It's a single solution; we only have to integrate it once and we can use it for several technologies."
  • "There is a shared battery for all cache controllers in the node. When you have to replace that element, you have to take down all three nodes and not just one."

What is our primary use case?

We're using it for big data and storage servers. So mostly Hadoop for big data, Hadoop elastic search, and Ceph storage for our OpenStack private cloud.

The Apollo is performing fairly well. We've run into minor issues, but overall it does the job and we feel it's a good product for the money. 
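Since each chassis holds three nodes and the reviewer relies on replication between them, a typical Ceph layout for this kind of private cloud replicates each object across all three. The following is a minimal, hypothetical `ceph.conf` fragment along those lines; the host names are placeholders, not the reviewer's actual configuration:

```ini
# ceph.conf fragment -- hypothetical sketch; host names are placeholders
[global]
mon_host = apollo-node1, apollo-node2, apollo-node3
osd_pool_default_size = 3      # replicate each object across all three nodes
osd_pool_default_min_size = 2  # remain writable if one node is down
```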

How has it helped my organization?

It's allowed us to benefit from IP-based storage instead of using only fiber channel SAN storage. Also, I don't think we could have afforded that quantity of storage in a SAN array.

What is most valuable?

It's pretty flexible. You can choose how much storage you put on the server. You can have one to three nodes, depending on whether you want more CPU or storage. And we can use the same platform for several use cases: Hadoop, Ceph, and we are considering the server for another use case right now. It's a single solution, we only have to integrate it once and we can use it for several technologies.

What needs improvement?

The nodes should be truly independent. A chassis in your rack can contain three different servers, and I want to be sure that when a component fails, I don't have to take down all three nodes. This is especially true as we usually have replication between these nodes. It would be a great asset to be able to contain the downtime to one of the nodes.

For how long have I used the solution?

One to three years.

What do I think about the stability of the solution?

It's pretty stable. We've only had very minor issues with it. No major downtime. 

The only issues we've really run into so far is that there is a shared battery for all cache controllers in the node. When you have to replace that element, you have to take down all three nodes and not just one. That's something of a design flaw, but it's the only real issue we've had so far.

How are customer service and technical support?

Yes, we've called tech support, mostly for hardware faults.

What other advice do I have?

When selecting a vendor, the most important criteria include:

  • overall trust in the company
  • the financial side; of course, the price of the hardware
  • the quality of the support we can expect.

I rate it at eight out of 10. As I said, true independence between the nodes would be an improvement. At least make sure that the nodes aren't dependent on each other. Also, we've had a few difficulties integrating it at first, so I'll stay with an eight.

Test the solution and do a proof of concept until it works with your own integration procedures, the way you install systems, that kind of thing.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Senior Account Manager
Real User
Certified for use with Linux, it enables us to easily implement software defined solutions
Pros and Cons
  • "It enables us to implement software defined solutions very easily, because Apollo servers are certified for use with Linux systems"
  • "Apollo Systems provide stuff that standard servers do not: more HDDs, more compute power, at very reasonable pricing."
  • "We would like to see improved cooling because that is quite an issue. If you put that much compute power into a single rack, cooling really becomes an issue. And there is room for improvement there."

What is our primary use case?

We primarily use it for high-performance computing. Our customers really do like it because of the density they can achieve in the racks. Apollo provides so much compute power and storage as well.

It's performing extremely well.

How has it helped my organization?

It enables us to implement software defined solutions very easily, because Apollo servers are certified for use with Linux systems, which is really a big thing for us.

What is most valuable?

High compute density and high storage density at a reasonable cost.

What needs improvement?

Obviously I would like to see the cost go down. That speaks for itself. 

We would like to see improved cooling because that is quite an issue. If you put that much compute power into a single rack, cooling really becomes an issue. And there is room for improvement there.

What do I think about the stability of the solution?

Extremely reliable. We've been using it for three years now, and it's been in production without any downtime yet.

What do I think about the scalability of the solution?

Especially if you use software defined storage, for instance, scalability is just great.

How are customer service and technical support?

We have not used HPE support. We have our own engineers, and we're proficient enough. It's really easy to use, so it's not a big deal.

Which solution did I use previously and why did I switch?

We actually had a business case. We were looking to address it with standard IT storage solutions, but they were way too pricey for us. So we figured we needed to make the most of standard servers, and came across Apollo Systems. Apollo Systems provide stuff that standard servers do not: more HDDs, more compute power, at very reasonable pricing.

How was the initial setup?

It was straightforward.

Which other solutions did I evaluate?

We do look to Super Micro whenever price is king. But if we are looking for reliability, then HPE is the way to go.

What other advice do I have?

Our most important criterion when selecting a vendor is reliability. We need a vendor to be there for us, even when the product is already three or four years old. That's a big thing for us.

I give it an eight out of 10. It does what we expect it to do. As I said, cooling is still an issue, you really have to keep that in mind if you implement the solution. But aside from that, we're really happy with it.

Talk to a partner who has implemented a solution with HPE Apollo, talk to customers who have actually used it in the field. It's really simple to do.

Disclosure: My company has a business relationship with this vendor other than being a customer: Partner
PeerSpot user
it_user784059 - PeerSpot reviewer
Data Center Manager at Maples And Calder
Real User
Helped me address a need for DPM, to back up to a specific location in my datacenters
Pros and Cons
  • "It's very reliable. I haven't had a single failure at all in the year and a half; not the slightest problem with it."
  • "One drawback which I had: When I needed to expand storage on the Apollo, I had significant problems getting disks for it. It was a very long wait-time. So, if I were to give any advice in regards to improving this product, I would say make more of the 8TB disks available quicker."

What is our primary use case?

I specifically purchased it to address a need I have for DPM. I needed DPM to back up to a specific location in both of my datacenters that I have in Ireland. I needed just a lump of slow storage, but that was big, to take 30-day disk backups before they were offloaded to tape. In that sense, it ticked all the boxes and it's been working fine for that.

Now, I'm moving on to StoreOnce, but I'm going to repurpose the Apollos after this. I don't know what I'm going to use them for after this, because DPM is gone. Moving on to Veeam and StoreOnce.

What is most valuable?

It's really very clever the way it manages to hide the disks away. This idea of pulling out the little trays, I just think that's really, really clever. It's very reliable. I haven't had a single failure at all in the year and a half; not the slightest problem with it. It's been a pretty good product so far.

What needs improvement?

One drawback which I had: When I needed to expand storage on the Apollo, I had significant problems getting disks for it. It was a very long wait-time. So, if I were to give any advice in regards to improving this product, I would say make more of the 8TB disks available more quickly. I ended up having a few issues because I ran out of space. There was a huge lead time while I waited for new disks to arrive here. It left me a bit exposed there for a while.

But that's the only criticism. Other than that, I think it's a great product. It's really good. Really reliable. Very cleverly designed and I can't think of what better way they could pack more disks into such a small space, so all around it's a good product.

For how long have I used the solution?

One to three years.

How are customer service and technical support?

If I get through to the right person, support is very, very good. If I don't get through to the right person, it can be irritating and it can be cumbersome. So to me, the key is getting straight through to the person that's going to be able to help. I don't ring up for Mickey Mouse things; I just ring up when I need something substantial. I try my best to automate as much of the call logging as possible because I have a lot of calls; it's much easier for me to do that online.

So that element generally works quite well, and generally I like the way it works. If I get a call logged online, it usually goes through to the right person, and I usually get a call back. I get actions done pretty quickly on that.

If, however, for whatever reason I have to ring up, I might get through to the wrong section. I've had some hit and miss affairs that have just irritated me. But when I do get through to the right person, I've found in the past, they're very good, generally speaking.

How was the initial setup?

The Apollo was very straightforward; that was nice and easy. Some of my other products, my 3PARs and so on, are a lot more complex. But the Apollo was nice and easy.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user784011 - PeerSpot reviewer
Network End Data Center Architect at a tech services company with 1,001-5,000 employees
Consultant
A compact system with a powerful CPU and powerful hard drives, perfect for our branches
Pros and Cons
  • "We usually use three blades for two-rack units, and with enough storage, it's really a small system with a powerful CPU, powerful hard drives, powerful disks."
  • "We would like to see SimpliVity on top of the Apollo."

What is our primary use case?

We use the Apollo system for most of our branch offices. Our roadmap is to implement Apollo in all our branch offices by the end of 2018. So we will have something like 50 branch offices with Apollo.

We performed a PoC. We were very happy with it, so we decided to implement it in all the branches.

What is most valuable?

It's a compact system. We usually use three blades in two rack units, and with enough storage it's really a small system with a powerful CPU and powerful disks. It provides enough performance in terms of storage value. And we are also very happy with the internal network. So, for our branches, it's perfect.

How has it helped my organization?

The benefit is, as I said, we are compressing everything. In the past, we used StorageWorks P2000, plus SAN switches, plus three or four servers and so on. Now, we have two-rack units for everything. 

For a branch it's perfect because it's simplifying our life.

What needs improvement?

We would like to see SimpliVity on top of the Apollo.

What do I think about the stability of the solution?

Touch wood, it's perfect until now. Nothing to complain about.

What do I think about the scalability of the solution?

We are not using it in that manner. We are not using it for the scalability. So the size, one Apollo for each branch, is perfect for us. We are not thinking about scalability.

How are customer service and technical support?

As usual, with HPE, we are very happy with the support. Honestly, we used it only once for the Apollo system, but all our kits are HPE. So we use their support often and we haven't noticed any difference between Apollo versus C7000 or DL servers. So it's in line with the standard HPE support and we are happy with that.

Which solution did I use previously and why did I switch?

We have a strong relationship with HPE. So HPE was proactive in proposing this solution. We had a PoC, as I said, and we were happy with it and decided to implement it. It satisfies all our needs and is the perfect solution.

How was the initial setup?

It was straightforward.

We always have an HPE engineer on our site, close to us. But usually we prefer to do this kind of setup ourselves, at least the first time, to put our hands on the device itself. So 95% of the setup was done without the support of this engineer, and maybe 5%, for optimization, with his support.

What other advice do I have?

Our most important criteria when selecting a vendor include, of course, the experience of the technician, then the support. With HPE as I said, we have a strong relationship. So there is a priority channel for HPE versus other vendors. We always perform a PoC, we compare the vendors. But we were happy with HPE so we have no reason to change right now.

I rate it eight out of 10 right now. It will be a 10 when SimpliVity will be on top of it.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user683202 - PeerSpot reviewer
Professor at a university with 5,001-10,000 employees
Real User
Enables us to do the world's leading superhuman AI research.
Pros and Cons
  • "It's going to meet our needs moving forward, it is scalable."
  • "Lustre seems to be just a little bit unstable overall."

How has it helped my organization?

We have been working with the Pittsburgh Supercomputing Center for around ten years. They pick the hardware, and they had picked this hybrid system, which has several different kinds of components. We have worked with them for a long time and knew that they were picking state-of-the-art equipment, so that's why we selected this solution.

What is most valuable?

It's very hard for a professor to amass supercomputing resources, so I've been very fortunate to have that level of supercomputing at our disposal, and that has really enabled us to do the world's leading superhuman AI research. That is what we did: we actually beat the best heads-up no-limit Texas hold'em human players in the world this January. So, we're at a superhuman level in strategic reasoning.

What needs improvement?

One thing that we are looking for is better stability of the Lustre file system; it could be improved. I have heard that they are coming out with better memory bandwidth, so that's good, or maybe it's already there in System 10.

Beyond that, of course, there is always a need for more CPUs, more storage, and all of that.

What do I think about the stability of the solution?

It has been fairly reliable. In the beginning, of course, it was not, but we were a beta customer, so at the start there was literally nothing in the racks. We've been with it from the beginning, and of course it was less stable early on. However, it became more stable over time.

If there's anything that hasn't been that stable, it is the Lustre file system. I would say that they have made some improvements with that, but this is not just a problem with Bridges. We have computed at other supercomputing centers, like the San Diego Supercomputing Center, in the past as well, and Lustre seems to be just a little bit unstable overall.

What do I think about the scalability of the solution?

It's going to meet our needs moving forward, it is scalable. Having said that, our algorithms are very compute-hungry and storage-hungry, so more is more and there's no limit as to how much our algorithms can use. The more compute and the more storage they have, the better they will perform.

How are customer service and technical support?

We go through the Pittsburgh Supercomputing Center (PSC) for support, and their support has been awesome. We don't directly contact HPE; they contact HPE if needed.

How was the initial setup?

The PSC installed everything, i.e., both hardware and software. So we didn't do any of that; from our perspective, it has been easy to use.

What other advice do I have?

While looking for a vendor, we do not look at the brand name at all. Instead, what we look for is just reliability and raw horsepower.

It has been great. The Pittsburgh Supercomputing Center guys have been great in supporting us very quickly and sometimes even at night or on weekends. I've been very fortunate as a professor to get this level of supercomputing, so we've been able to do the world's leading research in this area. The only things that I would improve are the ones that I have mentioned before, i.e., the Lustre file system, and maybe, the memory access from the CPU.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
it_user680184 - PeerSpot reviewer
Senior Director of Research at PSC
Consultant
Has the flexibility to run dual CPU nodes or add GPUs to other nodes.
Pros and Cons
  • "Being able to mount Omni-Path architecture HFIs on those nodes; we were the very first site in the world to do so."
  • "What's coming out in Gen 10 is very strong in terms of additional security."

How has it helped my organization?

A primary benefit is high reliability. They have very good price performance and configuration options. Being able to configure them in different ways, for different node types, was something we needed.

What is most valuable?

In referring to the Apollos, what we liked about them was:

  • A combination of the density
  • The flexibility to run dual CPU nodes or add GPUs to other nodes
  • Being able to mount Omni-Path architecture HFIs on those nodes; we were the very first site in the world to do so
  • Being able to connect those in large quantities
  • In Bridges, we have 800 Apollo 2000 nodes, and they have been running extremely well for us

What needs improvement?

I think it's on a good track. What's coming out in Gen 10 is very strong in terms of additional security. Overall, I think those are well architected. They're a very flexible form factor for scale-out. Assuming ongoing support for the latest generation CPUs and accelerators, that will be something we'll keep following for the foreseeable future.

In Bridges we combine the different node types to create a heterogeneous, ideal system. Rather than wishing we had more features in a given node type, we integrate different types. We choose different products from the spectrum of HPE offerings to serve those needs optimally, rather than trying to push any given node in a direction it doesn't belong.

What do I think about the stability of the solution?

Stability has been extremely good. Jobs run for days to many weeks at a time. We recently supported a campaign for a research group in Oklahoma, who were forecasting severe storms, doing this for 34 days. They were running on 205 nodes.

The example we're featuring was a breakthrough in artificial intelligence where an AI first beat the world's best poker players. And for that one, we ran 20 days continuously, and of course, the nodes had to be up because players are playing the games and we were running that on 600 nodes of Apollos. That was just as seamless, and it was a resounding victory. So, I think that's the strongest win through Apollos in our system so far.

What do I think about the scalability of the solution?

Scalability for us is limited only by budget. Using Omni-Path, we can scale our topology out with great flexibility, and so scaling out workloads across Apollos has been seamless. We're running various protocols across them: a lot of MPI, and also Spark workloads. So scalability has been limited only by the size of our system.

How are customer service and technical support?

We have an arrangement with HPE technical support. Our system does call on them on occasion, but the stability has been very high. Over the year and four months that we've been running Bridges, I think we have had under 70 calls on the whole system.

Which solution did I use previously and why did I switch?

We knew we had to invest in a new solution as we were designing a system to serve the national research community. We knew what their application needs were and what their scientific goals would be, so we imagined what that system would have to deliver to meet those needs. That told us the kinds of servers we needed in the system. We have the Apollos, we have the L580s with three terabytes of RAM, we have Superdome Integrity with 12 terabytes of RAM, and we have a number of GL360 and other service nodes.

But it was really looking at the users' requirements, and at where high-performance computing, high-performance data analytics, and artificial intelligence were going through about 2019, that caused us to select the kinds of servers we did, the ratios we did, and the topology we chose to connect them with.

How was the initial setup?

It was the first Omni-Path installation in the world, so people were very careful. With that caveat, I think it was straightforward.

Which other solutions did I evaluate?

We always look at all vendors before reaching a conclusion. I don't want to name them here, but we're always aware of what's in the market. We evaluate these for each procurement. We pick the solution that's best. The competitive edge for HPE involves several things. These are not in any specific order, as they are hard to rank.

  • HPE's strategic position in the marketplace. Being a close partner with Intel, we trust them when there's a new CPU. We can get it in an HPE server very early on.
  • When something new comes out, like Omni-Path, it was brand new then. We trusted that HPE would be able to deliver that in a validated product very, very early.
  • We are always pushing the time envelope. Their strategic alliances with other strong partners gave us trust that we would be able to deliver on time, and we were. That's unusual in this field.
  • They uniquely had very large memory servers so the Superdomes, and the bandwidth in those servers, was extremely good compared to anything else on the market. We wanted that, especially, for large scale genomics. Putting that in the solution was a big plus. I'd say these items together were the strongest determining factors, from a technical perspective.

What other advice do I have?

I think the advice is to look at the workload very closely, understand what you want it to do, look at the product spectrum that's available here, and do the mix and match like we did. Build them together. There are software frameworks now that actually make it easier than when we did it, to stand up this sort of collection of resources, and to just go with what the workload needs.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
it_user568143 - PeerSpot reviewer
Head of Industrial Automation & Modeling at a mining and metals company with 1,001-5,000 employees
Vendor
Stable solution for management and monitoring.

What is most valuable?

It's a stable product; very reliable. It is a good basis upon which to build further. You see some evolution, but not too much. If you go to their events every year, you see an incremental evolution, which is normal for a product at this stage.

How has it helped my organization?

I'm just a general manager and I'm not really technical. However, it gives you a better view of the monitoring. I have heard that it provides better management, and you can see the possibilities.

What needs improvement?

OneView is a newer product which does not support older versions of the hardware. This is an issue, and it's why we cannot switch to the newer product. We continue using the older product, which is working fine. I would like to see a bit more integration; that is the major topic.

What do I think about the scalability of the solution?

It is stable and scalable, and the new product has some advantages which we like. However, we cannot switch because some of our devices are not supported.

What other advice do I have?

When choosing a vendor, we look at the overall product and then the software product on top of that. Switching to another vendor is always a big step; we normally don't do that because it presents issues. Every solution migrates toward the same functionality, so there is no great difference between solutions, only an incremental one.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user364197 - PeerSpot reviewer
Network Administrator at CSC Finland
Consultant
The storage area density is the best thing about them. Outside connectivity needs to keep pace with network improvements.

What is most valuable?

We are running Apollo with SL-series servers, and the best thing about them is the density of the storage available. Regarding total cost of ownership (TCO) per terabyte, they are now the best on the market.

What needs improvement?

Connectivity to the outside of the server needs to improve in step with the network; this would give us more I/O. Separately, there is work to do on firmware lifecycle management: vendors should test firmware before it is delivered to customers.

What do I think about the stability of the solution?

Stability is good enough.

What do I think about the scalability of the solution?

Scalability is fine because with this kind of service we can easily scale horizontally. We are more or less satisfied.

How are customer service and technical support?

The technical support in Finland is fine.

Which solution did I use previously and why did I switch?

We made a transformation from enterprise storage to an open-source distributed storage architecture. We switched because the pricing is better.

How was the initial setup?

The initial setup was business as usual. It's not so complicated, but of course it takes time.

What's my experience with pricing, setup cost, and licensing?

The price is not significantly lower than the competition, but it's lower than the standard price.

Which other solutions did I evaluate?

We looked at Dell and Super Micro. They are both on the market in Finland.

What other advice do I have?

You should run the stable firmware on a test platform for about a month before you roll it out. This is something we have to do right now.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user568107 - PeerSpot reviewer
Development Manager at Thomson Reuters
Vendor
It supports our network requirements for network captures at high data rates. We're looking for faster disk-write capability.

What is most valuable?

We're using the Apollo 4200 as a data capture system. The most important things for us are the amount of storage, the ability to configure it, and the ability to change the configuration so we can do the network captures we need at very high data rates. It meets our network requirement of being able to capture up to 40-gig with a small form factor.

How has it helped my organization?

We are moving from existing 10-gig environments to a 40-gig environment. The ability to capture those high data rates is really important to us. We need to know what's going on in the network. We need to be able to explain to our customers any issues or problems, and where they might have occurred.

What needs improvement?

We're looking for faster capability to write to drives. We're fully loaded, with all the small form factor drive bays populated, and we are practically at the limit of what the architecture supports. So we need new solutions: new types of drives and faster bus speeds.
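To see why drive write speed becomes the bottleneck, here is a back-of-the-envelope sketch of the sustained write rate a saturated 40 Gb/s link demands and roughly how many drives that implies. The per-drive throughput figure is an illustrative assumption, not a spec from the review.

```python
# Rough capture-to-disk sizing for a saturated 40 Gb/s link.
# The per-drive sequential write rate is an assumed, illustrative figure.

LINK_GBPS = 40                       # line rate, gigabits per second
required_gb_per_s = LINK_GBPS / 8    # 40 Gb/s ~= 5 GB/s sustained writes

DRIVE_MB_PER_S = 200                 # assumed sustained write per SFF drive
# Ceiling division: how many drives must be striped to keep up.
drives_needed = int(-(-required_gb_per_s * 1000 // DRIVE_MB_PER_S))

print(f"Sustained write required: {required_gb_per_s:.1f} GB/s")
print(f"Drives needed at ~{DRIVE_MB_PER_S} MB/s each: {drives_needed}")
```

At these assumed figures the math already calls for dozens of drives writing in parallel, which matches the reviewer's point that a fully loaded chassis sits near the architecture's limit.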

What do I think about the stability of the solution?

It is good in terms of stability. We are struggling a little with some of the configuration we need to do, particularly write capability to the drives. That's the only part where we've struggled in getting the solution going, but we've had significant conversations with HPE and worked through a load of issues. We are now getting the solution to the capabilities we need.

What do I think about the scalability of the solution?

We tend to only use a single rack-mount server for what we're trying to do. The ability to keep it small, reduce the footprint and reduce costs are the most important things that the Apollo 4200 gives us.

How are customer service and technical support?

Technical support has been very good. We've been given access to senior HP personnel in America. They've given us lots of guidance and help in actually configuring the system.

Which solution did I use previously and why did I switch?

We were previously using older DL380s with MSA drives. We knew their limitations with Fibre Channel in terms of the transfer rates we could get out of them, for example, but we needed something that would work with the move to a 40-gig network environment.

How was the initial setup?

The initial setup was fairly straightforward. What we're trying to do with the solution added complexity, so we needed some guidance, mainly on configuring how the drives were allocated to enable us to actually do the captures. It's taken a little while to get from that initial build to where we are now, but it is a fairly complex system.

Which other solutions did I evaluate?

We looked at four or five different vendors. Some of them talked about very expensive solutions; the HPE solution cost about one-third less. Taking cost into consideration, HPE gave us the ability to do what we wanted to do. The relationship, and being able to talk to them, was also important in our decision. Getting access to their technical people is very important to us, and we've been an HP user for many years.

What other advice do I have?

Not many companies will have a similar type of requirement to ours. But if you need a low-cost solution with a small footprint, the Apollo 4200 is an ideal system for that.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user332961 - PeerSpot reviewer
Manager of IT Infrastructure at a computer software company with 501-1,000 employees
Vendor
Using it with Scality, we migrated away from traditional NAS.

What is most valuable?

We install Scality on the Apollo servers, so we have a Scality ring where we store our customers' documents. That allowed us to migrate away from traditional NAS with a cost-effective solution whose architecture is both scalable for the future and able to handle the petabyte scale of document content that we deal with.

How has it helped my organization?

Not having to manage traditional NAS has made a big difference: no traditional volumes, aggregates, LUNs, and things like that. Apollo has made that flexibility possible.

What needs improvement?

We're pretty happy with the Apollo line of servers. It would be interesting to see the thinking behind the new hyper-converged DL380s applied to the Apollo line as well: not to the storage-dense models, but to the compute-dense models that can hold multiple compute nodes in one chassis. A hyper-converged solution running in those chassis would make a compelling argument for HPE in the hyper-converged space. It would really complement the DL380 hyper-converged solution they're providing now, and I think it would be a good choice for a lot of people looking at hyper-converged.

What do I think about the stability of the solution?

We deployed our first ring on Apollo servers toward the end of last year, so it's been running for eight or ten months, and it has had zero downtime.

What do I think about the scalability of the solution?

With the Apollo systems, we initially sized for petabyte scale. The great thing about the Apollo servers with Scality is that if we need to add more disks to the existing systems, that disk space is instantly usable by the ring. If we need more compute power and more storage, we can add servers as well.

How are customer service and technical support?

We've only contacted them to help replace drives when drives go bad as they do, but nothing beyond that.

Which solution did I use previously and why did I switch?

For a long time, we were storing our documents on traditional NAS, through NetApp, and that got to the point where NetApp couldn't handle petabyte scale affordably. We're talking about tens of millions of dollars to buy a NetApp that could do petabyte scale at the number of IOPS we needed. On top of that, scaling out on traditional NAS was cost prohibitive, so the Apollo line became the clear choice. The decision to go to something like object storage was made long before we decided on Apollo; it turned out that Apollo fit that decision.

How was the initial setup?

The initial setup was pretty straightforward. The Apollo series we use is basically the guts of a ProLiant DL380, which we've used many, many times in the past, but it allows us to put in double the disk capacity of a traditional DL380. Setting it up was pretty easy because the platform was familiar. The iLO functionality made it straightforward, and we had no problems getting things deployed.

Which other solutions did I evaluate?

We spent a long time looking at running the Scality ring on commodity hardware from someone like Supermicro. We found that, in terms of reliability, supportability, and ease of management, having all our servers under the same contract through HPE made the decision to use Apollo clear. Even though it was marginally more expensive up front, the total cost of ownership of managing that many servers was lower. That made the decision really easy.

What other advice do I have?

If someone came with a similar storage need, the Apollo servers do make a lot of sense, especially when you're talking about scale out object storage-type implementations. That Apollo line, it makes perfect sense from my perspective and I would recommend that.

Our first batch of Apollo servers was so new that it was hard to know what to expect from HPE and what they wanted to deliver to us. The first servers arrived missing their iLO modules; that may have been confusion between what we ordered and what we thought we ordered. In any case, it was resolved quickly and the iLO modules were shipped out, so there was no problem there. There were just some speed bumps because the product was so new when we first ordered it. Otherwise, they're very solid servers.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user368157 - PeerSpot reviewer
Senior Unix Performance Analyst at Amadeus IT Group
MSP
It allows us to use as few nodes as possible for storing log-file data so that we have as much direct space capacity as possible.

What is most valuable?

Apollo's most valuable features for us are its density and storage capabilities.

How has it helped my organization?

We're trying to keep all log files in our Hadoop cluster, which amounts to several terabytes a day of log data that we need to analyze. Apollo allows us to use as few nodes as possible for this, so that we have as much direct space capacity as possible. It gives us much more space per gigabyte.
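The node-count argument can be sketched with simple arithmetic. All of the figures below (daily volume, retention, replication factor, per-node capacity) are illustrative assumptions, not numbers from the review.

```python
# Rough sizing: storage nodes needed for a given daily log volume.
# Every figure here is an assumed, illustrative value.

daily_tb = 5             # assumed log volume ingested per day, TB
retention_days = 365     # assumed retention window
replication = 3          # a common HDFS replication factor

raw_tb_needed = daily_tb * retention_days * replication

node_capacity_tb = 200   # assumed usable capacity per dense node
nodes = -(-raw_tb_needed // node_capacity_tb)  # ceiling division

print(f"Raw capacity needed: {raw_tb_needed} TB")
print(f"Nodes at {node_capacity_tb} TB each: {nodes}")
```

Doubling the per-node capacity roughly halves the node count, which is the reviewer's point: denser nodes mean fewer servers (and fewer licenses) for the same retained data.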

What needs improvement?

It's a very good system when you need a lot of disk capacity. But it's unclear whether the I/O performance will be sufficient when you calculate the theoretical amount of time needed to read all the disk space. If the workload is not purely sequential, then I/O performance is less than optimal, because the system is optimized for streaming processing.
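The reviewer's caution can be made concrete with a quick calculation of the best-case time to read every byte on a dense node. The disk count, per-disk capacity, and throughput below are illustrative assumptions.

```python
# Theoretical full-read time for a dense storage node, best case:
# purely sequential streaming across all disks in parallel.
# All figures are assumed, illustrative values.

disks = 24              # assumed disks per node
disk_tb = 8             # assumed capacity per disk, TB
seq_mb_per_s = 180      # assumed sequential throughput per disk

total_tb = disks * disk_tb
aggregate_mb_per_s = disks * seq_mb_per_s

# 1 TB = 1e6 MB; divide total data by aggregate throughput.
hours_sequential = (total_tb * 1e6) / aggregate_mb_per_s / 3600

print(f"Total capacity: {total_tb} TB")
print(f"Full sequential read: {hours_sequential:.1f} hours")
```

Even in this ideal streaming case the read takes on the order of half a day; a random-access workload would multiply that, which is why capacity can outgrow the I/O needed to actually scan it.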

What was my experience with deployment of the solution?

We have no issues with deployment.

What do I think about the stability of the solution?

We put it in place about a week ago, and it's been running without problems.

What do I think about the scalability of the solution?

We have probably some 6,000 or 7,000 physical cells already and are planning more.

How are customer service and technical support?

We have technical account managers who work with us. It's pretty much a direct line to HP without having to dial the general support number.

Which solution did I use previously and why did I switch?

We previously used DL380s. Compared to those, Apollo has roughly four times the amount of space per server, which means we can really do a lot. We technically could have used four DL380s instead, but the licensing cost would have been significantly higher.

How was the initial setup?

The initial setup was straightforward, and we've been happy about it.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user363225 - PeerSpot reviewer
Research Support at a university with 1,001-5,000 employees
Vendor
It's a dense product, meaning we can fit several servers into our rack space.

Valuable Features

For us, the most valuable features are the price and density. We have very limited space, and we're able to fit four servers into our data center's rack space. Although I think servers from different vendors are going to be very similar because they all use Intel chips, making them essentially the same, it's the HP management software that makes Apollo better than the competition.

Improvements to My Organization

The biggest benefit for us is a physical benefit in that we can save our very limited space. Again, it's a dense product, meaning we can fit several servers into our rack space.

Room for Improvement

The licensing could be greatly improved, I think. We have a very hard time tracking it because we have to get a license for every server and machine: click the link in our email, go to the site, log in to HP, download the license, then do it all again for each machine, and we have to know which machine each license is for and give it to the installer. It's inefficient, overly complicated, and should be simpler and pain-free.

Deployment Issues

We haven't had any issues with deploying it.

Stability Issues

It's been stable so far, but we've only had it a few weeks.

Scalability Issues

We have six racks and can fit another. At the moment, we have sixteen Apollo servers, and we're going to put in 40, as we have the space for that.

Customer Service and Technical Support

We've signed up with a third-party management service. They've been really good so far.

Initial Setup

The initial setup was simple for us. HP came in and racked and stacked it, then the software guys came in. That took a day or two, and they were done with the image. The whole process, including the hardware and software stack, took about two weeks.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user321114 - PeerSpot reviewer
Executive Vice President with 501-1,000 employees
Vendor
It gives us the density of a blade without the issue of shared IO, but it needs direct integration with software.

What is most valuable?

It gives us the density of a blade without the issue of shared IO, and a good price point for object storage.

How has it helped my organization?

It's allowed us to compete with cloud storage providers like AWS by putting together a scalable on-premises solution of more than 20PB at a similar price point.
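A price-point comparison at this scale comes down to per-GB-month cloud pricing versus amortized on-premises capex plus opex. Every price in the sketch below is an assumption chosen for illustration, not a quote from the review or from any vendor.

```python
# Illustrative cloud-vs-on-prem cost comparison at ~20 PB.
# All prices are assumed values for the sketch, not quotes.

capacity_pb = 20
cloud_usd_per_gb_month = 0.021    # assumed cloud object-storage rate
months = 36                       # comparison window

cloud_total = capacity_pb * 1e6 * cloud_usd_per_gb_month * months

onprem_capex = 4_000_000          # assumed hardware + software for 20 PB
onprem_opex_per_month = 30_000    # assumed power, space, support
onprem_total = onprem_capex + onprem_opex_per_month * months

print(f"Cloud over {months} months:   ${cloud_total:,.0f}")
print(f"On-prem over {months} months: ${onprem_total:,.0f}")
```

Under these assumptions the cloud bill scales linearly with capacity and time, while the on-prem cost is dominated by a one-time purchase, which is why dense, cheap-per-terabyte hardware can undercut cloud object storage at multi-petabyte scale.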

What needs improvement?

Direct integration with software (Cleversafe, Scality, Ceph) for a purpose-built object store appliance. Also, stay closer to the current rev of processors. I know it is a heating/cooling issue, but being a couple of revs back is problematic when comparing consolidation of workloads with standard Intel servers running the latest chips.

For how long have I used the solution?

We have implemented this for a few clients over the past three years.

What do I think about the stability of the solution?

There were big stability issues with the CPU in the first generation, which made those units virtually unusable. HP has done a better job of regression testing against software (hypervisors and big data platforms specifically) in recent generations.

How are customer service and technical support?

It has gotten better in the past year and is in line with other major manufacturers (Cisco, EMC).

Which solution did I use previously and why did I switch?

Standard ProLiant servers (DL380s) with internal storage. We also looked at SAN and NAS solutions, as well as vSAN technologies from VMware, HP, and Citrix. None could hit the price point to compete with AWS S3.

How was the initial setup?

Standard server technology. There were some initial issues with flashing firmware, but the rest was straightforward.

What about the implementation team?

We were the vendor.

What other advice do I have?

It's a great solution for object stores. The consolidation ratio on compute doesn't make it a great alternative for virtualization hosts, but it could be a decent hyperconverged platform. HP is using SL technology for their CS-250 hyperconverged appliance.

Disclosure: My company has a business relationship with this vendor other than being a customer: HP Platinum Partner.
PeerSpot user