Buyer's Guide
Converged Infrastructure
November 2022
Get our free report covering Dell Technologies, Hewlett Packard Enterprise, FlexPod, and other competitors of Dell VxBlock System. Updated: November 2022.
656,862 professionals have used our research since 2012.

Read reviews of Dell VxBlock System alternatives and competitors

FlexPod Architect
Real User
Great for mission-critical workloads, very resilient, and reduces data center costs
Pros and Cons
  • "The solution's granular scalability and broad application support helps us to meet the needs of diverse workloads."
  • "We’ve seen an improvement in application performance."
  • "Since 2018 or 2019, maybe due to COVID and the chipsets, my DIMMs are dying left and right."

How has it helped my organization?

The way FlexPod has set up our servers has helped our organization. Their OS is on NetApp.

We’ve seen an improvement in application performance. I don't know the percentage off the top of my head. However, after migrating a lot of data from physical servers to virtual servers and putting them on there, it's just amazing.

It increased staff productivity. I was running most of the locations alone. With this solution, I was able to help take care of other problems instead. We ran it and didn't have any problems.

The solution streamlines our IT administration. For the most part, once I get the system set up and put in, adding a VLAN is very easy. Then, users are just adding VMs. It goes smoothly. I just had to set the solution up and let it run.

What is most valuable?

Regarding the solution's private, hybrid, and multi-cloud environments, it works well if the communication stays up. The solution’s infrastructure enables us to run demanding and mission-critical workloads.

The solution's ability to manage from edge to core to cloud, and to support our data and compute requirements, is pretty good.

The solution is innovative when it comes to computing, storage, and networking. Once it comes together, it's pretty easy to manage. The only thing I really have to do is manage everything from my cores or from my distributed switches.

The solution's granular scalability and broad application support help us meet the needs of diverse workloads. For instance, two years ago, I had a client in a building with seventeen floors, and I was able to segment each floor (each a different company) inside FlexPod and manage it using VCF or VMware. They had their own clusters, and it was easy to manage.

This solution is very resilient. For the most part, since about 2018, my servers have had no problems. I still have servers that have been up for years without any issues.

The solution reduces the time required to deploy a new application in some ways. UCS itself is just a hardware platform; that question is really more tied to VMware.

The solution reduced data center costs. At one of my locations, we had about twenty-four racks full of physical servers. I ended up migrating everything to virtual platforms and putting it inside. It was a decrease of 68% in the total cost of energy. That included the A/C units always running and the power being used for the servers themselves. 

The solution has saved us money. I wouldn't know how much precisely. I would say $100,000, but that estimate is likely really low.

What needs improvement?

I have seventy-six B200 servers. If a server goes down, I have a lot of problems. I’ve also been having random DIMM errors. 

Their DIMMs have been terrible recently. This is a new product that I got about a year ago. My DIMMs are dying left and right, and server blades stop functioning. That being said, when those go down, I have a set of spares that I can put in, and everything works without a hitch, with no problem.

I've had problems with remote data centers going down due to the connection dropping, and the systems were not aware that communication was down. When a link went down previously, the systems didn't know, and it then fixed itself. As long as the connections stay up, it works. If the connection fails, it won't.

In my experience with the validated designs, I've always had to go in and adjust them. I understand that some of them are a baseline, however, some customers believe they are foolproof and try to implement them as-is. Then I get called in to correct the errors and rework some of the layouts to include some of the newer features.

Since 2018 or 2019, maybe due to COVID and the chipsets, my DIMMs are dying left and right. That's the only problem I have. My boards are fine. The servers are working fine. 

A feature I would like to have in the next release is a desktop application that talks to it, so I don't have to go to the web GUI as much. Besides that, it's pretty bulletproof. Nine times out of ten, I choose UCS over HP and Dell.

I'd like to see a little bit more versatility with the C220s and the C240s, to see the expansion ports on those servers grow. Besides that, everything has been pretty amazing.

For how long have I used the solution?

I've been using the solution since 2012.

What do I think about the stability of the solution?

In the past, stability was 100%. Recently, it's been terrible due to the DIMMs.

What do I think about the scalability of the solution?

It's greatly scalable. It just depends on which fabric interconnects (FIs) you have.

How are customer service and support?

In terms of support, I call in when there's a bug. I’ve had problems with the memory as well. I had a server that was DOA, and it came down to the fact that we didn't even know what the problem was. That took almost a year to resolve. Then, with the DIMMs, it's taken me about two months due to the testing they want to do. That said, in 2018 it was phenomenal.

Technical support needs a little bit of work when it comes to hardware. In terms of software, they're not too bad.

Which solution did I use previously and why did I switch?

I used HP and Dell. I was having a lot of problems with HP and Dell was getting expensive. I had a little extra cash to buy the UCS when it first came out. There was a little bit of a learning curve, however, once I got that down, it worked well. I'm a big supporter of Cisco.

How was the initial setup?

Currently, we're using 4.2.1 with M5 servers, B200 M5s. This is my third one, as far as firmware updates and driver push-outs go.

The initial requirements for getting it up and running are very complex. There's a lot of manual keying, and all of that had to come into play. We had to have a good foundation in server networking. It's not something where you can just throw a user in and say, "Here's the gear; set it up."

What was our ROI?

We have seen an ROI.

What's my experience with pricing, setup cost, and licensing?

In terms of the cost, the last bill I saw was about $3.5 million for the latest contract. That might have been a five-year contract. That included the licensing for the ports on the FIs, along with the TAC support and the software assurance with Cisco.

What other advice do I have?

We have not used the solution to integrate advanced cloud services. I’m working on a VCF project currently. I have not used Intersight, Active IQ, or CSA yet. That's actually on my to-do list for my current project.

At this point in time, we do not use this solution to power any AI machine learning applications.

UCS is more network-driven than it is server-driven, which is what Dell and HP drive on. Once we set up the basic server parts, the rest of it is network-based. It is a mindset change. When I handed it off to server admins, they were worried about a lot of issues that they used to deal with on Dell, HP, or even IBM. They don't have to worry about that with UCS.

I'd advise new users to understand where they're putting their ports, know which ones are going to be Fibre Channel ports on the FIs, and make sure they have a distributed switch and are not connected directly to the core switches (the Nexus 7Ks or 9Ks). I've seen people take down their whole environment when somebody added a VLAN or a network for the UCS network that was already on the core. It took the whole thing down.
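The pre-flight check this advice implies can be sketched as a simple comparison of VLAN IDs. This is a hypothetical illustration, not a UCS Manager feature: the function name and VLAN lists are placeholders, and in practice you would pull the real lists from the core switch and UCS Manager configurations.

```python
# Hypothetical sketch: before adding a VLAN to the UCS domain, verify that
# its ID is not already defined on the upstream core switches (e.g. Nexus
# 7K/9K). Overlapping VLAN IDs between UCS and the core is the scenario
# that can take the whole environment down.

def find_vlan_conflicts(core_vlans, new_ucs_vlans):
    """Return the VLAN IDs that already exist upstream, sorted."""
    return sorted(set(new_ucs_vlans) & set(core_vlans))

# Placeholder data for illustration only.
core_vlans = [10, 20, 30, 100]   # VLANs already configured on the core
new_ucs_vlans = [100, 200]       # VLANs planned for the UCS network

conflicts = find_vlan_conflicts(core_vlans, new_ucs_vlans)
if conflicts:
    print(f"Conflicting VLAN IDs, do not add: {conflicts}")
else:
    print("No VLAN ID conflicts found.")
```

A check like this can run as part of a change-control script before any VLAN is pushed to the UCS domain.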

I'd rate the solution ten out of ten. Without the DIMM problems, it would by far deserve a perfect ten.

Disclosure: I am a real user, and this review is based on my own experience and opinions.