What is our primary use case?
We started full-scale deployment in 2013, exclusively with Dell PowerEdge. We are now on our fourth generation of servers: we went from the 720 to the 730, the 740, and now the 760. I have all the models.
We use PowerEdge to connect directly to electronics and readout systems, so everything is deployed on-premises, on a private network with no connection to the outside.
I work in a research environment. We use these huge clusters of around 400 servers to connect to the electronics of the readout detectors, read the data out of the homemade front-end electronics, and then assemble the data files to be analyzed. It's about moving data between the electronics and the IT world of storage, grid analysis, etcetera. I'm the service manager only for the readout part, and that is already enough of a job.
How has it helped my organization?
Our servers are on a private network, completely disconnected from the outside world, so security is less of an issue than it would be for a front-end server exposed to the Wild West. The whole firmware handling that Dell does through OpenManage Enterprise or other tools from the iDRAC is also important: we get signed packages we can trust, that were tested and won't turn our servers into bricks. When I get a maintenance window, it's once every four or five months for four hours. I don't have time to do something, figure out that it doesn't work, and undo it. It has to work on the first try. Overall resilience is also important. I can install firmware and updates and trust that they will work.
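To make that concrete, here is a minimal sketch of how a firmware package can be staged through the iDRAC's standard Redfish UpdateService inside such a window. The iDRAC address, credentials, and image URI are placeholders I made up for illustration; the point is that the controller only accepts the signed, tested packages Dell ships.

```python
# Minimal sketch: applying a firmware image through the iDRAC's Redfish
# UpdateService during a maintenance window. Host, credentials, and the
# image URI below are hypothetical placeholders.
import requests

IDRAC = "https://idrac.example.internal"   # hypothetical iDRAC address
AUTH = ("root", "********")                # use a dedicated service account in practice

def simple_update(image_uri: str) -> str:
    """Ask the standard Redfish SimpleUpdate action to apply a firmware image."""
    resp = requests.post(
        f"{IDRAC}/redfish/v1/UpdateService/Actions/UpdateService.SimpleUpdate",
        json={"ImageURI": image_uri},
        auth=AUTH,
        verify=False,  # private network with internal certificates
    )
    resp.raise_for_status()
    # The returned job/task location lets us poll progress and confirm
    # success before the maintenance window closes.
    return resp.headers.get("Location", "")

if __name__ == "__main__":
    task = simple_update("http://repo.example.internal/firmware/bios-update.exe")
    print("Update task:", task)
```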
Dell PowerEdge Rack Servers are good for power consumption compared to other brands. When we buy our servers, we always take Platinum power supplies, which are the best quality. Maybe other power supplies consume more, but I can certainly compare our servers to other platforms and vendors, and they are absolutely on the low side. Now with the 16th generation that came out, there's another huge step in the right direction compared to the previous generations. In general, every time there's a new generation, I see that we get the same computing power for less electrical power. It moves in a good direction. We cut our energy consumption by half. This has saved us costs because we are running 300 days a year, 24/7, at full load. Thanks to PowerEdge we can keep our energy consumption lower than the competitors'.
In our case, we figured out how much computing power we needed and bought more from the start. Even so, I spend my days buying and plugging in additional memory, and we can increase network bandwidth by changing the onboard network options. The memory capacity you can plug in is absolutely impressive for the bigger form factors; we can go up to terabytes of memory without a problem. What also impresses me is that we can open the chassis and see how it's engineered inside, so we can easily access all the components and change them without hours and hours of dismantling. It is a product engineered from the bottom up.
PowerEdge Rack Servers' impact on our sustainability goals is important but hidden. Based in Switzerland, we have to abide by the Swiss sustainability laws, which now forbid wasting heat. If we build a data center now, we have to recuperate the heat and do something with it. We're no longer allowed to simply blow the energy out.
For example, we are building new data centers now that collect the heat; there's a residential area around them, and we heat the houses through hot-water heating. We are legally obliged to do that. The times when we put a chiller outside and blew the energy out are gone. That's the core context. When we started with PowerEdge, we were at 100 kilowatts for one room. We moved the old junk out, put the first PowerEdge servers in, and the power consumption went down to 80 kilowatts, so we recovered 20 percent while getting more computing. That was already more than ten years ago. By now, we have gone down to less than 50 kilowatts with the latest generation of PowerEdge. So we are going down in power consumption while at the same time providing more computing power to the users. This is very important because the infrastructure to recuperate the heat costs money as well. The less I consume, the less I have to pay to recuperate it afterward.
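A quick back-of-the-envelope calculation, using only the figures mentioned above (100 kW, 80 kW, under 50 kW, at 300 days a year of 24/7 full load), shows the order of magnitude involved:

```python
# Rough annual energy for one room at the power levels quoted in this review,
# assuming 300 days a year of 24/7 running at full load.
HOURS_PER_YEAR = 300 * 24  # 7,200 hours of running time

for label, kw in [("before PowerEdge", 100),
                  ("first PowerEdge generation", 80),
                  ("latest generation", 50)]:
    mwh = kw * HOURS_PER_YEAR / 1000
    print(f"{label:>28}: {kw:>3} kW  ->  ~{mwh:,.0f} MWh/year")

# Going from 100 kW down to roughly 50 kW saves on the order of
# 50 kW * 7,200 h = 360 MWh a year, before even counting the cost
# of the heat-recuperation infrastructure downstream.
```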
What is most valuable?
There are two valuable features for me. Number one, we are always at the edge of technology, and Dell is a great partner: we go to the HPC lab in Austin to test machines and platforms before they go commercial, so we can adapt our developments, because we plug homemade cards into the servers and have to make sure they're compatible. That's the development side.
On the operations side, the value of service is always underestimated: debugging, availability of spare parts, and the fact that the machines don't blow up weekly. The quality of the products, the monitoring, OpenManage Enterprise, ProSupport, TechDirect, self-dispatch of spare parts, etcetera, all of this allows us to run on the order of 500 servers, if you take the other clusters together, with about a tenth of a full-time equivalent. If you scaled that up linearly, one person could take care of 3,000 servers. That's quite impactful. Google published a figure of three people for 1,000 servers; we are doing one person for 3,000. That allows us to free human resources for other tasks. The last time somebody had to go to the data center for support was five months ago. We can do most functions remotely, so as long as there is no hardware fault, daily maintenance is done through the iDRAC. Its remote management capabilities are one of a kind. I don't know any other platform that offers the same level of manageability.
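As an illustration of what that remote daily maintenance can look like, here is a minimal sketch that polls each iDRAC's standard Redfish system endpoint and reports the rolled-up health. The host names and credentials are placeholders, not our real configuration.

```python
# Minimal sketch of a remote daily health check over the iDRAC Redfish API,
# instead of walking to the data center. Hosts and credentials are placeholders.
import requests

IDRACS = ["idrac-r760-01.example.internal", "idrac-r760-02.example.internal"]
AUTH = ("monitor", "********")

def system_health(host: str) -> str:
    """Return the rolled-up health of the system behind one iDRAC."""
    url = f"https://{host}/redfish/v1/Systems/System.Embedded.1"
    resp = requests.get(url, auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    status = resp.json().get("Status", {})
    return status.get("Health", "Unknown")

if __name__ == "__main__":
    for host in IDRACS:
        print(f"{host}: {system_health(host)}")
```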
What needs improvement?
The world market for CPUs is in an exciting phase. New ideas are popping up left and right, like new accelerator cards beyond GPUs. Dell is already working with Kalray, for example, which builds accelerator cards, but there are new FPGA cards available on the market that are not in the portfolio. The last time I looked was when buying the new 760 servers, because the situation changes every couple of months, and not all Intel CPUs were certified. The adoption speed of new CPU and GPU technologies has room for improvement.
However, coming from the technical side, I know this is a lot of work and it's difficult. I know they are working on it, and it's not something you do in five minutes, so we have to stay fair. It would be nice, but Intel and AMD now come up with new CPUs twice a year. In the past, there was a new CPU generation every two years, and now it's every six months. So the developers of the servers and all the devices and programs are running to keep up with that pace.
For how long have I used the solution?
I have been using Dell PowerEdge Rack Servers for 11 years.
What do I think about the stability of the solution?
Dell PowerEdge Rack Servers are stable.
What do I think about the scalability of the solution?
Dell PowerEdge Rack Servers are scalable.
How are customer service and support?
The support is efficient. We have access to the spare parts catalog, we can generate a diagnostic file and send it to them, and within two hours we have somebody on the phone. They are reactive, and problems are solved on the first attempt. We pay a little bit more for the initial investment, but over the lifetime of the equipment we have zero manpower costs to keep the servers running. On total cost over the lifetime of the equipment, we come out largely ahead compared to other brands, where we would have to pay a team of ten people to manage the equipment.
How was the initial setup?
During the deployment, the sheer number of rails to install could have been a disaster, so I had the whole team come over and install the rails for the PowerEdge servers. One colleague did an entire rack all by himself; he was so fascinated with it. From the mechanical aspect, it's extremely easy and solid.
The engineering of the chassis, down to the rails and how the server slides in, is well thought out. These are not rails purchased out of a catalog from Taiwan.
I use OpenManage Enterprise to control the whole cluster. Once we plug in the cable, the service tag is recognized, and the server deploys itself. From plugging in a new machine to installation-ready, if there are not too many firmware updates, takes ten minutes. It's easy, requires little manpower, and is fast.
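As a rough sketch of how that "plug it in and it shows up" flow can be checked from a script, the fragment below opens a session against the OpenManage Enterprise REST API and lists the service tags it has discovered. The URL, account, and the exact API paths are assumptions from memory; the API guide for your OME version is the authoritative reference.

```python
# Hedged sketch: list the service tags OpenManage Enterprise has discovered,
# so a newly plugged-in server can be confirmed without touching the console.
# The OME address, credentials, and API paths below are assumptions.
import requests

OME = "https://ome.example.internal"
CREDS = {"UserName": "api-user", "Password": "********", "SessionType": "API"}

def discovered_service_tags() -> list[str]:
    with requests.Session() as s:
        s.verify = False  # internal certificate on a private network
        login = s.post(f"{OME}/api/SessionService/Sessions", json=CREDS)
        login.raise_for_status()
        s.headers["X-Auth-Token"] = login.headers["X-Auth-Token"]
        devices = s.get(f"{OME}/api/DeviceService/Devices").json()
        return [d.get("DeviceServiceTag", "") for d in devices.get("value", [])]

if __name__ == "__main__":
    for tag in discovered_service_tags():
        print(tag)
```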
When we had to install five hundred servers in the racks, there was a discussion about whether we should call on Dell's professional deployment services, and we decided not to. With just two people, we could install 25 servers in two hours, and the whole job was done in two weeks.
What about the implementation team?
We do the implementation in-house.
What was our ROI?
We have the equipment itself, but Dell also provides the full software framework around it. That is crucial for monitoring what's going on, and it allows, for example, one person to have a view over thousands of machines. I have CloudIQ on my phone, so I can see what my servers are doing and predict failures, which means we can react before something explodes. That reduces downtime and increases happiness on the customer side.
As scientists, we want as much data as possible. There are no official numbers, but when the experiment is running we are in the millions per hour, so half an hour of downtime is costly. Not everything will be down, but if the data is unusable because one piece is missing, it is as if everything were lost.
What other advice do I have?
I would rate Dell PowerEdge Rack Servers ten out of ten. We never have problems, and if there is an issue, we get help and it is resolved quickly. I have never found myself sitting around, not knowing what to do or not getting help. With other big-brand vendors, I couldn't get a support person to look at an issue; those vendors always underestimated support and the follow-up with the customer afterward. In my experience, Dell is the best.
I also spend a lot of time on the advisory board of the Dell HPC Community, and I spend time with the developers. Ten years ago, the servers had two power supplies, both on the same side. Now, on the new ones, one of the power supplies has moved to the other side. That was something we injected into the development process seven years ago, because it makes installations in the rack much more efficient.
Which deployment model are you using for this solution?
On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.