It allows immediate access to server management and immediate visibility into the access logs.
It provides secure access to the console and reliable administration.
Implementing this solution has enabled us to manage equipment problems remotely and to review the status of errors without having to be on-site; we can do it from anywhere.
I would prefer improved compatibility between the existing blade servers and the new ones designed by HPE, as the top-tier version does not have it.
I have used this solution for seven years.
We have not had any problems with the implementation.
The technical support team has very good answers to our concerns and when cases are opened, escalations are done in a timely manner.
I have not used a different solution.
The solution was implemented by the provider and, as indicated, it was done in a simple way.
I'm the infrastructure manager. With regard to prices, they need to adapt to the current needs of the country. Licensing has always been timely, and it is a prompt solution.
The most valuable feature, of course, is its size as I can build a huge compute resource on it.
A couple of those HPE BladeSystem Enclosures can give you a stable and distributed compute resource for a virtual environment.
First of all, there should be a change in the disk bay. Currently, in the case of a disk failure, you need to remove the whole bay and, as a result, disconnect all the other disks.
I have used this solution for maybe more than four years.
I have encountered a major issue with VMware on Gen8. There is no support for NetQueue, which resulted in network issues with the VMs.
There were no scalability issues.
I was not satisfied with the support. It seems that the support team does not know their products in depth. Their main approach is to upgrade the firmware/drivers and replace the hardware, and they struggle to give any kind of technical explanation when resolving issues. That said, there were not many issues that went unaddressed by the support team, and I always received a solution one way or another.
I have used this HPE Enclosure as part of the design; we have been using this solution from the beginning and did not switch to it from any other solution.
The setup is not simple, but if the low-level design is correct, then it is a straightforward implementation.
Their licensing program is pretty simple.
We evaluated other products such as Dell and Cisco Blade Servers.
Pay attention to HPE's management solution, as it secures the management interfaces of the servers. You need to implement it correctly; otherwise, in the case of a failure, for example, an incorrect network configuration may result in complete loss of management.
I think the most valuable features that my management usually worries about are price, reliability, and its ability to be repaired and/or debugged.
What would make it better from my point of view is if HPE spent more time on testing with the actual built-in Red Hat Linux drivers, as opposed to always trying to say, "Use our driver."
The stability is pretty good.
It scales to where we need to go.
To say the technical support sucks would be understating it. The first-line and second-line support tend to give out stupid suggestions that are completely useless, and they don't listen to anything. It takes a lot of time to get through them, and that has been the case on every call I've had with them. I've got a very low expectation of HPE, and they've gone below it a few times.
Initial setup was relatively straightforward.
When it comes to the BladeSystem, what we love about it most is being able to actually manage it using OneView. It's one feature that allows us to fully manage all of our infrastructure using just one application.
We were able to deploy a lot of different operating systems, such as VMware, Red Hat Linux, Oracle, Oracle Solaris, and Microsoft Windows Server. All of these are fully supported within the HPE BladeSystem, which allows us to implement and deploy different operating systems using one HPE BladeSystem.
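As a sketch of what that single-pane management can look like in practice, here is a minimal PowerShell session using HPE's POSH-HPOneView library; the appliance hostname is a placeholder, and cmdlet names vary slightly between library versions:

```powershell
# Minimal sketch, assuming the POSH-HPOneView module is installed
# (cmdlet prefixes differ between library versions).
Import-Module HPOneView.410

# Placeholder appliance address; prompts for credentials interactively
$cred = Get-Credential
Connect-HPOVMgmt -Hostname "oneview.example.local" -Credential $cred

# Inventory the enclosures and blades that OneView manages
Get-HPOVEnclosure | Format-Table name, status, serialNumber
Get-HPOVServer    | Format-Table name, model, powerState, status

Disconnect-HPOVMgmt
```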
I would like OneView to go over the current limit of 40 instances.
It's very, very stable. We've got over 40 HPE BladeSystems, and so far we've had very, very few hardware problems. Whenever we have a hardware problem, HPE calls us right away, and somebody works on the problem within four hours of our generating a call for any type of hardware or software issue.
You cannot really scale a BladeSystem by itself. But using it in conjunction with VMware, we are able to upgrade to a higher CPU or more memory on a virtual machine, or move a virtual machine to a different blade that has a higher CPU and more memory. In that sense, using other software, scalability is very good.
Technical support is very good. I've opened a lot of calls over the web or by phone with HPE, and I would say that 99% of the time, they respond to the ticket within an hour of opening an issue.
There is no complication at all when setting it up, either setting it up as an experienced user like myself or having HPE set it up for you using their services. No problem at all.
We had been using different hardware vendors. We selected HPE due to the cost of the hardware, the scalability of the hardware, the different models that can be inserted or interchanged in a chassis, and the ease of deployment. That's how we selected HPE BladeSystem. We also considered Dell, Cisco, IBM, and Oracle.
It's because we've been using it for so many years now; it's been very reliable for us. I would say consult your hardware vendor and discuss your needs with them. Sit down with them, explain what services you need, and decide together. That's how I would put it.
Before we introduced the solution, we had 24 cabinets, filled with classic rack servers. We had continuous issues with cooling capacity, power consumption for the data center, high availability, and redundancy.
After implementing the BladeSystem environment, we went down to four cabinets only for servers, since it's a perfect platform to host a high-end VMware farm.
Coupled with HP 3PAR SAN devices and peer persistence, I managed to create a 99.99999% uptime environment.
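To put that figure in perspective, seven nines of availability leaves only about three seconds of unplanned downtime per year. A quick back-of-the-envelope check (my arithmetic, not the reviewer's):

```powershell
# Downtime budget implied by "seven nines" (99.99999%) availability
$availability    = 0.9999999
$secondsPerYear  = 365.25 * 24 * 3600        # ~31,557,600 seconds
$downtimeSeconds = (1 - $availability) * $secondsPerYear
"{0:N2} seconds of allowed downtime per year" -f $downtimeSeconds   # ~3.16
```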
We have enjoyed a 500% increase in price/performance compared to several years ago.
I have used it since 2011.
We have not had any stability issues. We haven't had one instance of downtime due to hardware issues of the BladeSystem itself.
We have not encountered any scalability issues. It's extremely scalable. If you run out of resources, just get another blade server and you've added another x amount of RAM and CPU to your environment.
Technical support is good and quick. The engineers sometimes need to consult with experts. I wish the experts would be the front-line support.
This is the first time we have used BladeSystems.
The initial setup was complex because of our HA requirements. The installation of the BladeSystem itself is easy and straightforward.
The modules are hot-pluggable. OA and iLO are easy to configure.
The most complex part was configuring the Virtual Connect module with VLAN tagging, shared uplink sets, and general network configuration.
The web UI is good, but it lacks tips and it's a bit complicated.
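For anyone facing that same Virtual Connect networking step, the objects involved (tagged Ethernet networks and shared uplink sets) can also be scripted rather than clicked through. A hypothetical sketch using HPE's POSH-HPOneView library, applicable where the enclosure is managed through OneView rather than standalone Virtual Connect Manager; all names, VLAN IDs, and port choices are illustrative:

```powershell
# Hypothetical sketch: define a tagged Ethernet network and attach it to a
# shared uplink set on a logical interconnect group. Assumes an existing
# OneView connection (Connect-HPOVMgmt) and the POSH-HPOneView module.
New-HPOVNetwork -Name "Prod-VLAN100" -Type Ethernet -VlanId 100

$lig = Get-HPOVLogicalInterconnectGroup -Name "VC-LIG"
New-HPOVUplinkSet -Resource $lig -Name "SUS-Prod" -Type Ethernet `
    -Networks (Get-HPOVNetwork -Name "Prod-VLAN100") `
    -UplinkPorts "BAY1:X5","BAY2:X5"
```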
For first-time users: only buy two BladeSystems and fill them up. They are expensive. Apart from that, you get more than you paid for.
We didn't evaluate others, as we were forced to buy this solution by governmental policies. We are part of the Ministry of Health.
Get to know the product. Spend time studying its ins and outs.
You will be surprised by its capabilities. I would not recommend a touch-and-go strategy, since that won't bring the systems to optimal capability.
Modular Design: Everything is modular and redundant. Nothing is built in, from the PSUs to the fans to the modular VC and SAN modules.
I especially value the higher consolidation ratio.
This server comes with up to 2TB of memory, which allows us to run more virtual machines on a single server. We can leverage it for a higher consolidation ratio.
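For a rough sense of what that memory headroom buys (assumed per-VM sizes, not the reviewer's figures):

```powershell
# Rough consolidation illustration with assumed per-VM memory sizes
$hostRamGB = 2048   # 2TB of host RAM
foreach ($vmRamGB in 8, 16, 32) {
    "{0,2} GB per VM -> ~{1} VMs per blade" -f $vmRamGB, [math]::Floor($hostRamGB / $vmRamGB)
}
# Prints roughly 256, 128, and 64 VMs, before CPU and failover headroom
```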
It would be better if the boot time during POST were reduced.
I have used it for eight years.
I did not encounter any stability issues. It’s a good product.
I did not encounter any scalability issues.
I would give them a rating of 9/10.
We used to deploy IBM Blade Servers. The switch was due to company policy, although IBM products are also good.
The setup is quite easy once you configure the Integrated Lights-Out (iLO) server.
It depends on the order size of other services we select during the procurement phase.
We evaluated Cisco UCS.
It’s advisable to use FlexFabric Interconnect for a converged network.
I really value the FlexFabric interconnects.
As an architect, I introduced HPE BladeSystem to boost performance per server footprint, especially with VMware virtualization.
The storage blades could be improved.
I have used it for eleven years.
There were stability issues in the early versions, Blades G1.
I did not encounter any scalability issues.
I would give technical support a rating of 8/10.
We used rack-mounted servers.
The initial setup was straightforward using a wizard.
The customer has to decide and evaluate the tradeoff between CAPEX and OPEX.
We evaluated IBM's blade system.
Examine your infrastructure KPIs. This will typically include analyzing a reduction in OPEX, ease of operation, ease of troubleshooting, decreased cabling, and increased performance per footprint.
HPE BladeSystem c7000 is a complex piece of engineering.
You can view a BladeSystem like a modern car. The first thing you see is the body and the glossy paint, but under the hood, a BladeSystem is essentially a group of servers (multi-core processors, RAM, buses, storage, etc.), with redundant variable-speed cooling blowers, redundant power supplies, a large set of redundant connectivity options, and a big array of temperature and power-consumption sensors, all of them connected to and administered from a redundant administration module with many configuration parameters that you can arrange in any number of ways to satisfy many different requirements.
Every one of these modules is an appliance (a complete computer in itself), and you can have a duplicate of the OA and the VC purely for redundancy and high-availability purposes.
Given this brief description, you would probably agree that this is a complex architecture.
But the most attractive part is that you deal with this complexity through a web portal that concentrates all of the configuration options, eases these tasks, and guides the user with several wizards.
To manage all the parameters related to the enclosure or chassis, security access, and monitoring, you log in to the Onboard Administrator (OA) module.
To manage every aspect of LAN or SAN connectivity to the server blades, you have to jump into the Virtual Connect (VC) module; but don't despair, there is a hyperlink from the OA that opens the VC portal, giving you seamless navigation between the modules.
Last but not least, you have the blade servers themselves. You can have up to sixteen of them, each with processors, memory, an out-of-band management processor (Integrated Lights-Out, or iLO), and I/O cards (NICs, HBAs, CNAs, etc.).
All of these components carry their own firmware (BIOS, NIC firmware, power regulator firmware, HBA firmware, iLO firmware, Onboard Administrator firmware, Virtual Connect firmware, etc.), and you need to resolve incompatibility issues among all of them.
The best part is that HPE gives you a utility (HP Smart Update Manager) that can manage all of that firmware in a consolidated way.
HPE works hard to provide centralized administration and a good experience with the software, and if you are an advanced user, you can also use an add-on to access all the configuration parameters using PowerShell (the administrative scripting language that comes with every Windows operating system).
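As an illustration of that PowerShell route, HPE has shipped Onboard Administrator cmdlets as part of its Scripting Tools for Windows PowerShell. A minimal sketch, assuming the HPOACmdlets module; the OA address and credentials are placeholders, and the exact cmdlet and parameter names may differ between module versions:

```powershell
# Minimal sketch, assuming HPE's HPOACmdlets module; the OA address and
# credentials are placeholders, and cmdlet names may vary by version.
Import-Module HPOACmdlets

$conn = Connect-HPOA "oa.example.local" -Username "Administrator" -Password "placeholder"

# Query enclosure health and per-bay blade status through the OA
Get-HPOAEnclosureInfo -Connection $conn
Get-HPOAServerStatus  -Connection $conn -Bay "all"

Disconnect-HPOA -Connection $conn
```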
The first goal was to use blades. This stemmed from a space problem in our data center. We needed to add more servers, but space ran short quickly. Our first consolidation approach was blade servers.
We have been using this solution since 2008. We began by mounting an 8-node VMware cluster: one enclosure, a cabinet with 16 blade servers. We now have more than 18 of them distributed in different locations around the world.
We did not have any stability issues. The quality of the server itself enhances stability. Once the server is running, it runs for a long time.
We don’t have issues with computer power scalability. We just add more blades, configure, install and go, or add more memory to an existing blade.
HPE also supports mixing several blade models in the same cabinet.
You can have, for instance, BL460c G7, BL460c Gen8, and BL460c Gen9 working smoothly in the same C-Class enclosure.
In my country, the level of support is quite good. I recommend that you buy the server with a three- or five-year Care Pack to receive the manufacturer's warranty.
I used rack form factor servers and switched to blades to gain a better consolidation ratio. I also wanted better management control over the hardware infrastructure.
The setup is easy if you install only one cabinet, and you know what to do and what to expect from the platform.
When you plan to grow your infrastructure to more than 16 blades, it becomes a little bit complex. You need to think about how to manage Virtual Connect Domains, MAC virtualization, and WWN virtualization.
If you design your platform based on that, everything will go fine. You will know what to do when a problem arises.
We have only used HPE infrastructure. Prior to blades, we were using standalone rack form factor servers like the DL380 model.
If I were brand agnostic, I would probably select Cisco UCS, but it didn't exist when we decided to use HPE blades.
Now, with Synergy Composable Systems, HPE probably takes a leap forward in technology and puts itself at the forefront. Keep that technology in mind.
