It is flexible. You can build and deconstruct your workloads through software and programming.
It increases efficiency and frees up time for IT personnel to do their work. You build it once and can use it forever, or just modify it slightly.
It saves time when designing, installing, and managing workloads.
The interface seems quite broad and flexible. You can use any programming language or an enterprise tool like Puppet or Chef. It is also agnostic in terms of the virtualization layer, so it is very flexible.
Our IT department will be more efficient, with faster time to market and the ability to respond to the business faster than before.
The software-defined infrastructure opens up its API to enterprise products such as Chef, Puppet, Ansible, and Docker. As I mentioned before, the ability to support hardware other than HPE would make it a more universal product.
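To give a sense of what that API openness looks like in practice, here is a minimal sketch using the HPE OneView REST API that sits behind Synergy; the appliance address, credentials, and API version are assumptions for illustration, and this same interface is what tools like Chef, Puppet, and Ansible drive.

```python
# Minimal sketch (assumed details): querying the composable infrastructure
# through the HPE OneView REST API behind Synergy.
import requests

ONEVIEW = "https://oneview.example.local"   # hypothetical appliance address

# Authenticate and obtain a session token.
login = requests.post(
    f"{ONEVIEW}/rest/login-sessions",
    json={"userName": "administrator", "password": "example-password"},
    headers={"X-API-Version": "800"},
    verify=False,
).json()

headers = {"Auth": login["sessionID"], "X-API-Version": "800"}

# List the server profiles, i.e. the workloads composed from templates.
profiles = requests.get(
    f"{ONEVIEW}/rest/server-profiles", headers=headers, verify=False
).json()
for profile in profiles.get("members", []):
    print(profile["name"], profile["status"])
```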
The modular infrastructure is part of the converged system for scalability. If all of us are sharing the infrastructure, then discovery happens automatically, which increases the efficiency of the IT department. The dream of any IT staff member is to no longer worry about firmware. That is one of the high points: it is all embedded in the templates. You don't need to worry about firmware, as it is part of the workload.
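As a concrete picture of what "firmware embedded in the template" means, here is a hedged sketch of creating a server profile template that pins a firmware baseline; the field names follow the OneView server-profile-templates resource as I understand it, and every URI, name, and token below is a placeholder rather than something from a real environment.

```python
# Sketch of a server profile template that carries its own firmware
# baseline, so the firmware travels with the workload. All URIs, names,
# and the session token are placeholders.
import requests

template = {
    "name": "web-tier-template",
    "serverHardwareTypeUri": "/rest/server-hardware-types/<id>",
    "enclosureGroupUri": "/rest/enclosure-groups/<id>",
    "firmware": {
        "manageFirmware": True,
        "firmwareBaselineUri": "/rest/firmware-drivers/<spp-id>",
        "firmwareInstallType": "FirmwareAndOSDrivers",
    },
}

requests.post(
    "https://oneview.example.local/rest/server-profile-templates",
    json=template,
    headers={"Auth": "<session-token>", "X-API-Version": "800"},
    verify=False,
)
```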
I would like to see support for other vendors’ hardware.
It is still too early to tell, but so far, stability has been good.
Scalability seems very promising, due to the architecture. We only have one frame, but from what I understand, having gone through a lot of training, it seems like it's going to be a good, scalable product.
We haven’t used technical support yet.
We knew that we needed to invest in this solution because of the way the industry is going. You need to transform development in your IT department. They need fast solutions. They need new development. They need a way to get their code and their workloads running fast.
Prior to this, our customers had to install the OS and middleware themselves to get an application running. With this solution, all of this can be imaged once and streamed to the hardware, and it's done in minutes instead of days or weeks.
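Here is a rough sketch of what that "imaged once, done in minutes" provisioning step can look like when scripted: a new profile is stamped out of the pre-built template and the asynchronous task is polled until the hardware is ready. The field names, URIs, token, and polling details are illustrative assumptions, not our exact workflow.

```python
# Rough sketch: stamping a new server out of the pre-built template and
# waiting for the asynchronous provisioning task to finish. All names,
# URIs, and the token are placeholders.
import time
import requests

ONEVIEW = "https://oneview.example.local"
HEADERS = {"Auth": "<session-token>", "X-API-Version": "800"}

response = requests.post(
    f"{ONEVIEW}/rest/server-profiles",
    json={
        "name": "app-node-01",
        "serverProfileTemplateUri": "/rest/server-profile-templates/<id>",
        "serverHardwareUri": "/rest/server-hardware/<bay-id>",
    },
    headers=HEADERS,
    verify=False,
)

# The POST is asynchronous; the Location header points at the task to poll.
task_uri = response.headers["Location"]
task_url = task_uri if task_uri.startswith("http") else f"{ONEVIEW}{task_uri}"
while True:
    task = requests.get(task_url, headers=HEADERS, verify=False).json()
    if task["taskState"] in ("Completed", "Error", "Terminated"):
        print("Provisioning finished:", task["taskState"])
        break
    time.sleep(30)
```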
When choosing a vendor, I look for the reliability of the technology. I look for openness in the solution, and I find out who has worked with the vendor in the past so I can get recommendations from other customers.
The initial setup was straightforward.
I have not evaluated other vendors. For me, HPE is in my DNA.
I would highly recommend learning and reading about it. They should ask the vendor to come and explain it, or I can explain it, as I am certified to do so. In today's world, there isn't a better solution for creating a private cloud. Getting this solution is a no-brainer.
I wanted to post an update.
As technology moves forward, copper and two-strand fiber Ethernet cables should have 10/25 Gbps as the minimum speed, with auto-sensing solutions. Since finding auto-sensing optics is proving to be a problem, even falling back to manual configuration at 10 or 25 Gbps would mean designing the blades for 25 Gbps now, with 50 Gbps by 2020, and providing options for 12- or 24-strand OM4 fiber connectors that would allow 10, 25, or 50 Gbps over two-fiber links while offering 40, 100, and 250 Gbps uplinks by 2020. I would also add a focus on NVMe over Fabrics to expand storage beyond the blade at faster speeds than normal storage solutions support.
Between 2022 and 2025, the chassis should make power and fabric connections easier, with the fabric possibly being Gen-Z based. Gen-Z may require cable plants to be single-mode and may use a different mechanical connector, justified by offering eight times the speed of the PCIe v3 we use today and by being a memory-addressable fabric rather than just a block/packet-forwarding solution.
The biggest issue to me in blades is lock-in, as the newest tech and most options ship in rack configurations, not in the OEM (think HPE or Dell) blade form factor. While the OEMs are at risk of being displaced by commodity gear from the ODMs (who supply the OEMs) using components specified by the Open Compute Project (OCP), the impact of CPU flaws could trip up the industry. Some ARM vendor may step in with a secure, low-cost container compute platform in an OCP-compliant form factor using Gen-Z to build compute and storage fabrics that are software-defined by design.
In 2016, the two-socket server was the most-shipped server worldwide, but 60% of them shipped with only one CPU installed. By 2020, the core counts from Intel and AMD should make it a world where 90% of systems shipped will be single-socket systems. The high CPU capacity and PCIe v5 or Gen-Z will more radically change what we will be buying at the beginning of the next decade, which makes buying a blade enclosure today that you want to get five to eight years of functional life out of like testing the law of diminishing returns. While the OEM may provide support and pre-2022 parts, post-2022 you will be frozen in technology time. So while an enclosure fully populated with 2019 gear may provide value, any empty slots will be at risk of becoming lost value.
While I wait for better blade enclosures designed for the problems of the next decade, not the last decade, I think that buying rack-mount servers is the best way for enterprises that fund capacity on a project-by-project basis to bridge this gap between blade value and blade design limitations. Since the cost of rack servers can be charged directly to each project, the re-hosting and refactoring onto the next great hosting concept in the next decade will be easier to account for, while minimizing the orphaned, lagging systems that tend to move more slowly than the rest of the enterprise.