We are running 3,000 VMs spread out over five such units.
The initial Unity x50 series, even the all-flash models, quickly drove the CPU to near 100% on as little as 180 TB with data reduction enabled on all volumes.
The XT came in to address this and sizes better for our infrastructure thanks to its extra CPU power. The system does not appear to have a dedicated module to offload data reduction, but in the end it does a great job of delivering data reduction at higher capacities without oversubscribing the CPU (%). In the end, the SSD media cost more than the array/storage processors, so basically you want to reduce data as much as the system can take.
The good built-in monitoring tools under the System|Performance tab are valuable. From CloudIQ you can reach into vCenter as well. ESRS (Call Home) is valuable on the service-delivery side.
Remote code update support (interactive or not) is free of charge if you want it; nonetheless you are free to do updates yourself, as they are cumulative and carried forward into each new code level.
The uemcli is not an object-oriented CLI, and the more object-rich PowerCLI has been discontinued. Only people with bash experience can realistically operate it, and even then, feeding objects from one command into another remains a burden with such a CLI. When adding a few disks to a cluster, the CLI effectively queues one disk at a time: the first disk is presented to all hosts and each member host is rescanned multiple times before the second disk is processed and all hosts are scanned yet again. One could instead add all disks at once and queue up only a single rescan across all hosts.
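Purely as an illustration of that "add everything first, rescan once" approach, a minimal pyVmomi sketch could look like the following (the vCenter address, credentials, and cluster name are placeholders, not our environment):

```python
# Sketch: after ALL new LUNs have been masked to the cluster on the array side,
# rescan each ESXi host exactly once instead of once per LUN.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use proper certificates in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "ProdCluster01")

for host in cluster.host:
    storage = host.configManager.storageSystem
    storage.RescanAllHba()   # pick up the newly presented LUNs
    storage.RescanVmfs()     # refresh VMFS volumes

Disconnect(si)
```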
There is no way to create volume groups or host groups, a feature every other solution I have worked with so far offers. Without it, it is a burden to ensure each LUN gets the same LUN ID on every host. The June 2021 release, code OE 5.1, finally seems to offer host groups!
===> Review 01/2023: Unity OE 5.1 came out with the notion of a host group
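Until host groups arrived, we had to verify LUN ID consistency ourselves. Purely as an illustration, a check along these lines could be scripted against the REST API (the array address, credentials, and the hostLUN resource fields are my assumptions from reading the Unisphere Management REST API reference):

```python
# Sketch: flag LUNs that are mapped with different HLUs on different hosts.
from collections import defaultdict
import requests

ARRAY = "https://unity.example.local"   # placeholder
session = requests.Session()
session.auth = ("admin", "***")
session.headers.update({"X-EMC-REST-CLIENT": "true", "Accept": "application/json"})
session.verify = False  # lab only

resp = session.get(f"{ARRAY}/api/types/hostLUN/instances",
                   params={"fields": "host,lun,hlu"})
resp.raise_for_status()

hlus_per_lun = defaultdict(set)
for entry in resp.json()["entries"]:
    c = entry["content"]
    hlus_per_lun[c["lun"]["id"]].add(c["hlu"])

for lun_id, hlus in hlus_per_lun.items():
    if len(hlus) > 1:
        print(f"LUN {lun_id} is mapped with inconsistent HLUs: {sorted(hlus)}")
```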
The integration with vCenter comes with a side effect: it takes control of the vSphere scan process, and every ESXi host ends up being scanned multiple times. It easily takes a few hours to add a few LUNs to a few hosts, which is rather painful. Even adding LUNs manually through the Unisphere GUI, you can keep up with the pace of your script.
Support responsiveness and time to fix bugs should be improved. Over the past 1.5 years we had occasional controller reboots, and we went all the way from OE 4.5 through 5.0.2 and 5.0.3 to 5.1 and 5.2.1, eliminating the most common causes. We still face a stress-triggered cache-merge issue; although we provided the dumps and engineering acknowledged the bug, we were told that addressing it requires substantial code rewriting and that it will be fixed in the next major code release (OE 6.x). Two years later there is still no fix, but fortunately we only hit the condition occasionally, among other bugchecks.
===> Review update 01/2023
There was also a Storage Processor panic condition that could surface once a certain uptime in days had been reached. We had two such crashes, and the uptime of our five units (ten controllers) showed they had all passed that threshold, so the remaining eight controllers could potentially crash as well. Without much explanation of the cause (typical for Dell EMC), it looks like a memory-leak issue to me. We decided to reboot them all as a quick response and to patch them later in a more convenient maintenance window.
It was only this summer that the issue was formalized and made public, listed as DTA 205836: Dell Unity: Storage Processors Running 5.1.X Code May Panic After 275-300 Days of Runtime (User Correctable)
All Unity systems running Unity Operating Environment (OE) version 5.1.X, but primarily Unity XT systems (480, 680, or 880, including F models), may experience SP panics after 275-300 days of runtime.
===> Review 29.01.2025: codes 5.3 & 5.4 evaluation
We have had no more controller reboots, neither on the 650F nor on the 880F arrays. We use the solution solely for block/FC. All around, it has been a very viable solution for our (exclusively) block workloads for five years. We are about to decide on refreshing our five arrays towards the newer PowerStore; we can only hope it delivers what Dell EMC delivered with this "Unity". We never had a complete outage (only single-node panics) across five such arrays, the oldest of which recently went into its sixth year.
I discovered that the REST API has a lot to offer: there are more metrics available there than in Unisphere and APEX Observability (formerly CloudIQ, i.e. the performance collections and alerts from your infrastructure presented remotely).
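As an example, a short script along these lines can dump the full metric catalogue the array exposes (the field names are my assumptions based on the Unisphere Management REST API reference; the address and credentials are placeholders):

```python
# Sketch: list every metric path the array advertises over its REST API --
# typically far more than what the Unisphere or CloudIQ/APEX charts show.
import requests

ARRAY = "https://unity.example.local"   # placeholder
session = requests.Session()
session.auth = ("admin", "***")
session.headers.update({"X-EMC-REST-CLIENT": "true", "Accept": "application/json"})
session.verify = False  # lab only

resp = session.get(f"{ARRAY}/api/types/metric/instances",
                   params={"fields": "path,description,isHistoricalAvailable"})
resp.raise_for_status()

for entry in resp.json()["entries"]:
    c = entry["content"]
    print(f'{c.get("path")}  (historical: {c.get("isHistoricalAvailable")})')
```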
I have been using Unity (XT) for 15 months.
OE 5.0.3 was a rather mature and stable code, without claiming it addresses everything. Some bugs are stress/load triggered and rather exceptional, but can easily recur if the same conditions are met again.
===> Review 01/2023
Code 5.1 brought no improvement, quite the contrary. Initially there was also an issue with Veeam: Dell EMC unilaterally deprecated some commands, which meant Veeam could no longer interface with the array for storage-based snapshots of ESXi VMs. A code/OE release came out specifically to address this, but it took a while; likewise, Veeam's replacement for their integrated and now deprecated UEM CLI interface took even longer to accommodate Dell Unity product engineering changes.
Code 5.1 flaw: all Unity systems running Unity Operating Environment (OE) version 5.1.X, but primarily Unity XT systems (480, 680, or 880, including F models), may experience SP panics after 275-300 days of runtime (DTA 205836, as described above).
The XT scales better than its predecessor.
===> Review 01/2023
The Unity product will not survive, as its own sibling in the same low-end midrange, PowerStore, is in all aspects a better product (latency- and scalability-wise).
Support is not the most responsive; we now have a Service Account Manager and regular reporting in place and keep the pressure on to get answers. Post-bug/incident follow-up is very poor: you only get a record that the problem is known or has been recorded by engineering, but not when it gets corrected, nor any projected date for a fix.
Some fixes require substantial rewriting, others are fairly simple, but again, once the bug is recorded you are not proactively informed afterwards; you need to read the release notes for fixes and hope your issue is listed.
Despite this, bugs are luckily rare considering these units run 24 hours a day, 365 days a year. Some arrays get a clean sheet, though not every year; another faces a reboot once or twice a year. All considered, it is as Dell states: it is surely a rare bug, and you have two nodes instead of one for such issues.
The system is easy to install and you might be able to do it on your own on the 2nd attempt.
The first three systems were set up by a reseller, the Unity XT by myself. It's rather straightforward if you have FC/block or Ethernet/NFS storage array experience.
Ease of use, ease of setup, price/quality, vSphere integration, remote CloudIQ data (performance data & alerts), and ESRS/SCG (Call Home integration).
The setup is rather straightforward.
I have compared Unity x00/x50f versus Unity XT x80f.
A midrange solution for SMBs and, if you spread the load across many units, for large enterprises too.