Disaster recovery to a secondary datacenter. To make that possible, all of our server infrastructure runs on Simplivity.
We now have much lower RPOs: from hours down to seconds. It is also much simpler for us to deploy additional workloads, since we no longer have to worry about storage provisioning. The Simplivity datastores are so efficient that they never seem to fill up, or at least not nearly as fast.
Data replication, backup, and recovery of VMs and disks. Data replication happens in the background once the policy is set and takes only a few seconds. Inline deduplication makes this possible: only blocks that have not already been replicated are written, which is extremely efficient.
We use the Omnicubes to replicate our data to a second datacenter.
By keeping our company data on the Omnicubes, we ensure that all of it is constantly replicated to the remote site within the defined intervals. This is why we chose the Omnicubes: we are able to replicate our data in a very simple manner (it happens in the background) and comply with our business needs.
The Simplivity Omnicube replicates data in a simple and reliable manner. There is nothing for the IT admins to do and very little to monitor, as the technology is very stable. Replication is far faster than expected, thanks to deduplication and compression. We currently see a 3.4:1 deduplication ratio and a 1.5:1 compression ratio.
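To see why those ratios matter for replication speed and footprint, here is a minimal sketch of how the two factors combine. The 3.4:1 and 1.5:1 figures are from our environment; the 10 TB logical data size is a hypothetical workload chosen purely for illustration.

```python
# The two ratios multiply: each unique block that survives deduplication
# is then also compressed before it hits disk or the replication link.
dedup_ratio = 3.4
compression_ratio = 1.5

overall_efficiency = dedup_ratio * compression_ratio  # about 5.1:1

logical_data_tb = 10.0  # hypothetical pre-efficiency data size
physical_footprint_tb = logical_data_tb / overall_efficiency

print(f"Overall efficiency: {overall_efficiency:.1f}:1")
print(f"Physical footprint for {logical_data_tb} TB logical: "
      f"{physical_footprint_tb:.2f} TB")
```

The same factor applies to replication traffic, which is why a policy run moves only a fraction of the logical data.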
When we manually back up (or automatically replicate by policy) a VM to the remote location, the job completes within seconds, or a few minutes in the worst case. The storage footprint is minimal. No VMware snapshots are involved in this technology (snapshots can be left over and create issues, as can happen with other backup technologies).
As a bonus of using the Omnicubes, we have discovered that we can now perform almost instantaneous recovery of VMs. Cloning a VM takes a few seconds (even across sites), and the same applies to recovering VMs or single disks (VMDKs). On one occasion we needed to recover a 2-TB partition of a file server; we completed the recovery in about 5 seconds, with the data recovered from the remote location. We could basically move our entire workload across datacenters in a few minutes, if needed.
We definitely want to see more of the CLI commands exposed in the GUI, and it is a legitimate question whether we will be happy with the integration in the vSphere Web Client, which is awfully slow. While that is VMware's responsibility (having killed the C# client), the question matters because the client is what you need to restore your data in the end, and in such a situation you have no time to waste.
No. When scaling, you need to consider that scaling a hyperconverged infrastructure is different from scaling traditional server stacks, because you are essentially tied to adding one building block that brings server, CPU, RAM, and disk all at once. In a traditional stack you would watch each component constantly and scale them up independently. Another aspect is that the indicator telling you when it is time to scale might differ from your expectations. In the past, we scaled traditional stacks when the storage was getting full. After implementing Simplivity, my indicator is now disk latency. The storage itself will almost never get full, but after adding additional workloads for 2 years I have learned that even though the disks are not full, you want to watch certain thresholds in your disk latency, or in some cases RAM availability on the appliances.
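The indicator shift described above can be sketched as a simple threshold check: instead of alerting on free capacity, alert on disk latency and RAM headroom. `needs_scaling` and both threshold values are hypothetical illustrations, not vendor guidance; in practice the inputs would come from whatever monitoring source you use (vCenter performance stats, SNMP, etc.).

```python
# Illustrative thresholds only; tune to your own environment.
LATENCY_THRESHOLD_MS = 20.0    # sustained latency above this -> plan to scale
RAM_FREE_THRESHOLD_PCT = 15.0  # free RAM below this -> plan to scale

def needs_scaling(latency_ms: float, free_ram_pct: float) -> list:
    """Return the reasons (if any) why it is time to add a node."""
    reasons = []
    if latency_ms > LATENCY_THRESHOLD_MS:
        reasons.append(
            f"disk latency {latency_ms} ms exceeds {LATENCY_THRESHOLD_MS} ms")
    if free_ram_pct < RAM_FREE_THRESHOLD_PCT:
        reasons.append(
            f"free RAM {free_ram_pct}% below {RAM_FREE_THRESHOLD_PCT}%")
    return reasons

# A node with rising latency but plenty of RAM still triggers a scale-out plan.
print(needs_scaling(latency_ms=28.0, free_ram_pct=30.0))
```

The point is that free disk space never appears in the check: with dedup and compression, capacity is rarely the bottleneck, so it makes a poor scaling signal.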
Customer Service:
The level has been very good so far. I know that after HPE took over, some users encountered delays in being served; this did not happen to me. Update: it's a pity that email support is no longer available. Now we have to dial in to create a support case. Before the acquisition, creating a support case was a breeze. I hope HPE will fix this.
Technical Support:
Excellent. Update: well, good to very good; there is room for improvement.
We did not have disaster recovery in place before.
Not at all. Setup takes a couple of hours, and then it's important to have a guideline about which data replication (data protection) policies you want to have in place. Once they are defined (a matter of minutes), it's done.
All-in license, simple and fair.
Nutanix, and other traditional storage/server architecture options.
Focus on right-sizing your application tier. The solution is very simple to administer.