Share your experience using Cohesity C6000 Series

Examples of the 83,000+ reviews on PeerSpot:

PeerSpot user
CTO Enterprise Cloud at Amanox Solutions (S&T Group)
Real User
Top 5 Leaderboard
What you might not know about Nutanix that makes it so unique
Pros and Cons
  • "Nutanix has several unique capabilities to ensure linear scalability."
  • "There is a need is to be able to consume Nutanix storage from outside the cluster for other, non-Nutanix workloads."

What is our primary use case?

As a systems integrator, we have used Nutanix daily since 2013 as our main, strategic, and only infrastructure solution for virtualization and its related storage component. We can cover most use cases today on Nutanix, including VDI, server virtualization, big data, and mission-critical workloads.

How has it helped my organization?

As a systems integrator, we find that Nutanix offers a highly standardized solution that can be deployed in a timely fashion compared to legacy three-tier, generation-one converged, and most competing hyper-converged solutions. This allows us to move quickly with a small team of architects and implementation specialists, even on large projects.

What is most valuable?

Some years ago, when we started working with Nutanix, the solution was essentially a stable, user-friendly, hyper-converged solution offering a less feature-rich version of what is now called the distributed storage fabric. This is what competing solutions typically offer today, and for many customers it isn't easy to understand the added value Nutanix offers in comparison to other approaches (I would argue these capabilities should in fact be a requirement).

Over the years Nutanix has added lots of enterprise functionality such as deduplication, compression, erasure coding, snapshots, (a)synchronous replication, and so on. While these are very useful, scale extremely well on Nutanix, and offer VM-granular configuration (if you don't care about granularity, simply apply them cluster-wide by default), it is other, perhaps less obvious features, or rather design principles, that should interest most customers a lot:

Upgradeable with a single click

This was introduced a while ago, I believe around version 4 of the product. At first it was mainly used to upgrade the Nutanix software (Acropolis OS, or AOS), but today we use it for pretty much everything: the hypervisor, the system BIOS, disk firmware, and even sub-components of AOS. There is, for example, a standardized system check (around 150 checks) called NCC (Nutanix Cluster Check) which can be upgraded throughout the cluster with a single click, independent of AOS. The one-click process also supports granular hypervisor upgrades, such as an ESXi offline bundle (which could be a patch release).

The Nutanix cluster then takes care of the rolling reboots, vMotion, and so on in a fully hyper-converged fashion (e.g. it never reboots multiple nodes at the same time). Compared to a traditional three-tier architecture (including converged generation 1), this is a much simpler and well-tested workflow, and it is what you use by default. And yes, it runs automatic prechecks and ensures that what you are updating is on the Nutanix compatibility matrix.

It is also worth mentioning that upgrading AOS (the complete Nutanix software layer) doesn't require a host reboot, since AOS isn't part of the hypervisor but is installed as a VSA (a regular VM). Nor does it require any VMs to migrate away from the node/host during or after the upgrade. I love that fact, because bigger clusters tend to have hiccups when using vMotion and similar techniques, especially if you have 100 VMs on a host, not to mention the network impact.
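To make the one-node-at-a-time workflow concrete, here is a minimal sketch of what such a rolling upgrade loop looks like in principle. All names (`Node`, `rolling_upgrade`, the precheck dictionary) are hypothetical illustrations of the pattern, not Nutanix's actual internal implementation:

```python
class Node:
    """A toy cluster node; stands in for one host in the rolling upgrade."""
    def __init__(self, name):
        self.name = name
        self.version = "5.0"
        self.in_maintenance = False

    def enter_maintenance(self):
        self.in_maintenance = True   # in a real cluster, VMs migrate away here

    def install(self, version):
        self.version = version       # stage and apply the new software

    def wait_until_healthy(self):
        pass                         # block until data resiliency is restored

    def exit_maintenance(self):
        self.in_maintenance = False


def rolling_upgrade(nodes, new_version, prechecks):
    """Upgrade one node at a time, never taking two nodes down together."""
    # Cluster-wide prechecks run first (health, compatibility, cluster size).
    failed = [name for name, check in prechecks.items() if not check(nodes)]
    if failed:
        raise RuntimeError(f"prechecks failed: {failed}")
    for node in nodes:               # strictly sequential rolling loop
        node.enter_maintenance()
        node.install(new_version)
        node.wait_until_healthy()    # resiliency restored before the next node
        node.exit_maintenance()


cluster = [Node(f"node-{i}") for i in range(4)]
prechecks = {"min_cluster_size": lambda ns: len(ns) >= 3}
rolling_upgrade(cluster, "5.1", prechecks)
print(all(n.version == "5.1" and not n.in_maintenance for n in cluster))  # → True
```

The essential property is the one in the loop: a node only leaves maintenance, and the next one only enters, after the cluster reports healthy again.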

Linearly scalable

Nutanix has several unique capabilities to ensure linear scalability. The key ingredients are data locality, a fully distributed metadata layer, and granular data management. The first is especially important when you grow your cluster. It is true that 10G networks offer very low latency, but that overhead counts towards every single read IO, so you should consider the sum of them (and a single Nutanix node serves a lot of read IOs!). If you look at the development currently ongoing in the field of persistent flash storage, you will see that network overhead will only become more important going forward.
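The "sum of the overheads" point can be put in rough numbers. The figures below are illustrative assumptions for the back-of-the-envelope math, not measured Nutanix or network values:

```python
# Back-of-the-envelope: what a network hop costs when every read crosses the wire.
# All three inputs are assumed, illustrative figures, not measured numbers.

local_flash_read_us = 100    # assumed local flash read latency (microseconds)
network_hop_us = 50          # assumed added round-trip for a remote read
reads_per_sec = 50_000       # assumed read IOPS served by one node

remote_read_us = local_flash_read_us + network_hop_us
extra_us_per_sec = network_hop_us * reads_per_sec   # total added wait per second

print(f"remote read latency: {remote_read_us} us "
      f"({network_hop_us / local_flash_read_us:.0%} overhead per read)")
print(f"added latency across all reads: {extra_us_per_sec / 1e6:.1f} s of wait per second")
# → remote read latency: 150 us (50% overhead per read)
# → added latency across all reads: 2.5 s of wait per second
```

Even a "cheap" 50-microsecond hop, multiplied by every read IO, adds up to seconds of cumulative wait per second of operation, which is the argument for serving reads locally.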

The second key point is the fully distributed metadata database. Every node holds a part of the database (for the most part, the metadata belonging to its current local data, plus replica information from other nodes). All metadata is stored on at least three nodes for redundancy: each node writes to its neighbor nodes in a ring structure, and there are no metadata master nodes. No matter how many nodes your cluster holds (or will hold), there is always a fixed number of nodes (three or five) involved when a metadata update is performed (a lookup/read is typically local). In Big O notation, you can think of a metadata operation as O(1), independent of cluster size, and since there are no master nodes there are no bottlenecks at scale.

The last key point is that Nutanix acts as an object store (you work with so-called vDisks), but the objects are split into small pieces (called extents) and distributed throughout the cluster, with one copy residing on the local node and each replica residing on other cluster nodes. If your VM writes three blocks to its virtual disk, they all end up on the local SSD, and the replicas (for redundancy) are spread out in the cluster for fast replication (they can go to three different nodes, avoiding hot spots). If you move your VM to another node, data locality (for read access) is automatically rebuilt, of course only for the extents your VM currently uses. You might think you wouldn't want to migrate those extents from the previous node to the now-local node, but since each extent has to be fetched anyhow, why not save it locally and serve it directly from the local SSD going forward, instead of discarding it and reading it over the network every single time? This is possible because the data structure is very granular. If you had to migrate the whole vDisk (e.g. a VMDK) because that is the way your storage layer stores its underlying data, you simply wouldn't do it (imagine vSphere DRS migrating your VMs around while your cluster constantly migrates whole VMDKs).

If you wonder how all this matters when a rebuild (disk failure, node failure) is required, there is good news too. Nutanix immediately starts self-healing (rebuilding the lost replica extents) whenever a disk or node is lost. During a rebuild, all nodes are potentially used as sources and targets to rebuild the data. Since extents are used (not big objects), data is evenly spread out within the cluster. A bigger cluster increases the probability of a disk failure, but rebuilds are faster because more nodes participate. Furthermore, a rebuild of cold data (on SATA) happens directly on the remaining SATA drives (it doesn't use your SSD tier), since Nutanix can directly address all disks (and disk tiers) within the cluster.
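The ring-with-neighbors idea described above can be sketched in a few lines: a key's owner on the ring plus its next (RF - 1) neighbors hold the copies, so each operation touches a constant number of nodes regardless of cluster size. This is a hypothetical illustration of the pattern, not Nutanix's actual metadata implementation:

```python
import hashlib

def ring_replicas(key, nodes, rf=3):
    """Return the rf nodes responsible for `key` on a ring of `nodes`.

    The key hashes to an owner position; replicas go to the next rf-1
    neighbors on the ring. No master node is consulted anywhere.
    """
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    owner = h % len(nodes)                      # position on the ring
    return [nodes[(owner + i) % len(nodes)] for i in range(rf)]

nodes = ["node-0", "node-1", "node-2", "node-3", "node-4"]
replicas = ring_replicas("vdisk-42/extent-7", nodes)
print(replicas)   # always exactly rf distinct nodes, however large the ring
```

Whether the ring has 5 nodes or 500, the placement of any one piece of metadata involves exactly `rf` nodes, which is why per-operation cost stays constant as the cluster grows.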

Predictable

Thanks to data locality, a large portion of your IOs (all reads, which can be 70% or more) are served from local disks and therefore only impact the local node. Writes are replicated for data redundancy, but they take second priority behind the local writes of the destination node(s). This gives you a high degree of predictability: you can plan with a certain number of VMs per node and be confident that this will be reproducible when adding new nodes to the cluster. As I mentioned above, the architecture doesn't constantly read all data over the network, nor does it use metadata master nodes to track where everything is stored. With other hyper-converged architectures you won't get that kind of assurance, especially when you scale your infrastructure and the network can't keep up with all the read IOs and metadata updates crossing it. With Nutanix, a single VM can't take over the whole cluster's performance. It will have an influence on other VMs on the local node, since they share the local hot tier (SSD), but that's much better than today's noisy-neighbor and IO-blender issues with external storage arrays. If you have too little local hot storage (SSD), your VMs are allowed to consume remote SSD, with secondary priority behind the other node's local VMs. That means no more data locality, but it is better than falling back to local SATA. Once you move some VMs away or the load on a VM shrinks, you automatically get your data locality back. As described further down, Nutanix can tell you exactly how much of a virtual disk uses local (and possibly remote) data, so you get full transparency there as well.

Extremely fast

I think it is well known that hyper-converged systems offer very high storage performance. There is not much to add here except that it is extremely fast compared to traditional storage arrays. And yes, an all-flash Nutanix cluster is as fast as (if not faster than) an external all-flash storage array, with the added benefit that you read from your local SSD and don't have to traverse the network/SAN to get the data (that, and of course all the other hyper-convergence benefits). Performance was the area where Nutanix focused most when releasing 4.6 earlier this year. The great flexibility of working with small blocks (extents) rather than whole objects on the storage layer comes at the price of much greater metadata complexity, since you need to track all these small entities throughout the cluster. To my understanding, Nutanix invested a great deal of engineering in making its metadata layer efficient enough to even beat the performance of an object-based implementation. As a partner, we regularly conduct IO tests in our lab and at customer sites, and it was very impressive to see all existing customers gain 30-50% better performance simply by applying the latest software (using a one-click upgrade, of course).

Intelligent

Since Nutanix has full visibility into every single virtual disk of every single VM, it also has lots of ways to optimize how it deals with your data. This goes beyond simple random-vs-sequential processing: it can, for example, prevent one application from taking over all system performance and starving the others. During a support case, we can see all sorts of detailed information (I have a storage background, so I can get pretty excited about this): exactly where your applications consume their resources (local or remote disks), what block sizes are used, random vs. sequential ratios, working set size (hot data), and lots more, all with single-virtual-disk granularity. At some point they were even considering a tool that would look inside your VM and tell you which files (actually at sub-file level) are currently hot, because the data is there and just needs to be visualized.

Extensible

If you take a look at the upcoming functionality I write about further down, you can see just some examples of what is possible thanks to the very extensible and flexible architecture. Nutanix isn't a typical infrastructure company; it is more comparable to how Google, Facebook, and others engineer and build their data centers. Nutanix is a software company following state-of-the-art design patterns and using modern frameworks, something I missed when working with traditional infrastructure. For about a year now they have been heavily extending what they call the app mobility fabric, which sits on top of the distributed storage fabric I mentioned above. This layer allows moving workloads between local hypervisors (currently KVM<->ESXi) and soon between private and public clouds as well. You can, for example, use KVM-based Acropolis Hypervisor clusters for all your remote offices to get rid of high vSphere licensing costs without losing the main functionality, and replicate the VMs to a central vSphere-based cluster. The replicated VMs can then be started on vSphere, and Nutanix takes care of the conversion. The hypervisor becomes a commodity, just like your x86 servers.

Visionary

When Nutanix released version 1 of its hyper-converged product in 2011, it was a great idea and a good implementation of one. Most people in IT didn't, however, expect that it would become the approach with the highest focus throughout the industry. Today the largest players in IT infrastructure push their hyper-converged products and solutions more than any other, and while there are still other, less radical approaches (e.g. external all-flash storage), it is foreseeable that they will matter less and less for the bulk of IT projects. Nutanix is the leader in the hyper-convergence space, but having converged storage within your x86 commodity compute layer is by far not the only thing Nutanix has done since then. Its own included hypervisor is a pretty interesting alternative for all those who don't want to spend lots of dollars on vSphere licenses. While it will not yet suit all of your use cases, you might actually be surprised at how much of the vSphere functionality you care about (distributed switch, host profiles, guest customization, HA, etc.) is already included out of the box, with the added value of greatly reduced complexity (yes, I am calling vSphere complex compared to Nutanix Acropolis Hypervisor).

Standardized

Nutanix is purchased solely as an appliance solution (even though they only make the software on top), so you are always dealing with a pretested, preconfigured solution stack. You do have a choice when it comes to memory, CPU, disk, and GPU, and you get to select from three hardware providers (Nutanix directly, Dell, and Lenovo), but these are all predefined options. This makes it possible to guarantee a high level of stability and fast resolution of support cases. As a Nutanix partner this is worth a lot, since the experience we gain from one customer is valid for any other customer as well. It also allows us to be very efficient and consistent when implementing or expanding the solution, since we can put standardized processes in place to reduce possible issues during implementation to a minimum. Once the Nutanix hardware is rack-mounted at the customer site, the software automatically installs the hypervisor of choice (KVM, Hyper-V, or ESXi) and configures all necessary variables (IP addresses, DNS, NTP, etc.). This is done by the cluster itself; the nodes stage each other over the local network.

And last but not least: With outstanding support

The support we get from Nutanix is easily the best of all the vendors we work with. If you open a case, you speak directly to an engineer who can help quickly and efficiently. Our customers sometimes open support cases directly (not through us), and so far the feedback has been great. One interesting aspect is the VMware support we receive from Nutanix, even when the licenses are not sold by them directly: they analyze all the ESXi/vCenter logs we send them. If the bug isn't storage-related, we also open a case with VMware to continue investigating. Nutanix can even engage with VMware by opening a support case directly (Nutanix->VMware), which we have seen on multiple occasions. The last case we witnessed was a non-responsive hostd process (vCenter disconnects) where the first log analysis by Nutanix pointed out a possible issue with the Active Directory integration service. We then opened a VMware case, which was handled politely, but after two weeks, when there wasn't much progress other than collecting logs and more logs, we remembered what the Nutanix engineer had suggested, and there was our solution: disabling Active Directory integration did the trick. I wouldn't say VMware support isn't good, but we are always glad that Nutanix takes a look at the logs as well, because at the end of the day you are just happy if you can move on and work on other things, not support cases.

Note: I strongly encourage you to take a look at the Nutanix Bible (nutanixbible.com) where all mentioned aspects and many more are described in great detail.

What needs improvement?

Nutanix has the potential to replace most of today's traditional storage solutions. These are classic hybrid SAN arrays (dual and multi-controller), NAS Filers, newer All-Flash Arrays as well as any object, big data etc. use cases.

For capacity, it usually comes down to the cost of storing large amounts of data, where Nutanix may offer higher-than-needed storage performance at a price point that isn't very attractive. This has been addressed as a first step with storage-only nodes, which are essentially intelligent disk shelves (mainly SATA) with their own virtual SDS appliance preinstalled. Storage nodes are managed directly by the Nutanix cluster (the hypervisor isn't visible and no hypervisor license is necessary). While this goes in the right direction, larger storage nodes are needed to better support "cheap, big storage" use cases. For typical big data use cases, today's combined compute and storage nodes (plus, optionally, storage-only nodes) are already a very good fit!

The Nutanix File Services (a filer with Active Directory integration) are a very welcome addition that customers get with a simple software upgrade. Currently this is available as a tech preview to all Acropolis Hypervisor (AHV) customers and will soon be released for ESXi as well. This is one example of a service running on top of the Nutanix distributed storage fabric, well integrated with the existing management layer (Prism) and offering native scale-out capabilities and One-Click upgrades like everything else. The demand from customers for a built-in filer is big; they no longer want to depend on legacy filer technology. We look forward to seeing this technology mature and offer more features over the coming months and years.

Another customer need is to be able to consume Nutanix storage from outside the cluster for other, non-Nutanix workloads. These could include bare-metal systems as well as unsupported hypervisors (e.g. Xen Server). This functionality (called Volume Groups) is already implemented and available for use by local VMs (e.g. a Windows Failover Cluster quorum) and will soon be qualified for external access (it is already working from a technical point of view, including MPIO multi-pathing with failover). It will be interesting to see whether Nutanix will allow active-active access to such iSCSI LUNs (as opposed to the current active-passive implementation) in the upcoming release(s). Imagine upgrading your Nutanix cluster (again, a simple One-Click software upgrade) and all of a sudden having a multi-controller, active-active (high-end) storage array. (Please note that I am not a Nutanix employee and that these statements describing possible future functionality are speculation on my side which might never become officially available.)

For how long have I used the solution?

We have been using this solution for three to five years.

Disclosure: My company has a business relationship with this vendor other than being a customer: We have been a partner for six years, based in Switzerland. The author of this review previously worked for five years at a large storage vendor as a System Engineer specialized in storage, virtualization, and VCE converged infrastructure.
Ravikumar Korada - PeerSpot reviewer
Technical Recruiter at Covalense
Reseller
Easy to use and offers a simple web interface
Pros and Cons
  • "Stability-wise, I rate the solution a ten out of ten."
  • "There needs to be an increase in the supported memory and hard disk space, as it is an area where the product currently has certain shortcomings."

What is our primary use case?

I use the solution in my virtual environment for storage purposes.

What is most valuable?

The most valuable feature of the solution is vROps, along with the monitoring part.

What needs improvement?

There needs to be an increase in the supported memory and hard disk space, as it is an area where the product currently has certain shortcomings. Only 32 TB RAM is supported in vSAN Ready Node R740 and vSAN Ready Node R750.

For how long have I used the solution?

I have five years of experience working with VMware vSAN.

What do I think about the stability of the solution?

Stability-wise, I rate the solution a ten out of ten.

What do I think about the scalability of the solution?

The scalability offered by the product is very simple. If you face any hardware issue, you can simply remove that part and purchase a new one, even for components like the hard disk.

The scalability features offered by the product are highly used and are very popular nowadays. I rate the product's scalability a ten out of ten.

How are customer service and support?

The solution's technical support was very good. The product has a lot of documentation on its website. I rate the technical support an eight out of ten.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

I have experience with HPE ProLiant Servers, which are installed through manual configurations.

How was the initial setup?

The product's installation phase is easy.

What's my experience with pricing, setup cost, and licensing?

The product's price is not high. The tool is available at a normal price.

Which other solutions did I evaluate?

VMware vSAN is better than its competitors.

What other advice do I have?

The greatest impact of the tool on my operational efficiency stems from the fact that it serves as a software-based storage solution, specifically for Dell computers, and integrates well with VMware vCenter while offering availability and pre-configured DRS functionalities. VMware vSAN also integrates with vRealize Suite and VMware SRM for monitoring and DR.

I have manually integrated VMware vSphere with VMware vSAN. I have also manually integrated VMware vSAN with Dell and HPE servers. Dell has a pre-configured set of tools with VMware vSAN, like vSAN Ready Node R740 and vSAN Ready Node R750.

The integration of VMware vSphere with VMware vSAN has benefited our company's IT infrastructure since it has made it faster than many other storage solutions. The performance offered is also very high.

The reliability offered by the product is very high since everything comes under one system. With the storage solution, you can easily set up a huge number of virtual desktops in your infrastructure.

I recommend the product to those who want a reliable storage system to store their data, specifically in the cloud.

There needs to be an increase in the capacity offered by the product, considering that thousands of products are used in the corporate sector.

It is easy to use the product. You can also use it online through its web interface.

I rate the tool an eight out of ten.

Disclosure: My company has a business relationship with this vendor other than being a customer: Reseller