Examples of the 83,000+ reviews on PeerSpot:

PeerSpot user
Oracle ACE - Specialized in Systems Technologies at Telecom Argentina
Real User
I've worked with different flavors of Unix, but I chose Solaris. I like the constant innovation in the software and hardware.

What is our primary use case?

I have worked with the Solaris operating system every day for many years as a technical support specialist.

How has it helped my organization?

I've worked with different flavors of Unix, but I chose Solaris. I like the constant innovation in the software and hardware.

I've worked with servers including the E10K, E25K, T7-2, T5, M5, and M5-32, as well as some older models. All of them have excellent performance for virtualization with zones and LDOMs.

Solaris lets you isolate zones and migrate them to other servers. You can also move old releases of OS's from obsolete hardware to containers installed in new hardware.
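For readers who haven't done this, a detach/attach migration is a short procedure. This is a minimal sketch with hypothetical zone and dataset names, assuming the zonepath lives on ZFS so it can be moved with zfs send/receive:

    source# zoneadm -z webzone halt
    source# zoneadm -z webzone detach
    source# zfs snapshot -r rpool/zones/webzone@migrate
    source# zfs send -R rpool/zones/webzone@migrate | ssh target zfs receive -F rpool/zones/webzone
    target# zonecfg -z webzone create -a /zones/webzone
    target# zoneadm -z webzone attach -u

The -u flag updates the zone's packages to match the new host, which is what makes moving old environments onto newer hardware practical.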

What is most valuable?

The most valuable features for me are:

Virtualization (containers, zones, security, PDOMs, LDOMs)

Performance, ZFS, and debugging with DTrace (see the example below)
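To give a concrete flavor of DTrace, here is a classic one-liner (run as root) that counts system calls per process until you press Ctrl-C; the aggregation name is arbitrary:

    # dtrace -n 'syscall:::entry { @calls[execname] = count(); }'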

What needs improvement?

There are some areas that could use improvement. As with Solaris 10, you can install Solaris 11 on SPARC and x86 systems, but the number of non-Oracle x86 systems certified so far is smaller than with the previous version. In spite of that, you can still install Solaris 11 on a wide range of systems as 'bare metal', or you can resort to virtualization via one of the many products available on the market. Certifying third-party hardware is usually a lengthy process that requires a lot of resources, so it is understandable that it takes time.

For how long have I used the solution?

More than 20 years.

What do I think about the stability of the solution?

Solaris is very stable; most "panics" are caused by third parties, generally when information-security applications load modules into the kernel, or by hardware failures.
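When a panic does occur, the saved crash dump usually points straight at the offending module. A minimal sketch of a first look, assuming the dump was saved under /var/crash (exact file names vary by release):

    # cd /var/crash/$(hostname)
    # mdb -k unix.0 vmcore.0
    > ::status     # prints the panic string
    > ::stack      # stack trace of the panicking thread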

What do I think about the scalability of the solution?

Oracle SPARC servers are the best for scalability. On the Solaris side, ZFS is a 128-bit filesystem: a single pool can address up to 2^128 bytes of storage. Metadata is allocated dynamically, so there is no need to pre-allocate inodes or to cap the filesystem's scalability when it is created. A directory can hold up to 2^48 entries, and there is no practical limit to the number of filesystems or files a pool can contain.
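That dynamic allocation is visible in day-to-day administration: filesystems are created in seconds, with no pre-sizing, and limits such as quotas can be set or changed at any time. A quick sketch with hypothetical pool and device names:

    # zpool create tank mirror c0t0d0 c0t1d0
    # zfs create tank/projects
    # zfs create tank/projects/web
    # zfs set quota=100g tank/projects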

How are customer service and technical support?

Customer Service:

Customer service is available 24 hours a day, 7 days a week, by phone and via web services where you can open a case and upload files; an engineer can be assigned in less than 3 hours, depending on the severity of the case.

Technical Support:

The technical staff and field engineers who interact with customers are truly professional and capable, have a very good disposition, and work to a high standard of excellence.

Which solution did I use previously and why did I switch?

I have worked on various Unix systems, but I feel very comfortable working on Solaris. I'm aware of the rise of Linux systems around the world because of cost, but I don't feel the need to change for the time being, because this OS offers the compatibility and scalability that the company I work for needs.

How was the initial setup?

Deciding to work on the Solaris platform was a personal decision. I didn't leave other Unix systems because of their complexity, but rather because of a timely challenge: I had to build a cluster between two SunFire 6800 nodes. After that the E25K servers arrived, and then virtualization, and I liked working on Solaris more every time.

What about the implementation team?

When we do an implementation, we work together with an Oracle team; my colleague Nicolas and I handle everything from connecting the power cords through installing and configuring the OS. We also support the development teams with their applications.

As for the technicians, their level is excellent, and they all provide great value with their experience.

What was our ROI?

The economic investment is not my area of expertise, but if I think of ROI in terms of everyday learning on this OS, the time invested in the initial installation is returned by the low administrative maintenance: I spend less time solving the problems that software and hardware can have.

What's my experience with pricing, setup cost, and licensing?

I can't speak to prices. Solaris is free for end users; for OEM licensing, you should visit www.oracle.com.

Which other solutions did I evaluate?

I always evaluate other options within the SPARC line: whether one server is more convenient than another, or which cards to add. At my company, a dedicated area evaluates the costs of an implementation and then decides the direction to take, so when the road leads to Solaris, my evaluation helps them make a decision.

What other advice do I have?

I always recommend Solaris because of its robustness, high availability, scalability, virtualization, excellent support, security and very good hardware.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
CTO Enterprise Cloud at Amanox Solutions (S&T Group)
Real User
Top 5, Leaderboard
What you might not know about Nutanix that makes it so unique
Pros and Cons
  • "Nutanix has several unique capabilities to ensure linear scalability."
  • "There is a need is to be able to consume Nutanix storage from outside the cluster for other, non-Nutanix workloads."

What is our primary use case?

As a systems integrator, we have used Nutanix on a daily basis since 2013 as our main, strategic, and only infrastructure solution for virtualization and its related storage component. Today we can cover most use cases on Nutanix, including VDI, server virtualization, big data, and mission-critical workloads.

How has it helped my organization?

As a systems integrator, we find Nutanix offers a highly standardized solution that can be deployed quickly compared to legacy three-tier, generation-one converged, and most competing hyper-converged solutions. This allows us to move fast with a small team of architects and implementation specialists, even on large projects.

What is most valuable?

Some years ago, when we started working with Nutanix, the solution was essentially a stable, user-friendly, hyper-converged offering with a less feature-rich version of what is now called the distributed storage fabric. That is what competing solutions typically offer today, and for many customers it isn't easy to understand the added value Nutanix now provides in comparison to other approaches (I would argue these capabilities should in fact be a requirement).

Over the years Nutanix has added lots of enterprise functionality such as deduplication, compression, erasure coding, snapshots, (a)synchronous replication, and so on. These features are very useful, scale extremely well on Nutanix, and offer VM-granular configuration (if you don't care about granularity, apply them cluster-wide by default). But it is other, perhaps less obvious features, or rather design principles, that should interest most customers:

Upgradeable with a single click

This was introduced a while ago, I believe around version 4 of the product. At first it was mainly used to upgrade the Nutanix software (Acropolis OS, or AOS), but today we use it for pretty much everything, from the hypervisor to the system BIOS and disk firmware, and also to upgrade sub-components of AOS. There is, for example, a standardized system check (around 150 checks) called NCC (Nutanix Cluster Check) which can be upgraded throughout the cluster with a single click, independent of AOS. The one-click process also allows granular hypervisor upgrades, such as applying an ESXi offline bundle (which could be a patch release). The Nutanix cluster then takes care of the rolling reboot, vMotion, and so on, in a fully hyper-converged fashion (e.g., it won't reboot multiple nodes at the same time).

If you consider how this compares to a traditional three-tier architecture (including generation-one converged), you get a much simpler and well-tested workflow, and it is the default. And yes, it runs automatic prechecks and ensures that whatever you are updating is on the Nutanix compatibility matrix. It is also worth mentioning that upgrading AOS (the complete Nutanix software layer) doesn't require a host reboot, since AOS isn't part of the hypervisor but installed as a VSA (a regular VM). Nor does it require any VMs to be migrated away from the node during or after the upgrade. I love that fact, since bigger clusters tend to have hiccups when using vMotion and similar techniques, especially with 100 VMs on a host, not to mention the network impact.
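As one concrete illustration, the same cluster-wide health check that the upgrade prechecks build on can be run manually from any Controller VM. A minimal sketch, assuming the standard NCC invocation:

    nutanix@cvm$ ncc health_checks run_all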

Linearly scalable

Nutanix has several unique capabilities to ensure linear scalability. The key ingredients are data locality, a fully distributed metadata layer, and granular data management. The first is especially important as you grow your cluster. It is true that 10G networks offer very low latency, but that overhead counts towards every single read IO, so you should consider their sum (and you get a lot of read IOs out of every single Nutanix node!). If you look at the development currently going on in persistent flash storage, you will see that network overhead will only become more significant going forward.

The second key point is the fully distributed metadata database. Every node holds a part of the database (mostly the metadata belonging to its current local data, plus replica information from other nodes). All metadata is stored on at least three nodes for redundancy (each node writes to its neighbor nodes in a ring structure; there are no metadata master nodes). No matter how many nodes your cluster holds (or will hold), a fixed number of nodes (three or five) is involved when a metadata update is performed (a lookup/read is typically local). I like to describe this in Big O notation: the update cost is effectively O(1), independent of cluster size, and since there are no master nodes there are no bottlenecks at scale.

The last key point is that Nutanix acts as an object store (you work with so-called vDisks), but the objects are split into small pieces (called extents) and distributed throughout the cluster, with one copy residing on the local node and the replicas on other cluster nodes. If your VM writes three blocks to its virtual disk, they all land on the local SSD, and the replicas (for redundancy) are spread out across the cluster for fast replication (they can go to three different nodes, avoiding hot spots). If you move your VM to another node, data locality (for read access) is automatically rebuilt, and only for the extents the VM currently uses. You might think you wouldn't want to migrate those extents from the previous node to the now-local one, but since each extent has to be fetched anyhow, why not save it locally and serve it from the local SSD from then on, instead of discarding it and reading it over the network every single time? This is possible because the data structure is so granular. If you had to migrate the whole vDisk (e.g., a VMDK) because that is the unit your storage layer works with, you simply wouldn't do it (imagine vSphere DRS migrating your VMs around while your cluster constantly migrates whole VMDKs).

If you wonder how all this matters when a rebuild (disk or node failure) is required, there is good news too. Nutanix immediately starts self-healing (rebuilding the lost replica extents) whenever a disk or node is lost. During a rebuild, all nodes potentially act as both sources and targets, and since extents (not big objects) are used, the data is evenly spread across the cluster. A bigger cluster increases the probability of a disk failure, but rebuilds are faster because more nodes participate. Furthermore, a rebuild of cold data (on SATA) happens directly across all remaining SATA drives (it doesn't use your SSD tier), since Nutanix can directly address all disks (and disk tiers) within the cluster.

Predictable

Thanks to data locality, a large portion of your IOs (all reads, which can be 70% or more) are served from local disks and therefore only affect the local node. Writes are replicated for data redundancy, but they take second priority behind the destination node's own local writes. This gives you a high degree of predictability: you can plan for a certain number of VMs per node and be confident that this will be reproducible when adding new nodes to the cluster. As mentioned above, the architecture doesn't constantly read data over the network or rely on metadata master nodes to track where everything is stored. With other hyper-converged architectures you won't get that kind of assurance, especially as you scale and the network can't keep up with all the read IOs and metadata updates traversing it.

With Nutanix, a single VM can't take over the whole cluster's performance. It will influence other VMs on the local node, since they share the local hot tier (SSD), but that's much better than today's noisy-neighbor and IO-blender issues with external storage arrays. If you have too little local hot storage (SSD), your VMs are allowed to consume remote SSD, with secondary priority behind the other node's local VMs. That means no more data locality, but it is still better than falling back to local SATA. Once you move some VMs away or the load shrinks, data locality is automatically rebuilt. And as described further down, Nutanix can tell you exactly how much of each virtual disk uses local (and possibly remote) data, so you get full transparency there as well.

Extremely fast

I think it is well known that hyper-converged systems offer very high storage performance. There is not much to add except that it is extremely fast compared to traditional storage arrays. And yes, an all-flash Nutanix cluster is as fast as (if not faster than) an external all-flash array, with the added benefit that you read from your local SSD and don't have to traverse the network/SAN (that, and of course all the other hyper-convergence benefits). Performance was the main focus when Nutanix released 4.6 earlier this year. The great flexibility of working with small blocks (extents) rather than whole objects on the storage layer comes at the price of much greater metadata complexity, since all of these small entities have to be tracked throughout the cluster. To my understanding, Nutanix invested a great deal of engineering in making its metadata layer efficient enough to beat even the performance of an object-based implementation. As a partner, we regularly run IO tests in our lab and at our customers' sites, and it was very impressive to see all existing customers gain 30-50% better performance simply by applying the latest software (using a one-click upgrade, of course).
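For reference, the IO tests we run are simple synthetic benchmarks. A minimal sketch of a random-read test assuming the common fio tool (block size, queue depth, and file path here are arbitrary illustrative choices, not Nutanix recommendations):

    fio --name=randread --rw=randread --bs=8k --direct=1 --iodepth=32 \
        --numjobs=4 --size=10g --runtime=60 --time_based --filename=/mnt/test/fiofile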

Intelligent

Since Nutanix has full visibility into every single virtual disk of every single VM, it also has many ways to optimize how it handles your data. This goes beyond the simple random-vs-sequential treatment; for example, it can prevent one application from taking over all system performance while others starve. During a support case we can see all sorts of detailed information (I have a storage background, so I can get pretty excited about this): where exactly your applications consume their resources (local or remote disks), what block sizes are used, random vs. sequential patterns, working-set size (hot data), and lots more, all at single-virtual-disk granularity. At some point they were even considering a tool that would look inside your VM and tell you which files (actually at the sub-file level) are currently hot, because the data is all there and just needs to be visualized.

Extensible

If you take a look at the upcoming functionality I describe further down, you can see just some examples of what is possible thanks to this very extensible and flexible architecture. Nutanix isn't a typical infrastructure company; it is more comparable to the way Google, Facebook, and others engineer and build their data centers. Nutanix is a software company following state-of-the-art design patterns and using modern frameworks, something I missed when working with traditional infrastructure. For about a year now they have been heavily extending what they call the app mobility fabric, which sits on top of the distributed storage fabric mentioned above. This layer allows workloads to move between local hypervisors (currently KVM<->ESXi) and soon between private and public clouds as well. You can, for example, use KVM-based Acropolis Hypervisor clusters for all your remote offices to get rid of high vSphere licensing costs without losing the main functionality, and replicate the VMs to a central vSphere-based cluster. The replicated VMs can then be started on vSphere, with Nutanix taking care of the conversion. The hypervisor becomes a commodity, just like your x86 servers.

Visionary

When Nutanix released version 1 of its hyper-converged product in 2011, it was a great idea and a good implementation of it. Most people in IT, however, didn't expect it to become the approach with the strongest focus across the industry. Today the largest players in IT infrastructure push their hyper-converged products and solutions more than any other, and while less radical approaches remain (e.g., external all-flash storage), it is foreseeable that they will matter less and less for the bulk of IT projects. Nutanix is the leader in the hyper-convergence space, but converging storage into your commodity x86 compute layer is far from the only thing Nutanix has done since then. Their own included hypervisor is a very interesting alternative for anyone who doesn't want to spend a lot on vSphere licenses. While it won't yet suit every use case, you might be surprised how much of the vSphere functionality you care about (distributed switch, host profiles, guest customization, HA, etc.) is already included out of the box, with the added value of greatly reduced complexity (yes, I am calling vSphere complex compared to the Nutanix Acropolis Hypervisor).

Standardized

Nutanix is purchased solely as an appliance solution (even though they only make the software on top), so you are always dealing with a pretested, preconfigured solution stack. You do have choices when it comes to memory, CPU, disk, and GPU, and you can select from three hardware providers (Nutanix directly, Dell, and Lenovo), but these are all predefined options. This guarantees a high level of stability and fast resolution of support cases. As a Nutanix partner this is worth a lot, since the experience we gain from one customer is valid for every other customer as well. It also lets us be very efficient and consistent when implementing or expanding the solution, because we can put standardized processes in place that reduce possible implementation issues to a minimum. Once the Nutanix hardware is rack-mounted at the customer site, the software automatically installs the hypervisor of choice (KVM, Hyper-V, or ESXi) and configures all the necessary variables (IP addresses, DNS, NTP, etc.). This is done by the cluster itself; the nodes stage each other over the local network.

And last but not least: outstanding support

The support we get from Nutanix is easily the best of all the vendors we work with. If you open a case, you speak directly to an engineer who can help quickly and efficiently. Our customers sometimes open support cases directly (not through us), and so far the feedback has been great. One interesting aspect is the VMware support we receive from Nutanix even when the licenses were not sold by them: they analyze all the ESXi/vCenter logs we send them. If a bug isn't storage-related, we also open a case with VMware to continue the investigation, and Nutanix can engage VMware by opening a case themselves (Nutanix->VMware), which we have seen on multiple occasions. The last case we witnessed was a non-responsive hostd process (causing vCenter disconnects), where the first log analysis by Nutanix pointed to a possible issue with the Active Directory integration service. We then opened a VMware case, which was handled politely, but after two weeks with little progress beyond collecting logs and more logs, we remembered what the Nutanix engineer had suggested, and there was our solution: disabling Active Directory integration did the trick. I wouldn't say VMware support isn't good, but we are always glad that Nutanix looks at the logs as well, because at the end of the day you are just happy if you can move on and work on other things, not support cases.

Note: I strongly encourage you to take a look at the Nutanix Bible (nutanixbible.com) where all mentioned aspects and many more are described in great detail.

What needs improvement?

Nutanix has the potential to replace most of today's traditional storage solutions. These include classic hybrid SAN arrays (dual- and multi-controller), NAS filers, and newer all-flash arrays, as well as object, big data, and similar use cases.

For capacity use cases, it usually comes down to the price of storing large amounts of data, where Nutanix may offer higher-than-needed storage performance at a price point that isn't very attractive. This has been addressed in a first step with storage-only nodes, which are essentially intelligent disk shelves (mainly SATA) with their own virtual SDS appliance preinstalled. Storage-only nodes are managed directly by the Nutanix cluster (the hypervisor isn't visible, and no hypervisor license is necessary). While this goes in the right direction, larger storage nodes are needed to better support "cheap, big storage" use cases. For typical big data use cases, today's combined compute-and-storage nodes (optionally plus storage-only nodes) are already a very good fit!

The Nutanix File Services (a filer with Active Directory integration) are a very welcome addition that customers get with a simple software upgrade. Currently this is available as a tech preview to all Acropolis Hypervisor (AHV) customers and will soon be released for ESXi as well. It is one example of a service running on top of the Nutanix distributed storage fabric, well integrated with the existing management layer (Prism), offering native scale-out capabilities and one-click upgrades like everything else. Customer demand for a built-in filer is big; they no longer want to depend on legacy filer technology. We look forward to seeing this technology mature and offer more features over the coming months and years.

Another customer need is to be able to consume Nutanix storage from outside the cluster for other, non-Nutanix workloads. These could include bare-metal systems as well as unsupported hypervisors (e.g., Xen Server). This functionality (called Volume Groups) is already implemented and available to local VMs (e.g., for a Windows Failover Cluster quorum) and will soon be qualified for external access (it already works from a technical point of view, including MPIO multipathing with failover). It will be interesting to see whether Nutanix will allow active-active access to such iSCSI LUNs (as opposed to the current active-passive implementation) in the upcoming release(s). Imagine upgrading your Nutanix cluster (again, a simple one-click software upgrade) and all of a sudden having a multi-controller, active-active (high-end) storage array. (Please note that I am not a Nutanix employee; these statements about possible future functionality are speculation on my part and might never become officially available.)
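The consumer side of such external access would presumably be plain iSCSI. As a hedged sketch from a generic Linux host using the standard open-iscsi tools (the portal address and target IQN below are hypothetical placeholders, not Nutanix-published values):

    # discover targets behind the cluster's data services address
    iscsiadm -m discovery -t sendtargets -p 10.0.0.50:3260
    # log in to a discovered target
    iscsiadm -m node -T iqn.2010-06.com.nutanix:vg-example -p 10.0.0.50:3260 --login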

For how long have I used the solution?

We have been using this solution for three to five years.

Disclosure: My company has a business relationship with this vendor other than being a customer: we have been a partner for six years, based in Switzerland. The author of this review previously worked for five years at a large storage vendor as a systems engineer specializing in storage, virtualization, and VCE converged infrastructure.