System Engineer at a non-tech company with 10,001+ employees
Real User
Apr 14, 2020
We are using Calm to deploy new servers. We have four blueprints: the first brings up the network, the second configures the elements, and the third and fourth deploy new servers.
Tech Lead Platform Services | Infrastructure Consultant at a logistics company with 1,001-5,000 employees
Real User
Oct 25, 2021
We use Calm as an automation engine for deploying the cluster software over our network. We also use it to deploy standardized workloads on the Nutanix clusters, and to create a "self-service shop" where we can select a standardized workload to deploy and choose a certain profile for a particular server; the Calm engine then integrates with other solutions like our IP database and CDB. Everything is fully automated. In addition to standardized workloads, we can also say, "Give us a generic virtual machine."
Leader of Environments and Automation at a financial services firm with 1,001-5,000 employees
Real User
Apr 22, 2021
We are currently using Calm to automate our infrastructure and platform provisioning, including moving into infrastructure-as-code, standing up environments, and triggering deployment processes. We aren't looking for it to consolidate application deployment automation onto a single platform, because we are spread across Azure Pipelines, Octopus Deploy, and multiple other methods of automating our application deployments. In the last year, we have standardized what we are doing with Calm in terms of infrastructure automation. We haven't stepped into application life cycle management with Calm; we are mostly focusing on leveraging it as our platform and infrastructure provisioning orchestrator. It runs on-premises on our Nutanix cluster.
Project Manager at a healthcare company with 501-1,000 employees
Real User
Sep 30, 2020
One goal was to automate things. We had a lot of tools, but we needed a centralized one. Calm helps us centralize the deployment of our VMs. We have a subsystem installed on Nutanix, and we have blueprints for setting up this subsystem very easily. For Kubernetes, we now use CaaS from SUSE and also create Kubernetes clusters with Calm. Our strategy is to make blueprints for all the virtual machine environments. It's an ongoing process.
We provide test VMs to users. Currently, we deploy only Windows VMs, from Windows 10 1803 up to 20H2 and from Server 2012 R2 to Server 2019. The blueprints consist of a base Windows image (used as a template for the VM to be) and several tasks you can define, using remote PowerShell to get whatever you need done: install additional software, set registry keys - you name it. Each task is then executed in the defined order, and results can be reviewed even during execution. Hardware specs can be made configurable, so users can adjust the amount of RAM or the CPU core count, or they can be set to static. We recently set the machines up to configure custom passwords and send users an email notification when the machine is ready to use. We also differentiate machine networks based on the user's department, to keep machines separated.
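The ordered-task model this reviewer describes can be sketched as follows. This is purely an illustration of the idea (run tasks in a defined order, with each result recorded as soon as it finishes so progress can be reviewed mid-run); the task names and runner are hypothetical, not Calm's actual API:

```python
# Hypothetical sketch of a blueprint's ordered task list; not Calm's API.
# Each task stands in for a remote PowerShell step (install software,
# set a registry key, etc.). Results are appended as tasks complete,
# so they can be reviewed even while later tasks are still pending.

from typing import Callable

def run_blueprint(tasks: list[tuple[str, Callable[[], str]]]) -> list[tuple[str, str]]:
    """Execute tasks in the defined order, collecting results as they complete."""
    results: list[tuple[str, str]] = []
    for name, task in tasks:
        results.append((name, task()))  # result visible as soon as this task ends
    return results

# Example tasks standing in for remote PowerShell steps
tasks = [
    ("install_software", lambda: "software installed"),
    ("set_registry_key", lambda: "registry key set"),
]
print(run_blueprint(tasks))
```

The point of the sketch is only the ordering guarantee and the incremental result log, which is what makes mid-execution review possible.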
Head of Operations at a university with 1,001-5,000 employees
Real User
Jun 18, 2020
We wanted to find a way to start getting our academics used to paying for compute without having to actually pay, but still to do it for real in the cloud. We use the self-service portal within Nutanix for them to deposit some funds, which is a cost charge, not a credit card, and then we say, "Okay, based on that, you have bought X amount of CPUs, Y amount of memory, and Z amount of storage." They can then go in and say, "Okay, well, I know I've got a pool of 10 BCPs for a month. I want to spin up three of them to process this data, which I'll then tear down afterwards."

We use it for our neurological psychology department, where they do a lot of brain scans. They want to upload them to a place where they can compute the output of those scans, and then tear down their compute afterwards, because it doesn't need to be running all the time. Another area uses it for looking at weather data, which is typically quite a large amount of data. They only need to process it once and then they can destroy it, because they don't need to look at it again once they've done the analytics.

Those are our typical use cases: allowing our research areas to spin up resources against a pricing model they've secured funding for, without having to engage the IT teams to provide the resources for them. It also keeps them within their budgets and predefined lanes.

We have it on-premises. We built our own private cloud, and we host it there for our academics to consume and spin up their own resources. We know that we could burst up to Azure, AWS, and GCP, but we don't. We keep it all within our private cloud at the moment.
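The prepaid resource-pool model this reviewer describes (buy a fixed allowance up front, spin compute up and down against it, get refused when over budget) can be sketched as follows. This is a hypothetical illustration of the accounting logic only, not Nutanix's implementation:

```python
# Hypothetical sketch of a prepaid resource pool: researchers buy a
# fixed allowance of CPUs/memory/storage, then spin VMs up and down
# against it. Requests beyond the remaining allowance are refused,
# which is what keeps users "within predefined lanes".

from dataclasses import dataclass

@dataclass
class Pool:
    cpus: int
    memory_gb: int
    storage_gb: int

    def spin_up(self, cpus: int, memory_gb: int, storage_gb: int) -> bool:
        """Deduct resources if the pool can cover the request; refuse otherwise."""
        if cpus <= self.cpus and memory_gb <= self.memory_gb and storage_gb <= self.storage_gb:
            self.cpus -= cpus
            self.memory_gb -= memory_gb
            self.storage_gb -= storage_gb
            return True
        return False  # over budget: request refused

    def tear_down(self, cpus: int, memory_gb: int, storage_gb: int) -> None:
        """Return resources to the pool when the compute is destroyed."""
        self.cpus += cpus
        self.memory_gb += memory_gb
        self.storage_gb += storage_gb

pool = Pool(cpus=10, memory_gb=64, storage_gb=500)
pool.spin_up(3, 16, 100)    # e.g. process scan data on 3 of the 10 purchased CPUs
pool.tear_down(3, 16, 100)  # destroy the compute afterwards; allowance restored
```

The design choice worth noting is that tearing down returns capacity to the pool, so short-lived jobs (scan processing, one-off weather analytics) only consume allowance while they run.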
Nutanix Cloud Manager (NCM) is a cloud management tool that drives consistent governance across private and public clouds. The solution brings simplicity and ease of use to building and managing cloud deployments by providing unified multicloud management that addresses common cloud adoption challenges. Nutanix Cloud Manager offers four key value drivers:
Intelligent operations: monitoring, insights, and automated remediation.
Self-service and orchestration:...
We use the solution as a private cloud management tool in our company.
We primarily use the solution for cloud management, as a private cloud for internal client infrastructure.
We create Windows test machines for our test department, from Windows 7 up to Server 2019.