There are many use cases. It's built around the concept of a microservices-based architecture, and you will find that Kubernetes is the most reliable solution for it. I work for a digital advertising company, for example. When advertisements are served at the top of a website, or in a sidebar, you fill those spaces with digital advertisements. It's a complete market product, and our end customers are media houses and advertising agencies.
We are running 600 to 700 or more microservices on a microservices-based architecture, and in order to run them, we use containers, as they provide a much more reliable platform. It's more secure due to the isolation they provide. Currently, we are running a cluster of almost 190 nodes, which is a very big cluster.
This is how it is used in an advertising context: if there is a cricket game being streamed on a web portal with a very high viewership, a lot of companies will want to promote their ads while this particular match is playing. The portal itself is responsible for managing its streaming activity. At the same time, our company is there to display the ads in the sidebars. In such a scenario, where a high volume of people is consuming the content and we have to handle advertisements from various media outlets, we need a very good auto-scaling setup. Kubernetes works well for this. At any given point in time, there is the concept of a Horizontal Pod Autoscaler based on CPU utilization: Kubernetes itself tries to increase the number of pods, which means it tries to increase the number of instances that are running.
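To make that concrete, here is a minimal sketch, using the official Kubernetes Python client, of how such a CPU-based Horizontal Pod Autoscaler can be declared. The deployment name, namespace, and thresholds are illustrative assumptions, not values from this setup:

from kubernetes import client, config

# Load credentials from ~/.kube/config (use load_incluster_config() inside a pod).
config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="ad-server-hpa", namespace="ads"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        # The Deployment whose replica count the autoscaler manages.
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="ad-server"),
        min_replicas=3,
        max_replicas=50,
        # Add pods when average CPU utilization across pods exceeds 70%.
        target_cpu_utilization_percentage=70,
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="ads", body=hpa)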
Another example of how we use Kubernetes is in a banking environment. In this case, they have an on-prem version; they do not have a cloud solution at all. Occasionally, there is a high volume of transactions. They need flexibility and high availability, and the very beautiful thing about Kubernetes is that, behind the scenes, these companies are doing their own development of their own applications.
At any given point in time, if version one of an application is running in their data centers on Kubernetes, it is very easy for them to launch version two. Once version two is running, we can slowly divert all the requests coming to version one over to version two. The moment the customer accepts that particular version, we remove version one, and version two is ready. There is no downtime and no complexity.
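The review does not say which mechanism is used to divert traffic; one common way to do it, sketched below with the official Python client, is to repoint a stable Service's label selector from the version-one pods to the version-two pods. All names and labels here are hypothetical:

from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Before this patch the Service selects pods labelled version=v1.
# Repointing the selector to version=v2 switches traffic in one step,
# and patching it back to v1 is the rollback path.
patch = {"spec": {"selector": {"app": "payments", "version": "v2"}}}
core.patch_namespaced_service(name="payments", namespace="prod", body=patch)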
Kubernetes Overview
What is Kubernetes?
Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.
It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.
Kubernetes is commonly abbreviated as K8s.
Kubernetes Customers
China Unicom, NetEase Cloud, Nav, AppDirect
Kubernetes Reviews
Learning Manager at an educational organization with 11-50 employees
Offers security, scalability, and high availability
Pros and Cons
- "The product is highly scalable."
- "They need to focus on more security internally."
What is our primary use case?
What is most valuable?
The deployment strategy is great. If you look at other frameworks, you do not get a comparable deployment strategy. The Kubernetes framework itself gives you fantastic deployment strategies, such as rolling updates.
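As a small illustration of that rolling-update strategy, here is a sketch with the official Python client; the deployment name, image, and surge settings are assumptions, not details from this review:

from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Bumping the container image triggers a rolling update: old pods are replaced
# gradually, bounded by maxSurge/maxUnavailable, so the service stays available.
patch = {
    "spec": {
        "strategy": {
            "type": "RollingUpdate",
            "rollingUpdate": {"maxSurge": 1, "maxUnavailable": 0},
        },
        "template": {
            "spec": {
                "containers": [
                    {"name": "web", "image": "registry.example.com/web:1.4.2"},
                ]
            }
        },
    }
}
apps.patch_namespaced_deployment(name="web", namespace="default", body=patch)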
We can completely decouple solutions, which means we can scale as much as we want. Technically there are no limitations. The way you can scale up and scale down your cluster with very few commands is amazing.
With the high availability, I can put some intelligence on top of it. It's capable of handling any type of application nowadays. While there were limitations in previous versions, we no longer need to maintain the previous state of the application: the moment our application restarts, it is not required to remember what it was using before, so no prior state has to be kept in memory.
The product is highly scalable.
Security-wise, there are a lot of frameworks that are available.
The product offers security, scalability, high availability deployment, and scheduling mechanisms. These are all features that people are passionate about.
What needs improvement?
There are a lot of complexities. There are a lot of components working together internally. If you look at the installation methods nowadays, it's better; previously, it was a very complex process. It's improving, and it could still be better. Currently, we do have a very simple method to install Kubernetes.
They need to focus on more security internally. The majority of the security comes from external frameworks, which means I need to deploy a third-party framework to improve security. For example, there's Notary, OPA, or KubeCon. Basically, there are some areas where I need to take the help of a third party.
The solution has a networking dependence: Kubernetes does not have its own networking component. Once again, I need to work with a third party. It is fully integrated, no doubt about that; however, I need to depend on third-party components to make it work. I want Kubernetes to improve security-wise and make their own stack available inside the core Kubernetes engine to make implementation secure. If they can integrate the networking component inside the core components, that would be best. With that dependency removed, it would give more choice to the customer.
Currently, they're improving immutable structures and a lot of things. They're coming out with version 1.21 in order to reduce some security issues. They are removing the direct dependency from Docker. There are many areas they're working on.
A policy enforcement engine is something people are really looking for, and it could be part of the core components, along with a vertical pod autoscaler. A horizontal pod autoscaler is already available; however, a vertical pod autoscaler should be available as well.
If there was a built-in solution for logging and monitoring, or if they could integrate some APIs or drivers so I could directly attach any monitoring tool, that would be great.
For how long have I used the solution?
I've worked with the solution for almost six or seven years. I've worked on this particular product rigorously. Earlier, I used to work with on-premises solutions, which involved deploying the Kubernetes cluster on the hardware with Kubespray, which is the latest method.
What do I think about the stability of the solution?
The performance completely depends on the user. Typically, it's stable. 1.20 is quite a stable version, as they have improved in many areas; currently, that is the stable version. Technically, yes, they are making the product stable, no doubt about that. That said, stability is an ongoing process, and they are trying to improve the product in different areas.
Performance-wise, it completely depends upon how you define and how you design your cluster. For example, what are the components you are using? How have you made your particular cluster, and under what type of workload? I've worked on medium to large scale workloads, and, if you rate out of five, I'd give it a 4.5. It's got a very good performance.
What do I think about the scalability of the solution?
I would recommend this solution to large enterprises. That said, small enterprises still have very simple options available to them which are reliable and secure. It is very easy to manage. Still, it's more suitable for a large-scale company or maybe something that's in the mid-range, and for a small organization, I do not recommend it.
The scalability is quite impressive in this product.
How are customer service and support?
The major setback of the product is the technical support. They might provide some sort of email support, however, you cannot rely on it.
You never know when you are going to get the response and unfortunately, when it comes to having a third-party component that you can use to build your Kubernetes cluster, those are also open source, and there is often no technical support, no email support, no chat support. Many have community-based support, which you can depend on.
This is a major setback for the user. It's the reason customers need to hire a consultant who is rigorously working with the product. In my case, as a consultant, 24/7 I'm using the Kubernetes container and OpenShift.
Due to the lack of support, other companies take advantage. For example, Red Hat says they'll give support for Kubernetes; however, you have to use their product, which is called OpenShift. If you look into OpenShift, it is basically Kubernetes with one more abstraction layer provided by Red Hat. However, Red Hat will say, "I will give you the support," and since it's a product made by them, they know the loopholes. They know how to troubleshoot it and what to debug. They can provide support - if you use them. Rancher is another company that does this. It's basically a Kubernetes product, with Rancher as the abstraction layer, and they will provide support to their clients. Cloud providers have also jumped onto this approach. If I get something directly from the cloud provider and the cloud provider is taking responsibility, then I don't have to worry about troubleshooting and support at all. What I need to worry about is only my clients or workers and my application, which is running on top of that particular stack. That's it.
How was the initial setup?
Previously, the initial setup was complex; however, right now it's pretty simple.
Nowadays, deployment will take ten to 15 minutes, depending on the number of clusters you want. For a single master and simple testing purposes, it's ten to 15 minutes. A multi-master setup will take possibly one hour or maybe less. It's pretty fast. In previous versions, it would take an entire day to deploy; there used to be a lot of dependencies.
A lot of maintenance is required in terms of image creation. Maintenance is also required as far as volumes are concerned, as space is one of the main challenges. Network support is necessary, which means continuous monitoring and log analysis are needed.
Once I set up the cluster, as part of operational maintenance activity, I need proactive monitoring and proactive log analysis. I need someone who can manage the users and the authorization and authentication mechanisms. Kubernetes does not have its own authorization and authentication mechanism; I need to depend on a third-party utility. Sometimes a developer will ask you to create a user and give some provisioned space. There are many daily activities that need to be covered.
When it comes to volume management, Kubernetes does not have its own mechanism. That's why there has to be a storage administrator who can provide the volume to the Kubernetes administrator, and the Kubernetes administrator can decide to whom they give the space. If an application requires it, they will try to increase the space.
What about the implementation team?
I work as a freelancing consultant. I am actually providing consulting for the company, which I work for. I help my end customers who are service providers. I work as an independent consultant for this particular product.
What's my experience with pricing, setup cost, and licensing?
Even though the solution is open-source, one major service we need to pay for is storage. Normally we use storage from EMC, NetApp, or IBM. These companies have created their own provisioning stacks, and if I want to use their storage for my Kubernetes clusters, those are the license stacks I need to purchase.
Storage is the major component on which the licensing is based. Technically, there's also an operating system license, which is something I need to pay for by default for every node I'm using. Other than that, with any other framework now, OPA is completely free, Calico is completely free; a lot of frameworks are available. A framework is going to make sure that the entire Kubernetes cluster is compliant and compliance-specific. Whichever customer I'm handling, I always look for ways to save them money because, at the end of the day, they're already investing a lot in operational costs. I try to seek out mostly open-source products that are stable and reliable. Still, even if I do that, storage is an area where people need to pay money.
What other advice do I have?
The company I am working for is just a customer and end-user.
1.20 is quite a stable version at this moment; however, Kubernetes does have a more recent version, 1.24.
For us, 40% of customers are working on the cloud, and the 60% of customers that have compliance policies deploy their own clusters and do not use a managed service from the cloud.
There are a lot of cases available. Using cloud-based instances as nodes in the Kubernetes cluster is acceptable. The question would be how many people are using managed Kubernetes services from a cloud provider, and that is 30% or 40% of customers. They say they don't want to manage their cluster on their own; they don't want the headache of managing the cluster. They are focused on their business application and their business. This is what they want, and that's why they are going for managed services. They don't have to do anything at all; everything can be controlled by the cloud provider.
On the other hand, 60% of people are looking for something that offers full control. That way, at any given point in time, if they want to upgrade Kubernetes, they can. For example, there is the Open Policy Agent, a policy enforcement utility or framework that is available on top of Kubernetes. By default, if I want to use policy enforcement on top of the cloud, I do have multiple choices; there are some restrictions, however. With on-premises, people want everything in their hands so they can implement anything.
One of the major things I would recommend to users concerns capacity planning: if they are looking at deploying Kubernetes on top of an on-prem solution, it will likely require the purchase of hardware. In those cases, I recommend they make sure they understand what type of workload they are putting on top of their cluster and calculate it properly. They need to understand how much will be consumed in order to understand their hardware requirements and get the sizing right on the one-time purchase. They need to know the number of microservices they are using and their level of consumption in terms of CPU and memory. They will also want to calculate how much it will scale.
Kubernetes will provide all the scalability a company needs. You can add and remove nodes quickly. However, if you miscalculate the hardware capacity itself, the infrastructure may not be able to handle it. That's why it is imperative to make sure that capacity planning is part of the process. I'd also advise companies to do a POC first before going into real production.
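As a rough illustration of that kind of sizing calculation, here is a small sketch; every number in it is a made-up assumption, not a figure from this review:

import math

microservices = 120          # services to host
replicas_per_service = 3     # average replicas once scaled
cpu_request = 0.25           # CPU cores requested per pod
mem_request = 0.5            # GiB of memory requested per pod
node_cpu, node_mem = 16, 64  # allocatable cores / GiB per worker node
headroom = 0.7               # keep ~30% spare for spikes and system pods

pods = microservices * replicas_per_service
cpu_needed = pods * cpu_request
mem_needed = pods * mem_request

nodes_by_cpu = cpu_needed / (node_cpu * headroom)
nodes_by_mem = mem_needed / (node_mem * headroom)
nodes = math.ceil(max(nodes_by_cpu, nodes_by_mem))

print(f"{pods} pods -> {cpu_needed} cores, {mem_needed} GiB -> {nodes} worker nodes")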
I'd rate the solution at a nine out of ten.
Which deployment model are you using for this solution?
On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.

DevOps Lead at Adidas
Shared platform service that provides orchestrated deployment for different applications
Pros and Cons
- "The autoscaling feature is the most valuable. Kubernetes itself is an orchestration tool. It automatically detects the load, and it automatically spins up the new Pod in the form of a new microservice deployment."
- "I'm expecting more improvement on the UI development side, which can be reflected in each object that is part of Kubernetes, like the Pod, deployment set, ReplicaSet, ConfigMap, Secrets, and PersistentVolume."
What is our primary use case?
Currently, I'm working with Adidas. They work with a third party called Giant Swarm, which takes care of the Kubernetes installation, the infra side. Everything else is handled on AWS.
They have utilized different EC2 instances in order to create the Kubernetes nodes: the master node and a couple of worker nodes. My company doesn't use Elastic Kubernetes Service (EKS), which is the built-in, AWS-provided, AWS-managed service; it's a cluster we manage ourselves, like an on-premises cluster.
We have multiple applications and different Docker images that are used as part of different projects. Some of the projects use Java-based microservices, and some of the projects use TIBCO as a middleware application server.
The end product is the Docker images, and the ultimate use of Kubernetes is to have an automated deployment job created on Jenkins to deploy those Docker images and Kubernetes clusters. Kubernetes is an orchestrated way of deployment for different applications. It's a shared platform service.
We're deploying the latest version. It's deployed on an AWS public cloud.
It's difficult to count end users because we generally deploy the application in production. Adidas itself has end users with their e-commerce website. The number could be in the millions.
What is most valuable?
The autoscaling feature is the most valuable. Kubernetes itself is an orchestration tool. It automatically detects the load, and it automatically spins up the new Pod in the form of a new microservice deployment.
Autoscaling is a very important feature. It never interferes with deployment, because once an application is deployed in the Kubernetes cluster, the autoscaler simply creates replicas of the deployed application in additional Pods based on load.
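A quick way to watch that behavior is to read the autoscaler's status with the official Python client; the HPA name and namespace below are placeholders, not values from this setup:

from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

hpa = autoscaling.read_namespaced_horizontal_pod_autoscaler(name="orders", namespace="shop")
status = hpa.status
print(f"current CPU: {status.current_cpu_utilization_percentage}%")
print(f"replicas: {status.current_replicas} -> {status.desired_replicas}")
print(f"last scale event: {status.last_scale_time}")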
What needs improvement?
There are some UI services available for Kubernetes, but the UI is not very user-friendly when we deploy multiple applications that need to be viewed in the UI itself.
I'm expecting more improvement on the UI development side, which can be reflected in each object that is part of Kubernetes, like the Pod, deployment set, ReplicaSet, ConfigMap, Secrets, and PersistentVolume.
Those could be visible to an authorized user from the UI itself. It would help to interact with these objects and check their status if there's an issue with the data or memory.
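Until the UI improves, the same status checks can be scripted against the API. Here is a minimal sketch with the official Python client; the namespace is an assumption:

from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
apps = client.AppsV1Api()

# Deployments: desired vs. ready replicas.
for dep in apps.list_namespaced_deployment(namespace="default").items:
    print(dep.metadata.name, dep.spec.replicas, dep.status.ready_replicas)

# Pods: phase and restart counts, a quick crash/memory health check.
for pod in core.list_namespaced_pod(namespace="default").items:
    restarts = sum(cs.restart_count for cs in (pod.status.container_statuses or []))
    print(pod.metadata.name, pod.status.phase, restarts)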
For how long have I used the solution?
I've been using Kubernetes for three years.
What do I think about the stability of the solution?
I would rate the stability as five out of five.
What do I think about the scalability of the solution?
I would rate the scalability as five out of five.
How are customer service and support?
If we have a problem, we raise a ticket and they respond immediately. Technical support is very fast.
Which solution did I use previously and why did I switch?
Compared with Docker Swarm, Kubernetes is far better. Docker provides an enhanced orchestration tool, but it's very unstable. You cannot scale or utilize that tool in production. Kubernetes is far better and has a lot of excellent features.
How was the initial setup?
I would rate deployment as two out of five because it's not easy.
It took four to five days to finish deployment. If we start certain deployments from scratch, we have a DevOps team that works on the deployment scripts and creates Helm charts in order to create different Kubernetes services like the deployment set, the ConfigMap, and Secrets. Everything is set up by the DevOps team.
There were about five people involved in implementation, but it depends on the workload. If we needed to create the deployment setup for a single microservice, one person is enough because we have a standard template to use in order to create the standard deployment set. Once the Helm chart is ready, it's just a matter of triggering the deployment.
We created the automation setup using Bitbucket, Jenkins, Helm and Kubernetes. We created a Helm chart first, then placed it in the Harbor repository. It was already automated with the Bitbucket pull request job. In case of any change in microservices, a respective development team creates the pull request to merge the code.
It automatically triggers Jenkins, compiles the microservices, and creates the Docker images. Once a Docker image has been created, it pushes the respective image to the Harbor repository or Artifactory, which is just like a Docker repository.
There is another job in Jenkins: once the new image is created, the deployment script, which is managed by a different Jenkins pipeline, automatically triggers and does the deployment to the respective Kubernetes services using a Helm chart.
Everything is well-automated. It's pretty simple after setup is completed. Setup is a one-time activity, but it takes a lot of effort because it's very complex.
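The pipeline described here is built in Jenkins; purely as an illustration of the final Helm step it ends with, here is a sketch that shells out to the Helm CLI. The release name, chart path, and image tag are hypothetical:

import subprocess

def deploy(release: str, chart_path: str, image_tag: str, namespace: str = "default") -> None:
    """Upgrade (or install) a Helm release with the freshly built image tag."""
    subprocess.run(
        [
            "helm", "upgrade", "--install", release, chart_path,
            "--namespace", namespace,
            "--set", f"image.tag={image_tag}",
            "--wait",  # block until the rollout finishes
        ],
        check=True,
    )

deploy("orders-service", "./charts/orders-service", image_tag="1.7.3")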
A third party takes care of maintenance. We don't have access to the cluster level.
What about the implementation team?
Deployment was done by Adidas itself. The cluster setup was done by a third party. The cluster availability was provided by a third party. The deployment team then deployed the microservices Docker images to Kubernetes.
A third party manages the Kubernetes cluster, and it's quite complex. I have experience with creating clusters. As soon as we started using EKS (Elastic Kubernetes Service), which is managed by AWS itself, it was very simple; we don't need to take care of cluster stability or cluster scaling.
For example, a microservice itself is a small application, and the whole deployment activity takes about five minutes.
What's my experience with pricing, setup cost, and licensing?
Kubernetes is open-source. It's free, but we're charged for AWS utilization.
What other advice do I have?
I would rate this solution as 10 out of 10.
Kubernetes is an excellent tool with many rich features. I would definitely recommend it. From a learning perspective, users should start with Minikube.
It's a single-node Kubernetes cluster that shows how Kubernetes runs its main components, hosting elements like the kube-controller-manager, the etcd database, and the scheduler.
Everything is very compact in Minikube. You can start with a Minikube deployment, and as soon as you feel comfortable, you can extend your deployment to a main Kubernetes cluster with different nodes. It's very helpful for autoscaling: there is node-level and Pod-level scaling, and both of these features are available in Kubernetes, so it's very flexible.
Which deployment model are you using for this solution?
Public Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Amazon Web Services (AWS)
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Last updated: Dec 16, 2022
Azure DevOps and Cloud Lead at a consultancy with self employed
Offers valuable scaling features and is an excellent platform for hosting microservices
Pros and Cons
- "The Desired State Configuration is a handy feature; we can deploy a certain number of pods, and the tool will ensure that the state is maintained in our desired configuration."
- "The solution has some issues regarding availability during high loads. Worker nodes are sometimes unavailable, affecting the overall availability of the applications. This is a bug or underlying problem with the tool, and Azure and other providers are looking into improving this by releasing new versions of Kubernetes that fix some of the platform's issues."
What is our primary use case?
Our organization has an extensive online platform available to our customers, who are geographically spread between the United States, Japan, and other parts of the Far East. The platform's backbone comprises around 120 microservices, and we use Kubernetes to host most of them.
What is most valuable?
The Desired State Configuration is a handy feature; we can deploy a certain number of pods, and the tool will ensure that the state is maintained in our desired configuration.
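A minimal sketch of that idea with the official Python client: declare the desired number of replicas and the controller keeps enforcing it. The names and image below are placeholders, not details from this environment:

from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="catalog", labels={"app": "catalog"}),
    spec=client.V1DeploymentSpec(
        replicas=4,  # the desired state; killed pods are recreated to match it
        selector=client.V1LabelSelector(match_labels={"app": "catalog"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "catalog"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="catalog", image="nginx:1.25")]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)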
The features regarding scalability are also valuable. As part of our DevOps work, I am involved in some enhancements where we plan to use pod scaling and the available AKS node scaling features. These are natively available in AKS, but we do have to set up some metrics to control scaling and define scaling rules. The fact that we can achieve that dynamically is a significant part of why we use the solution.
Kubernetes is an excellent platform for hosting microservices, especially container-based microservices.
What needs improvement?
The solution has some issues regarding availability during high loads. Worker nodes are sometimes unavailable, affecting the overall availability of the applications. This is a bug or underlying problem with the tool, and Azure and other providers are looking into improving this by releasing new versions of Kubernetes that fix some of the platform's issues.
We usually encounter a few bugs, and as part of our partnership with Microsoft, we tend to share that data and receive active support from them. They are constantly improving the product.
Many options are available from third-party vendors and open-source providers that build upon AKS, or Kubernetes in general, especially regarding monitoring and telemetry. Perhaps incorporating similar features into the native solution would be a good improvement. However, the solution, with the core engine and the supporting ecosystem of open-source projects and other available features, covers the entire spectrum of what we need to do.
For how long have I used the solution?
I've worked on different projects using Kubernetes as an application hosting platform for two or three years.
What do I think about the stability of the solution?
The product is stable; it has benefited from a few years of worldwide production-level experience and customer feedback. That's the base, open-source version of Kubernetes. There are numerous vendors with their own flavors of the solution, like AKS and Amazon's EKS, which are also pretty stable. Rancher isn't open source, but it has many features that make it easy to maintain, so it's also stable.
What do I think about the scalability of the solution?
We have around 2000 total users, including end users and DevOps users.
How are customer service and support?
I have contacted technical support on a couple of occasions.
How would you rate customer service and support?
Neutral
Which solution did I use previously and why did I switch?
We used a version of Rancher Kubernetes to manage an on-premise instance of the solution. I'm very familiar with the tool, but I'm not up to date with any of the new offerings available with Rancher.
How was the initial setup?
AKS and other managed Kubernetes instances are quite easy to set up. However, depending on the project requirements, it can become more complex.
For example, a previous project I worked on had some stringent rules around networking policies, traffic routing, etc. The tight security policies meant we had to use a highly customized virtual network upon which the AKS instances were hosted. We went with a Kubernetes networking model, which might have been called a container networking model. This model required each pod to be provided with an IP that was part of the actual IP range within a network, so pods had real IP addresses. This kind of implementation becomes more complex.
In terms of native setup, Kubernetes has its own internal networking system and cluster IPs, which facilitates easy pod scaling, so native implementation is relatively easy. When projects have higher security requirements, the implementation gets a little more complex, but it's still much more straightforward than a self-hosted cluster.
An entirely self-hosted Kubernetes cluster is the most complex. We have to set up every aspect, including the master nodes, worker nodes, and networking, which requires dedicated Kubernetes administrator resources. We previously implemented an on-premise Kubernetes cluster, and it takes significant effort and dedicated resources to manage that sort of cluster.
What's my experience with pricing, setup cost, and licensing?
I would say the solution is worth the money, but it depends on the required workloads, the type of workload, and the scaling requirements etc.
Ultimately, we're using the computing power on the nodes, so they need to be appropriately scaled according to the workload. With intensive workloads requiring large machines, I'm curious to know how much savings one would have purely in hardware cost compared to using standalone VMs.
What other advice do I have?
I would rate the solution an eight out of ten.
The solution is deployed on a private virtual network belonging to our organization and in the Azure cloud. The interconnections with on-premise are purely through VPN gateways and so on.
Regarding POC-type projects, I recommend using a trial version of Kubernetes with Rancher or a very lightweight configuration of AKS. It's essential to consider the factors involved in analysis and precisely what you want to find out. Based on that, tests can be conducted to determine the solution's available benefits. It also depends on the kind of workload; if that consists of microservices that can be easily containerized, then it's worth investing some time and effort into AKS. POCs can generate some numbers regarding costs, performance, scalability etc.
If the setup is well designed and the appropriate workloads are shifted to Kubernetes, there's a lot of flexibility available for DevOps to scale their applications. There are also many available monitoring, telemetry, service discovery, and service mesh features. If the architecture is well-planned and devised, the Kubernetes platform can provide significant benefits.
Which deployment model are you using for this solution?
Private Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Microsoft Azure
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Last updated: Oct 25, 2022
Director, Engineering at a tech services company with 51-200 employees
Reliable with good clustering but needs more transparency
Pros and Cons
- "It's scalable."
- "Having a thread dump and memory dump, and seeing how many objects were created would be useful."
What is our primary use case?
Our setups are all Kubernetes-based. Orchestration and all of that is done through Kubernetes.
What is most valuable?
The clustering is the most valuable aspect of the solution. Reviewing all the servers and hardware from one common place is great. That is the best part of it.
The solution is stable and reliable.
It's scalable.
What needs improvement?
Maybe it's not within the scope of this product; however, some analytics information could be made more available through it. Otherwise, we have to integrate Dynatrace or some similar tool. Since it already sees all the servers, maybe it's a different scope and it wouldn't work, but some analytics would be great. We'd like insights into the services and their usage, which are very limited today; we have to use third-party paid services like Dynatrace or AppDynamics.
Sometimes we find, let's say, an OutOfThread or OutOfMemory condition where our threads are blocked. If you are doing real-time analysis, you can find them. However, if it's 24 hours after somebody reports it, the pod has already restarted and we don't have any information about it. Thread dumps and memory dumps are not available, so we have to wait for another crash to happen. There's a lack of backup storage for this data, and that's a daily problem. With Kubernetes, whenever we get this kind of production issue, we are clueless. We can see that an OutOfMemory occurred at a given time; however, we don't have much information to work with.
Therefore, having a thread dump and memory dump, and seeing how many objects were created would be useful.
Sometimes we drill down and it says CPU utilization is very high, but if you go inside, you'll see nothing: no information as to why. Similarly, it says there were a lot of network errors, but there is no information available on them; it just says 10% network errors, 20% network errors. If you drill down, there is no information available. You don't know whether a server timed out, a port was not available, or there was some other network issue. We need more transparency in that regard.
Sometimes the DNS lookup service does not work very reliably unless you enable caching. Recently, I used the latest version of Kubernetes, where a DNS cache is available, which was not available in the earlier version. We noticed we were facing a lot of difficulties, like ENOENT errors or "Host not found" exceptions. Every day people would say it was an application problem; however, we ultimately figured out the DNS cache was not working properly. With the latest version, once we enabled it, things sorted themselves out. However, when we were trying to drill down in Kubernetes, it was not giving any information; there's no clear-cut information there either as to why this was happening.
For how long have I used the solution?
I've used the solution for the last five years.
What do I think about the stability of the solution?
It's very stable. We have not faced any such problem through Kubernetes. There are no bugs or glitches. It doesn't crash or freeze.
What do I think about the scalability of the solution?
The solution is scalable.
We have 15 to 20 people using the solution.
However, it's a two-way setup, and all those things are done by DevOps; that's why I'd say 15 users. As far as users are concerned, we have, let's say, 100 people. All 100, in one form or another, go into Kubernetes to look at the pods and at that information, based on the services they are working on.
How are customer service and support?
I don't think we have any technical support for Kubernetes. Our DevOps team typically looks into issues.
How was the initial setup?
I didn't do the implementation; everything gets set up for us. That said, we see a lot of information. Generally, we are most interested in going through how many pods are running and what memory is given to each pod. We explore all those things. It's very useful and intuitive.
What's my experience with pricing, setup cost, and licensing?
I don't deal with the pricing aspect of the solution.
Which other solutions did I evaluate?
I tried something myself long back; however, I'm not able to recall what it was. I am a developer, so my focus is more on the other side of things. DevOps might have looked into other options; I'm not sure.
What other advice do I have?
We are end-users.
We use the solution both on-premises and in the cloud.
I'd rate the solution seven out of ten.
Which deployment model are you using for this solution?
On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Last updated: Jul 6, 2022
Architect Watermanagement at a government with 5,001-10,000 employees
Easy container management, affordable, and majority of installations straightforward
Pros and Cons
- "The easy management of containers is one of the main features I have found useful."
- "This solution is not very easy to use."
What is our primary use case?
We use the solution to modernize our IT landscape. We use infrastructure as a service and platform as a service in our data center. More recently, we have added container as a service, which is this solution.
What is most valuable?
The easy deployment of containers is one of the main features I have found useful. In large-scale developments, it is less hassle working with containers than with virtual machines. It is easier to manage containers than virtual machines, although there is a steep learning curve to grasp the benefits.
What needs improvement?
This solution is not very easy to use. We are also looking for some tools surrounding this solution to manage the environment and to secure it better; these two areas have caused some issues. We also want to integrate it with continuous integration and delivery.
It must be scalable, cost-effective, and more agile when it comes to developing and managing the environment for DevOps. All these things go together, and it must be secured to allow better manageability. That is what we are all doing in most large companies.
In a future release, the solution could become more like a core engine around which tools like OpenShift are centered. You can already see how all kinds of tools could help improve the management, security, or scalability of the product. Additionally, we will need more than the core in our organization; there need to be more management tools moving forward.
For how long have I used the solution?
I have been using the solution for approximately one year.
What do I think about the scalability of the solution?
When it comes to my own environment, I have had no issues with scalability, but it is not easy; this is just a development environment. Within my department at my company, we are running a sandbox environment just for testing, and that is going well. I am not sure how things will go when you go fully into production with this solution.
We will look for additional products to deal with the difficulties with scalability. There are several vendors that offer these products. Although this solution was made for scalability, it does not come out of the box this way.
We currently have approximately 30 users using the solution in my organization.
How was the initial setup?
I am not aware exactly how the on-premises installation went for the IT team at my organization. For my local environment where I am testing this solution myself, the installation has been very easy. This is mostly because it is a local environment. We also have a cloud environment, where we have a hybrid data center and this cloud environment installation was fairly easy too.
What about the implementation team?
The deployment was done by an internal team in my organization.
Maintenance is required for all software versions. We need to manage different areas of the solution, such as the cloud-native landscape tooling, the registry, the DevOps environment, and the security tooling. There are three areas that need attention: upgrades and versioning, scalability, and the toolset surrounding the solution. You cannot run it on its own; you need additional tools. All of this maintenance is taken care of by our IT administration department.
What's my experience with pricing, setup cost, and licensing?
The solution is affordable.
What other advice do I have?
We plan on using the solution in the future. We are a large data center, and we need to have several options available: a traditional deployment of infrastructure as a service with virtual machines; a platform as a service for very rapid, smaller applications; and container management, container as a service, which is this solution, for all the others. We expect that over the next 10 years virtual machines will decrease and container-based services will increase.
I recommend the solution to others. It is a very good product, and its strength is that other vendors can create their security and management tooling around it, allowing it to become a type of core engine. If those other vendors were not there, I think I would be more critical. Within my department, we were a bit later in adopting the solution than other parts of the organization. We are still growing and experimenting, though we already have some clusters in production. A lot of the product's tools are open source, which in some cases means support is not readily available. You have to adapt to it, but also be cautious about the support and the steep learning curve issues that you can expect.
I rate Kubernetes an eight out of ten.
Which deployment model are you using for this solution?
On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Practice Director, Global Infrastructure Services at a computer software company with 10,001+ employees
Internal engine designed well, useful Zero Touch Operations feature, and helpful online support
Pros and Cons
- "The most valuable feature is the Zero Touch Operations, which involves a new way of performing operations and support. We do not have to do maintenance, the operations are very simple."
- "Kubernetes can improve by providing a service offering catalog that can be readily populated in Kubernetes."
What is our primary use case?
If our project requires a cloud deployment we will use a cloud provider's version of Kubernetes. For example, Azure or AWS Kubernetes Elastic Services. We try to make use of whatever is provided by the cloud providers.
If the project requires an on-premise solution we use products from various vendors, such as Red Hat or other open-source products that can be downloaded and installed for free.
We are using Kubernetes for container management.
Kubernetes use cases are typically containerized application hosting. This is the basic use case that we do. Another use case can be deploying new application microservices which are loosely coupled and containerized using microservices-based architectures.
How has it helped my organization?
We can achieve a reduction of almost 50% to 60% of effort in operations by using Kubernetes.
What is most valuable?
The most valuable feature is the Zero Touch Operations, which involves a new way of performing operations and support. We do not have to do maintenance, the operations are very simple.
What needs improvement?
Kubernetes can improve by providing a service offering catalog that can be readily populated in Kubernetes.
The service catalog, for example, could include a CRM application or an eCommerce retail application packaged on Kubernetes and ready to deploy. Instead of somebody trying to figure out all the configurations for hosting these on Kubernetes, the developers of these CRM or eCommerce products could partner with AWS, Google, or Azure and make the deployment of such applications readily available on Kubernetes.
This would leave very little work for a business to go live; the business could straight away subscribe, launch, and use the application. It is not difficult for an IT team to create an application environment to start up, but it would be much easier for businesses to use it directly and start the applications themselves.
For how long have I used the solution?
I have been using Kubernetes for approximately three weeks.
What do I think about the stability of the solution?
The stability of Kubernetes depends on how we have designed it. Our design is stable because I know how to design it and if something goes wrong how to fix it.
What do I think about the scalability of the solution?
The scalability is superb, it is highly scalable.
Our organization, which uses this solution, has 75,000 employees.
How are customer service and support?
Technical support is not used very frequently. We use advanced-level support occasionally. It is only in certain circumstances when we have some advanced complexity that we reach out to an expert.
A person with a moderate level of knowledge of Kubernetes can solve most of their problems with the help of the community forum and the documentation.
We do not need any particular company, such as Red Hat, to come in and support the Kubernetes environment, or some other company, such as Canonical (Ubuntu), to be signed up on a contract to support Kubernetes. It's not required.
How was the initial setup?
The initial setup is straightforward, it was not complex.
What about the implementation team?
The maintenance for Kubernetes is very minimal.
What's my experience with pricing, setup cost, and licensing?
You need to pay for a license if you buy branded products. For example, if you take the services from Azure, AWS, or Google, the price of the Kubernetes cluster is inclusive of the service that's being offered to us on a pay-and-use model.
What other advice do I have?
I haven't tried all the advanced features of Kubernetes, but I feel it is meeting most of the requirements of a new design architecture for applications to be hosted. I don't see any particular functionality which is not available for me as of now.
The open-source ecosystem is providing lots of ideas to solve all kinds of problems. The ecosystem of developers, implementers, and integrators provides lots of ideas. If there is something I may not know, I look to the community forum and receive answers. There are no issues finding something; however, where Kubernetes itself has to improve, it is a matter of the implementer discovering ideas to solve the problem. The Kubernetes engine is designed very well.
I would highly recommend this solution to others.
I rate Kubernetes a nine out of ten.
Which deployment model are you using for this solution?
Hybrid Cloud
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Solution Architect | Head of BizDev at Greg Solutions
Cost-effective with great integration that has helped unify our technology stack
Pros and Cons
- "This product has a rich toolset from the community including CNI plug-ins, Helm packages, operators, dashboards, various integrations, etc."
- "This product should have a more advanced built-in scheduler that uses real application metrics in the scheduling strategy."
What is our primary use case?
The following is a list of the cases when I prefer Kubernetes for application hosting:
- Micro-services infrastructure + possible use of some service meshes, like Istio or Linkerd.
- Cost efficiency; we are using Kubernetes in conjunction with AWS Spot Instances or Google Cloud preemptible VMs.
- Standards-compliant infrastructures, like HIPAA, PCI DSS, SOC, and ISO xxxx.
- Highly-available or fault-tolerant infrastructures, due to some sort of self-recovery and self-healing.
- Infrastructures with automatically scalable applications.
How has it helped my organization?
It's unified our technology stack across on-premises infrastructures and public clouds, including Amazon Web Services, Azure, and Google Cloud Platform. Kubernetes provides great integrations with other open-source tools, like Prometheus, Grafana, Elastic Stack, Fluentd, OAuth providers, and others.
Kubernetes distributions are also great because we adopt the platforms for different requirements. These include AWS Elastic Kubernetes Service, Google Kubernetes Engine, Azure Kubernetes Service, Rancher, etc.
It allows us to build custom-tailored infrastructures from small to big companies and satisfy various requirements, such as providing a proper level of RPO, RTO, scalability, cost-efficiency, and support high availability/fault tolerance.
What is most valuable?
The most valuable features of Kubernetes are:
- Containers self-healing and self-recovery.
- Unification, which allows internal Kubernetes components to be migrated between Kubernetes providers in an easier manner.
- Kubernetes as a service from the major cloud providers including AWS, Google Cloud, Azure, Digital Ocean, IBM, etc. Kubernetes as a service helps in infrastructure migration from on-premises to cloud, or from cloud to cloud.
- This product has a rich toolset from the community including CNI plug-ins, Helm packages, operators, dashboards, various integrations, etc.
- Built-in scaling features, it's really great!
What needs improvement?
Some improvements that we would like to see are:
- Have richer built-in features and probably incorporate some features from the community toolset, like KEDA for pod scaling.
- There are even more tools from the community for monitoring, log collectors, authorization, and authentication.
- Have some sort of simplifications for wider adoption.
- This product should have a more advanced built-in scheduler that uses real application metrics in the scheduling strategy.
- Wider integration with cloud providers in terms of volumes and key management services.
- Add support for a traffic-encryption option from container to container, and from the Ingress to the container.
For how long have I used the solution?
We have been using Kubernetes as a self-hosted service, managed by external solutions, like Rancher, or a cloud-provider managed service (Azure AKS, Google GKE, Amazon EKS) for between three and four years.
What do I think about the stability of the solution?
This product is pretty stable, especially in the managed service option, but as with all platforms, it has some issues. As an example, during an upgrade of the Kubernetes version on Amazon EKS from 1.17 to 1.18, Amazon increased the worker count from 4 to 12 (it should have gone from 4 to 8), the upgrade took more than 1 hour (it should take about 10-20 minutes), and this suddenly led to a short interruption of some applications during re-scheduling. In the end, we were forced to write our own rolling update scripts for updating the Kubernetes version on the node instances, which complete the upgrade in 10 minutes without application downtime. But again, this is an issue related to managed Kubernetes (in particular, the Amazon EKS platform).
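For reference, the core of such a node-rotation script can be as small as a cordon-and-drain loop. This is a simplified sketch with the official Python client, not the actual script used here; it assumes a recent client where the policy/v1 Eviction type is exposed as V1Eviction, and the node name is a placeholder:

from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

def cordon(node: str) -> None:
    # Mark the node unschedulable so no new pods land on it.
    core.patch_node(node, {"spec": {"unschedulable": True}})

def drain(node: str) -> None:
    # Evict every non-DaemonSet pod; controllers reschedule them on other nodes,
    # and PodDisruptionBudgets are respected, before the node is replaced.
    pods = core.list_pod_for_all_namespaces(field_selector=f"spec.nodeName={node}").items
    for pod in pods:
        if any(o.kind == "DaemonSet" for o in (pod.metadata.owner_references or [])):
            continue
        eviction = client.V1Eviction(metadata=client.V1ObjectMeta(
            name=pod.metadata.name, namespace=pod.metadata.namespace))
        core.create_namespaced_pod_eviction(
            name=pod.metadata.name, namespace=pod.metadata.namespace, body=eviction)

cordon("ip-10-0-1-23.ec2.internal")
drain("ip-10-0-1-23.ec2.internal")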
What do I think about the scalability of the solution?
Great scalability, especially for the small and mid-size setup with fewer than 100 nodes.
Which solution did I use previously and why did I switch?
We have used various platforms for managing Docker containers, such as Rancher, Azure App Service, and Portainer.
How was the initial setup?
The first adoption was hard because the Kubernetes learning curve is pretty steep.
What about the implementation team?
The in-house team only.
What's my experience with pricing, setup cost, and licensing?
It's open-source and free, so pricing should not be applied here.
Google Kubernetes Engine is free in the simplest setup; the AWS Kubernetes engine (EKS) costs about $50 (depending on the region) in a three-master setup, so it's almost the same as the cost of the EC2 instances, and it's totally fine from my point of view.
Which other solutions did I evaluate?
We prefer Kubernetes due to the unification and the next level of the platform itself.
Which deployment model are you using for this solution?
Hybrid Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Amazon Web Services (AWS)
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Software Architect at Novatec Solutions
Great resources, useful documentation, and generally reliable
Pros and Cons
- "The scalability seems quite good."
- "The price is something they need to improve."
What is our primary use case?
We are developing some microservices for the banking sector. We are developing microservices and deploying all of them into Kubernetes. We're looking to make these projects scalable, so we are designing the policies for scaling. We are also deploying some front-end items. We are integrating Kubernetes on Azure with Key Vault and storage, which means we have to use the Ingress controller to properly route requests to their final destination.
Also, we deploy a database; however, it's not the main goal. It's just a backup plan, as we've had some trouble with the database, which is currently hosted in Oracle Cloud.
What is most valuable?
The full concept behind Kubernetes is quite good in terms of really taking full advantage of the resources you have. You can separate your workloads by namespaces, et cetera.
The scalability seems quite good also.
It seems that there is a community behind the solution that is supporting a lot of additional features that can be included in Kubernetes to integrate with other providers or software.
What needs improvement?
The price is something they need to improve.
I'm not a very technical guy. Graphically, the product could be more friendly for the users.
We'd like it if they had some sort of web management tool, I don't know if there is already one out there, however, it would help a lot.
For how long have I used the solution?
I've used the solution for around four months.
What do I think about the stability of the solution?
It has been very stable. There are no bugs or glitches. It doesn't crash or freeze.
What do I think about the scalability of the solution?
The solution can scale. It's not a problem.
We are going into production right now, and I know there are other projects currently at the bank using the same infrastructure with Kubernetes. We're increasing usage.
How are customer service and support?
While there is support from the community, I really don't know much in terms of support and whether, for example, Microsoft will provide something through Azure. We have a provider that we work with that is in charge of the support. That said, it's something like a blue layer: they set up everything; however, they didn't do anything further, like channel configurations or deployments.
How was the initial setup?
I didn't personally set up the cluster. It is a service from Azure, and there is another team that is in charge of setting up everything about the cluster. I have only been configuring some of the requirements for the cluster.
The setup is quite small right now. We also have a pipeline supported by Jenkins, and there is one person working on that side for the other configurations. So we have about two or three people (who are engineers) working on it right now.
What other advice do I have?
I'm a reseller.
I've been reading a lot about the subject since it is new to me. There is a lot of good documentation. Of course, some of the Kubernetes webpage documentation is sometimes confusing, as it's not that straightforward in terms of what you have to do. Still, it helps to take some lessons from some of the platforms Microsoft has. People need some training on the subject.
Overall, I'd rate the solution a nine out of ten.
Disclosure: My company has a business relationship with this vendor other than being a customer: Reseller
Last updated: Nov 9, 2022
Product Categories
Container Management
Popular Comparisons
VMware Tanzu Mission Control
OpenShift Container Platform
Nutanix Kubernetes Engine NKE
Amazon EKS
Rancher Labs
NGINX Ingress Controller
HashiCorp Nomad
HPE Ezmeral Container Platform
Google Kubernetes Engine
Portainer
VMware Tanzu Build Service
Cisco Container Platform
Linode
Komodor
Diamanti