What is our primary use case?
There are many use cases. It's a concept of microservices-based architecture, and you will find that Kubernetes is the most reliable solution for it. I work for a digital advertising company, for example. When you have advertisements that are served at the top of a website, or in a sidebar, you fill those spaces with digital advertisements. It's a complete market product, and our end customers are media houses and advertising agencies.

We are running 600 to 700 or more microservices on a microservices-based architecture, and, in order to run them, we use containers, as they are a much more reliable platform. They're more secure due to the use of isolation techniques. Currently, we are running an almost 190-node cluster, which is a very big cluster.

This is how it is used in an advertising context: if there is a cricket match being streamed on a web portal with very high viewership, a lot of companies will want to promote their ads while that particular match is playing. The portal itself is responsible for managing its streaming activity. At the same time, our company is there to display the ads in the sidebars. In such a scenario, where a high volume of people are consuming the content and we have to handle advertisements from various media outlets, we need a very good auto-scaling structure. Kubernetes works well for this. At any given point in time, the horizontal pod autoscaler, based on CPU utilization, will try to increase the number of pods, which means it increases the number of instances that are running.

Another example of how we use Kubernetes is in a banking environment. In this case, they have an on-premises version; they do not have a cloud solution at all. Occasionally, there is a high volume of transactions happening. They need flexibility and high availability, and the very beautiful thing about Kubernetes is that, behind the scenes, these companies are doing their own development of their own applications.

At any given point in time, if version one of an application is running in their data centers on Kubernetes, it is very easy for them to launch version two alongside it. While version one is still running, we can divert all the requests coming to version one over to version two. The moment the customer accepts that particular release, we remove version one, and version two takes over. There is no downtime and no complexity.
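As a rough illustration of the CPU-based autoscaling described above, here is a minimal HorizontalPodAutoscaler sketch. The Deployment name, namespace, and thresholds are hypothetical examples, not values from our actual cluster.

```yaml
# Minimal sketch of a CPU-based horizontal pod autoscaler (autoscaling/v1).
# The target Deployment name "ad-server" and all numbers are illustrative only.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: ad-server
  namespace: ads
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ad-server
  minReplicas: 3                        # baseline capacity for normal traffic
  maxReplicas: 50                       # ceiling during a high-viewership event
  targetCPUUtilizationPercentage: 70    # scale out when average CPU crosses 70%
```

During a high-traffic event, Kubernetes compares the pods' average CPU usage against the target and adds replicas up to the maximum, then scales back down once the load drops.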
What is most valuable?
The deployment strategy is great. If we look into other frameworks, we do not find a deployment strategy this good. The Kubernetes framework itself gives you fantastic deployment strategies, such as rolling updates. We can completely decouple solutions, which means we can scale as much as we want; technically, there are no limitations. The way you can scale your cluster up and down with very few commands is amazing.

With the high availability, I can put some intelligence on top of it. We're capable of handling any type of application nowadays. While there were limitations in previous versions, we no longer need to maintain the previous state of the application. The moment our application restarts, we are not required to remember what we used before; we do not need to keep that state in memory. The product is highly scalable. Security-wise, there are a lot of frameworks available. The product offers security, scalability, high availability, deployment, and scheduling mechanisms. These are all features that people are passionate about.
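To make the rolling-update point concrete, here is a minimal Deployment sketch showing the strategy block. The service name, image, replica count, and surge settings are assumptions for illustration, not values from my clusters.

```yaml
# Minimal Deployment sketch showing a rolling-update strategy.
# Names, image, and numbers are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-service
spec:
  replicas: 4
  selector:
    matchLabels:
      app: sample-service
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # one extra pod may be created during the rollout
      maxUnavailable: 0     # never drop below the desired replica count
  template:
    metadata:
      labels:
        app: sample-service
    spec:
      containers:
        - name: sample-service
          image: registry.example.com/sample-service:2.0   # hypothetical new version
          ports:
            - containerPort: 8080
```

Changing the image tag and applying the manifest replaces pods one at a time, which is what makes the zero-downtime version switch described earlier possible.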
What needs improvement?
There are a lot of complexities. There are a lot of components working together internally. If you look into the installation methods nowadays, it's better; previously, it was a very complex process. It's improving, though it could still be better. Currently, we do have a very simple method to install Kubernetes.

They need to focus more on security internally. The majority of the security comes from external frameworks, which means I need to deploy a third-party framework to improve security; for example, there's Notary, OPA, or KubeCon. Basically, there are areas where I need the help of a third party. The solution also has a networking dependence. Kubernetes does not have its own networking component, so once again, I need to work with a third party. It is fully integrated, no doubt about that, however, I am dependent on third-party components to make it work. I want Kubernetes to improve security-wise and make their own stack available inside the core Kubernetes engine for a secure implementation. If they can integrate the networking component inside the core components, that would be best. With that dependency removed, it would give more choice to the customer.

Currently, they're improving immutable structures and a lot of other things. They're coming out with version 1.21 in order to reduce some security issues, and they are removing the direct dependency on Docker. There are many areas they're working on. A policy enforcement engine is something people are really looking for, and it could become part of the core components, as could a vertical pod autoscaler; a horizontal pod autoscaler is already available, however, a vertical pod autoscaler should be available as well. If there was a built-in logging and monitoring solution, or if they could integrate some APIs or drivers where I can attach any monitoring tool directly, that would be great.
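As one example of the networking dependence I mean, here is a minimal NetworkPolicy sketch. The namespace and selector are hypothetical, and the policy only takes effect if a third-party CNI plugin that enforces NetworkPolicy (such as Calico) is installed; core Kubernetes stores the object but does not enforce it on its own.

```yaml
# Sketch: default-deny ingress policy for one namespace.
# Namespace is illustrative; enforcement requires a CNI plugin
# (e.g. Calico), because core Kubernetes does not enforce this itself.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments          # hypothetical namespace
spec:
  podSelector: {}              # applies to every pod in the namespace
  policyTypes:
    - Ingress                  # no ingress rules listed, so all ingress is denied
```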
For how long have I used the solution?
I've worked with the solution for almost six or seven years, and I've worked on this particular product rigorously. Earlier, I used to work with on-premises solutions, which involved deploying the Kubernetes cluster on our own hardware using Kubespray, which is the latest method.
What do I think about the stability of the solution?
The performance completely depends on the user. Typically, it's stable. 1.20 is quite a stable version, as they have improved in many areas; currently, that is the stable version. Technically, yes, they are making the product stable, no doubt about that. That said, stability is an ongoing process, and they keep improving the product in different areas. Performance-wise, it completely depends upon how you define and design your cluster: for example, which components you are using, how you have built your particular cluster, and under what type of workload. I've worked on medium to large-scale workloads, and, if you rate it out of five, I'd give it a 4.5. It has very good performance.
What do I think about the scalability of the solution?
I would recommend this solution to large enterprises. That said, small enterprises still have very simple options available to them that are reliable, secure, and very easy to manage. Kubernetes is more suitable for a large-scale company, or maybe something in the mid-range; for a small organization, I do not recommend it. The scalability is quite impressive in this product.
How are customer service and support?
The major setback of the product is the technical support. They might provide some sort of email support, however, you cannot rely on it; you never know when you are going to get a response. Unfortunately, the third-party components that you use to build your Kubernetes cluster are also open source, and there is often no technical support, no email support, and no chat support. Many have community-based support, which is what you end up depending on. This is a major setback for the user, and it's the reason customers need to hire a consultant who works with the product rigorously. In my case, as a consultant, I'm using Kubernetes containers and OpenShift 24/7.

Due to the lack of support, other companies take advantage. For example, Red Hat says they'll give support for Kubernetes, however, you have to use their product, which is called OpenShift. If you look into OpenShift, it is basically Kubernetes with one more abstraction layer provided by Red Hat. However, Red Hat will say, "I will give you the support," and since it's a product made by them, they know the loopholes, they know how to troubleshoot it, and they know what to debug. They can provide support, if you use them. Rancher is another company that does this: it's basically a Kubernetes product, with Rancher as the abstraction layer, and they will provide support to their clients.

Cloud providers have also jumped onto this approach. If I get something directly from the cloud provider and the cloud provider is taking responsibility, then I don't have to worry about troubleshooting and support at all. What I need to worry about is only my worker nodes and my application, which is running on top of that particular stack. That's it.
How was the initial setup?
Previously, the initial setup was complex; right now, it's pretty simple.

Nowadays, deployment takes ten to 15 minutes, depending upon the number of clusters you want. If I'm talking about a single master for simple testing purposes, it's ten to 15 minutes. A multi-master setup will take possibly one hour or maybe less. It's pretty fast. In previous versions, it would take an entire day to deploy, as there used to be a lot of dependencies.

A lot of maintenance is required in terms of image creation. Maintenance is also required as far as volumes are concerned, as space is one of the main challenges. Network support is necessary, which means continuous monitoring and log analysis are needed. Once I set up the cluster, for operational maintenance I need proactive monitoring and proactive log analysis. I need someone who can manage the users and the authorization and authentication mechanisms. Kubernetes does not have its own user management and authentication mechanism, so I need to depend on a third-party utility. Sometimes a developer will ask you to create a user and give them some provisioned space. There are many daily activities that need to be covered.

In the world of volume management, Kubernetes does not have its own mechanism. That's why there has to be a storage administrator who can provide the volume to the Kubernetes administrator, and the Kubernetes administrator can decide to whom they give the space. If an application requires it, they will try to increase the space.
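As a hedged sketch of the kind of daily request mentioned above (giving a developer access plus some provisioned space), the manifests below bind an externally authenticated group to the built-in edit role in one namespace and cap what that namespace can consume. The namespace, group name, and quota numbers are hypothetical.

```yaml
# Sketch: grant an externally authenticated developer group edit rights
# in a namespace and cap the space it can consume.
# All names and numbers are hypothetical examples.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-edit
  namespace: team-a
subjects:
  - kind: Group
    name: dev-team-a            # group name comes from the external auth provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                    # built-in aggregated role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    requests.storage: 200Gi     # total PVC storage the team may request
    persistentvolumeclaims: "10"
```

Note that the authentication of the user or group itself still happens outside Kubernetes, which is exactly the third-party dependence described above.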
What about the implementation team?
I work as a freelancing consultant. I provide consulting for the company I work with and help my end customers, who are service providers. I work as an independent consultant for this particular product.
What's my experience with pricing, setup cost, and licensing?
Even though the solution is open source, one major item we need to pay for is storage. Normally we are using storage from EMC, NetApp, or IBM. These companies have created their own provisioning stacks, and if I want to use their storage for my Kubernetes clusters, those are the license stacks that I need to purchase. Storage is the major component, as the licensing is based on that. Technically, there's also an operating system license, which is something I need to pay for by default for every node that I'm using. Other than that, as far as the other frameworks go, OPA is completely free and Calico is completely free; a lot of frameworks are available. Such a framework makes sure that our entire Kubernetes cluster meets compliance requirements. Whichever customer I'm handling, I always look for ways to save them money, because at the end of the day they're already investing a lot in operational costs. I try to seek out mostly open-source products that are stable and reliable. Still, even if I do that, storage is an area where people need to pay money.
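To illustrate where that paid storage plugs in, here is a minimal StorageClass and claim sketch. The provisioner string stands in for whatever CSI driver the storage vendor ships; it is a placeholder, not a real driver name, and the sizes are arbitrary.

```yaml
# Sketch: how licensed vendor storage typically plugs into a cluster.
# The provisioner value is a placeholder for the vendor's CSI driver,
# not a real driver name; sizes are arbitrary examples.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vendor-block
provisioner: csi.vendor.example.com   # supplied by the storage vendor's licensed stack
reclaimPolicy: Delete
allowVolumeExpansion: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: vendor-block
  resources:
    requests:
      storage: 50Gi
```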
What other advice do I have?
The company I am working for is just a customer and end user. 1.20 is quite a stable version at this moment, however, Kubernetes does have a more recent version, 1.24. For us, 40% of customers are working on the cloud, and 60% of customers, those with compliance policies, are deployed on their own clusters and are not using a managed service from the cloud.

There are a lot of options available. Using cloud-based instances as nodes in the Kubernetes cluster is acceptable. The question would be how many people are using a managed Kubernetes service from a cloud provider, and that is 30% or 40% of customers. They say they don't want to manage the cluster on their own; they don't want the headache of managing the cluster. They are focused on their business application and their business. This is what they want, and that's why they are going for managed services: they don't have to do anything at all, and everything is handled by the cloud provider. On the other hand, 60% of people are looking for something that offers full control, so that, at any given point in time, if they want to upgrade Kubernetes, they can. For example, there is the Open Policy Agent, which is a policy enforcement utility or framework available on top of Kubernetes. By default, if I want to use policy enforcement on top of the cloud, I do have multiple choices, though there are some restrictions. With on-premises, people want everything in their hands so they can implement anything.

One of the major things I would recommend to users concerns capacity planning: if they are looking at deploying Kubernetes on top of an on-prem solution, it will likely require the purchase of hardware. In those cases, I recommend they make sure they understand what type of workload they are putting on top of their cluster and calculate it properly. They need to understand how much they will consume in order to understand their hardware requirements and get the sizing right on a one-time purchase. They need to know the number of microservices they are using and the level of consumption in terms of CPU and memory. They will also want to calculate how much it will scale.

Kubernetes will provide all the scalability a company needs; you can add and remove nodes quickly. However, if you miscalculate the hardware capacity itself, the infrastructure may not be able to handle it. That's why it is imperative to make capacity planning part of the process. I'd also advise companies to do a POC first before going into real production.

I'd rate the solution at a nine out of ten.
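As a rough sketch of the per-microservice sizing input that this capacity planning needs, here is a Deployment fragment with explicit CPU and memory requests and limits. The service name, image, and numbers are made-up examples; the right values come from measuring the actual workload.

```yaml
# Sketch: per-microservice resource figures that feed capacity planning.
# Service name, image, and numbers are made-up examples; real values
# come from measuring the workload.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bidding-service        # hypothetical microservice
spec:
  replicas: 3
  selector:
    matchLabels:
      app: bidding-service
  template:
    metadata:
      labels:
        app: bidding-service
    spec:
      containers:
        - name: bidding-service
          image: registry.example.com/bidding-service:1.0   # hypothetical image
          resources:
            requests:
              cpu: 250m        # guaranteed share, used for node sizing
              memory: 512Mi
            limits:
              cpu: 500m        # hard ceiling per pod
              memory: 1Gi
```

Multiplying such requests by the replica counts across all of the microservices, plus the headroom expected for autoscaling, gives a first estimate of the total node capacity to purchase.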
Which deployment model are you using for this solution?
On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.