Buyer's Guide
Container Management
September 2022
Get our free report covering Red Hat, VMware, Rancher Labs, and other competitors of Amazon EKS. Updated: September 2022.
633,952 professionals have used our research since 2012.

Read reviews of Amazon EKS alternatives and competitors

Patryk Golabek - PeerSpot reviewer
CTO at Translucent Computing Inc
Real User
Top 5Leaderboard
Secure solution for our data pipelines that allows us to run fintech and healthcare applications
Pros and Cons
  • "We can scale it all the way from a single zone to multiple regions around the world."
  • "One of the biggest issues right now is the Kubernetes backup system. That's being handled right now by Google, but it's in beta."

What is our primary use case?

We use Kubernetes for our data pipelines. For everything else, we use the standard version of GKE, and we manage the whole stack. It sits a level above infrastructure as a service; it's more of a platform as a service. Kubernetes is the platform we use to build our products.

We're a Kubernetes-certified company. Everything in the cloud, we push toward Kubernetes. You can move Kubernetes between systems, and we deploy into Kubernetes securely. We're running a lot of fintech and healthcare applications, so protecting both financial data and patient health data is essential. That's one of the reasons we moved to Google Cloud: the implementation is a little more secure.

Kubernetes has become the number one tool for container orchestration over the last five years because it's a dynamic container runtime engine, and we ship software in containers.

We've been shipping software in containers since 2014 and switched completely to Kubernetes as our container runtime engine around 2016 or 2017. That's how we ship and maintain managed software. Kubernetes is the primary tool for all the work we do in DevOps.

What needs improvement?

There are multiple flavors of GKE, and we deploy them for different use cases, each with its own issues. When it comes to using Kubernetes as a commodity, meaning letting Google manage your virtual machines, they still don't have all the features baked in, and you have no ability to change anything because Google manages it. Our biggest issue right now is that we need a little more control over some of these Google-managed virtual machines.

One of the biggest issues right now is the Kubernetes backup system. Google is handling that now, but it's still in beta. There are no fundamental issues like the ones we have with a private cluster in EKS or Azure, but they nickel-and-dime you on features and push you toward their own observability tools, while we want to use our own observability tools.

The whole thing is integrated with Google's monitoring and logging in the cloud, which isn't necessarily bad; it's just that we want to use our own tools. Service mesh management is an issue as well. There's only one managed service mesh, and we would like to use Consul service mesh so we can use the managed product there. We also don't like the way Google deploys things. We like to deploy things ourselves, and that can cause friction in how we deploy; we have to spend a little extra time on coding. Fundamentally, I don't see any issues with it right now.

What do I think about the stability of the solution?

The stability is good.

What do I think about the scalability of the solution?

The scalability is good.

In terms of deploying multi-regional clusters or multi-zonal clusters, and multi-cloud, we can do that. Most of the time for development, we use a single zone just to save money, but for our staging environment, we use multi-zonal in a single region. Then when we go to production, we use multi-regional. We can scale it all the way from a single zone to multiple regions around the world. 
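The zone-to-region progression described above maps directly onto cluster-creation flags. As a hedged sketch using the gcloud CLI (cluster names, locations, and node counts are illustrative placeholders, not the reviewer's actual setup):

```shell
# Dev: single-zone cluster -- cheapest, everything in one zone
gcloud container clusters create dev-cluster \
  --zone us-central1-a \
  --num-nodes 2

# Staging: regional cluster -- nodes spread across the zones of one region
gcloud container clusters create staging-cluster \
  --region us-central1 \
  --num-nodes 1   # count is per zone

# Production "multi-regional": one regional cluster per region,
# fronted by global load balancing / multi-cluster ingress
gcloud container clusters create prod-us --region us-central1
gcloud container clusters create prod-eu --region europe-west1
```

The only change between tiers is the location flag (`--zone` vs. `--region`), which is what makes scaling from a single zone to multiple regions incremental rather than a redesign.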

Now with Anthos, the multi-cloud version of Kubernetes on Google Cloud, we can deploy Google Cloud Kubernetes into AWS or Azure. That means we don't have to use EKS; we can deploy Kubernetes on Amazon through Google Cloud and have one portal to manage everything.

How are customer service and support?

We haven't needed to use technical support yet.

Which solution did I use previously and why did I switch?

We use Amazon EKS for some clients. When we write the software, we often try to keep the deployment in-house, but some clients have their own DevOps teams handle deployment.

How was the initial setup?

It's straightforward. We did everything in Terraform; all of our infrastructure is codified. We have a deployment platform to plan and deploy all the tools in Google Cloud, so it's easy for us to manage. If there are any issues, we can always change them in code and manage everything through code. The accessibility is nice.

Auditing those cloud resources is very easy for us. It can get quite expensive, though; from a business point of view, if you don't keep tabs on it, you have to check the billing regularly to make sure no services are running inefficiently. Google is fully committed to Terraform.

I think every part of the configuration has been updated from just the basic CLI to Terraform scripts, which is nice. The Terraform Google Cloud module is almost fully baked; it's basically a mature tool. It makes things a little easier for us to manage.
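The codified-infrastructure workflow described above boils down to the standard Terraform loop; a minimal sketch, assuming state is already configured (directory layout and backend are assumptions, not details from the review):

```shell
# Initialize the working directory: downloads the Google provider
# and any modules, wires up the remote state backend
terraform init

# Review exactly what would change before touching the cluster
terraform plan -out=tfplan

# Apply the reviewed plan; any fix or drift goes back into code, never the console
terraform apply tfplan

# Auditing cloud resources is then just a read of the same state
terraform state list
```

Because every resource lives in code and state, "fix it in code and re-apply" replaces ad-hoc console changes, which is what makes auditing and cost review straightforward.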

What other advice do I have?

I would rate this solution 8 out of 10.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
Principal Engineer at a financial services firm with 5,001-10,000 employees
Real User
Top 10Leaderboard
Easy to scale with good capabilities, however, it could be easier to use
Pros and Cons
  • "The scalability potential is very good."
  • "We're looking for something that is even easier to use. It's a bit complicated."

What is our primary use case?

We're not using it full-blown yet. The plan is for developers to be able to develop on-prem and then move workloads over to the cloud. We stood up these environments so it would be easier to control our internal development costs; instead of spinning up those cycles in the cloud and incurring more cost, we build on-prem and then move to the cloud.

What is most valuable?

So far, we are in the POC stage. Overall, it is working well and we are happy with its capabilities.

The product has been quite stable.

They come out with versions regularly, and the updates are pretty simple and straightforward; you don't have to go through a lot of steps to close the gap, and rolling upgrades are nice. The key part about Rancher is its good integration with storage. It has a great feature for persistent storage for Kubernetes, and the two work extremely well with one another. Otherwise, you've got OpenShift, which uses Ceph, and that's yet another piece of infrastructure you have to build out.

The scalability potential is very good.

What needs improvement?

We're looking for something that is even easier to use; it's a bit complicated. It makes dealing with Kubernetes easier, however, there are still a lot of convoluted commands.

The stack goes pretty deep, and when things go south, you spend a lot of time trying to figure them out, which takes a certain skill set. Ramping up those skill sets takes time, and unfortunately, we don't have the time to do it. There are other solutions out there, such as HashiCorp's, which offers its own way of managing containers. It doesn't use Kubernetes; it's a different way of doing it.

The product could use a little more federation-type capability going forward when you're going to the cloud. That's the kind of thing I could use: you've got different regions within the cloud, and we could use that kind of support.

For how long have I used the solution?

I've been using the solution for about a year now.

What do I think about the stability of the solution?

The stability of the solution has been good overall. It's reliable and offers good performance. There aren't bugs or glitches. It doesn't crash or freeze. 

What do I think about the scalability of the solution?

The solution will scale well. That said, we haven't tested it; however, it's pretty straightforward. You just run a couple of commands to build out another node, and you scale.
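Those "couple of commands" can be sketched as follows, assuming an RKE-provisioned cluster (the node address and role are hypothetical; the reviewer doesn't specify how their nodes are registered):

```shell
# 1. Add the new machine to cluster.yml under the `nodes:` list, e.g.
#      - address: 10.0.0.12
#        user: rancher
#        role: [worker]
# 2. Reconcile the running cluster against the desired state in cluster.yml
rke up --config cluster.yml

# 3. Confirm the new worker has registered
kubectl get nodes -o wide
```

`rke up` diffs the declared node list against the live cluster, so adding capacity is an edit plus one command rather than a manual bootstrap.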

Currently, we don't have too many people on the product. It's just a few people, the Unix team, which is about six people right now.

How are customer service and technical support?

Technical support is very good. They're very helpful and responsive. We are quite pleased with the level of assistance we receive.

How was the initial setup?

The initial setup wasn't simple. It was a bit difficult and a bit complex. We had to get a little help from the Rancher folks to set it up initially, as we really didn't know what the hell we were doing.

The solution doesn't need a lot of maintenance; we have a team of six people working on the solution currently and any one of us could handle it.

What about the implementation team?

Rancher helped us a lot at the outset. We didn't know how to manage the initial setup by ourselves.

What's my experience with pricing, setup cost, and licensing?

I don't handle contracts or licensing costs. I can't speak to how much the solution costs.

That said, it is my understanding that the pricing is very comparable to what else is in the market.

Which other solutions did I evaluate?

We're starting to look at Amazon EKS. We haven't begun to work with it, however, since we use Kubernetes, it may be an option in the future.

What other advice do I have?

We're using the latest version of the solution at this time. I can't recall the exact version number, however.

Currently, the solution is not fully in production just yet; it's still more of a POC. The value it brings is that it's a great tool. We're able to use it, and it seems to do the job, however, we are still looking around.

I have no negative thoughts about Rancher. It's a great product for what it is, and as far as support goes, it has been very good. There are some good price points. Ever since they were acquired by SUSE, there have been plenty of opportunities there as well.

In general, I would rate the product at a seven out of ten.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
Russell Nile - PeerSpot reviewer
Solutions Architect at a financial services firm with 10,001+ employees
Real User
Top 20
Provides centralized control of container resources, but it's prohibitively expensive to get something simple going
Pros and Cons
  • "Centralized control of container resources is most valuable."
  • "There should be a simplification of the overall cluster environment. It should require fewer resources. Just to run a simple Hello World app, it requires about seven servers, and that's just crazy. I understand that it is fully redundant, but it's prohibitively expensive to get something simple going."

What is our primary use case?

We are moving as many applications as possible to a containerized environment. In terms of our environment, we have multiple data centers. One, of course, is for redundancy. Most of them are hot-warm. They're not hot-hot or hot-cold, depending on how you look at it, but pretty much everything that's important is fully redundant. That would be between our own private data centers and within Amazon across regions.

We have an on-premises and private cloud deployment. Amazon is the cloud provider. We've got some Azure out there too, but Amazon has been the primary focus.

What is most valuable?

Centralized control of container resources is most valuable.

What needs improvement?

There should be a simplification of the overall cluster environment. It should require fewer resources. Just to run a simple Hello World app, it requires about seven servers, and that's just crazy. I understand that it is fully redundant, but it's prohibitively expensive to get something simple going.
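For context on that footprint: a minimal highly available OpenShift 4 cluster needs three control-plane nodes, a temporary bootstrap machine during installation, and typically at least two workers, which is roughly where a "seven servers for Hello World" figure comes from. The app itself is trivial by comparison; a sketch using the `oc` CLI (project and image names are illustrative):

```shell
# The application is three commands...
oc new-project hello-demo
oc new-app quay.io/openshift/origin-hello-openshift
oc expose service/origin-hello-openshift

# ...but the cluster underneath it is not: the control plane alone is three nodes
oc get nodes
```

The asymmetry is the reviewer's point: deployment effort is small, but the mandatory redundant infrastructure under even a toy app is not.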

We've had a very difficult time going from version 3 to 4. We need to go to version 4 because of multiple network segments that may be running in a container and how we organize our applications. It's very difficult to have applications from different domains in the same container cluster. We've had a lot of problems with that. I find it to be an overcomplicated environment, and some of the other simpler containers may very well rise above this. 

For how long have I used the solution?

It has probably been in use in the organization for about a year and a half.

What do I think about the stability of the solution?

It is fine. I've not heard anything negative about either the performance or the reliability.

What do I think about the scalability of the solution?

Scalability is one of the primary reasons for going with a containerized environment like this. I have not heard that we've had any restrictions there, and I would be shocked and remarkably disappointed if we did. We have not hit any scalability issues yet.

How are customer service and support?

I personally do not have any experience with them. I'm quite sure our low-level implementers do. 

Which solution did I use previously and why did I switch?

They were just different JBoss containers. It really wasn't a containerized environment. We're looking at some of the AWS solutions.

How was the initial setup?

I didn't do the initial setup. Some other people did that. We're all pretty uber geeks. So, I'm quite sure that we'd be able to figure it out naturally. Because it's a fully-featured and complex environment, you'd have to bone up on OpenShift to figure out how to install it properly, but I wouldn't expect it to be onerous.

Our implementation strategy was to start moving applications to be containerized and then implement them in OpenShift. We were moving to OpenShift running on our own ECS on Amazon, but we have a lot of on-prem as well.

We're still working out the kinks. A part of that is our own dysfunction in terms of how we organize our apps, and then there is the problem with running apps from different domains in the same container. Some of those are our own self-imposed problems, but some of it is due to the OpenShift complexity.

What about the implementation team?

We definitely brought in different experts; for the most part, we went out and hired people with the expertise, and now they're employees. I'm quite sure there were consultants in there as well, but I don't know that offhand.

What was our ROI?

We have not yet seen an ROI.

What's my experience with pricing, setup cost, and licensing?

It depends on who you're talking to. For a large corporation, it is acceptable, other than the significant infrastructure requirements. For a small organization, it is in no way suitable, and we'd go for Amazon's container solution.

Additional costs are difficult for me to articulate because ours is a highly-complex environment even outside of it.

What other advice do I have?

Ensure that you need all of the features that it has because otherwise, it's not worth the investment. Be careful what version you're getting into because that can be problematic to change after you've already invested in both the training and the infrastructure.

I would rate it a seven out of ten. Considering some of the problems we've had, even though some of them are self-imposed, I would hope that a containerized environment would be flexible enough to give us some options there.

Disclosure: I am a real user, and this review is based on my own experience and opinions.