I use the solution in my company to create an end-to-end flow for the clients' pipelines and to host different applications in different languages.
The tool improves how my company functions and makes continuous delivery seamless. Once the pipeline is set up and connected to your branch, you can push changes to different branches at any time. The tool also provides an easy way to scan the setup packages and the applications we work with in our company.
The solution's most valuable feature is migration. The tool makes it easy to deliver software; you don't need to maintain separate scripts to do anything, because you can package everything in the tool.
AWS continuously improves the tool's UI.
One downside is what happens when you push a change and something is missed, or some variables are not set correctly. When the AWS CloudFormation stack then goes for an update, it returns a failed status. Most of the time, you have to delete the stack and rerun your pipeline to create a new stack. If a stack fails because a pushed change didn't match up well, you can't reverse those changes with the AWS pipeline.
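The recovery path described above can be sketched as a small check. This is a minimal, illustrative sketch, not AWS tooling: the status strings are real CloudFormation stack statuses, but the helper and its delete-and-rerun policy are assumptions based on the workflow described here.

```python
# Sketch: deciding when a CloudFormation stack must be deleted and
# recreated by rerunning the pipeline. The status names are genuine
# CloudFormation stack statuses; the helper itself is illustrative.

# Statuses from which a stack cannot simply be updated again; the usual
# fix, as described above, is to delete the stack and rerun the pipeline.
UNRECOVERABLE_STATUSES = {
    "ROLLBACK_COMPLETE",        # initial creation failed and rolled back
    "ROLLBACK_FAILED",
    "UPDATE_ROLLBACK_FAILED",
    "DELETE_FAILED",
}

def needs_recreation(stack_status: str) -> bool:
    """Return True when the stack should be deleted and recreated."""
    return stack_status in UNRECOVERABLE_STATUSES
```

A stack sitting in `ROLLBACK_COMPLETE`, for example, cannot be updated in place, which matches the delete-and-rerun experience described above.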
It is not that our company lacks a backup solution to save everything in case something goes wrong. The issue is not a missing backup but rather the configuration, based on how AWS CodePipeline is set up and how AWS CloudFormation works.
I have been using AWS CodePipeline for over two years.
It is a very stable solution as long as the developers don't add anything that will end up breaking the pipelines. If we communicate with the developers about everything we do in our company, then breakages do not really happen.
On Linux, you may run into some limitations on the number of pipelines you can create through the tool. If you want more pipelines than the limit allows, you just add a new stack file. One stack file can accommodate a particular number of stacks, so you add a new file, reference each of the new pipelines from that stack, and then continue creating pipelines.
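The split-across-stack-files approach described above might look something like the following parent template. This is an illustrative sketch only; the resource names and bucket URLs are assumptions, though `AWS::CloudFormation::Stack` is the real nested-stack resource type.

```yaml
# Sketch of a parent template referencing additional stack files, so new
# pipelines can live in a fresh file once the current one is full.
# Resource names and TemplateURL values are placeholder assumptions.
Resources:
  PipelinesStackA:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/example-bucket/pipelines-a.yaml
  PipelinesStackB:   # added when the first stack file reached its limit
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/example-bucket/pipelines-b.yaml
```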
Everyone has their own deployment strategy, depending on their needs. But I think it's pretty popular, especially when you are doing AWS-type deployments. The tool is used by the developers, including the front-end and back-end ones, along with the DevOps engineers.
Currently, I think we are quite comfortable with whatever we have, but I think with time, we may extend its use depending on the needs of our management.
The solution's technical support responds whenever you have an issue, especially when you need something sorted out from their side. The engineers are always ready to help. If you have made a wrong configuration, the technical support team will help you restore uptime, and they provide support when you are down for an extended period. I rate the technical support a nine out of ten.
When you are done setting it up, sometimes a developer pushes a change and ends up missing something or adding a variable that doesn't exist in the stack. Then the stack won't update, the change won't go through, and the pipeline will fail.
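The failure mode described above can be caught before a push by comparing the variables a change supplies against the parameters the stack's template actually declares. This is a hedged sketch; the function names and data shapes are assumptions, not part of CodePipeline itself.

```python
# Sketch: pre-push validation of pipeline variables against a stack's
# declared template parameters. Names and shapes are illustrative.

def missing_parameters(declared: set, supplied: dict) -> set:
    """Parameters the template declares but the change does not supply."""
    return declared - supplied.keys()

def unknown_parameters(declared: set, supplied: dict) -> set:
    """Variables the change supplies that do not exist in the template."""
    return set(supplied) - declared
```

For example, if the template declares `Env` and `InstanceType` but the change supplies `Env` and a nonexistent `Color`, the checks flag `Color` as unknown and `InstanceType` as missing, before the stack update fails.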
The initial deployment process is not very easy because you have to set up so many things. You need quite a number of files when you are setting up the CI/CD pipelines. You also need to know the approach you want to use so that you don't end up with a strategy that doesn't align with the expectations or requirements that have been set.
The solution is deployed on the cloud. You can deploy the application on VMs. For most of the production setup, it has to be a private cloud. For other test purposes, you can go to the public cloud to deal with different types of instances. If there are front-end applications that need to be accessed by the users, then it is better to go with the public cloud option as long as one does not have sensitive data.
The deployment strategy depends on the type of front-end and back-end applications. You have branches you deploy from, and then a main area where you consolidate all the configurations added in the individual branches when you execute the creation of the pipelines and the deployment of the applications. For a well-prepared user, the tool is easy to deploy.
To deploy the tool, you just need to discuss it with your front-end application developer and hire a new back-end developer for the two applications that you need to deploy if you are doing both front-end and back-end there.
I don't have to maintain the tool once the pipeline is set up. As long as one observes exactly what the setup is like, which involves not adding new variables without communicating to the DevOps engineer, everything is pretty seamless.
For the maintenance part, you basically need to ensure that if any new variables need to be added, the developers communicate the variable names and values to you. If you are doing deployments on an EC2 instance, you also need to check whether there are any changes in the libraries and update your Amazon Machine Images (AMIs), so that when your pipeline runs, there is no clash between the different versions of the libraries.
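The keep-AMIs-current step above amounts to always selecting the most recently built image. As a minimal sketch, assuming a list of image IDs with their creation dates (the data shape and function name are my assumptions, not an AWS API):

```python
# Sketch: pick the most recently created AMI so pipeline runs don't
# clash over library versions baked into older images.
from datetime import date

def latest_ami(images: list) -> str:
    """Given (ami_id, creation_date) pairs, return the newest AMI ID."""
    ami_id, _ = max(images, key=lambda item: item[1])
    return ami_id
```

In practice the same comparison would be applied to the image metadata you get back from AWS when listing your account's AMIs.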
AWS charges you based on the number of pipelines you have and how active they are, and the root account user can see all the price-related metrics. The tool offers the best value for the money one pays for it.
The configuration and setup of the tool are strong areas.
For now, I think the tool has more advantages than disadvantages.
The tool is very easy to use for the setup, deployment, and continuous integration processes, so it is an effective product. There is not much need to maintain it. Initially, you need to know exactly how you want to map everything.
I rate the tool a nine out of ten.