Buyer's Guide
Process Automation
June 2022

Read reviews of Microsoft System Center Orchestrator alternatives and competitors

Jason Fouks - PeerSpot reviewer
Sr Technical Engineer at Compeer Financial
Real User
Top 20
We can automate just about anything
Pros and Cons
  • "ActiveBatch's Self-Service Portal allows our business units to run and monitor their own workloads. They can simply run and review the logs, but they can't modify them. It increases their productivity because they are able to take care of things on their own. It saves us time from having to rerun the scripts, because the business units can just go ahead and log in and and rerun it themselves."
  • "They have some crucial design flaws within the console that still need to be worked out because it is not working exactly how we hoped to see it, e.g., just some minor things where when you hit the save button, then all of a sudden all your job's library items collapse. Then, in order to continue on with your testing, you have to open those back up. I have taken that to them, and they are like, "Yep. We know about it. We know we have some enhancements that need to be taken care of. We have more developers now." They are working towards taking the minor things that annoy us, resolving them, and getting them fixed."

What is our primary use case?

It does a little bit of everything. We have everything from console apps that our developers create to custom jobs built directly in ActiveBatch, which go through the process of moving data off of cloud servers, like SFTP, onto our on-premise servers so we can ingest them into other workflows, console apps, or whatever the business needs.

How has it helped my organization?

We use it company-wide. As a financial organization, we rely on a lot of data from parent companies that process transactions for us. We are able to bring all of that data into our system, no matter which department it is for, covering everything from IT maintenance tasks, such as clearing out the logs in IAS on the Exchange Server, to moving millions of dollars with automation.

If there is a native tool for it, then we try to use it. We have purchased the SharePoint, VMware, and ServiceNow modules. Wherever we find that we can't connect because the native APIs aren't there, we have been using PowerShell to strip those rows out into an array of variables, which has worked pretty well. So far, we have not found a spot where we can't hook in to have it do the tasks that we are asking of it.
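
To illustrate that pattern (a minimal sketch, not the reviewer's actual script), pulling rows from a system that has no native plugin and stripping them into an array might look like the PowerShell below; the endpoint and field names are hypothetical placeholders.

```powershell
# Hypothetical endpoint: stands in for whatever system lacks a native
# ActiveBatch plugin.
$response = Invoke-RestMethod -Uri 'https://example.internal/api/rows' -Method Get

# Strip the rows out into an array of objects for downstream job steps.
$rows = @($response | ForEach-Object {
    [pscustomobject]@{
        Name  = $_.name
        Value = $_.value
    }
})

# Emit the array in a form a wrapping job step can consume, e.g., by
# parsing stdout or reading a file written here.
$rows | ConvertTo-Json
```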

We have only really tapped into the SharePoint native integration because we haven't gotten to the depths of the ServiceNow and other integrations. However, being able to use the native plugins has been very helpful. It saves us from having to write a PowerShell script to get the functionality we are looking for. We are well practiced at writing those because, within the old process we used, we did a lot in PowerShell since the old tool just wouldn't do what we asked of it. We are finding that a lot of processes within ActiveBatch are now replacing those PowerShell scripts because ActiveBatch can just do it. We don't have to teach it how.

We can do things within ActiveBatch without having to teach it everything. That is the biggest thing we have been learning with it: It is easy to use and its workflows work a lot better. The other day, we ran into a problem where Citrix ShareFile, which is one of our SFTP locations, kept disconnecting from the SFTP server. It was just a timeout. ActiveBatch includes a process where we can trap those connection failures and have it self-heal enough to get the data off of the SFTP server. Discovering ActiveBatch's self-healing functionality has been a lifesaver for us.
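
ActiveBatch provides that retry behavior natively, but as a rough sketch of the equivalent pattern written out by hand, it amounts to a retry loop with backoff; `Get-SftpFiles` below is a hypothetical stand-in for the actual transfer step.

```powershell
# Retry a flaky SFTP transfer a few times before failing the job.
$maxAttempts = 5
for ($attempt = 1; $attempt -le $maxAttempts; $attempt++) {
    try {
        # Hypothetical transfer function; not a real cmdlet.
        Get-SftpFiles -Server 'sftp.example.com' -RemotePath '/outbound' -LocalPath 'D:\ingest'
        break                                      # transfer succeeded; stop retrying
    }
    catch {
        if ($attempt -eq $maxAttempts) { throw }   # out of retries; surface the failure
        Start-Sleep -Seconds (30 * $attempt)       # back off, then reconnect and retry
    }
}
```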

We have so many different processes out there with so many different schedules. My boss looked at it one day and noticed there were somewhere between 1,000 and 2,000 processes a day. The solution gives us a single pane of glass to see everything in one spot. We have four execution agents constantly running, so there are processes happening at all times of the day and night.

We actively monitor all our ActiveBatch processes using SolarWinds Orion. If a process doesn't run, or a service is not running on one particular execution agent, Orion will alert us. I don't think we have set up anything too major within ActiveBatch itself to figure out what is going on. I know that we have HA across everything: we are running four execution agents and two job schedulers. With all of that put together, it fails over to the other location if there is a problem with one of the sites.
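
As a rough illustration of the kind of check a monitor like Orion performs against an execution agent (a sketch only, with a placeholder service name rather than the agent's real one):

```powershell
# Placeholder display name; the real ActiveBatch agent service name
# will differ per installation.
$svc = Get-Service -DisplayName 'ActiveBatch*Agent*' -ErrorAction SilentlyContinue |
    Select-Object -First 1

if (-not $svc -or $svc.Status -ne 'Running') {
    # A monitoring tool would raise an alert here instead of a warning.
    Write-Warning 'Execution agent service is not running.'
}
```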

What is most valuable?

The most valuable feature is being able to ingest some PowerShell scripting into variables that we can then utilize in loops. Our first rendition of getting PowerShell into variables was pulling Active Directory computers using a PowerShell script and the Active Directory PowerShell modules, then dumping that into a SharePoint list, because we keep an inventory of all our servers. It was through the process of understanding how to get something out of PowerShell into an array, and then processing that into something else, that it became useful down the road.
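
A rough sketch of that inventory flow, assuming the ActiveDirectory and PnP.PowerShell modules are available (the review names only the Active Directory modules, and the site URL, list name, and columns below are placeholders):

```powershell
Import-Module ActiveDirectory

# Pull the server objects out of AD into an array.
$servers = Get-ADComputer -Filter 'OperatingSystem -like "*Server*"' -Properties OperatingSystem |
    Select-Object Name, OperatingSystem

# Push each row into the SharePoint inventory list.
Connect-PnPOnline -Url 'https://contoso.sharepoint.com/sites/it' -Interactive
foreach ($server in $servers) {
    Add-PnPListItem -List 'Server Inventory' -Values @{
        Title           = $server.Name
        OperatingSystem = $server.OperatingSystem
    }
}
```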

There are some things that ActiveBatch can't do natively, which is no fault to them. It's just the fact that we're trying to do things that just don't exist in ActiveBatch. With us being proficient in PowerShell scripting, we were able to extend the ActiveBatch environment to be able to say, "We'll run this PowerShell script and get the array that we're looking for, but then take that and do something native within ActiveBatch that can ultimately meet our goals."

The ease of use has been pretty good. I have been able to create workflows and utilize different modules within the job library, which has worked out really well. 

ActiveBatch's ability to automate predictable, repeatable processes is good. It does that very nicely. A lot of what we do is pull files down from SFTP servers and put them onto our local file servers. Based on that, we are able to run a console app that our developers have written, which is a lot more complicated, for doing various tasks. Our console apps are easy to set up because we have templates already drawn up. So, if we just right-click in our task folder, we can quickly create an item there that we can start up for an automation feature. Just being able to use PowerShell to drop variables into the ActiveBatch process has worked really well now that we understand it.

What needs improvement?

I know that there are some improvements that I have brought back to the development team that they want to work on. The graphical interface has some hiccups that we have been noticing on our side, and it seems a little bit bloated. 

While the console works well, it has some design flaws that still need to be worked out because it is not working exactly how we hoped, e.g., minor things where, when you hit the save button, all of your job library items suddenly collapse. Then, in order to continue with your testing, you have to open those back up. I have taken that to them, and they said, "Yep. We know about it. We know we have some enhancements that need to be taken care of. We have more developers now." They are working on taking the minor things that annoy us, resolving them, and getting them fixed.

For how long have I used the solution?

We did a proof of concept back in April.

We are in the process of migrating all our old processes over to ActiveBatch. The solution is in production, and we do have workloads on it.

What do I think about the stability of the solution?

It is pretty stable. Now that we have worked through the details and ensured that we can do a failover to let the process do what it needs to do, we haven't seen any problems with it.

We are about 90 percent done migrating our processes.

What do I think about the scalability of the solution?

Right now, we have four execution agents, and they are sitting pretty idle for the most part. If we find that we are starting to see taxed resources on our execution agents, then we have the capability of spinning up more. So, we could run automation across hundreds of servers if we wanted to.

There are only three of us who have been working with ActiveBatch, which is a good fit. We have one admin who is a developer first, then admin second. Then, there are two of us, who are server people first and developers second. All three of us manage all the different job libraries out there.

In the entire organization, there are about 1,300 of us using the different processes. The people who are most hands-on are in the IT department, mainly because we are directly involved with all the different console apps. We actually have a significant number of console apps, because SCORCH couldn't do some of the things that ActiveBatch can do, so our developer teams went in and created console apps. At this point, all that ActiveBatch really needed to do was run an executable, capture its exit code, and let us know if it fails. Some other business units are involved a bit more along the way due to the movement of money, for example.
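
That exit-code contract is simple to sketch in PowerShell (the executable path and arguments below are placeholders):

```powershell
# Run the console app and propagate its exit code so the scheduler
# can mark the step as failed on any non-zero result.
& 'C:\Apps\NightlyProcessing.exe' '--mode' 'nightly'

if ($LASTEXITCODE -ne 0) {
    Write-Error "Console app failed with exit code $LASTEXITCODE"
    exit $LASTEXITCODE
}
exit 0
```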

It is heavily used, at least in terms of what is out there. There is a lot of interest in adopting it in the future, along with a lot of processes that people are really pushing to get put into ActiveBatch. They still have the mentality that a lot of it needs to be done as a console app. However, with us just ending the migration phase, we are trying to get everything moved over so we can shut down the old servers. Then, the next step, probably in 2021, will be focusing on what ActiveBatch can do without us having to write a console app. Seventy-five percent of the time, we could have ActiveBatch do it natively. It is just a matter of getting a lot of the IT developers to feel comfortable adopting it as a platform.

How are customer service and technical support?

I am working with their tech support. We have a customer advocate with whom we have been working, and she has been awesome. We have had some issues where tech support will suggest one thing, and we sit there scratching our heads, going, "Do we really need to go that complicated on a solution?" Then, we reach out to our customer advocate, who comes back saying, "No, this is how you really need to do it. I am going to take this ticket and go train that tech support person so, in the future, you don't get the answer you did." Therefore, their tech support is a bit rough around the edges, but I foresee that in the next six months to a year, they will be on their game and able to provide exactly the right answers within the timeframe we expect.

Which solution did I use previously and why did I switch?

We see ActiveBatch as the Center of Excellence for all things related to automation for our business. It is the best solution that we have had compared to what we were running before, which was Microsoft System Center Orchestrator (SCORCH). We don't want to have a whole bunch of different solutions out there. Being able to have one solution that can do all our automation is the best way to do it.

We switched over because of the intelligence. We were right in the middle of trying to decide whether we were going to upgrade SCORCH to the latest version or if it was time for us to go a different path. As we started going down through the different requirements that we needed SCORCH to do, we decided that it was time for us to go in a different direction. SCORCH had to be taught everything you wanted it to do, whereas there are a lot of processes that ActiveBatch will just go ahead and handle.

The performance is about the same between the two solutions in terms of doing what they are supposed to do. Where we really have the advantage is that we don't have to reinvent the wheel, e.g., triggers within ActiveBatch are native and can be set up pretty quickly and easily. With SCORCH, we struggled to get a schedule set up for a trigger or to rely on constraints. For example, if a file doesn't exist, then you really can't do anything. In SCORCH, we had to teach it that if it doesn't see a file, it has to hold on and wait. Whereas ActiveBatch just says, "Oh, okay. I know how to do that."

In certain cases, ActiveBatch has resulted in an improvement in workflow completion times because of the error retries. We can take care of failures by telling ActiveBatch that if it has a problem, it should go ahead, try again, and adjust. If a job ran at two o'clock in the morning and failed with SCORCH, we always had to go back and figure out what happened and how to get it to run again. It might have been something as trivial as no network connection because one of our upstream providers had an outage. With ActiveBatch, we have been able to build in that self-healing or error detection. Once it sees the connection, it can go ahead and correct the problem. For example, the Internet might go down from 2:00 AM to 2:15 AM; by 2:30 AM, it is all back up and running, and ActiveBatch can go ahead and finish the task. With SCORCH, we would find that it failed, and then at seven o'clock in the morning we would get to troubleshoot whatever issues had come up.

A lot of times, troubleshooting did not take very long; it depended on the process. If it was something downloaded from SFTP, several other steps relied on it, which could delay things because we had to walk back through five different processes that would normally have been scheduled to run at 3:00 AM off of the 2:00 AM download. Now, if the Internet is out between 2:00 AM and 2:15 AM, ActiveBatch heals that first process before the second one runs at 3:00 AM. We no longer have to do any added troubleshooting because step one didn't work and step two failed, with nobody able to look at it until we got up and started that day.

How was the initial setup?

The initial setup was straightforward.

It took two to three hours to deploy, by the time we had all the intricacies done that we wanted.

We knew that we wanted it to be highly available across two data centers for DR purposes, because some of these processes move millions of dollars between accounts (in various pieces, for wire transfers). HA was the big thing that we made sure our strategy was based around.

The only other strategy was the fact that we have multiple environments that we go through to test our solution out first. When we are done, we export/promote it up to the production environment.

What about the implementation team?

The good part was that we really didn't have to do the install because we ended up getting a proof of concept setup with one of their engineers. So, we didn't have to do the initial setup ourselves, but we did build two other environments: one in our test environment and one in our development environment. Based on the fact that we walked through it the first time with the proof of concept, I was able to go back and reproduce every step that they walked us through on day one to build out the test and dev environments.

What was our ROI?

I have absolutely seen ROI. Coming from the admin point of view, it has streamlined the process of being able to just implement something instead of having to teach the software how to do its job. I know that I have implemented a couple of different processes that were not part of the migration, and they have been fairly easy to deploy because we know what the business unit wants to do. It takes me about 20 minutes to get a process perfected on my side; then I can have the developers run with it, test it, and figure out what their code was doing to make it happen. So, the biggest thing is that it is easy to use.

I know that there are enough processes out there that it's worth a gold mine. We can automate just about anything that we would ever want to. If we wanted the lights to turn on at a certain time, we could go ahead and turn the lights on at a certain time, and it would just happen.

ActiveBatch's Self-Service Portal allows our business units to run and monitor their own workloads. They can simply run and review the logs, but they can't modify them. It increases their productivity because they are able to take care of things on their own. It saves us time from having to rerun the scripts, because the business units can just go ahead and log in, then rerun it themselves. 

This solution improves our job success rate percentage. The biggest thing is having built-in capabilities of error detection, retries, and the ability to self-heal.

ActiveBatch has saved us man-hours. We don't have to rerun some of these scripts on behalf of the business units, and if a script fails, it can go ahead and self-heal, fixing itself. That saves troubleshooting time we never used to account for, while helping our business units.

What's my experience with pricing, setup cost, and licensing?

The pricing was fair. 

There are additional costs for the plugins. We have the standard licensing fees for different pieces, then we have the plugins which were add-ons. However, we expected that.

Which other solutions did I evaluate?

We had a consultant come in and try to share with us all the different tools. However, there isn't a lot of competition out there for automation capabilities.

A major component was that the vendor is thinking five years ahead, looking to future-proof our business. When we were making our decision, we were ready to either upgrade SCORCH or go down a different path. We wanted to partner with an organization that had a long-term plan. We didn't want to revisit this one to three years down the road.

What other advice do I have?

We have been able to learn it pretty quickly. We were thrown right in after we got the proof of concept up and going. We had a couple of use cases drawn up and implemented, and they showed us how to do it. Our boss bought the software and said, "Ready, set, go. We're going to start migrating all these different processes over." We really didn't get time to learn it. Based on what we knew about the previous application we were using for automation, we were able to step right in and do the best we could. We have been doing weekly one- to two-hour sessions where the three of us get together, work on understanding the solution, and try to work through all the details. We have picked it up pretty quickly without much training or prior knowledge.

We have gone through and given the business units a demo of the possibilities, for sharing knowledge and ideas. At the end of the day, there is a team of three of us who actually implement all the processes, so we keep to a standard. However, to give a business unit an idea of the functionality and how we could best utilize it, we at least give them the 30,000-foot view of what ActiveBatch could do, and then we build it.

We mainly use it for console apps, but we haven't explored them in real depth. I know that we could get even deeper. At some point down the road, a lot of the console apps that our developer teams create will more than likely become native ActiveBatch processes which we will no longer need the console apps to run.

For the admins, the biggest lesson learned is to spend those first 30 days going through their online Academy, which is available on their website. The biggest struggle we had was trying to do this migration without knowing all the different features of the software. We ran into trouble where we would try to implement something (and we wanted to do it by best practices because we wanted to get it right the first time), but there were features we were discovering along the way that we had no idea about until, all of a sudden, we needed them. Then, we would go back and say, "Oh, you know what? On that last procedure we just implemented, it would have been really helpful if we had known about that feature at the time."

If we had taken the first 30 to 60 days, or even a week-long crash course in ActiveBatch development, to get the highlights of everything the software can do, that would have helped us immensely in making sure we knew what was going on and how it worked. We probably would have implemented some of our migrations a little differently than we did. So, we will have to circle back, revisit some of those processes, and reinvent them.

Take the time to learn the solution. Make sure you understand the software, at least at a higher level (maybe not the 30,000-foot view, but the 1,000-foot view), and get through the Academy first. Once you get through the Academy, you can start implementing the job libraries and planning how you want everything laid out. Even after nine months of working with the software, we are still discovering features that we wish we had known about coming into the migration.

I would probably rate the software as a nine and a half or 10. I would rate the tech support as probably a six, but they are improving immensely. If I had to give it an overall score, I would go with an eight (out of 10).

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Architect at a financial services firm with 1,001-5,000 employees
Real User
Provides a holistic view of jobs, a nice interface, and offers lots of plugins
Pros and Cons
  • "The Control-M interface is good for creating, monitoring, and ensuring the delivery of files as part of our data pipeline. There's a wealth of information in both the full client, as well as the web interface that they have. Both are very easy to use and provide all the necessary material to understand how to do various tasks. The help feature is very useful and informative and everything is very easy to understand."
  • "Some of the documentation could use some improvement, however, it gets you from point A to point B pretty quickly to get the solution in place."

What is our primary use case?

We primarily use the solution for automation, orchestrating and automating the workloads, and being able to schedule tasks. Prior to Control-M, we were running jobs manually: there was either a scheduled task in Windows Task Scheduler, or a script laid out that someone would have to run through manually on a daily basis.

We learned about Control-M and felt that it could take over that process and have it automated, while also providing some monitoring and notifications so that if something did fail, we could easily be notified and keep track of it.

How has it helped my organization?

It provides a holistic view of the jobs that are scheduled to run. We haven't gone to full production on it yet; hopefully, we'll be in production by July or August of this year. That said, from what we can see so far, it's going to free up time for the staff who have been running these tasks manually overnight. Now, if someone gets notified of an issue, they can simply address it. In the long run, it will free up time and resources to focus on other tasks.

What is most valuable?

I like the interface, including how I can see everything and how I can put the jobs together. Depending on experience, I can either use the GUI or use the command line to create jobs based on JSON definitions. It provides that flexibility both for someone who has no experience using Control-M and for someone who's a full-blown developer and can get very complex with creating these jobs. Generally, it provides a good interface for every level of experience.
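
As a hedged illustration of that JSON route (a sketch only; the folder, job, server, host, and script names are all placeholders), a Control-M Automation API job definition looks roughly like this:

```json
{
  "NightlyLoads": {
    "Type": "Folder",
    "ControlmServer": "ctm-prod",
    "LoadSalesData": {
      "Type": "Job:Command",
      "Command": "python /opt/etl/load_sales.py",
      "RunAs": "etluser",
      "Host": "app-server-01"
    }
  }
}
```

A definition like this can then be validated and submitted from the command line with the Automation API's `ctm` utility, for example `ctm build` to validate it and `ctm run` to execute it.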

Control-M doesn't really process data, as far as I can tell; it orchestrates other scripts. From what I understand, Control-M doesn't ingest or analyze any data itself. It's a tool to help with the processing of data on different platforms. I can tell it to run a script on one server to send the data over to a SQL server or a different platform, Power BI for example, and then run a script on Power BI so that it can ingest the data when it gets there and do what it needs to do. Once that's finished, I can have another platform put a dashboard together based on when that data is available.

Once one understands the process of how it functions, it's pretty simple and straightforward to create, integrate, and automate the pipelines. There is a learning curve to understand how it all works, all the components, and all the requirements for parameters and different options. However, it's pretty simple once someone has a basic understanding of how it all works.

The Control-M interface is good for creating, monitoring, and ensuring the delivery of files as part of our data pipeline. There's a wealth of information in both the full client, as well as the web interface that they have. Both are very easy to use and provide all the necessary material to understand how to do various tasks. The help feature is very useful and informative and everything is very easy to understand.

It's great that Control-M orchestrates all our workflows, including file transfers, applications, data sources, data pipelines, and infrastructure, with plugins. There are a lot of plugins, and we haven't used all of them yet. Primarily, we've only used the file transfer plugin, the Azure file service, and Azure Functions. The developers have mainly used those to put the various tasks and workloads in place. While we haven't fully utilized everything in Control-M yet, we're learning how to use the various functionalities and transitioning from our legacy scripts and data sources.

What needs improvement?

Some of the documentation could use some improvement, however, it gets you from point A to point B pretty quickly to get the solution in place.

For how long have I used the solution?

I've been using the solution for almost a year. 

What do I think about the stability of the solution?

It seems stable. I haven't rolled the solution out to a very large environment yet. The solution we're working on right now seems to be working fine. All the issues we've seen have to do with us figuring out connectivity between Control-M and the cloud services, however, I haven't had any experiences with issues around stability with Control-M.

What do I think about the scalability of the solution?

Right now, it's a small deployment and we have it in four environments. We have it in our dev, QA, UAT, and production environments. Right now, there are two application teams that are using Control-M, however, we have another two or three teams that are looking to get onboarded.

It's pretty scalable. I haven't done a deep dive into the scalability, and we haven't yet identified a need to scale out. It seems scalable, but I can't speak from personal experience with it yet.

How are customer service and support?

It was a challenge to get direction on how Control-M should be implemented. As we learned about new requirements from the customer, implementing those with help from the engineers at BMC was hard. The third-party contractors were one issue; however, when I escalated it to our customer representative, he was able to get me in touch with a dedicated BMC engineer, and she was able to give me the information I needed and provide the context and direction on the best approaches. I wasn't able to use the third-party engineer who was assigned to us; however, the internal resource was a great partner in helping move this along.

How would you rate customer service and support?

Neutral

Which solution did I use previously and why did I switch?

We were previously using Microsoft and internal tools, the basic Windows tools that were built in.

We went with this product to centralize the deployment and to centralize the management of all of the workloads.

How was the initial setup?

Some of the installation components were really complex. I'm more on the infrastructure side of Control-M: I deploy it and get it ready for functional use so that the application developers, script developers, and workload developers can easily access it. It took me three weeks to figure out the requirements for getting the SSL certificates, as the documentation wasn't really clear on what those requirements were. Once we figured it out, it was simple; however, the support staff couldn't give me the right information to understand what was required.

It seemed like there was a gap in expectations on what was required for certificates. In terms of the installation overall, it wasn't clear what each variable or what each configuration point was referring to until we were well versed with how everything functioned. Then we were able to say, "Oh, this is what that field meant and this is what was required here." However, during the installation process, there was very limited information on what was being asked at each configuration point.

In terms of strategy, there was a challenge with the customer. I was the third or fourth resource that was brought onto the project. The first three people that handled it, internally and externally, had trouble figuring out what the expectations were. I was handed the baton at the last moment. I had to tie up loose ends and try to get this up and running for the CIO before he started to send up red flags to BMC.

What about the implementation team?

We had an integrator; however, setting up the timing with the integrator was a challenge. What I got from my company and the general expectations weren't clear. When I did get clarification, I wasn't able to get hold of the contractor since he required one to two weeks of lead time. We then ran behind based on the lack of information I got. Setting up time and requirements was a challenge.

I'm also a contractor working for a customer. Being a third party, trying to work with another third party with minimal information from the client, was just a challenge all around.

What's my experience with pricing, setup cost, and licensing?

There was another team handling the pricing. I'm not sure of the exact costs. 

Which other solutions did I evaluate?

Our customer chose this solution. 

What other advice do I have?

We do not use the Control-M Python client and cloud data service integrations with AWS and GCP and we do not use Control-M to deliver analytics for complex data pipelines yet.

We haven't gone into production yet, so we haven't rolled this out to all our customers. We're still testing the features and we'll be starting the UAT in two to three weeks.

Right now, we're still in the early stages of rolling everything out. We've gone through the testing in our development environment and in QA to make sure things are good. Now, we're testing performance in UAT internally, and then we'll have customer validation within a few weeks before we go into production.

The solution will play a very critical role in day-to-day operations. However, it'll be at least two months before it becomes critical. Right now, it's still being implemented and evaluated.

It is pretty flexible with various cloud solutions, working with different cloud technologies and platforms. I would say potential users should take a look at it. It provides a lot of flexibility, especially with the application and integration component that it has. The developers really seem to be able to get what they need out of the AI or the application and into an integrated product or feature set.

Before installing Control-M, have a sit down with the Control-M solutions engineer and make sure you share with them all of the details of what you'd like to accomplish before deploying the solution. My client just said, "We want this" and they didn't give us the details about what they were looking for. We ended up having to redesign a few features, as those items were not clarified as part of the installation. When I was brought on board, the customer didn't mention they wanted HA, so that came later. At that point, we had to reinstall and add more servers.

The person who signed the contract was focused on MFTE, which is the enterprise managed file transfer tool. However, the architecture team later decided not to use that and to go with another tool. Given that decision, the client could have gone with the SaaS version of Control-M instead of the on-premises solution and saved a lot of time, money, and hassle on deploying the on-premises infrastructure. So my advice to others is to make sure that the needs and the functional usage of the tool are clearly identified before purchasing or implementing it.

I'd rate this tool ten out of ten. It does what it says it does. 

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Microsoft Azure
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Thomas Chan - PeerSpot reviewer
Assistant Manager System at Pro-Technic Machinery Ltd
Real User
Top 20
Makes it easy to manage VMs, reasonably-priced, and the technical support is helpful
Pros and Cons
  • "The most valuable feature is how easy it makes it to manage the VM."
  • "When the SSO certificate needs to be renewed, the upgrading and testing are quite complicated."

What is most valuable?

The most valuable feature is how easy it makes it to manage the VM. In our case, that is quite good.

What needs improvement?

The product is not quite easy to use and needs improvement in that regard.

When the SSO certificate needs to be renewed, the upgrading and testing are quite complicated. We faced this issue just three months ago.

For how long have I used the solution?

I have been working with vCenter Orchestrator for between three and four years.

What do I think about the stability of the solution?

For what we use it for, it has been quite stable.

What do I think about the scalability of the solution?

It is a scalable product, and we have between 60 and 70 end-users. We do not plan to increase usage within the next few years.

How are customer service and technical support?

Technical support is quite good and quite helpful.

Which solution did I use previously and why did I switch?

Prior to vCenter Orchestrator, we did not work with a similar solution.

How was the initial setup?

There are a lot of steps that have to be done using the command line and the documentation doesn't cover the entire process. The knowledge base was not open to the public so it was not easy to resolve the issue that we were having. We had to ask for support.

What about the implementation team?

We had support from the vendor for our deployment.

Two people are in place to support this product and perform maintenance.

What's my experience with pricing, setup cost, and licensing?

The price is reasonable and was one of the reasons that this product was selected.

Which other solutions did I evaluate?

We evaluated Microsoft System Center Orchestrator at the same time, but vCenter is the solution that was chosen. One of the reasons that we chose it was the price.

What other advice do I have?

I would rate this solution a nine out of ten.

Disclosure: I am a real user, and this review is based on my own experience and opinions.