What is our primary use case?
I've been with the same company for 22 years. It started out purely as a batch-processing solution. That's what we originally got it for back in the day, to help us automate what was being done manually or through homegrown tools and scripts, et cetera. The use cases evolved through the years. Now, we use it to orchestrate workflows that touch traditional data centers, go out to the cloud, and bring the results back.
From one spot, we have a single pane of glass. Like many companies, our systems are getting more complex and more diverse, with cloud and edge computing, containerization, et cetera. However, we have one place where we can go and look and see what's going on. If something happens, we can check what happened and where it happened. Today, we're dependent upon a lot of services and cloud technology that sometimes we don't know the ins and outs of.
A big challenge is making sure certain things run daily or on a periodic basis. That really was the driving use case. We had a lot of manual tasks going on, and if someone left on vacation, for example, something might not get done for two or three days, a week, or two weeks. This solution takes all that away.
The main use case was to get away from having to stare at a system or a screen, and just let things run, let the workflows flow, and only be notified if there's something wrong. That was really a big driving use case.
How has it helped my organization?
It freed people up to work on exciting work instead of mundane work. No one has to sit around and stare at a screen all day long. No one has to reinvent the wheel for the 50th or 500th time for tasks like putting a file into an S3 bucket or an HDFS Hadoop file store, because the integration is already there. It's already done for them. They just drag, drop, click, and they're done. It's freed people up to do the exciting work that is really what we should be doing anyway. No one wants to be doing boring work.
What is most valuable?
I am a big proponent of the Automation API and Jobs-as-Code. That is Control-M in the DevOps world. It opens up what has traditionally been an operations tool: developers can jump right in now, which gives them ownership and integrates with the existing DevOps tools they already have. That is a huge feature that I just love.
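To make the Jobs-as-Code idea concrete, here's a minimal sketch of what defining and deploying a job through the Automation API can look like. This is my own illustration, not an official BMC sample: the host, credentials, folder, and job names are all hypothetical, and the exact JSON schema and endpoint details should be checked against the Automation API documentation.

```python
import json
import requests

# Hypothetical Control-M/Enterprise Manager endpoint.
BASE = "https://controlm.example.com:8443/automation-api"

# Log in for a session token (per my reading of the Automation API docs).
login = requests.post(
    f"{BASE}/session/login",
    json={"username": "devops_user", "password": "********"},
    verify=False,  # demo only; use real certificates in production
)
token = login.json()["token"]

# The job definition is plain JSON -- this is the "Jobs-as-Code" part.
# It can live in Git, be code-reviewed, and flow through CI like any code.
definition = {
    "DemoFolder": {
        "Type": "Folder",
        "ControlmServer": "ctm-server",
        "ExtractFiles": {
            "Type": "Job:Command",
            "Command": "python extract.py",
            "RunAs": "batchuser",
            "Host": "app-host-01",
        },
    }
}

# Deploy the definitions file to the environment.
resp = requests.post(
    f"{BASE}/deploy",
    headers={"Authorization": f"Bearer {token}"},
    files={"definitionsFile": ("jobs.json", json.dumps(definition))},
    verify=False,
)
print(resp.status_code, resp.text)
```

Because the definition is just a file, developers can treat scheduling artifacts exactly like application code, which is what gives them that ownership.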
There's also Application Integrator. It doesn't matter if you're trying to integrate with on-premises systems, off-premises services, APIs, containers, or serverless functions; it's easy to design. You just design that integration and then it's available instantly, and that's a huge time saver.
It's rather easy to create, integrate, and automate data pipelines with Control-M, though I can only give a broad answer. It can be as easy as drag and drop, or as complex as designing the integrations yourself. With customization, you can access a data lake that your organization developed. For the typical user, on a scale of one to five, with one being easy and five being hard, you're probably looking at a two and a half. For most people, it's very easy, and it's getting easier as it's all web-based nowadays. Alternatively, it can be all code-based.
I have not explored the Python client too much; I've tinkered with it, and that's been the limit of my exploration. With the integrations, like the AWS ones, we've made extensive use of them, and they are very easy for anybody to use. The Python client has a lot of great possibilities, especially in the data science arena; sadly, we have not yet had an opportunity to really play with it.
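For anyone curious what driving Control-M from Python can look like, here is a rough, hedged sketch of ordering a deployed job and polling its status over the Automation API's REST services; BMC's open-source Python client wraps the same API at a higher level. The endpoint paths, request body, and status strings reflect my understanding of the API and should be verified against the docs.

```python
import time
import requests

BASE = "https://controlm.example.com:8443/automation-api"  # hypothetical host
HEADERS = {"Authorization": "Bearer <session-token>"}      # token from /session/login

# Order (run) a job that is already deployed. The body fields here are
# my assumption of the run/order service's parameters.
resp = requests.post(
    f"{BASE}/run/order",
    headers=HEADERS,
    json={"ctm": "ctm-server", "folder": "DemoFolder", "jobs": "ExtractFiles"},
    verify=False,
)
run_id = resp.json()["runId"]

# Poll until the run reaches a terminal state.
while True:
    status = requests.get(
        f"{BASE}/run/status/{run_id}", headers=HEADERS, verify=False
    ).json()
    state = status["statuses"][0]["status"]
    print("job status:", state)
    if state in ("Ended OK", "Ended Not OK"):
        break
    time.sleep(10)
```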
The Control-M interface for creating, monitoring, and ensuring delivery of files as part of your data pipeline has gotten better. It is not perfect. That said, it's come a long way over the years. Nowadays, most of it is web-driven, and a lot of it can be API-driven if you so wish. There's still probably some future work to be done there; however, an average user coming in and starting to use it for the first time will need a little training and handholding for maybe the first week or so. Then you can start setting them free to go out and use it on their own.
The orchestration of our data pipelines and workflows has given us a single point of view, too. Management doesn't care about the bits and pieces. A workflow or a data pipeline could have 100 or 1,000 components behind it, and management does not care about that. Management cares whether the SLA has been met or not. They want that easy-to-see red light or green light, and we can provide them with that. The solution drives self-service, and that helps: a manager doesn't have to call somebody in IT and wait around for an answer.
They can immediately get that information for themselves, consume it, and understand, "Hey, you know what, this data pipeline over here is going to be 15 minutes off our SLA today." Then they can start asking why. What I like about Control-M features such as Batch Impact Manager is that they can start doing some of that analysis themselves, for example: "This is late because the system was down for maintenance for two hours last night." That's really beneficial in today's business world.
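To show the kind of arithmetic behind that "are we 15 minutes off our SLA?" view, here is a toy illustration. This is not Control-M code, just the projection an SLA dashboard conceptually performs using historical runtimes.

```python
from datetime import datetime, timedelta

# Toy SLA projection (not Control-M code): will the pipeline finish in time?
sla_deadline = datetime(2025, 5, 1, 6, 0)   # pipeline must finish by 06:00
now = datetime(2025, 5, 1, 4, 45)

# Average historical runtimes of the remaining steps, in minutes.
remaining_steps = {"load_warehouse": 50, "build_reports": 40}

projected_finish = now + timedelta(minutes=sum(remaining_steps.values()))
slack = sla_deadline - projected_finish

if slack < timedelta(0):
    print(f"At risk: projected {-slack} past the SLA")  # here: 0:15:00 past
else:
    print(f"On track with {slack} to spare")
```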
The automation of Control-M has sped everything up. We can integrate directly into existing pipelines, and the DevOps teams can get anything integrated with their Jenkins deployments. They don't have to wait for traditional operations functions; it's all built in. It validates and checks, and in some cases it automatically deploys the agents and the configurations. That's something that, years ago, you'd have had to wait for. The speed of delivery has vastly improved.
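As a sketch of how that plugs into a CI pipeline: the Automation API ships a `ctm` command-line client with `build` (validate) and `deploy` services, so a Jenkins stage can treat job definitions like any other artifact. The file name and wrapper below are illustrative, assuming the `ctm` CLI is installed on the build agent.

```python
import subprocess
import sys

def run(cmd: list[str]) -> None:
    """Run a CLI step, echo its output, and fail the build on error."""
    print("+", " ".join(cmd))
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
        sys.exit(result.returncode)

# "jobs.json" is a hypothetical definitions file checked into the repo.
run(["ctm", "build", "jobs.json"])   # server-side validation of the definitions
run(["ctm", "deploy", "jobs.json"])  # push them to the Control-M environment
```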
Nowadays, auditing is as simple as running a report. If this falls under an auditable category, we can just hit a button and the report is done. Control-M audits everything, even if it is not under the regulatory or audit spotlight. Every process, every movement, and every change is logged by the system. If there's ever a question, you’ll be able to find a why and a when. There’s an audit trail.
It certainly helped speed processes up. I can eliminate what I call the manual gaps between steps. I don't have to send an email to somebody to say, "Hey, guess what? That file's ready. Now you can run process X, Y, Z." The system just says, "Hey, the file is there, let's go" (a job-definition sketch of that chaining follows below). It's eliminated those gaps between parts of the workflow. It also helped optimize the infrastructure needed; it's like a Tetris puzzle. I have these ten different workflows that I'm trying to run, and before, I may have had ten dedicated systems for them. Now I know that I don't need that.
We use this model all the time. We can run those ten processes on three systems and be just fine. That saves money. The solution is not only speedy, but it also saves money.
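Here is the chaining sketch mentioned above: a file-watcher-style job raises an event when the file lands, and the downstream job waits on that event instead of on an email. The job types, keys, and names follow my reading of the Automation API's JSON schema and are illustrative only.

```python
# Two chained jobs expressed as Jobs-as-Code (illustrative schema):
# the watcher adds an event when the file appears; the processor waits on it.
pipeline = {
    "FileFlow": {
        "Type": "Folder",
        "ControlmServer": "ctm-server",
        "WatchForFile": {
            "Type": "Job:FileWatcher:Create",      # watch for a file's arrival
            "Path": "/data/incoming/daily.csv",    # hypothetical path
            "Host": "app-host-01",
            "RunAs": "batchuser",
            "eventsToAdd": {
                "Type": "AddEvents",
                "Events": [{"Event": "daily_file_arrived"}],
            },
        },
        "ProcessFile": {
            "Type": "Job:Command",
            "Command": "python process_daily.py",
            "Host": "app-host-01",
            "RunAs": "batchuser",
            "eventsToWaitFor": {
                "Type": "WaitForEvents",
                "Events": [{"Event": "daily_file_arrived"}],
            },
        },
    }
}
```

No human in the loop: the moment the watcher succeeds, the event fires and the processing job is eligible to run.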
They are doing a great job of continuing to drive the open-source model. Five years ago, if you had looked for Control-M anywhere in the open-source world, you would not have found it. Today, that has changed: they're actively publishing on GitHub.
You can download an entire container for free and run Control-M at home if you want to tinker with it. That was unheard of a few years ago. Type a query into Google and you'll start to see all sorts of documentation that is now available to the public. The major strides they have made there are pretty darn good.
What needs improvement?
If you want to ramp it up to do some very heavy-duty integrations, you can find yourself dealing with a difficult integration at first. However, once you've had that integration going for maybe a month or so, the next person after you will have less difficulty. That's the power of it.
They can improve the interface. They're going through huge modernization efforts and they're getting there. They're probably 75% of the way; however, there's still another 25% to go.
For how long have I used the solution?
I've been using the solution for 22 years.
What do I think about the stability of the solution?
Since it supports the business, it has to be stable, and it's very stable. We have not had major outages or anything. That's always a good thing; however, as with any solution, its stability will depend on how you deploy it and what safeguards you put in place, including high availability, disaster recovery, et cetera. All the hooks for that are in the product; it's up to you to decide how you're going to use them.
What do I think about the scalability of the solution?
It's highly scalable. You can run five things in it today and easily scale up to run 1,005 things tomorrow. In terms of scalability, there are no issues there.
How are customer service and support?
Technical support tends to be very helpful.
Which solution did I use previously and why did I switch?
I used to work for an insurance company, where I used Computer Associates products called CA-7 and CA-11, which are similar tools.
We tried to use Computer Associates before this, but it didn't support the systems we needed and the integration was next to impossible.
How was the initial setup?
I was involved in the deployment and initial setup of the solution right from the beginning.
We had jobs and workflows running within the first day, which was pretty good. We don't use the Helix model; however, there is a Helix model you can purchase in which everything is hosted by BMC, and with that you can be up and running literally in hours, which is reasonable. There's a learning curve; however, if you do not get some value out of it within two days, you're probably doing something wrong.
At the time, there were only two of us deploying the solution. Today there are only three of us.
It's business-wide; it touches everything from data to marketing to finance, even where it probably wouldn't make sense to anybody else. It's deployed across Windows, Linux, containers, VMs, the cloud, et cetera.
If anybody has a use case or wants to learn more about it, we'll show them. Anybody in our organization can get basic access and can tinker around in an alpha test environment. This includes non-technical people. We have non-IT people that use it.
If they can self-service and maybe design some parts themselves, that's a huge win right there. We have a very open model of deployment.
There is occasional patching when vulnerabilities come out. Most of the patching nowadays can be automated, and if you're using the Helix-based solution, a lot of that is handled by BMC.
What about the implementation team?
We did not use an integrator, reseller, or consultant for the deployment.
What's my experience with pricing, setup cost, and licensing?
I can't speak to the exact licensing costs.
Which other solutions did I evaluate?
Every few years, we go through a reevaluation. We'll look at what's on the market, what companies have come up with, and what new versions have been released. We'll say, "Okay, let's compare these. What do we need, and what are all the tools offered out there?" We do that roughly every five years, and it keeps us on our toes.
The biggest difference as of late is the API and Jobs-as-Code. Control-M is light years ahead of the competitors and what they're offering. Other competitors are starting to get APIs; however, only BMC is working with Jobs-as-Code, and it is in the lead. To my knowledge, they're really one of the only ones that lets you define your entire workflow as code.
What other advice do I have?
Control-M is pretty critical to our business as it runs many different business processes every day, and if it weren't there, we would probably have to hire many more people, be a lot slower, and be more prone to error.
We use a hybrid deployment. We have parts in the traditional data center, parts in the cloud, and sometimes parts that live in containers and only exist for two minutes. It is very much a hybrid mix of goodies.
I'd advise potential new users to look at it as it is today and not think about what it did ten years ago. Control-M is an old product; it has been around since we all used mainframes. However, just because something has been around for a long time doesn't mean it's a piece of junk or doesn't work with modern technologies. It has adapted and grown with the times. Control-M did cloud-based work before many of us were even talking about the cloud. It's hard to get rid of negative perceptions sometimes; the best thing for people to do is to head out to the internet, look it up, and go out to GitHub.
If you have a technical team, send them out to GitHub. You can download everything in an image or in a container and try it yourself. It doesn't cost you a nickel.
I'd rate the solution nine out of ten.
The biggest advice I can give is to try it out. Don't just believe what the PowerPoints tell you. There's no excuse for not having a deployment running within hours. Be willing to think about how it can solve problems in new ways. Sometimes we go looking for a new tool because we have a square problem, and we get upset that all the tools we're looking at only offer round solutions. Sometimes the reason they only have round solutions is that that's the proper way to solve the problem. You have to be willing to break down whatever you're trying to do, whatever workflow you're trying to automate or integrate, and take it in pieces.
If all you want to do is save yourself a lot of money, use cron and Windows Task Scheduler. However, if you want to take your business to the next level and get to the point where you can automate to remediate and audit, that's where tools like Control-M come into play.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.