reviewer1631958 - PeerSpot reviewer
Maintenance Manager at a transportation company with 10,001+ employees
Real User
We have seen quicker file transfers with more visibility and stability
Pros and Cons
  • "If they have ad hoc requirements, then they can theoretically schedule their own file transfers with the Self Service. We are trying to push as much work back to the customers or developers that have that requirement, because they prefer to help themselves, if possible. We try empowering them and enabling them through Control-M, especially for file transfers, because it is a much broader base of the business then just with batch scheduling. Typically, with SAP batch scheduling, it would work with dedicated teams. With file transfers, the entire business is involved. There are business users, end users, etc. It definitely needs to be as simple as possible and as managed as well as possible. They need to manage it themselves, if possible, because our team is not growing but the number of customers, applications, and jobs are growing. We need to hand back some of the responsibility to the customer for them to resolve and action it."
  • "The high availability that comes from BMC with its supplied Postgres database is very limited. Even using your customer-supplied Postgres database is problematic. We have engaged with them regarding this, but it is difficult. My company doesn't want to do this and BMC doesn't want to do that. We just need to find some middle ground to get the proper high availability. We're also moving away, like the rest of the world, from the more expensive offerings, like Oracle. We are trying to use Postgres, which is free. The stability is good. It is just that the high availability configuration is not ideal. It could be better."

What is our primary use case?

We schedule the majority of our SAP jobs with Control-M. We do that globally for all the production plants. We have tens of thousands of SAP jobs, plus managed file transfers.

SAP batch and managed file transfer are critical processes that we have automated. We are in the process of replacing Connect:Direct and SecureTransport, our legacy file transfer solutions, with Managed File Transfer (MFT). That is on a global scale.

Control-M for Informatica is gaining a lot of popularity, primarily on the financial side of the business. They have a lot of security restrictions that make their jobs very difficult. Also, there are cost issues with Informatica, e.g., anytime they execute a workflow in Informatica, they get billed for it. We are adapting the solution to not run the workflow every half an hour or hour, because they pay for it, but only when it is needed. Therefore, we can do a database query and check if there are new records that need to be processed. If there are no records to be processed, then depending on that output, we either run the Informatica job or leave it and check again in maybe half an hour. We are optimizing, saving money for the customers and ourselves, while reducing the number of executions, jobs, etc.
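A minimal sketch of that pre-check pattern is below; the table, connection details, and exit-code handling are invented placeholders, not this site's actual setup. The gating job queries the database, and its exit code decides whether the billable Informatica workflow runs.

```python
# Hedged sketch of a pre-check (gating) job, assuming a PostgreSQL source.
import sys
import psycopg2

conn = psycopg2.connect(host="db-host", dbname="erp", user="ctm", password="secret")
with conn, conn.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM staging_records WHERE processed = false")
    pending = cur.fetchone()[0]

if pending > 0:
    print(f"{pending} new records found - run the Informatica workflow")
    sys.exit(0)   # downstream Informatica job is released
else:
    print("no new records - skip this cycle and check again later")
    sys.exit(1)   # handled by an On/Do rule so the skip is not treated as a failure
```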

We are using it on-premises, and have been for many years. We are aware of the new Helix offering, which is a SaaS/cloud offering from BMC, but it is not really ready for enterprise yet, not at our scale. We are doing some cloud, though not the Helix offering. I have installations in the cloud using Azure and AWS. We are not fully functioning there yet. We are waiting for the demand, but we are aware of the cloud opportunities and are making use of them.

We have been busy upgrading to version 9.0.20 Fix Pack 100, but our production environment is still on 9.0.19 Fix Pack 200.

How has it helped my organization?

We use Control-M as part of our DevOps automation toolchains and leverage its “as-code” interfaces for developers. We have found that a lot of the new customers who are developing for cloud prefer to use the API and would like to test for themselves. That is really where Jobs-as-Code comes in. They can test and fail quickly the agile way. We definitely have some customers who are using that.
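As a rough sketch of what that looks like in practice, a developer can validate a Jobs-as-Code definition from their own pipeline. The host, credentials, and job contents below are placeholders; the login and build endpoints follow the public Automation API documentation.

```python
import json
import requests

BASE = "https://controlm-em.example.com:8443/automation-api"  # placeholder EM host

# Log in to get a session token (Automation API session service)
token = requests.post(f"{BASE}/session/login",
                      json={"username": "dev_user", "password": "secret"}).json()["token"]

# Validate a Jobs-as-Code definition without deploying it -- this is the
# "test and fail quickly" step a developer can run from any CI pipeline.
jobs = {
    "DemoFolder": {
        "Type": "Folder",
        "HelloJob": {"Type": "Job:Command", "Command": "echo hello", "RunAs": "batch"},
    }
}
resp = requests.post(f"{BASE}/build",
                     headers={"Authorization": f"Bearer {token}"},
                     files={"definitionsFile": ("jobs.json", json.dumps(jobs))})
print(resp.status_code, resp.json())
```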

We have seen quicker file transfers with more visibility and stability. Because data transfers are part of the Control-M tool, they form part of the normal workflow. We see the value in that.

If they have ad hoc requirements, then they can theoretically schedule their own file transfers with the Self Service. We are trying to push as much work back to the customers or developers that have that requirement, because they prefer to help themselves, if possible. We try empowering them and enabling them through Control-M, especially for file transfers, because it involves a much broader base of the business than just batch scheduling. Typically, SAP batch scheduling involves dedicated teams. With file transfers, the entire business is involved. There are business users, end users, etc. It definitely needs to be as simple and as well managed as possible. They need to manage it themselves, if possible, because our team is not growing but the number of customers, applications, and jobs is growing. We need to hand back some of the responsibility to the customer for them to resolve and action it.

What is most valuable?

A new feature, which we deployed about two years ago, is Managed File Transfer (MFT). We also use Managed File Transfer Enterprise (MFTE) for external transfers, which is one of our biggest use cases.

Another valuable feature would definitely be the MFT dashboard that is now available in Control-M natively. It is easy to just search for jobs, files, etc. Instead of the customers contacting us to find out what happened, when it happened, and why it happened, they are able to service themselves. This allows us to cut down on operational staff, costs, and time because customers can manage it themselves to a degree.

The most valuable feature is definitely the Self Service. A couple of years ago, it was available, but not with the features that it has today. There wasn't really uptake on it, although it was available. We have seen steady growth in the number of users and in what they are using it to do. They are using Self Service to schedule by themselves and do monitoring by themselves. They interact with their schedules. Also, Self Service has become very user-friendly and more accessible. That is one of the features that we use a lot lately.

The reporting has definitely improved over the years. We are definitely doing more of that as well. We are definitely seeing more value in reporting on the batch schedules, optimizing it and seeing if we can cut costs. 

What needs improvement?

The reporting has improved. It is not where it should be yet, but we have seen improvements. The biggest thing for me is the restrictions regarding templates for reporting. You can't create your own report with your own parameters. We have a weekly meeting with BMC and our customer lifecycle architect, and this comes up quite frequently. We have been privileged enough to work with the developers. They are aware of the requirements regarding reporting and what our customers are asking for.

What I have found lately about the YouTube videos, specifically, is that they are very simple. Usually, before I watch a video, I will have read the manual, instructions, etc. to see if I understand it. I would hope that the interactive sessions, Q&As, or videos could be used to handle the more complex aspects of what they're discussing. An example would be LDAP authentication for the Enterprise Manager. They typically just go through the steps that are in the documentation. What people watching those videos are typically looking for is how to do the more complex setup, doing it with SSL and distributed Active Directory domains; things that are not documented. I find those videos helpful for somebody who is too lazy to read the manual, but I expect them to handle more than what is available in the documentation and the more complex situations.

The high availability that comes from BMC with its supplied Postgres database is very limited. Even using your customer-supplied Postgres database is problematic. We have engaged with them regarding this, but it is difficult. My company doesn't want to do this and BMC doesn't want to do that. We just need to find some middle ground to get the proper high availability.
We're also moving away, like the rest of the world, from the more expensive offerings, like Oracle. We are trying to use Postgres, which is free. The stability is good. It is just that the high availability configuration is not ideal. It could be better.


For how long have I used the solution?

I have been using Control-M for 12 years.

What do I think about the stability of the solution?

Control-M is really stable. We have seen that throughout the years. I have had customers who have been running version 6.3 for seven years after support stopped. It has been running for three years straight, without a reboot or restart, doing its job. We have actually had issues with customers who don't want to upgrade. They have said, "This stuff is working perfectly. Just leave it alone because it just doesn't go down." 

We have a saying in our department as well. When somebody says there is a problem, we say, "It's not Control-M. Check everything else. Check the server, network, and database. It's not Control-M." 99 out of 100 times, we are right. It is either infrastructure or something else, but it is not Control-M.

What do I think about the scalability of the solution?

I have never run into any problems scaling, either vertically or horizontally, with Control-M. In each version, it just gets better. I am really happy with that.

We were probably one of the first companies to buy MFTE, and it was not ready yet. It didn't scale properly. It didn't offer the functionality of the competing tools that we were using at the time. It has grown tremendously because of our input and feedback directly to the developers and BMC. I'm not complaining about it, but it put us back a bit. We have learned not to be a very early adopter. We have seen the same with the cloud. Everybody wants to jump on the cloud, but nobody knows why. They just want to do cloud. We made a substantial investment in MFTE. It was a couple of hundred thousand euros, and it was not yet ready for our enterprise requirements.

Our monitoring team does 24/7 monitoring. They handle the alerts. They check the job flows. They make sure escalations go through. If tickets need to be logged, they make sure that gets done. They also handle ad hoc requests from customers.

There is the scheduling team who does the job definitions, updates, etc. 

There is the administration team, which I'm part of, with administrators who look after the infrastructure, Enterprise Manager, servers, agents, gateways, etc. Recently, we also have a dedicated MFT team that only looks after MFT because of the huge number of customers, requests, and requirements.

The other customers who use it really come from all across the board. We gave a presentation last week to our bigger department, which is worldwide and which we are a part of in South Africa. We have counted about 52 main departments, plus the sub-departments between them. A lot of them sit right across the enterprise. Typically, the most active users would be SAP users who check for output from the jobs running on Control-M. It is just 10 times easier to do it in Control-M than in SAP itself. We also manage to keep the output for longer than SAP does. What they can't find in SAP after seven or 14 days, they can usually find with us, e.g., outputs for the jobs or logs.

There are the MFT users who love being able to see each morning that their file was transferred, how long it took, and how big the file was. A lot of users are using the Self Service function; team leads and operational staff use it most.

How are customer service and support?

I love support and the support people. It is very good. Because we are quite a mature customer and the whole team has a lot of experience (sometimes more than the support people), if they don't realize the seriousness of a situation, we would not escalate formally but just make our customer lifecycle architect aware by saying, "We are not feeling this case is getting the required personnel on it. We need somebody more senior. We don't have time to cover the basics that the first-line support is trying to deal with. We've been over that." Overall, I would rate the technical support as nine out of 10.

Which solution did I use previously and why did I switch?

Previously, we used a big SAP solution, which was not a commercial product but was specifically designed for our company.

We have recently taken over a mainframe migration as well; the scheduling was on TWS, which is IBM's scheduling software on the mainframe (z/OS). We moved all of that over to Control-M. It was a combination of SAP jobs, Informatica jobs, database jobs, and normal script jobs. So, we use a bit of everything. We have also used the Automation API a lot for interfacing Control-M with other middleware tools, but primarily it is SAP and file transfer.

We use Control-M to integrate file transfers within our application workflows. It integrates with the tools that we are replacing, i.e., Connect:Direct, which is quite a legacy tool, and our old IBM tool, which we have been using for more than 15 years and which has no visibility. With Control-M, you get visibility on your file transfers and how they interact with your batch schedule. Something gets created, it's sent over, and then it gets processed. Control-M was already part of the execution, extraction, import, or processing. Now, with the file transfer, customers can see the entire workflow, from the data being generated to it being transferred and processed. This resolves a lot of complexity, because you used to need to contact three different teams to find out if a file arrived and was processed. One tool does all of that now.

There isn't a lot of new functionality that our previous tools didn't have. It just consolidates all the tools that we need into a single one. That makes it much simpler. There is one team to contact globally for file transfers, and that makes it easy. It provides visibility through its Self Service that wasn't available with Connect:Direct or SecureTransport. Our customers are quite happy to have that. We can also provide reports.

SecureTransport competes with MFTE; there isn't a conversion tool for that yet. Connect:Direct does lend itself to a conversion tool, but it gets integrated into scripts and applications, so it's very difficult to migrate or extract that data.

How was the initial setup?

The initial setup is straightforward. It changed a lot over the years as well, but in the nicest way. You have minimal downtime with the upgrades on Enterprise Manager as well as the Control-M servers. A lot of preparation is done before the tool is shut down for the upgrade. Our downtime used to be at least an hour for upgrades or migrations. That has typically come down to 10, 15, or 20 minutes, depending on the size of the server. It is definitely more stable and understandable.

We have also noticed that the exception handling is much better if there are issues. We don't get that many surprises. The errors are understandable. The agent upgrades have zero downtime, so that is just amazing. All the patching and maintenance is centralized. We have migrated our development and integration environments to 9.0.20 in the last month or two. That went very smoothly. We will start with production next week. We have been through this quite a number of times. We came from version 7 to version 9 to versions 9.0.19 and 9.0.20. We do all the upgrades in-house.

What about the implementation team?

We do it all ourselves. If we get stuck, we contact BMC. At my previous job, we were a partner for BMC in South Africa, and I was on the support side for BMC. It is only when we need to open tickets for bugs or problems that we contact BMC. Typically, we handle upgrades and migrations in-house.

There are three people full-time on the administrative side. We have a global setup: Europe, Mexico, America, Africa, and China. We have tons of virtual machines and hundreds and hundreds of agents, and even more that we might host.

What was our ROI?

I know we have already budgeted for more tasks. The company is very happy with the performance of our teams, specifically the South African team. We are really doing more with good tools and fewer people. There is definitely a return on investment, just from the stability and visibility, which have improved a lot.

On the effort side, we have definitely seen a lot of savings. We have some bigger projects that automate the schedule and remove human intervention. These have reduced department headcount by about 50% where we were able to automate the batch side, because our department also offers monitoring and operations as part of our service. We have a dedicated monitoring team. Whatever runs in Control-M is monitored by us and escalated, if needed.

Departments often have multiple scheduling tools between the mainframe, distributed systems, and cloud. Control-M brings all of that onto a single pane of glass, so we can see the exact execution on the mainframe, on distributed systems, and in the cloud. This is instead of using three or four different tools. Therefore, the complexity of batch monitoring and scheduling has decreased as well with the standardization on Control-M. That is definitely one of the big advantages that we have seen.

What's my experience with pricing, setup cost, and licensing?

It is expensive. We have a lot of customers who complained initially about the costs, because it's not just the licensing, unfortunately; it's the infrastructure, salaries, etc. I like the licensing model. It is pretty straightforward. We are on the task license. I know that we have some really good discounts. Our BMC account manager makes sure that we stay below the license count as well as checking for growth. Overall, it's good. The licensing is simple enough for me. It is a bit expensive. Especially with the cloud coming in, we might see the licensing change in the future, but I'm guessing.

This is from my previous years doing support for banks and big companies. If it's not enterprise scale, I find that it's too expensive for smaller companies. You really have to be quite big and need to have dedicated support staff to run it; then you'll be fine. What we've seen at smaller companies is that it's too expensive because they want to automate everything. Then, stuff that can literally run once a day for the rest of its life is costing them $3 a job a day. It becomes too expensive, eventually. They don't see the return on investment because it's not business critical. Nobody is going to die and no money will be lost if that job didn't run at exactly 11 minutes past 4:00. It's definitely for bigger enterprise companies, especially banks or healthcare providers. We had an instance where Control-M was unavailable for 20 minutes due to external factors, and there was a loss of almost a million euros because the solution involved logistics.

Which other solutions did I evaluate?

We have done the usual crontab migrations, where everything is in crontab or Windows Task Scheduler. Typically, even if it's from a known tool, we end up exporting it into Excel and converting it into job definitions with a script. We have been involved in that, but nothing using BMC tools.

When I joined the company, I first supported them through the local partner. Because we have such a vast array of scheduling tools, they went through a PoC and business case. We evaluated three or four tools, of which BMC Control-M was one. Quite soon, because the company was already using Control-M in Africa and China, they were looking at global solutions to see if it really could create change.

What it came down to was ease of use, enterprise capability, and BMC was already in the company with ITSM and a couple of other products as well. They had a good relationship with us. We consulted with other customers who have used it as well as references because it was expensive. It was definitely the most expensive solution then, out of the four. However, we didn't want to go five years down the line and then have to change again because of issues.

What other advice do I have?

We have had a very good run with Control-M. I love it.

With the move to big data and especially with our AWS Cloud presence, we have a data lake. We are in discussions with the analytics teams about how they can utilize Control-M in the cloud for analytics, big data, etc. However, at the moment, it is not a big deal.

What we have found with Jobs-as-Code is that customers need to understand Control-M better: how the scheduling works, the knowledge around it, its conditions, etc. It took some time for the developers to get used to Control-M, then Jobs-as-Code. They are now confident with it. We present twice weekly. We have an open forum for parties interested in Control-M or our department, Enterprise Scheduling and File Transfer, where we have a dedicated session about Jobs-as-Code. We discuss how other departments are doing it, whether there is a better way to do it, whether they can save on the number of jobs, whether we can make jobs rerun, or whether something that takes 10 jobs can be done with five. So, there is not a lot going from Jobs-as-Code directly into production, but we have a couple of parties, especially on the cloud front, who are very interested in it.

The solution is enterprise scale. Also, if you want to integrate all your applications into one view and offer all the functionality across the board, such as file transfer, scheduling, cloud, and on-prem, you can create your own application integrations for applications that are not currently supported by BMC, using the APIs. For the top 100 enterprises, there isn't a better tool on the market.

I would rate it as a nine out of 10.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
reviewer1657833 - PeerSpot reviewer
AVP - Systems Engineer at a financial services firm with 10,001+ employees
Real User
Allows us to integrate file transfers more readily, resolve issues quickly, and orchestrate a diverse landscape of vendor products
Pros and Cons
  • "The File Transfer component is quite valuable. The integration with products such as Informatica and SAP are very valuable to us as well. Rather than having to build our own interface into those products, we can use the ones that come out of the box. The integration with databases is valuable as well. We use database jobs quite a bit."
  • "A lot of the areas of improvement revolve around Automation API because that area is constantly evolving. It is constantly changing, and it is constantly being updated. There are some bugs that are introduced from one version to the next. So, the regression testing doesn't seem to capture some of the bugs that have been fixed in prior versions, and those bugs are then reintroduced in later versions."

What is our primary use case?

Control-M supports a lot of business processes. It supports some of the HR functions. I don't know if payroll is directly supported, but we do run jobs through PeopleSoft, which obviously impacts HR. Recently, we've started using the SAP module. So, we're making a transition from PeopleSoft to SAP, and I also see some payroll functions happening there.

How has it helped my organization?

We use Control-M to orchestrate a diverse landscape of vendor products such as Pega, MuleSoft, etc. File transfers and data-feed fetching are quite important for us. So, a lot of data processing happens through Control-M.

Control-M provides us with a unified view where we can easily define, orchestrate, and monitor all of our application workflows and data pipelines. Of course, such a diverse landscape requires you to make the effort to utilize Control-M to tie everything together or to act as the glue. Once you do that, everything is clearly defined, and you can view these disparate systems using one unified pane. If you don't define it correctly, then obviously Control-M won't have that insight, and so you'll have to go to multiple locations to go look at your job statuses.

We use its web interface. It is primarily for the application support teams to go monitor their own jobs. The jobs defined within Control-M are tightly controlled by a specific group of people. There are also people who need access to view that the jobs were completed successfully or why the jobs may have failed. These people are given access through Control-M web to view and monitor the jobs that they support or the applications they support. They're usually able to log on without having to install any client on their personal workstations. So, it's quite convenient. We have not implemented its mobile interface.

The integrated file transfers within our application workflows have certainly sped up our business service delivery by 80%. It has allowed the business to integrate file transfers more readily. Prior to utilizing the Control-M module, people had to write their own file transfer scripts in a scripting language of their choice, to varying degrees of effectiveness. With the integrated File Transfer solution within Control-M, there is a standardized way of performing file transfers, along with the capability of file watching and grabbing the names of the files that were transferred, making it much more versatile.

Control-M can immediately report when a job fails. If you have proper monitoring in place, you're notified immediately when your business flows are impacted. In the past, when you ran jobs using cron or just wrote shell scripts, you were really left in the dark, because they don't necessarily report errors, even from within Control-M. Implementing Control-M has made the business realize how critical and important it is to have proper error coding within the scripts that they schedule. If the scripts don't report any errors or redirect the system output into log files, then when a job fails, there is no way to detect that.
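A minimal template of what "proper error coding" means in practice (not this reviewer's actual scripts, just the general convention that Control-M reads the process exit code and captures stdout/stderr as the job's output):

```python
#!/usr/bin/env python3
import logging
import sys

# Log to stdout so Control-M captures everything as the job's sysout.
logging.basicConfig(stream=sys.stdout, level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def main() -> int:
    try:
        logging.info("starting business step")
        # ... real work goes here ...
        logging.info("finished successfully")
        return 0
    except Exception:
        logging.exception("step failed")  # full traceback lands in the sysout
        return 1  # non-zero exit makes Control-M mark the job NOTOK and alert

if __name__ == "__main__":
    sys.exit(main())
```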

We've automated many time-consuming business reports and other things that were very manual and took a tremendous number of man-hours. We've also automated a lot of maintenance using Control-M. We've integrated with Ansible Tower, so we are now able to run Ansible playbooks and Ansible job templates (see the sketch after this paragraph). With the scheduling capability and the multitude of integrations that Control-M offers, it really acts as the unifying glue, and as a communicator and orchestrator, across the enterprise. With Ansible Tower, you can run a number of playbooks to perform patching, reboots, and whatever maintenance the infrastructure teams require, but you can't do it while a business is still operating; you could, however, do it for another business that's not operating at the moment. It is very hard to coordinate that without knowing which lines of business have jobs running. With Control-M, you can see that, and you can enact workload policies to put jobs on hold prior to running Ansible playbooks. Once your Ansible playbook is complete, you can release the jobs again by deactivating the workload policies. So, it makes those processes very streamlined.
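The Tower side of that integration can be as simple as a Control-M job calling Tower's REST API. This sketch assumes Ansible Tower/AWX's documented launch endpoint; the host, template ID, and credentials are made up for illustration.

```python
import requests

TOWER = "https://tower.example.com"  # placeholder Tower/AWX host
TEMPLATE_ID = 42                     # hypothetical patching job template

resp = requests.post(
    f"{TOWER}/api/v2/job_templates/{TEMPLATE_ID}/launch/",
    auth=("ctm_service", "secret"),                       # placeholder credentials
    json={"extra_vars": {"target_group": "app_servers"}},
    timeout=30,
)
resp.raise_for_status()
print("launched Tower job", resp.json()["id"])  # captured in the Control-M sysout
```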

We do use the Role-Based Administration feature. We have been allowing other groups to gain more control over their agents so that they can define connection profiles, and they can do a little bit more on their side without inundating the main team with a lot of tasks. Everybody is happier. They can get things done faster, and they have immediate feedback and response because they're in control. The main Control-M team is not inundated with a lot of different requests from various teams to do a number of mechanical tasks. They don't get asked to create the connection profile for a database. People have all the information there, and they can do it themselves. They can define it in a way so that only they have access to it.

It has helped us to achieve faster issue resolution. Control-M reports on the error. It is easier to view the system output of the job. Whether it is an Informatica job, a scripted job, or a database job, it is easier to go in, view the issue, and then troubleshoot from there. Most of the time, you can rerun from the point of failure if the jobs are defined correctly. For a properly defined job, I would estimate that there is a 70% to 90% reduction in the mean time to resolution.

It has helped us by improving our service-level operations performance. We've built integration between Control-M and our ITSM, which is ServiceNow, and that has certainly allowed us to gain more visibility within our community through ServiceNow. Every time a production job fails, an incident ticket is cut, and that's highly visible. That needs to be escalated too, and there is a much more defined process to be able to resolve that issue. In the past, obviously, when you didn't have that level of visibility or that integration, there was always time lost in identifying what the issue is.
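The shape of such an integration, sketched with ServiceNow's standard Table API; the instance URL, credentials, and field values are placeholders, and the review does not describe the actual implementation.

```python
import sys
import requests

SNOW = "https://example.service-now.com"  # placeholder instance

def create_incident(job_name: str, error_text: str) -> str:
    # Cut an incident via ServiceNow's Table API (POST /api/now/table/incident).
    resp = requests.post(
        f"{SNOW}/api/now/table/incident",
        auth=("integration_user", "secret"),
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        json={
            "short_description": f"Control-M job {job_name} failed",
            "description": error_text,
            "urgency": "2",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]["number"]

if __name__ == "__main__":
    # e.g. invoked from an alert handler with the failing job's name and message
    print("opened", create_incident(sys.argv[1], sys.argv[2]))
```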

What is most valuable?

The File Transfer component is quite valuable. The integration with products such as Informatica and SAP is very valuable to us as well. Rather than having to build our own interface into those products, we can use the ones that come out of the box. The integration with databases is valuable as well. We use database jobs quite a bit. The file watcher component is also indispensable when integrating with other applications that generate files, instead of triggering a workflow based on time.

What needs improvement?

We have been experimenting with centralized connection profiles. There are some bugs to be worked out, so we don't feel 100% comfortable with using only centralized connection profiles. We do have a mix of Control-M agent versions out there, which leads to some complications, because earlier agents do not support centralized connection profiles.

A lot of the areas of improvement revolve around the Automation API, because that area is constantly evolving. It is constantly changing, and it is constantly being updated. There are some bugs that are introduced from one version to the next. So, the regression testing doesn't seem to capture some of the bugs that have been fixed in prior versions, and those bugs are then reintroduced in later versions. One particular example is that we were trying to use the Automation API to fetch a number of run-as users from the environment. The username had special characters and backslash characters because it was a Windows user ID. In the documentation, there is a documented workaround for that. However, it relied on two particular settings in the Tomcat web server. I later found out that these settings work out-of-the-box for version 9.0.19, but those two options were not included in the config file for 9.0.20. So, it led to a little bit of confusion and a lot of time trying to diagnose, both with support and the BMC community, what the issue was. Ultimately, we did resolve it, but that is time that really shouldn't have been spent. It had obviously been working in 9.0.19, and I don't know why that was missed in 9.0.20, but that's a primary example of an improvement that can happen.
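For illustration, the usual client-side half of that kind of workaround is to percent-encode the Windows-style name before it goes into the URL. The endpoint path below is hypothetical, and the encoding only helps if the web server is configured to accept encoded slashes and backslashes.

```python
from urllib.parse import quote

runas_user = r"CORP\batch_user"        # backslash is illegal in a raw URL path
encoded = quote(runas_user, safe="")   # -> 'CORP%5Cbatch_user'

# Hypothetical endpoint, shown only to illustrate where the encoded name lands.
url = f"https://em-host:8443/automation-api/config/runasusers/{encoded}"
print(url)
```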

We've also noticed that the Control-M agents themselves now run Java components. Over time, they tend to destabilize. It could be because garbage collection isn't happening, or something like that. We then find that the agent is consuming quite a large amount of memory on the servers themselves. After recycling the agents and releasing that memory, things go back to normal, but there are times when the agent becomes unresponsive. The jobs get submitted, and nothing executes, but we don't know about it until somebody says, "Hey, my job isn't running." When we look at it, the GUI says Executing, but there is no actual process running on the server. So, there is some disconnect there. There is no alerting function on the agent that says, "Hey, I'm not responding." It does not show up in the alerts or anything like that.

The integrated guides have not been that helpful to us. I do find a lot of the how-to videos on the knowledge portal to be useful. However, there are some videos where the directions don't always match with some of the implementations. There are some typos here and there, but overall, those have been more helpful for us.

Its pricing and licensing could be a little bit better. The regular Managed File Transfer piece is a little overpriced, especially for folks who have already licensed Advanced File Transfer.

What I'm also noticing when I'm trying to recruit for Control-M positions is that the talent pool is quite small. There are not a whole lot of companies that utilize Control-M, and those that do mostly don't want to let their Control-M resources go if they're good. There is a high barrier to entry for most people to learn Control-M. There are Workbench, the Automation API, and so forth, mainly for developers, but there are not a whole lot of resources out there for people to get familiar with administering Control-M, in terms of either the technology or even awareness. So, it becomes very challenging to acquire new resources. A lot of the newer people coming out of college don't even know what Control-M is. If they do, they think of it as a batch scheduler, which is certainly not true in its current form.

Control-M is a very powerful enterprise tool, but the overall perception has not changed in the five to six years that I've been working with Control-M. There's not much incentive for people to dive into that world. It is a very small community, and overall, the value of Control-M is not being showcased adequately, maybe at the C-level of corporations. I've had multiple conversations with people at other companies who have stopped using Control-M. About 70% of the companies out there do not take full advantage of the capabilities in Control-M, and that type of utilization really hampers and hinders the reputation of Control-M. That's because people then acquire the mistaken notion that Control-M can only do X, Y, and Z, rather than understanding that Control-M can do so much more. I don't know if it needs a grassroots marketing movement or a top-down marketing movement, but this is the perception, because that's what I'm hearing and that's what I'm seeing. For some of the challenges that I face working with Control-M, when I go back to my management and say, "Hey, I want to spend more money in this space," they're like, "Why? Can you justify it? This is what we see Control-M as. It's not going to bring us value in this area or that area." I have to go back and develop a new business case to say, "Hey, we need to upgrade to MFT Enterprise," or something like that. So, it definitely requires a lot more work convincing management in order to get all these components. In the past, we had to justify acquiring Workload Change Manager. We had to justify acquiring Workload Archiving. All of these bring benefits, not only to our audit environment but also to the development environment, and yet the fact that we had to fight so hard to acquire them is challenging.

For how long have I used the solution?

I've been using Control-M for about eight years.

What do I think about the stability of the solution?

Version 9 was very stable. Once they started adding a lot of the newer Java components, the stability suffered. It seems to have gotten better in version 9.0.20, but that could just be my perception.

We run a lot of database client jobs. There are some things that we've implemented that, I understand, can contribute to agent instability. We sometimes extract a lot of database output and massage that output using other scripts. I've noticed there are certain things that you cannot do, or that contribute to the instability. For example, in the output-scanning functionality, there certainly is a size limit. You probably don't want to scan anything too large, because that is going to put a lot of load on the environment.

In addition, there are times when the agent becomes unresponsive. The jobs get submitted, but nothing executes. There is no alerting function. These are examples of the instability that I've noticed. Overall, the main application itself, the EM, and the scheduler have been pretty stable.

What do I think about the scalability of the solution?

It is very scalable in terms of job execution. I haven't really explored scaling Control-M and the EM environment to a point where we have hundreds of users accessing it at a given time. That's because I don't have a hundred users who want to access that at a given time, but I do understand that you can distribute the web server more, and then have a load balancer to balance the load. I would think Control-M is a fairly scalable application.

In terms of its users, we have a lot of application support folks. We do have some developers who access Control-M mostly for the non-prod environments to execute and monitor their own jobs. There are some software engineers and operational engineers who are part of the application support teams that access Control-M. As for size or concurrent users, we have about 50 concurrent users at the max.

How are customer service and support?

I would probably give them a nine out of 10. For the most part, they're very helpful, but there's always an initial standard dialogue. For an issue, you have to collect the EM logs, agent logs, and so forth, and submit them. Sometimes, we have done all that advance work and submitted it, but they still come back and say, "Hey, we need the logs." It seems like that's a canned response, sent without looking at the ticket.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

We've been with Control-M for quite a long time. We have not been using anything else in my history with this organization. 

I have not looked at anything recently. I am aware there are other application orchestration solutions out there, but I have not felt the need to go explore those options at the time.

How was the initial setup?

If you're deploying using out-of-the-box options, the process is fairly straightforward. If there is some customization that needs to happen, then the process can be complex, and the documentation does not cover some of those complexities.

For the most part, we are standard out-of-the-box. We have run into some performance issues where we later had to go in and make some modifications. For example, we had to stand up different gateways for various purposes, just because one single gateway was not enough to take the load, in particular because we had installed Workload Archiving, and that was taking up a lot of resources. Other human users were not able to perform their actions because the archive user was consuming so much of the server's resources. So, there was a lot of tweaking there, and we basically had to break out and distribute some of the components.

In terms of an implementation strategy or deployment plan for Control-M, the environment has always had Control-M, so we just had to upgrade the Control-M environment. We've had Control-M in our environment for quite a long time, probably since version 6. As we progressed through different versions, we obviously had to expand the environment and the platforms. We initially started off with Control-M on AIX, and we later moved to Control-M on Linux. As you go to Linux, there is planning for high availability in production environments, disaster recovery environments, and so forth. You have to plan for marrying a lot of the BMC Control-M components and identifying where a load balancer or a DNS alias may be required, so that you can quickly flip over in the event something happens. Then, of course, there is sizing the environment in terms of how many jobs are running, how many executions are happening, and so forth. This is how we plan.

What about the implementation team?

We've used the AMIGO program, and then we've performed the upgrades ourselves.

For its day-to-day administration, we have a team of five people. They're administrators and schedulers.

What was our ROI?

Its return on investment is quite high, and that's mostly because we use so many of Control-M's capabilities. We also extend those capabilities. We write our own scripts to integrate Control-M with many other applications, such as Automation Anywhere and Alteryx. We have also done the reverse: we have helped other teams develop their capabilities in integrating with the REST API and Control-M. So, the ROI is quite high for our use case, but based on conversations with some of the community partners out there, their ROI is probably quite low, because they're not making use of all these new features. I don't know if it is because they don't have the skill set to make use of these new features, or their management structure or process structure is hampering them. A lot of large companies I know like to maintain the status quo, and that's why they're slow to adapt and slow to move. That is going to hurt them in the long run, but in the meantime, it can hurt the adoption of Control-M as well.

What's my experience with pricing, setup cost, and licensing?

Its pricing and licensing could be a little bit better. Based on my experience and discussions with other existing customers, everybody feels that the regular Managed File Transfer piece, not the enterprise one, is a little overpriced, especially for folks who have already licensed Advanced File Transfer. We understand that Advanced File Transfer is going away and will reach end of life, and there is some additional functionality built into MFT, but that additional functionality does not really correlate with the huge price increase over what we're paying for AFT already. This has actually driven a lot of people to look for alternative solutions.

I know they are now moving more towards endpoint licensing or task-based licensing. In my eyes, the value of Control-M is the ability to break down monolithic scripts into jobs. You don't want to have to wrap everything up in one monolithic script and say, "Hey, I'm executing one task because I want to save money." That defeats the purpose, and the value, of Control-M. By taking that monolithic script and breaking it down into its 10 most basic components, you can monitor each step, as in the sketch below. It is self-documenting because, within Control-M, you can see how the flow will work, and you can recover from any one of those 10 steps rather than having to rerun the monolithic script should something fail. That being said, endpoint licensing does make more sense, but maybe the pricing could be more forgiving.
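To make the decomposition concrete, here is a hedged sketch in the Automation API's JSON style; the folder and job names are invented, and the Job:Command and Flow shapes follow BMC's published examples.

```python
import json

nightly = {
    "NightlyBilling": {
        "Type": "Folder",
        "ExtractOrders":   {"Type": "Job:Command", "Command": "extract_orders.sh",   "RunAs": "batch"},
        "TransformOrders": {"Type": "Job:Command", "Command": "transform_orders.sh", "RunAs": "batch"},
        "LoadWarehouse":   {"Type": "Job:Command", "Command": "load_warehouse.sh",   "RunAs": "batch"},
        # Each step is now individually monitored and can be rerun from the
        # point of failure, instead of rerunning one opaque monolithic script.
        "BillingFlow": {"Type": "Flow", "Sequence": ["ExtractOrders", "TransformOrders", "LoadWarehouse"]},
    }
}

with open("nightly_billing.json", "w") as f:
    json.dump(nightly, f, indent=2)  # validate with `ctm build nightly_billing.json`
```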

Which other solutions did I evaluate?

N/A

What other advice do I have?

It is worth the time and money investment to learn more about Control-M. You should learn all the features of Control-M and really explore and test out the capabilities of Control-M. That's the only way people get comfortable with what Control-M can implement. A lot of people aren't aware of just how flexible a platform Control-M is, especially with all the new features that are being added via the Automation API. These features are helping to drive Control-M and things developed in Control-M more towards a microservices model.

We are just beginning to explore using Control-M as part of our DevOps automation toolchains and to leverage its “as-code” interfaces for developers. Obviously, there is a bit of a learning curve for developers in order to see the value of developing Jobs-as-Code. Currently, we're walking developers through it, and we're holding their hands a little bit in terms of developing Jobs-as-Code, but we are heading in that direction because it does provide artifacts that you can version-control and change quickly and easily. You can redeploy much quicker than when the jobs are only defined in the graphical user interface. Previously, when you had to modify a job, you either did it via the GUI, or you exported it via XML and then modified those components. Once you get the developers closer to their job flows, you can theoretically speed up the delivery of applications along with their scheduled jobs.

I don't have a whole lot of experience with other scheduling orchestration environments, but from everything that I've heard while speaking with other colleagues, I would say Control-M ranks fairly high. I would rate it a nine out of 10. Control-M usually is the platform that people are moving to, not moving away from.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Director Information Technology at a insurance company with 1,001-5,000 employees
Real User
Enabled us to consolidate and streamline our development process, while building on existing skills
Pros and Cons
  • "We used Control-M's Python Client and cloud data service integrations with AWS and, as a feature, it was very customizable. It gave us a lot of flexibility for customizing whatever data maneuver we wanted to do within a pipeline."
  • "I would like to see them adopt more cloud. Most companies don't have a single cloud, meaning we have data sources that come from different cloud providers. That may have been solved already, but supporting Azure would be an improvement because companies tend not to have only AWS and GCP."

What is our primary use case?

Our use case was mainly about consolidating our data pipeline from different sources and doing some data transformations and changes. We needed to get data from different sources into one consolidated data set that we could act on.

How has it helped my organization?

It gave us the ability to consolidate a diverse set of solutions into one comprehensive solution that streamlined our development processes. It was straightforward to adopt and we could build on existing skills without having to have 10 solutions for 10 problems.

And when it came to creating actionable data, it gave us the ability to move faster and at scale. By adopting a solution like Control-M, we were able to scale and deliver faster data transformations and maneuvers, turning data into insights in a more efficient and scalable way.

The ability to deliver faster and at scale was important. Business and management always wanted us to deliver faster and bigger and we were able to do both with the solution that we implemented using Control-M. We were able to respond faster to changes and business needs, at scale. 

Having a feature-rich solution enabled us to aggregate all of our processes into it, and that made the overall execution, from a project and portfolio perspective, a lot more efficient.

We were also able to respond to audit requests, because it's centralized, in a much more efficient way.

What is most valuable?

There isn't a single feature that is most valuable, but if I had to choose one, it would be the rich ability it gave us for making customized scripts. That was probably the most unique feature set for our situation. We used Control-M's Python Client and cloud data service integrations with AWS and, as a feature, it was very customizable. It gave us a lot of flexibility for customizing whatever data maneuver we wanted to do within a pipeline.

The Python Client and cloud data service integrations have a rich set of features with flexibility. It did not require additional, crazy skills or experience to deal with it. It was a nice transition into enabling a data scientist to leverage existing skills to build those pipelines.

Creating, integrating, and automating data pipelines with Control-M was straightforward. It did require some knowledge and training, but compared to other solutions, it was a lot simpler. Working with data workflows, with the data-coding language integrated into Control-M, was straightforward. The level of difficulty was somewhere between "medium" and "easy." It was not that hard to leverage existing skills and knowledge within this specific feature.
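A short sketch of what such a pipeline script can look like, assuming the open-source ctm-python-client package; the imports and method names follow that project's published examples and should be treated as assumptions rather than this reviewer's actual code.

```python
from aapi import JobCommand
from ctm_python_client.core.comm import Environment
from ctm_python_client.core.workflow import Workflow, WorkflowDefaults

# Connect to a Control-M/EM endpoint (placeholder host and credentials).
env = Environment.create_onprem("https://em-host:8443", username="dev", password="secret")
workflow = Workflow(env, WorkflowDefaults(run_as="etl_user"))

# Two pipeline steps, chained so the transform runs after the extract.
workflow.add(JobCommand("ExtractFromS3", command="python extract_from_s3.py"),
             inpath="DataPipeline")
workflow.add(JobCommand("TransformDataset", command="python transform.py"),
             inpath="DataPipeline")
workflow.connect("DataPipeline/ExtractFromS3", "DataPipeline/TransformDataset")

workflow.build()  # validate the definition against the environment before running
```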

The user interface for creating, monitoring, and ensuring delivery of files as part of the data pipeline was very actionable. It was almost self-explanatory. Somebody with basic user-interface experience could navigate the calls to action and the configuration that is required. It was well-designed.

What needs improvement?

I would like to see them adopt more cloud. Most companies don't have a single cloud, meaning we have data sources that come from different cloud providers. That may have been solved already, but supporting Azure would be an improvement because companies tend not to have only AWS and GCP.

For how long have I used the solution?

I used it for a couple of years.

What do I think about the stability of the solution?

It's fairly stable. I don't recall any specific issues. 

What do I think about the scalability of the solution?

It's fairly scalable. For our needs, it scaled very nicely.

We have a shared model where we have a centralized, shared service organization when it comes to data. Different people will use it, but it's centralized.

How are customer service and support?

We used other solutions from BMC as well, and their customer support was always great. I give them a 10 out of 10.

Training and a knowledge base were available, or you could ask a question by submitting a ticket.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

We had DataStage from IBM and SSIS.

The switch was really about streamlining the process. We had other tools that only did partial processes or were not doing it with the speed and efficiency that we were looking for. We were looking for a solution that could streamline things and solve 90 percent of our data challenges.

What was our ROI?

The analysis that I saw validated that the ROI was within a couple of years.

What's my experience with pricing, setup cost, and licensing?

The pricing was competitive, from what I understand.

Which other solutions did I evaluate?

We looked at continuing to use the same solutions we had been using, and there were a couple of other cloud-based solutions that we evaluated. One of them was Matillion. The ease of use was one component of our decision, as was the flexibility of scripting with Python. Those were the key differentiators.

What other advice do I have?

For the on-prem solution, we had to do the patching and whatever was required by the vendor, but the cloud implementation was a managed model: the upgrades, changes, and patching are done directly by the vendor.

Control-M was a critical piece of the puzzle, helping us with all the data transformations and projects that we had to do. It was part of either one specific project, or a larger project that required that intermediate data transformation so that we could get to analytics or any other consumption of that data.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Control-M Administrator at Cognizant
Real User
User-friendly GUI, responsive support, and the BIM feature helps us meet our SLAs
Pros and Cons
  • "BIM is helpful because we do not miss any SLAs, as we get to know the issue well in advance. It is the topmost service that has helped us provide better solutions for the business."
  • "The reporting functionality needs a lot of work. We have faced problems with different versions where we run the right report, but it gives us blank entries. Then, when we run the same report again, it gives the correct data."

What is our primary use case?

Our organization has multiple projects that use Control-M, and I support the banking domain. In the past, I have worked on projects for retail organizations and medical companies.

We have approximately 150 applications in our current project. These include Loanpower, erwin, and OpenLink.

How has it helped my organization?

With the use of Control-M, our SLAs are met more often. If there is an issue, we identify it in advance, before the problem occurs.

Control-M helps us in terms of automation because it supports scripts in different formats. We can run a Python program or a shell script, and these allow us to automate almost everything.

This product helps to secure our business because we can restrict users.

We have automated several critical processes with Control-M. One is used during patching, where we log in and type one command that will stop and start the services on all of the servers that we have. We have approximately 10 servers in production and five in non-production, so it's a lot of work to restart all of the servers. We also have automation that performs a health check. It runs every day at a scheduled time and will delete all jobs in production that are older than five days. Similarly, we have jobs that check to ensure certain conditions are being met and will check the various alerts that can occur.

Automating these processes has improved our business because, every morning, we have to send a status update to show that the components are working. This is something that we used to do manually: we would log into the CCM and check everything. Now, we have automated that using a script that sends the status email automatically to whichever business users request it (a sketch follows below). It has helped to reduce a lot of manual activity.
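A hedged sketch of that morning status email; the health-check command is a placeholder, since the review does not say which component-status utility is actually wrapped.

```python
import smtplib
import subprocess
from email.message import EmailMessage

# Placeholder health check: substitute the site's real component-status command.
result = subprocess.run(["./check_ctm_components.sh"], capture_output=True, text=True)
status = "OK" if result.returncode == 0 else "DEGRADED"

msg = EmailMessage()
msg["Subject"] = f"Control-M morning status: {status}"
msg["From"] = "controlm-admin@example.com"
msg["To"] = "business-users@example.com"
msg.set_content(result.stdout or "no output from health check")

with smtplib.SMTP("smtp.example.com") as smtp:  # placeholder internal relay
    smtp.send_message(msg)
```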

Control-M has definitely helped us to resolve issues faster. I estimate that the improvement is between 60% and 70%.

Our service-level operations performance has improved by 80% with the use of Control-M.

What is most valuable?

The GUI is very user-friendly. It provides us with a single view and we have everything in the same UI. This is very important because we don't spend a lot of time switching tabs or opening Control-M for different purposes. We have a single GUI open and it saves a lot of time.

Two really helpful features are Forecast and Business Impact Manager (BIM).

BIM is helpful because we do not miss any SLAs, as we get to know the issue well in advance. It is the topmost service that has helped us provide better solutions for the business.

Forecast is useful in terms of patching, etc., because whenever we or any other team are looking for a downtime window, it's easy for us to use Forecast to find one.

Self-service is helpful and our business users appreciate it because they don't have to have Control-M installed on their machine. They can log in using the web portal.

What needs improvement?

The reporting functionality needs a lot of work. We have faced problems with different versions where we run the right report, but it gives us blank entries. Then, when we run the same report again, it gives the correct data. We have spoken with Customer Care and some of the issues are fixed in the latest version, 9.20.

For how long have I used the solution?

I have been using Control-M for 11 years and my company has been using it for longer than that.

What do I think about the stability of the solution?

This is a pretty stable solution. We have not had any downtime.

A couple of times, the agent has gone down unexpectedly. However, in terms of the EM and server, it's pretty stable.

What do I think about the scalability of the solution?

Our organization is pretty big, with approximately 250,000 employees, and we have multiple projects that use Control-M. We have approximately 150 applications in our current project, and there are about 175 employees that are actively using Control-M. That is across three different countries.

It is easy to scale. It can handle a lot of job flows and it's easy to create multiple jobs to run at the same time. We are expanding in terms of jobs for the same application because they have a lot of upgrades going on at the application level. 

We are not planning to expand the number of applications in our project as of now. We do have requests, but it's a slow process. We can add perhaps five or six applications a year.

Overall, we have no problems in terms of scalability. 

How are customer service and technical support?

When we can't find a solution to an issue, we reach out to BMC customer support and they respond almost immediately. Overall, the technical support team is very good and I would rate them a nine or ten out of ten.

Which solution did I use previously and why did I switch?

We did not migrate to Control-M from a competing solution. Some of our clients, although not my current project, migrated to Control-M from different products. The reasons for changing products are the additional features available in Control-M, as well as the ease of use. Also, some people are more confident in the security that Control-M provides, compared to other tools on the market.

Personally, I started my career with Control-M and have been using it ever since.

In the company, we have a couple of clients who use IBM Tivoli Workload Scheduler (TWS), AutoSys, and Stonebranch. However, the majority of our clients use Control-M. The choice of solution stems from requirements and input from the client.

One of the reasons that some clients are not using Control-M is the cost. A client with 5,000 or more jobs will definitely implement Control-M. However, for those running only 200 or 300 jobs in a small environment, there are other native tools available.

How was the initial setup?

I was not part of the implementation at my company but I have implemented several Control-M projects. The initial setup is straightforward.

First, we download the files from the BMC site and then start the installation. This involves running the setup files; if there is any error, there are knowledge base articles available, and you also have AMIGO support if you enroll in it.

The deployment can be completed in a day or two, including the Enterprise Manager (EM), servers, and agents. There are also conversion tools that are available to assist with creating jobs.

Our implementation strategy began with installing the Enterprise Manager first, and then the server, and then the agents. We would raise a support ticket so that whenever we had any issues, we could reach out to them.

I did not look at the interactive guides or videos that Control-M provides for reducing time to full productivity. I had all of the documentation handy but I did not refer to any of the videos.

What about the implementation team?

Our in-house team is responsible for deployment.

What's my experience with pricing, setup cost, and licensing?

Control-M is priced accordingly for larger environments. It is expensive for smaller environments with only a few hundred jobs running.

There are two different types of licenses available. The first is based on the number of jobs that we run per day, and the other is based on the number of agents that we install. My current project has a contract for five years. During the first two years, we are allowed to run any number of jobs using any number of agents. However, in the last three years, we have to stick to whatever is defined in the contract.

In past versions, BIM and Forecast were separate components that were available at an additional cost. Since version 9, however, everything is included and there are no costs in addition to the standard licensing fees.

Which other solutions did I evaluate?

For my current project, the client has always used Control-M.

What other advice do I have?

The latest version of Control-M is 9.20 but we are working with 9.18 because our client has certain servers where the OS is not compatible with 9.20. It is running on Linux machines and at this point, our client hasn't given us approval for the OS upgrade.

Our business users don't typically use Control-M. They have access to it but only use it when a critical chain is stuck and they want to check it themselves. They can use self-service for this, although most of the time, they don't.

An example of why they would use self-service is when a critical batch has failed and is stuck for a long time, and they want to see the approximate time that it will be completed. Also, during an audit, they can use self-service to see which users have certain access, such as production access or write permissions.

The Control-M users in our company have different roles. We have administrators, and we have people who specialize in migration. We also have people who look into scheduling and we have a team that just takes care of monitoring.

The number of people that we require for the day-to-day administration depends on the size of the project. In my current project, we have approximately 8,000 jobs actively running. We have approximately 17,000 configured. In our L1 team, we have eleven people, and we have eight members for each of our L2 and L3 teams.

We do not use the Control-M integrated file transfer capability in our workflows, although we do use the File Watcher feature. We have a tool from Axway called SecureTransport, where they handle the file transfer, but we can define this as part of a Control-M job.

The biggest lesson that I have learned from using Control-M is that anything can be automated. You can control various applications and it is simple to schedule jobs for products like SAP and databases.

My advice for anybody who is considering Control-M is that it has a wide variety of features compared to other tools. It is flexible, easy to use, and the web portal makes it simple for business users or application teams to access it without having to install it on a Windows server or a Citrix platform. 

I would rate this solution a nine out of ten.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor. The reviewer's company has a business relationship with this vendor other than being a customer: Partner
reviewer1899735 - PeerSpot reviewer
IT - VP at a financial services firm with 10,001+ employees
Real User
We have a better picture of our auditability
Pros and Cons
  • "We have a better picture of our auditability. When someone comes to us, and asks for sources, "How did the deltas occur?" We can provide answers quickly, or at least quicker than what we used to. We are actually sure of the information that we provide, where before it was like, "Hmm, I think it comes from over there. Let me double check, but it gets really convoluted over here and I think that is where it comes from." Now, if it is within the Control-M environment, it has a straightforward answer that we can provide with confidence."
  • "The community and the networking that goes on within that community need improvement. We want to be able to reach out to an SME, and say, "Hey, we are doing it this way. Does that make sense?" Ideally, they come back. and say, "Yes, it does make sense to do it that way. However, if you want to do it this way, then it is a little more efficient." We understand that one solution framework doesn't fit everybody. Depending on the breadth of the data and how broad it is, you may have different models for one over the other."

What is our primary use case?

It is controlling our workflows, ingesting data, and then loading it into our database platforms. In turn, that data is consumed by our internal clients.

We integrate the Control-M Python Client and cloud data service integrations with some of our cloud providers. We have pipelines going out to the public cloud and some pipelines that are internal.

We have public and private cloud channels as well as on-prem. The expectation for most large financial institutions is that we will get 99.9% to the public cloud eventually. We want everything to be in OpEx as opposed to CapEx. We don't want data centers. We just want access to our data and to be able to turn it into information, which in turn, turns it into actionable items. Ideally, we would love to not support any on-prem or hybrid solutions, having everything be public.

How has it helped my organization?

Control-M has improved our visibility and streamlined our operations. We have better clarity into data flows. We can resolve issues faster by not having to reverse engineer which pipeline the infraction may have come through. We are not completely there yet, but we have better clarity and visibility.

We have a better picture of our auditability. When someone comes to us and asks about sources, "How did the deltas occur?", we can provide answers quickly, or at least quicker than we used to. We are actually sure of the information that we provide, where before it was like, "Hmm, I think it comes from over there. Let me double-check, but it gets really convoluted over here, and I think that is where it comes from." Now, if it is within the Control-M environment, it has a straightforward answer that we can provide with confidence.

The speed of our audit preparation process is faster. When questions come in about flow, data, or sources, we don't have to try to reverse engineer anything anymore. We are able to go straight to Control-M and find out what the flow is or what happened. The visibility is there. We see the endpoint on this, such as, "What is the reverse flow on it? Where did it come in? Where did that data flow come from?" So, it is not a spaghetti mess anymore. This makes auditability easier. We are able to provide answers more quickly, which in turn, makes the audit process quicker.

Control-M has improved our business service delivery speed. It is more reliable and has accelerated our release schedules. We are also working on testing standards, and it has shortened the window, not time to market exactly, but the time to get things live.

Control-M is critical to our business. If the support ends, we are at risk in some of our critical flows. We have redundancy around it that has been purposely built. We do that with all of our solutions. That way, we are not tied into one specific vendor, then if something happens tomorrow, we don't have a fire drill. We have things in place, but to a certain extent, there is heavy reliance on this solution.

What is most valuable?

The most valuable feature is the Self Service tool. They have metrics in place almost all across the pipeline, which is really nice. 

What needs improvement?

We are not yet really a power user of it. You can take as many training classes as you need, but it is not until you are working with a subject-matter expert (SME) on it that you can find out how you can really make this tool sing. My engineers know how to work Control-M. However, if I ask them, "Oh, is this the most efficient way of doing it?" They may not be able to say, "Yes." It is doing what we want it to do. That is nice and okay, but is it the most efficient, effective way? So, we are not there yet.

For how long have I used the solution?

I have been using it for about four years.

What do I think about the stability of the solution?

The platform is good. We haven't had any major outages. The stability is there.

What do I think about the scalability of the solution?

We really haven't pushed it to any of its limits. No scalability concerns have come up for what we are doing.

If you came to me saying, "Hey, I was looking at Control-M, but it has some issues," I am going to sit there and go, "Tell me what the issue is." Right now, we are not using the far reaches of whatever cloud providers are out there. Control-M does well with the major providers.

How are customer service and support?

The community is not as robust as those of some of the other tools that Control-M replaced. The problem was that the other tools we were using didn't do everything that Control-M is now able to do, like monitoring and the entire pipeline flow.

The community and the networking that goes on within that community need improvement. We want to be able to reach out to an SME and say, "Hey, we are doing it this way. Does that make sense?" Ideally, they come back and say, "Yes, it does make sense to do it that way. However, if you do it this way, it is a little more efficient." We understand that one solution framework doesn't fit everybody. Depending on the breadth of the data, you may have different models for one over the other.

I would rate the technical support as seven or eight out of 10.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

We had a patchwork set of solutions in place that were getting the job done. The problem with that was we had a lot of SMEs within certain verticals. Therefore, there wasn't one overall picture. Every time we went from one step to another step, we had to start talking to another person to figure out what was going on. So, we were trying to bring everything together under one solution with Control-M.

We are able to have a better picture of our data consumption, e.g., what files or data are brought in. Previously, we would ingest data at different points. The question that would always come back to us was, "Where did this data come from?" Then, we would have to reverse engineer it. We had some documentation, but the documentation would be outdated; someone would change the pipeline and forget to change the documentation. With Control-M, we can see everything in one location, and to a certain extent, it is not static documentation that can go stale.

I am an engineer by trade. I have been doing this for over 30 years. I know that it is nice when someone puts together a document describing the environment, but as soon as that document is saved, it is outdated.

We don't throw another tool into the toolbox just because it is a nice pretty tool. We try to figure out what the benefits are. Ideally, in our world, we try to reduce the number of tools because I don't need 50 different screwdrivers in my tool kit. I make sure that I have a flathead and a Phillips, but I don't need 50 screwdrivers. Here, we brought in this solution and it replaced some existing solutions. Now, my engineers don't need to know X number of products. They only need to know half of X number of products.

What about the implementation team?

The tool was vetted by another group before making it available to the organization and putting it into our toolbox. Then, when it was available, we looked to leverage it.

What's my experience with pricing, setup cost, and licensing?

One of the restrictions that we had was with some of the licensing, and I don't have insight into the financial side of the product. I don't know what the licensing on the product is, but we don't have an unlimited enterprise license. So, there might be a limitation on either the cost of the licensing or the number of seats.

What other advice do I have?

There is always a learning curve any time you are using a new product. Our engineers who are using Control-M are kind of happy with it. There really are no negatives on its learning curve. I am always wary of new products, since each one is another thing that someone needs to learn, but now there are other products that we don't use because of Control-M. What I would not be open to is bringing in another product where we need our engineers to know how to work it and make it efficient, as well as support other products already in our environment. So, I like that we can get rid of three or four products and replace them with a single product. As long as the learning curve is not too steep, that is an advantage to me.

We are looking into using Control-M to deliver analytics for complex data, with the solution doing either machine learning or complex analytics on top of the data flow. While we do some analytics today, it is not to the extent that we really want.

I would rate this solution as a high seven or low eight (out of 10).

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Sr. Automation Engineer at a computer software company with 1,001-5,000 employees
Real User
Saves time, offers great auditing capabilities, and has good automation
Pros and Cons
  • "It has certainly helped speed things up."
  • "They can improve their interface."

What is our primary use case?

I've been with the same company for 22 years. The use case started out truly as a batch processing solution. That was what we originally got it for back in the day to help us automate what was being done manually or being done through homegrown tools or scripts, et cetera. The use cases evolved through the years. Now, we use it to orchestrate workflows that are touching traditional data centers and that are going out to the cloud and bringing it back.

From one spot, we have a single pane of glass. Like many companies, our systems are getting more complex and more diverse, with cloud and edge computing, containerization, et cetera. However, we have one place where we can go and look and see what's going on. If something happens, we can check what happened and where it happened. Today, we're dependent upon a lot of services and cloud technology that sometimes we don't know the ins and outs of.

A big challenge is making sure that certain things run daily or on a periodic basis. That really was the driving use case. We had a lot of manual tasks going on, and if someone left on vacation, for example, something might not get done for two or three days, a week, or two weeks. This solution takes all of that away.

The main use case was to get away from having to stare at a system or a screen, and just let things run, let the workflows flow, and only be notified if there's something wrong. That was really a big driving use case.

How has it helped my organization?

It freed people up to work on exciting projects instead of mundane tasks. No one has to sit around and stare at a screen all day long. No one has to reinvent the wheel for the 50th or 500th time to do tasks like putting a file out into an S3 bucket or an HDFS Hadoop file store, since the integration is already there. It's already done for them. They just drag, drop, click, and they're done. It has freed people up to do the exciting work that we should be doing anyway. No one wants to be doing boring work.

What is most valuable?

I am a big proponent of the Automation API and Jobs-as-Code. That is Control-M in the DevOps world. It opens up what is traditionally an operations tool: developers can jump right in now, which gives them ownership and integrates with the existing DevOps tools that they have. That is a huge feature that I just love.
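To make that concrete: in the Jobs-as-Code model, a job definition is plain JSON that can live in source control and be pushed through the Automation API. Below is a minimal Python sketch of that flow. The session/login, build, and deploy services and the definitionsFile upload field follow my reading of the public Automation API documentation, and the host, server name, and credentials are placeholders, so treat this as an illustration to verify against your environment.

    import json
    import requests

    ENDPOINT = "https://controlm.example.com:8443/automation-api"  # assumed host

    # A folder with a single command job, expressed as code rather than clicks.
    job_definition = {
        "DemoFolder": {
            "Type": "Folder",
            "ControlmServer": "ctmserver",  # assumed Control-M server name
            "DailyExtract": {
                "Type": "Job:Command",
                "Command": "python /opt/scripts/extract.py",
                "RunAs": "batchuser",
            },
        }
    }

    # Log in for a session token (credentials are placeholders).
    login = requests.post(
        f"{ENDPOINT}/session/login",
        json={"username": "apiuser", "password": "secret"},
        verify=False,  # EM hosts often use self-signed certs; tighten in production
    )
    login.raise_for_status()
    headers = {"Authorization": f"Bearer {login.json()['token']}"}

    # 'build' validates the definition; 'deploy' writes it to the server.
    for service in ("build", "deploy"):
        response = requests.post(
            f"{ENDPOINT}/{service}",
            headers=headers,
            files={"definitionsFile": ("jobs.json", json.dumps(job_definition))},
            verify=False,
        )
        response.raise_for_status()
        print(service, "OK")

The point is less the specific calls than the workflow: the definition sits in Git next to the application code, and a CI step validates and deploys it like any other artifact.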

There's also Application Integrator. It doesn't matter if you're trying to integrate with on-premises, off-premises, an API, a container, or serverless functions; you just design that integration and then it's available instantly, and that's a huge time saver.

It's rather easy to create, integrate, and automate data pipelines with Control-M, though I can only give a broad answer: it can be as easy as drag and drop, or as complex as designing the integrations. If you use customization, you can access a data lake that your organization developed. For the typical user out there, the difficulty, on a scale of one to five with one being easy and five being hard, is probably a two and a half. For most people, it's very easy, and it's getting easier as it's all web-based nowadays. Alternatively, it can be all code-based.

I have not explored the Python Client too much; I've tinkered with it, and that's been the limit of my exploration. With the integrations like AWS, we've made extensive use of it, and it is very easy for anybody to do. The Python Client has a lot of great possibilities, especially in the data science arena; sadly, we have not yet had an opportunity to really play with it.
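For the curious, my tinkering looked roughly like the sketch below, using the ctm-python-client package. The class and method names follow the project's published examples as I remember them and may lag the current release; the endpoint, credentials, and job details are placeholders.

    from aapi import JobCommand
    from ctm_python_client.core.comm import Environment
    from ctm_python_client.core.workflow import Workflow, WorkflowDefaults

    # Placeholder endpoint and credentials for an on-premises Automation API.
    env = Environment.create_onprem(
        "https://controlm.example.com:8443", username="apiuser", password="secret"
    )

    # Defaults apply to every job added to this workflow.
    workflow = Workflow(env, WorkflowDefaults(run_as="batchuser"))

    # One command job inside a folder, defined entirely in Python.
    workflow.add(
        JobCommand("HelloJob", command="echo hello from jobs-as-code"),
        inside_folder="DemoFolder",
    )

    # build() validates against the server; deploy() persists the definitions.
    print(workflow.build().is_ok())
    print(workflow.deploy().is_ok())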

The Control-M interface for creating, monitoring, and ensuring delivery of files as part of your data pipeline has gotten better. It is not perfect, but it has come a long way over the years. Nowadays, most of it is web-driven, and a lot of it can be API-driven if you wish. There's probably still some future work to be done there; however, the average user coming in and starting to use it for the first time will need a little training and handholding for maybe the first week or so. Then you can start setting them free to go out and use it on their own.

The orchestration of our data pipelines and workflows has given us a single point of view, too. Management doesn't care about the bits and pieces. A workflow or a data pipeline could have 100 or 1,000 components behind it, and management does not care about that. Management cares whether the SLA has been met or not. They want that easy-to-see red light or green light. We can provide them with that. The solution drives self-service, and it helps; a manager doesn't have to call somebody in IT and wait around for an answer.

They can immediately get that information for themselves, consume it, and understand, "Hey, you know what, with this data pipeline over here, we're going to be 15 minutes off our SLA for today." Then, they can start asking why. What I like about parts of Control-M, like Batch SLA Impact, is that they can start doing some of that analysis themselves, for example, "This is late due to the fact that the system was down for maintenance for two hours last night." That's really beneficial in today's business world.

The automation of Control-M has sped up everything. We can integrate directly into existing pipelines, and the DevOps teams can get anything integrated with their Jenkins deployments. They don't have to wait for traditional operations functions; this is all built-in. It validates and checks, and in some cases, it automatically deploys the agents and the configurations. That's something that years ago you'd have had to wait for. The speed of delivery has vastly improved.

Nowadays, auditing is as simple as running a report. If this falls under an auditable category, we can just hit a button and the report is done. Control-M audits everything, even if it is not under the regulatory or audit spotlight. Every process, every movement, and every change is logged by the system. If there's ever a question, you’ll be able to find a why and a when. There’s an audit trail.

It certainly helped speed processes up. It eliminates what I call the manual gaps between certain steps. I don't have to send an email to somebody to say, "Hey, guess what? That file's ready. Now you can run process X, Y, Z." The system just says, "Hey, the file is there, let's go." It has eliminated those gaps between parts of the workflow. It has also helped optimize the infrastructure needed, as it's like a Tetris puzzle: I have these ten different workflows that I'm trying to run, and before, I may have had ten dedicated systems for them. Now I know that I don't need that.

We use this model all the time. We can run those ten processes on three systems and be just fine. That saves money. The solution is not only speedy, but it also saves money.
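On the "the file is there, let's go" point: in Automation API terms, that gap is typically closed by a file-watcher job that raises an event the downstream job waits on. The sketch below shows the shape of such a definition as a Python dict, deployable with the same kind of build and deploy calls described above. The job-type and event attribute names are from my recollection of the Automation API documentation, and the paths are placeholders, so verify them against your version.

    # A watcher raises an event when the file lands; the loader waits for it,
    # so nobody has to send a "the file's ready" email.
    pipeline = {
        "IngestFolder": {
            "Type": "Folder",
            "WatchSalesFile": {
                "Type": "Job:FileWatcher:Create",
                "Path": "/data/incoming/sales.csv",  # placeholder path
                "RunAs": "batchuser",
                "eventsToAdd": {
                    "Type": "AddEvents",
                    "Events": [{"Event": "sales_file_ready"}],
                },
            },
            "LoadSales": {
                "Type": "Job:Command",
                "Command": "python /opt/scripts/load_sales.py",
                "RunAs": "batchuser",
                "eventsToWaitFor": {
                    "Type": "WaitForEvents",
                    "Events": [{"Event": "sales_file_ready"}],
                },
            },
        }
    }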

They are doing a great job with continuing to drive the open-source model of it. Five years ago, if you looked for Control-M anywhere, you would not have found it. Today, that model has changed. They're actively publishing on GitHub.

You can download for free an entire container and run Control-M at home if you want to tinker with it. That was unheard of a few years ago. You can type a query in Google and start to see all sorts of documentation that is now available to the public. The major strides that they have made there are pretty darn good.

What needs improvement?

If you want to take it and ramp it up to doing some very heavy-duty integrations, you can find yourself at first dealing with a difficult integration. However, once you get that integration going for maybe a month or so, the next person after you will have less difficulty. That's the power. 

They can improve their interface. They're going through huge modernization efforts and they're getting there. They're probably 75% there, however, there's still another 25% to go.

For how long have I used the solution?

I've been using the solution for 22 years.

What do I think about the stability of the solution?

Since it supports business, it has to be stable. It's very stable. We have not had major outages or anything. That's always a good thing, however, like with any solution, its stability is going to depend on how you deploy it and what safeguards you put in place, including high availability and disaster recovery, et cetera. All the hooks for that are in the product, however, it's up to you to decide how you're going to use those hooks.

What do I think about the scalability of the solution?

It's highly scalable. You can run five things in it today and easily scale up to run 1,005 things tomorrow. In terms of scalability, there are no issues there.

How are customer service and support?

Technical support tends to be very helpful.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

I used to work for an insurance company and I used Computer Associates. It was called CA-7 and CA-11, which are similar tools.

We tried to use Computer Associates before this, but it didn't support the systems we needed and the integration was next to impossible.

How was the initial setup?

I was involved in the deployment and initial setup of the solution right from the beginning.

We had jobs and workflows running within the first day, which was pretty good. We don't use the Helix model; however, there is a Helix model you can purchase, in which everything's hosted by BMC. You can be up and running literally in hours, which is reasonable. There's a learning curve; however, if you do not get some value out of it within two days, you're probably doing something wrong.

At the time, there were only two of us deploying the solution. Today there are only three of us.

It's business-wide: everything from data to marketing to finance. Even though it probably wouldn't make sense to anybody else, it touches everything. It's deployed across Windows, Linux, containers, VMs, the cloud, et cetera.

If anybody has a use case or wants to learn more about it, we'll show them. Anybody in our organization can get basic access and can tinker around in an alpha test environment. This includes non-technical people. We have non-IT people that use it.

If they can self-service and maybe design some parts themselves, that's a huge win right there. We have a very open model of deployment.

There is occasional patching, and vulnerabilities come out from time to time. Most of the patching nowadays can be automated if you're using the Helix-based solution; a lot of that is handled by BMC.

What about the implementation team?

We did not use an integrator, reseller, or consultant for the deployment.

What's my experience with pricing, setup cost, and licensing?

I can't speak to the exact licensing costs. 

Which other solutions did I evaluate?

Every few years we go through a reevaluation. We'll go through and look at what's on the market and what companies have come up with or released new versions. We'll go through and we'll say, "Okay, let's compare these, what do we need and what are all the tools offered out there?" We do that roughly every five years and it keeps us on our toes.

The biggest difference as of late is the API and Jobs-as-Code. Control-M is light years ahead of the competitors and what they're offering. Other competitors are starting to get APIs; however, only BMC is working with Jobs-as-Code and is in the lead. To my knowledge, they're really one of the only ones where you can define your entire workflow as code.

What other advice do I have?

Control-M is pretty critical to our business as it runs many different business processes every day, and if it wasn't there, we would probably hire many more people, be a lot slower, and be more prone to error.

We use a hybrid deployment. We have parts in the traditional data center, parts in the cloud, and sometimes parts that live in containers that only exist for two minutes. It is very much a hybrid mix of goodies with our deployment.

I'd advise potential new users to examine it today and not think about what it did ten years ago. Control-M is an old product. It has been around since we all used mainframes, however, just because something's been around for a long time, doesn't mean it's a piece of junk or doesn't work with modern technologies. It has adapted and grown with the times. Control-M did cloud-based work before many of us were even talking about the cloud. It's hard to get rid of negative perceptions sometimes, however, the best thing for people to do is to head out to the internet, look it up, and go out to GitHub.

If you have a technical team, send them out to GitHub. You can download everything in an image or in a container and try it yourself. It doesn't cost you a nickel. 

I'd rate the solution nine out of ten.

The biggest advice I can give is to try it out. Don't only believe what the PowerPoints tell you. There's no excuse for not having a deployment running within hours. Be willing to think about how it can solve problems in new ways. Sometimes we try to find a new tool because we have a square problem, and we get upset that all the tools we're looking at only have round solutions. Sometimes the reason a tool only has round solutions is that that's the proper way to solve the problem. You have got to be willing to break down whatever you're trying to do, whatever workflow you're trying to automate or integrate, and take it in pieces.

If all you want to do is save yourself a lot of money, use Cron, and use Windows Task Scheduler. However, if you want to take your business to the next level and start to get to the point where you can automate to remediate and audit, that's where tools like Control-M come into play.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Chris Wahl - PeerSpot reviewer
Operations Engineer at West Bend Mutual Insurance Company
Real User
Saves us thousands of hours, is widely applicable, user-friendly, and features top-notch reporting
Pros and Cons
  • "The reporting is top-notch. I haven't found any other applications on the market that can replicate what Control-M offers. The alerting is very good, and I think their service monitoring is the best in the industry."
  • "The stability could be improved. I ran into an issue with a recent Control-M patch. The environment would become unstable if security ports were scanned. This is an area they need to improve on, but ultimately it's a relatively small improvement."

What is our primary use case?

We use the solution in Western Mutual Insurance Group's environment for the daily scheduling of around 11,000 jobs. Our number of end-users is in the hundreds, across 18 to 20 teams. We have three different physical locations as a company. Since COVID, we are a partially remote workforce as well, so we have multiple locations.

It's essential that the solution orchestrates our workflows. Regarding processes like file transfers and data workflows, we want one source for that. We want one area where we can check and see how things are progressing, and Control-M is invaluable. Everyone has access in our environment to Control-M, and we all use it heavily. We utilize a ton of plugins in our environment. We started the transition into servers and are seeing what our license allows in that area. We try to take advantage of everything we can.

We use Control-M to replace a lot of our manual logging of job data. It's been very valuable in terms of logs that can output alerts.

I just did an audit earlier this year, and it was a swift process using the product. It took me less than a few hours, and without the solution, it would potentially take a couple of days to a week.

We essentially have a nightly batch cycle. We process data overnight so it's available for end-users during the day. With manual execution instead of Control-M, this nightly batch cycle would turn into a weekly or monthly one.

How has it helped my organization?

I recently took over as admin of Control-M, about a year ago. Since then, the question has been how we can further utilize Control-M in our environment. We haven't yet found the limits of what Control-M can do; we're finding better ways to apply it every day. We have gone from the old days, when we manually scheduled jobs, to the current paradigm of using an automation tool, which has made the process much more manageable.

We define Control-M internally as a "critical business application." I would say that if Control-M were not available, the impact would be catastrophic to our business.

What is most valuable?

The reporting is top-notch. I haven't found any other applications on the market that can replicate what Control-M offers. The alerting is very good, and I think their service monitoring is the best in the industry. 

The solution is a key part of our system and I have not seen any significant limitations with it. It's very reliable and performs as advertised.

We're just starting our data pipeline journey. Compared to other products on the market, I believe Control-M's is the easiest to use; it came out ahead in terms of ease of use every time. I rate them very highly in that area. We're primarily an Azure corporation, and we found that the solution's built-in integrations with Azure are straightforward to use.

We actively build out methods of alerting, for instance, when workflows in Control-M don't complete, as this impacts our end-users and the managers who support the teams providing data for those end-users. Control-M has a ton of built-in integrations that make it more visible to end-users when data is unavailable. That has been very useful in our environment.
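As one example of the kind of alert hook we build (a sketch, not our exact setup): a small script that a failure-handling job can run to push a missed workflow into a chat or incident channel. The webhook URL and payload shape are placeholders.

    import json
    import sys
    import urllib.request

    WEBHOOK_URL = "https://hooks.example.com/controlm-alerts"  # placeholder

    def notify(job_name: str, status: str) -> None:
        """Post a short alert so end-users see the miss without opening Control-M."""
        payload = {"text": f"Control-M workflow '{job_name}' ended {status}"}
        request = urllib.request.Request(
            WEBHOOK_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request)

    if __name__ == "__main__":
        # Control-M can pass the failing job's name and status as script arguments.
        notify(sys.argv[1], sys.argv[2] if len(sys.argv) > 2 else "NOTOK")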

What needs improvement?

The stability could be improved. I ran into an issue with a recent Control-M patch. The environment would become unstable if security ports were scanned. This is an area they need to improve on, but ultimately it's a relatively small improvement.

For how long have I used the solution?

We have been using the solution for around seven years. 

What do I think about the stability of the solution?

One patch had some issues, but the fix pack was very helpful. Other than that, we haven't had any stability issues with this product. So I'd rate it very highly.

What do I think about the scalability of the solution?

The scalability is excellent. We're looking into options in Azure for scaling up and down in our environment, and Control-M has been essential in accommodating that.

How are customer service and support?

Technical support would be a 10. They're always available. They've been very helpful with any questions I have. There are multiple means of contacting them, and they've always been responsive. The technical account partner, Jake, has been very helpful. The account rep, Chris, has also been very responsive.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

Control-M in our environment predates my time. I believe the company first implemented the solution around 15 years ago.

How was the initial setup?

The initial setup was before my time. We started off as a mainframe-exclusive installation of Control-M and then transitioned to distributed servers from there. I am a team of one.

What was our ROI?

The solution's automation has improved our business service delivery speed. Our big push this year has been toil reduction and the automation of manual tasks that ultimately take time away from our engineers. Control-M is factored into probably north of 80% of those reductions with its ability to automate tasks. So far this year, we're at about 4,000 hours of toil reduced, and I would say Control-M has played a factor in 3,000 of those hours.

What other advice do I have?

I would rate this solution a ten out of ten. Control-M is critical to our business.

There are other solutions like Control-M out on the market, but in every recent market evaluation, Control-M has always come out on top. I think they are becoming more cloud-native as they progress with their Control-M Web Services. They're more reliable than the others on the market right now. 

I would advise anyone to start with a trial version of this product. I think they'll be very impressed with it. 

We don't use Python to a significant degree at all in our environment. We have been looking into that, but nothing solid yet. We don't use AWS but are looking to get into it in 2024.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Balabrahmam Chakka - PeerSpot reviewer
Integration Administrator at Sainsbury's Supermarkets Ltd
Real User
Reduced the number of jobs that we run daily
Pros and Cons
  • "Control-M has helped us resolve issues 70% to 80% faster. It provides us with alerts instead of having someone go to that particular server and check the logs to determine where the issue is. We can simply click on the alert information, then everything is in front of us. This provides us with time savings, human effort savings, and process savings."
  • "Control-M reporting isn't that good. It is very limited. We would like the ability to create our own reports as well as the ability to publish dashboards in the cloud, which would help us. Improved reporting will help us determine statuses and get the answers that we need. However, I personally think BMC is not focusing on the reporting. I have even visited the BMC office in India, and asked, "Why haven't you improved the reporting?""

What is our primary use case?

I work for the second largest chain of supermarkets in the UK. We are running about 90% of our jobs through Control-M. This applies to jobs and scripts both on-premises and in the cloud.

When we used Control-M version 7, we were just doing scheduling. When we moved to Control-M version 9 six months ago, we started using the cloud plugins, like AWS.

How has it helped my organization?

Control-M is business-critical for our operations. It does all our monitoring and tracking.

Our command center people watch the Control-M job status and alerts. Since the pandemic started and we have been working from home, we have been providing them with Self Service. We started this two or three months back. Now, they can watch the jobs and alerts on their mobile phones and iPads instead of logging into their laptops.

We set up a file transfer mechanism because it is easier for Control-M to track end-to-end.

We use Control-M as part of our DevOps automation toolchains. We have a four-person team for Control-M. We help the DevOps team create new jobs. We assign a dedicated resource to understand their requirements and how they can be integrated with other jobs. Because Control-M works end-to-end, it is critical for our DevOps daily jobs.

We use Control-M to streamline our data and analytics projects. Control-M has helped improve our data transfers. If there are no security concerns, the data can directly link to downstream systems. We use Control-M to watch all the transfers of files to their targets.

What is most valuable?

All our Control-M alerts go to our internal automation.

It has two-way integration. We now have a ServiceNow integration. 

What needs improvement?

Control-M reporting isn't that good. It is very limited. We would like the ability to create our own reports as well as the ability to publish dashboards in the cloud, which would help us. Improved reporting will help us determine statuses and get the answers that we need. However, I personally think BMC is not focusing on the reporting. I have even visited the BMC office in India, and asked, "Why haven't you improved the reporting?"
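Until the built-in reporting improves, one workaround is to pull job statuses through the Automation API and shape them into our own reports. Below is a minimal Python sketch; the run/jobs/status service and the response field names are based on my recollection of the public documentation, and the endpoint and token are placeholders to verify against your version.

    import csv
    import requests

    ENDPOINT = "https://controlm.example.com:8443/automation-api"  # assumed host
    TOKEN = "session-token-from-login"  # obtained via the session/login service

    # Pull current job statuses; 'limit' caps the size of the result set.
    response = requests.get(
        f"{ENDPOINT}/run/jobs/status",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"limit": 1000},
        verify=False,  # tighten certificate handling for production use
    )
    response.raise_for_status()

    # Flatten the statuses into a CSV that can feed a dashboard or spreadsheet.
    fields = ["name", "folder", "status", "startTime", "endTime"]
    with open("job_status_report.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(fields)
        for job in response.json().get("statuses", []):
            writer.writerow([job.get(field, "") for field in fields])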

There are some latency issues with jobs between on-premises and the cloud. BMC is helping a lot to check the imports and exports from version 7 to version 9, including the EM server and the mainframe.

Control-M could improve agentless connectivity a little more. We are using agents for almost 100% of our jobs, but Sainsbury's Bank has different security mechanisms and we cannot install the Control-M agent there, so we have to use agentless connections. The agentless connection fluctuates a lot, which triggers alerts.

For how long have I used the solution?

I have worked with Control-M for almost 10 years, since 2010.

What do I think about the stability of the solution?

The stability is very good. 

What do I think about the scalability of the solution?

The scalability of the latest version is a drastic improvement compared to version 7.

How are customer service and technical support?

We are getting good help from them. When I use Support Central, I can also see tickets that have been created by my colleagues.

Which solution did I use previously and why did I switch?

We currently have IBM TWS as a job scheduler, but it doesn't automate ticketing, whereas Control-M has automatic ticketing.

We are using TWS for mainframe data. We are looking to start moving all our TWS jobs to Control-M now that Control-M is in the cloud. We are looking at moving these jobs around September or October, then we will have 200,000 jobs daily in Control-M.

How was the initial setup?

We are trying to import from Control-M version 7 to Control-M version 9, but have experienced a major problem with its new features (database-related stuff). We are slowly fixing this as we go, with the help of BMC. Right now, we are doing this process step-by-step, but we can't upgrade everything to the latest version. We can only move everything to Control-M version 9.5.

Initially, we were first-timers doing the cloud. We had so many trials and errors. For importing, we created virtual machines in AWS and set up a lot of automation. However, we needed a static IP address for Control-M. So, we had to start from scratch to create new virtual machines with static IP addresses.

We are currently importing step-by-step. We still have two mainframe servers that we need to do and should be done by the end of August.

What was our ROI?

We have 70,000 jobs running daily. Control-M has reduced the number of jobs that we run; we used to have more than 500,000 jobs running daily. This is very important to us.

Control-M has helped us resolve issues 70% to 80% faster. It provides us with alerts instead of having someone go to that particular server and check the logs to determine where the issue is. We can simply click on the alert information, then everything is in front of us. This provides us with time savings, human effort savings, and process savings.

Which other solutions did I evaluate?

You can't compare other tools to Control-M, because Control-M is further ahead of any other tool.

What other advice do I have?

Once a year, as part of our disaster recovery, we restart Control-M and see what happens. Next, we will run those jobs through Control-M. Then, we will show management, "This is what happens if you use Control-M and if you don't use Control-M."

There are some areas of our business where we don't have Control-M. When we start doing those areas through Control-M, it will be an end-to-end solution.

We don't use Control-M for file transfers. We have proposed using Control-M for file transfer with version 9, which is in the cloud.

In the future, we will give control to the DevOps team through BMC AMI Change Manager. They will create the jobs, then send them to our BMC Control-M team for review, testing, and promotion to production. However, adopting this will take some time.

I would rate Control-M as a nine out of 10.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.