Control-M Overview

Control-M is the #1 ranked solution in top Process Automation tools, top Managed File Transfer (MFT) tools, and top Workload Automation tools. PeerSpot users give Control-M an average rating of 8.8 out of 10. Control-M is most commonly compared to AutoSys Workload Automation. Control-M is popular among the large enterprise segment, accounting for 76% of users researching this solution on PeerSpot. The top industry researching this solution is financial services, with professionals from that sector accounting for 27% of all views.
Control-M Buyer's Guide

Download the Control-M Buyer's Guide including reviews and more. Updated: April 2023

What is Control-M?

Control-M simplifies application and data workflow orchestration on premises or as a service. It makes it easy to build, define, schedule, manage, and monitor production workflows, ensuring visibility, reliability, and improving SLAs.

  • Accelerate new business applications into production—by embedding workflow orchestration into your CI/CD pipeline
  • Scale Dev and Ops collaboration with a Jobs-as-Code approach (see the sketch after this list)
  • Simplify workflows across hybrid and multi-cloud environments with AWS, Azure and Google Cloud Platform integrations
  • Deliver data-driven outcomes faster, managing big data workflows in a scalable way
  • Take control of your file transfer operations with integrated, intelligent file movement and visibility
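
As a rough illustration of the Jobs-as-Code idea referenced in the list above, the sketch below builds a minimal job definition as a Python dictionary and writes it out as JSON so it could be validated and deployed from a CI/CD pipeline with the Control-M Automation API CLI. The folder, job, host, and run-as names are placeholders, and the field names are an approximation of the Automation API JSON format; check them against the reference for your version.

```python
import json

# Minimal Jobs-as-Code sketch: one command job inside a folder.
# Field names follow the Control-M Automation API JSON format as commonly
# documented; treat them as an approximation and verify against the
# Automation API reference for your Control-M version.
job_definitions = {
    "DemoFolder": {
        "Type": "Folder",
        "ControlmServer": "ctmserver",                    # placeholder server name
        "ExtractDailyOrders": {
            "Type": "Job:Command",
            "Command": "/opt/app/bin/extract_orders.sh",  # placeholder script
            "RunAs": "batchuser",                         # placeholder run-as user
            "Host": "apphost01",                          # placeholder agent host
        },
    }
}

with open("demo_jobs.json", "w") as f:
    json.dump(job_definitions, f, indent=2)

# In a CI/CD pipeline you would typically validate and deploy this file with
# the Automation API CLI, e.g.:
#   ctm build demo_jobs.json
#   ctm deploy demo_jobs.json
# (shown as comments; requires the ctm CLI and a configured endpoint)
```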

Control-M was previously known as Control M.

Control-M Customers

CARFAX, Tampa General Hospital, Navistar, Amadeus, Raymond James, Railinc

Control-M Pricing Advice

What users are saying about Control-M pricing:
  • "Its pricing and licensing could be a little bit better. Based on my experience and discussions with other existing customers, everybody feels that the regular Managed File Transfer piece, not the enterprise one, is a little overpriced, especially for folks who already have licensed Advanced File Transfer. We understand that Advanced File Transfer is going away and is going to be the end of life, and there is some additional functionality built into MFT, but the additional functionality does not really correlate with the huge price increase over what we're paying for AFT already. This has actually driven a lot of people to look for alternative solutions."
  • "This is now from my previous years as support for banks and big companies. If it's not enterprise scale, I find that it's too expensive for smaller companies. You really have to be quite big and need to have a dedicated support staff to run it, then you'll be fine. What we've seen at smaller companies, it's too expensive because they want to automate everything. Now, stuff that can literally run once a day for the rest of their lives is costing them $3 a job a day. It becomes too expensive, eventually. They are not seeing the return on investment because it's not business critical. Nobody is going to die or they're going to lose money if that job didn't run exactly at 11 minutes past 4:00. It's definitely for bigger enterprise companies, especially banks or healthcare providers. We have had an instance where Control-M was unavailable due to external factors for 20 minutes and there was a loss of almost a million euros because the solution involved logistics."
  • "The cost is basically $100 a job, give or take."
  • "We are paying way more for Control-M than we've paid for any of our other scheduling tools."
  • "You must accept that BMC licensing can be very confusing. No one can easily understand how they calculate things, whether it is user-based, job-based, or server-based. The calculation is quite tough. How BMC calculates licensing is not easily available anywhere."
  • "Initially, our licensing model was based on the number of jobs per day. That caused some issues because we were restricted to a number. So at our renewal time we said, 'We want to convert from number of jobs to number of endpoints.' That cost us extra money but it gave us additional capabilities, without worrying about the number of jobs."
  • "This is an area where it is a little difficult to work with BMC. They want to do licenses by job, which is what we have. For example, the simplest is to license by job, but they can also license by nodes. While the licensing is simple to use, it might not be the correct licensing model for the customer. It is okay because we want to license by job, which is something measurable. At the end of the day, licensing by job is the most important."
Control-M Reviews

    AVP - Systems Engineer at a financial services firm with 10,001+ employees
    Real User
    Top 10
    Allows us to integrate file transfers more readily, resolve issues quickly, and orchestrate a diverse landscape of vendor products
    Pros and Cons
    • "The File Transfer component is quite valuable. The integration with products such as Informatica and SAP are very valuable to us as well. Rather than having to build our own interface into those products, we can use the ones that come out of the box. The integration with databases is valuable as well. We use database jobs quite a bit."
    • "A lot of the areas of improvement revolve around Automation API because that area is constantly evolving. It is constantly changing, and it is constantly being updated. There are some bugs that are introduced from one version to the next. So, the regression testing doesn't seem to capture some of the bugs that have been fixed in prior versions, and those bugs are then reintroduced in later versions."

    What is our primary use case?

    Control-M supports a lot of business processes. It supports some of the HR functions. I don't know if payroll is directly supported, but we do run jobs through PeopleSoft, which obviously impacts HR. Recently, we've started using the SAP module. So, we're making a transition from PeopleSoft to SAP, and I also see some payroll functions happening there.

    How has it helped my organization?

    We use Control-M to orchestrate a diverse landscape of vendor products such as Pega, MuleSoft, etc. File transfers and data feed fetching are quite important for us. So, a lot of data processing happens through Control-M.

    Control-M provides us with a unified view where we can easily define, orchestrate, and monitor all of our application workflows and data pipelines. Of course, such a diverse landscape requires you to make the effort to utilize Control-M to tie everything together or to act as the glue. Once you do that, everything is clearly defined, and you can view these disparate systems using one unified pane. If you don't define it correctly, then obviously Control-M won't have that insight, and so you'll have to go to multiple locations to go look at your job statuses.

    We use its web interface. It is primarily for the application support teams to go monitor their own jobs. The jobs defined within Control-M are tightly controlled by a specific group of people. There are also people who need access to view that the jobs were completed successfully or why the jobs may have failed. These people are given access through Control-M web to view and monitor the jobs that they support or the applications they support. They're usually able to log on without having to install any client on their personal workstations. So, it's quite convenient. We have not implemented its mobile interface.

    The integrated file transfers with our application workflows have certainly sped up our business service delivery by 80%. It has allowed the business to integrate file transfers more readily. Prior to utilizing the Control-M module, people had to write their own file transfer scripts in a scripting language of their choice, to varying degrees of effectiveness. With the integrated File Transfer solution within Control-M, there is a standardized way of performing file transfers, along with the capability of file watching and grabbing the file names that were transferred, making it much more versatile.

    Control-M can immediately report when a job fails. If you have proper monitoring in place, you're notified immediately when your business flows are impacted. In the past, when you ran jobs using cron or just wrote shell scripts, you were really left in the dark. Implementing Control-M has made the business realize how critical it is to have proper error coding within the scripts that they schedule: if a script doesn't report any errors or redirect the system output into log files, there is no way to detect that a job has failed, even from within Control-M.
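
    The point about error coding is easiest to see in a small wrapper. The sketch below is not from the reviewer's environment; it is a minimal illustration, with placeholder paths and commands, of running a batch step, sending its output to a log file, and exiting non-zero on failure so that a scheduler such as Control-M can actually detect the failure.

```python
import subprocess
import sys

LOG_FILE = "/var/log/batch/extract_orders.log"    # placeholder log path
COMMAND = ["/opt/app/bin/extract_orders.sh"]       # placeholder batch step

def main() -> int:
    # Capture stdout/stderr so the output is available for troubleshooting.
    with open(LOG_FILE, "a") as log:
        result = subprocess.run(COMMAND, stdout=log, stderr=subprocess.STDOUT)
    if result.returncode != 0:
        # A non-zero exit code is what lets the scheduler mark the job as failed.
        print(f"step failed with exit code {result.returncode}", file=sys.stderr)
        return result.returncode
    return 0

if __name__ == "__main__":
    sys.exit(main())
```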

    We've automated many time-consuming business reports and other things that were very manual and took a tremendous amount of man-hours. We've also automated a lot of maintenance using Control-M. We've integrated with Ansible Tower, so we are now able to run Ansible playbooks and Ansible job templates. With the scheduling capability and the multitude of integrations that Control-M offers, it really acts as the unifying glue and as a communicator and orchestrator across the enterprise. With Ansible Tower, you can run a number of playbooks to perform patching, reboots, and whatever maintenance the infrastructure teams require, but you can't do that while a given business is still operating; you could, however, do it for another business that isn't operating at the moment. It is very hard to coordinate that without knowing which lines of business have jobs running. With Control-M, you can see that, and you can enact workload policies to put jobs on hold prior to running Ansible playbooks. Once your Ansible playbook is complete, you can release the jobs again by deactivating the workload policies. So, it makes those processes very streamlined.
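
    As a rough sketch of the kind of glue step involved, the script below launches an Ansible Tower/AWX job template through its REST API and waits for it to finish; scheduled as a Control-M job, it could sit between a "hold jobs via a workload policy" step and a "release jobs again" step. The host, token, and template ID are placeholders, and this illustrates the general pattern rather than the reviewer's actual integration.

```python
import time
import requests

TOWER_URL = "https://tower.example.com"      # placeholder Tower/AWX host
TOKEN = "REDACTED"                           # placeholder OAuth token
TEMPLATE_ID = 42                             # placeholder job template ID
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def launch_and_wait() -> str:
    # Launch the job template (AWX/Tower REST API v2).
    resp = requests.post(
        f"{TOWER_URL}/api/v2/job_templates/{TEMPLATE_ID}/launch/",
        headers=HEADERS,
    )
    resp.raise_for_status()
    job_id = resp.json()["job"]

    # Poll until the playbook run reaches a terminal state.
    while True:
        job = requests.get(f"{TOWER_URL}/api/v2/jobs/{job_id}/",
                           headers=HEADERS).json()
        if job["status"] in ("successful", "failed", "error", "canceled"):
            return job["status"]
        time.sleep(30)

if __name__ == "__main__":
    status = launch_and_wait()
    # Exit non-zero on failure so the scheduler marks the job as not OK.
    raise SystemExit(0 if status == "successful" else 1)
```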

    We do use the Role-Based Administration feature. We have been allowing other groups to gain more control over their agents so that they can define connection profiles, and they can do a little bit more on their side without inundating the main team with a lot of tasks. Everybody is happier. They can get things done faster, and they have immediate feedback and response because they're in control. The main Control-M team is not inundated with a lot of different requests from various teams to do a number of mechanical tasks. They don't get asked to create the connection profile for a database. People have all the information there, and they can do it themselves. They can define it in a way so that only they have access to it.

    It has helped us to achieve faster issue resolution. Control-M reports on the error, and it is easier to view the system output of that job. Whether it is an Informatica job, a scripted job, or a database job, it is easier to go in, view the issue, and then troubleshoot from there. Most of the time, you can rerun from the point of failure if the jobs are defined correctly. In a properly defined job, I would estimate that there is a 70% to 90% reduction in the mean time to resolution.

    It has helped us by improving our service-level operations performance. We've built integration between Control-M and our ITSM, which is ServiceNow, and that has certainly allowed us to gain more visibility within our community through ServiceNow. Every time a production job fails, an incident ticket is cut, and that's highly visible. That needs to be escalated too, and there is a much more defined process to be able to resolve that issue. In the past, obviously, when you didn't have that level of visibility or that integration, there was always time lost in identifying what the issue is.
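
    A minimal sketch of the kind of ITSM hook described here, assuming the standard ServiceNow Table API: on a job failure, create an incident record carrying the job name and an error summary. The instance URL, credentials, and field values are placeholders, and the reviewer's actual integration may well be wired up differently.

```python
import requests

SNOW_INSTANCE = "https://example.service-now.com"   # placeholder instance
SNOW_USER = "integration_user"                      # placeholder credentials
SNOW_PASSWORD = "REDACTED"

def open_incident(job_name: str, error_summary: str) -> str:
    """Create an incident via the ServiceNow Table API and return its number."""
    payload = {
        "short_description": f"Control-M job {job_name} failed",
        "description": error_summary,
        "urgency": "2",                              # placeholder urgency
        "assignment_group": "Batch Operations",      # placeholder group
    }
    resp = requests.post(
        f"{SNOW_INSTANCE}/api/now/table/incident",
        auth=(SNOW_USER, SNOW_PASSWORD),
        headers={"Accept": "application/json"},
        json=payload,
    )
    resp.raise_for_status()
    return resp.json()["result"]["number"]

if __name__ == "__main__":
    print(open_incident("ExtractDailyOrders", "Exit code 1; see job output."))
```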

    What is most valuable?

    The File Transfer component is quite valuable. The integration with products such as Informatica and SAP is very valuable to us as well. Rather than having to build our own interface into those products, we can use the ones that come out of the box. The integration with databases is valuable as well. We use database jobs quite a bit. The file watcher component is also indispensable when integrating with other applications that generate files, instead of triggering a workflow based on time.

    What needs improvement?

    We have been experimenting with centralized connection profiles. There are some bugs to be worked out. So, we don't feel 100% comfortable with only using centralized connection profiles. We do have a mix of control on agents out there, which leads to some complications because earlier agents do not support centralized connection profiles.

    A lot of the areas of improvement revolve around Automation API because that area is constantly evolving. It is constantly changing, and it is constantly being updated. There are some bugs that are introduced from one version to the next. So, the regression testing doesn't seem to capture some of the bugs that have been fixed in prior versions, and those bugs are then reintroduced in later versions. One particular example is that we were trying to use the Automation API to fetch a number of run-as users from the environment. The username had special characters and backslash characters because it was a Windows user ID. In the documentation, there is a documented workaround for that. However, that relied on two particular settings in the Tomcat web server. I later found out that these settings work out of the box for version 9.0.19, but those two options were not included in the config file for 9.0.20. So, it led to a little bit of confusion and a lot of time trying to diagnose, both with support and the BMC community, what the issue was. Ultimately, we did resolve that, but that is time that really shouldn't have been spent. It had obviously been working in 9.0.19, and I don't know why that was missed in 9.0.20, but that's a primary example of an improvement that can happen.
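
    For readers who hit the same issue, the sketch below shows the encoding step involved: a Windows run-as user of the form DOMAIN\user has to be percent-encoded before it is placed in an Automation API URL, and the documented workaround the reviewer describes appears to depend on Tomcat accepting encoded slashes and backslashes. The endpoint path here is illustrative rather than an exact Automation API route, and the Tomcat property names in the comments are my reading of which settings are meant.

```python
from urllib.parse import quote

import requests

ENDPOINT = "https://controlm-em.example.com:8443/automation-api"  # placeholder EM host
TOKEN = "REDACTED"                                                 # placeholder API token

# A Windows run-as user typically looks like DOMAIN\user; the backslash must
# be percent-encoded before being embedded in a URL path segment.
run_as_user = r"CORP\svc_batch"
encoded_user = quote(run_as_user, safe="")   # -> "CORP%5Csvc_batch"

# Illustrative request only -- the real Automation API route for run-as users
# should be taken from the official reference for your version.
resp = requests.get(
    f"{ENDPOINT}/config/runasuser/{encoded_user}",   # hypothetical path
    headers={"Authorization": f"Bearer {TOKEN}"},
)
print(resp.status_code)

# Note: for Tomcat to accept such encoded characters in a path, settings along
# the lines of org.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH and
# org.apache.catalina.connector.CoyoteAdapter.ALLOW_BACKSLASH are usually
# involved; these appear to be the settings the workaround relies on.
```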

    We've also noticed that the Control-M agents themselves now run Java components. Over time, they tend to destabilize. It could be because garbage collection isn't happening, or something else is not happening. We then realize that the agent is consuming quite a large amount of memory on the servers themselves. After recycling the agents and releasing that memory, things go back to normal, but there are times when the agent becomes unresponsive. The jobs get submitted, and nothing executes, but we don't know about it until somebody says, "Hey, but my job isn't running." When we look at it, it says Executing within the GUI, but there is no actual process running on the server. So, there is some disconnect there. There is no alerting function on the agent that says, "Hey, I'm not responding." It is not showing up in the alerts or anything like that.

    The integrated guides have not been that helpful to us. I do find a lot of the how-to videos on the knowledge portal to be useful. However, there are some videos where the directions don't always match the implementation, and there are some typos here and there. Overall, though, the videos have been more helpful for us than the guides.

    Its pricing and licensing could be a little bit better. The regular Managed File Transfer piece is a little overpriced, especially for folks who already have licensed Advanced File Transfer.

    What I'm also noticing when I'm trying to recruit for Control-M positions is that the talent pool is quite small. There are not a whole lot of companies that utilize Control-M, and if they do, most don't want to let their Control-M resources go if they're good. There is a high barrier to entry for most people to learn Control-M. There are Workbench, Automation API, and so forth, mainly for developers, but there are not a whole lot of resources out there for people to get more familiar with administering Control-M, in terms of either the technology or even awareness. So, it becomes very challenging to acquire new resources. A lot of the newer people coming out of college don't even know what Control-M is. If they do, they think of it as a batch scheduler, which is certainly not true of its current form.

    Control-M is a very powerful enterprise tool, but the overall perception has not changed in the past five to six years that I've been working with Control-M. There's not much incentive for people to dive into that world. It is a very small community, and overall, the value of Control-M is not being showcased adequately, maybe at the C-level for corporations. I've had multiple conversations with other people and other companies who have already stopped using Control-M. About 70% of the companies out there do not take full advantage of the capabilities in Control-M. That type of utilization really hampers the reputation of Control-M, because people then form the untrue impression that Control-M can only do X, Y, and Z, rather than understanding that Control-M can do so much more. I don't know if it needs a grassroots marketing movement or a top-down marketing movement, but this is the perception, because that's what I'm hearing and that's what I'm seeing.

    For some of the challenges that I face working with Control-M, when I go back to my management and say, "Hey, I want to spend more money in this space," they're like, "Why? Can you justify it? This is what we see Control-M as. It's not going to bring us value in this area or that area." I have to go back and develop a new business case to say, "Hey, we need to upgrade to MFT Enterprise or something like that." So, it definitely requires a lot more work convincing management in order to get all these components. In the past, we had to justify acquiring Workload Change Manager, and we had to justify acquiring Workload Archiving. All of these bring benefits not only to our audit environment but also to the development environment, but the fact that we had to fight so hard to acquire them is challenging.

    For how long have I used the solution?

    I've been using Control-M for about eight years.

    What do I think about the stability of the solution?

    Version 9 was very stable. Once they started adding a lot of the newer Java components, the stability suffered. It seems to have gotten better in version 9.0.20, but that could just be my perception.

    We run a lot of database client jobs. There are some things that we've implemented that I understand can contribute to the agent instability. We sometimes extract a lot of database output and massage that output using other scripts. I've noticed there are certain things that you cannot do, or that contribute to the instability. For example, the output scanning functionality certainly has a size limit. You probably don't want to scan anything too large because that's going to put a lot of load on the environment.

    In addition, there are times when the agent becomes unresponsive. The jobs get submitted, but nothing executes. There is no alerting function. These are the examples of instability that I've noticed. Overall, the main application itself, the EM, and the scheduler have been pretty stable.

    What do I think about the scalability of the solution?

    It is very scalable in terms of job execution. I haven't really explored scaling Control-M and the EM environment to a point where we have hundreds of users accessing it at a given time. That's because I don't have a hundred users who want to access that at a given time, but I do understand that you can distribute the web server more, and then have a load balancer to balance the load. I would think Control-M is a fairly scalable application.

    In terms of its users, we have a lot of application support folks. We do have some developers who access Control-M mostly for the non-prod environments to execute and monitor their own jobs. There are some software engineers and operational engineers who are part of the application support teams that access Control-M. As for size or concurrent users, we have about 50 concurrent users at the max.

    How are customer service and support?

    I would probably give them a nine out of 10. For the most part, they're very helpful, but there's always an initial standard dialogue. For an issue, you have to collect EM logs, agent logs, and so forth, and submit them. Sometimes, we have done all the advance work and submitted it, but they still come back and say, "Hey, we need the logs." It seems like a canned response given without looking at the ticket.

    How would you rate customer service and support?

    Positive

    Which solution did I use previously and why did I switch?

    We've been with Control-M for quite a long time. We have not been using anything else in my history with this organization. 

    I have not looked at anything recently. I am aware there are other application orchestration solutions out there, but I have not felt the need to go explore those options at the time.

    How was the initial setup?

    If you're deploying using out-of-the-box options, the process is fairly straightforward. If there is some customization that needs to happen, then the process can be complex, and the documentation does not cover some of those complexities.

    For the most part, we are standard out of the box. We have run into some performance issues where we had to, later on, go in and make some modifications. For example, we had to stand up different gateways for various purposes because one single gateway was not enough to take the load, in particular because we had installed Workload Archiving, and that was taking up a lot of resources. Other human users were not able to perform their actions because the archive user was consuming so much of the server's resources. So, there was a lot of tweaking there, and we had to break out and distribute some of the components.

    In terms of implementation strategy or deployment plan for Control-M, the environment always had Control-M, and we just had to upgrade the Control-M environment. We've had Control-M in our environment for quite a long time, probably when it was still version 6. So, as we progressed through different versions, we obviously had to expand the environment and the platforms. We initially started off with Control-M on AIX, and we later moved to Control-M on Linux. As you go to Linux, obviously, there is planning for high availability and production environments, disaster recovery environments, and so forth. So, you have to plan for marrying a lot of the BMC Control-M components and identifying where a load balancer may be required, or DNS ALIAS is required so that you can quickly flip over in the event something happens. Then, of course, there is sizing for the environment in terms of how many jobs are running, how many executions are happening, and so forth. This is how we plan.

    What about the implementation team?

    We've used the AMIGO program, and then we've performed the upgrades ourselves.

    For its day-to-day administration, we have a team of five people. They're administrators and schedulers.

    What was our ROI?

    Its return on investment is quite high, and that's mostly because we use so many of Control-M's capabilities. We also extend those capabilities. We write our own scripts to be able to integrate Control-M with many other applications, such as Automation Anywhere and Alteryx. We have also done the reverse: we have helped other teams develop their capabilities in integrating with Control-M through the REST API. So, the ROI is quite high for our use case. But based on conversations with some of the community partners out there, their ROI is probably quite low because they're not making use of all these new features. I don't know if it is because they don't have the skill set to make use of these new features, or their management structure or process structure is hampering them. A lot of large companies I know like to maintain the status quo, and that's why they're slow to adapt and slow to move, which is going to hurt them in the long run, but in the meantime, it can hurt the adoption of Control-M as well.

    What's my experience with pricing, setup cost, and licensing?

    Its pricing and licensing could be a little bit better. Based on my experience and discussions with other existing customers, everybody feels that the regular Managed File Transfer piece, not the enterprise one, is a little overpriced, especially for folks who already have licensed Advanced File Transfer. We understand that Advanced File Transfer is going away and is going to be the end of life, and there is some additional functionality built into MFT, but the additional functionality does not really correlate with the huge price increase over what we're paying for AFT already. This has actually driven a lot of people to look for alternative solutions.

    I know they are now moving more towards endpoint licensing or task-based licensing. In my eyes, the value of Control-M is the ability to break down jobs from monolithic scripts. You don't want to have to wrap everything up in one monolithic script and say, "Hey, I'm executing one task because I want to save money." That defeats the purpose, and the value, of Control-M. By taking that monolithic script and breaking it down into its 10 most basic components, you can monitor each step. It is self-documenting because, within Control-M, you can see how the flow will work, and you can recover from any one of those 10 steps rather than having to rerun the monolithic script should something fail. That being said, the endpoint licensing does make more sense, but maybe the pricing could be more forgiving.
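
    To make the "break the monolith into steps" argument concrete, here is a hedged sketch that defines a few small command jobs chained with events instead of one wrapper script, so each step can be monitored and rerun on its own. Job names, commands, and hosts are placeholders, and the event syntax is approximate; the exact keys should be checked against the Automation API job format for your version.

```python
import json

# Instead of one monolithic extract-transform-load script, define three small
# jobs and chain them, so a failure can be rerun from that step.
# Keys approximate the Control-M Automation API JSON job format; verify
# against the official reference before using.

def command_job(command, waits_for=None, adds=None):
    job = {"Type": "Job:Command", "Command": command,
           "RunAs": "batchuser", "Host": "apphost01"}   # placeholder names
    if waits_for:
        job["eventsToWaitFor"] = {"Type": "WaitForEvents",
                                  "Events": [{"Event": e} for e in waits_for]}
    if adds:
        job["eventsToAdd"] = {"Type": "AddEvents",
                              "Events": [{"Event": e} for e in adds]}
    return job

folder = {
    "OrdersPipeline": {
        "Type": "Folder",
        "Extract": command_job("/opt/app/bin/extract.sh",
                               adds=["orders-extracted"]),
        "Transform": command_job("/opt/app/bin/transform.sh",
                                 waits_for=["orders-extracted"],
                                 adds=["orders-transformed"]),
        "Load": command_job("/opt/app/bin/load.sh",
                            waits_for=["orders-transformed"]),
    }
}

with open("orders_pipeline.json", "w") as f:
    json.dump(folder, f, indent=2)
```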

    Which other solutions did I evaluate?

    N/A

    What other advice do I have?

    It is worth the time and money investment to learn more about Control-M. You should learn all the features of Control-M and really explore and test out the capabilities of Control-M. That's the only way people get comfortable with what Control-M can implement. A lot of people aren't aware of just how flexible a platform Control-M is, especially with all the new features that are being added via the Automation API. These features are helping to drive Control-M and things developed in Control-M more towards a microservices model.

    We are just beginning to explore using Control-M as part of our DevOps automation toolchains and leverage its “as-code” interfaces for developers. Obviously, there is a little bit of a learning curve for developers as well in order to see the value of developing Jobs-as-Code. Currently, we're walking developers through it, and we're holding their hands a little bit in terms of developing Jobs-as-Code, but we are heading in that direction because it does provide artifacts that you can version control and change quickly and easily. You can redeploy much quicker than just having the jobs defined in the graphical user interface. Previously, when you had to modify it, you either did it via the GUI, or you exported it via XML and then modified those components. Once you get the developers closer to their job flows, then you can theoretically speed up the delivery of applications along with scheduled jobs.

    I don't have a whole lot of experience with other scheduling orchestration environments, but from everything that I've heard while speaking with other colleagues, I would say Control-M ranks fairly high. I would rate it a nine out of 10. Control-M usually is the platform that people are moving to, not moving away from.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    Maintenance Manager at a transportation company with 10,001+ employees
    Real User
    We have seen quicker file transfers with more visibility and stability
    Pros and Cons
    • "If they have ad hoc requirements, then they can theoretically schedule their own file transfers with the Self Service. We are trying to push as much work back to the customers or developers that have that requirement, because they prefer to help themselves, if possible. We try empowering them and enabling them through Control-M, especially for file transfers, because it is a much broader base of the business then just with batch scheduling. Typically, with SAP batch scheduling, it would work with dedicated teams. With file transfers, the entire business is involved. There are business users, end users, etc. It definitely needs to be as simple as possible and as managed as well as possible. They need to manage it themselves, if possible, because our team is not growing but the number of customers, applications, and jobs are growing. We need to hand back some of the responsibility to the customer for them to resolve and action it."
    • "The high availability that comes from BMC with its supplied Postgres database is very limited. Even using your customer-supplied Postgres database is problematic. We have engaged with them regarding this, but it is difficult. My company doesn't want to do this and BMC doesn't want to do that. We just need to find some middle ground to get the proper high availability. We're also moving away, like the rest of the world, from the more expensive offerings, like Oracle. We are trying to use Postgres, which is free. The stability is good. It is just that the high availability configuration is not ideal. It could be better."

    What is our primary use case?

    We schedule the majority of our SAP jobs with Control-M. We do that globally for all the production plants. We have tens of thousands of SAP jobs and managed file transfers.

    SAP batch and managed file transfer are critical processes that we have automated. We are in the process of replacing Connect:Direct and SecureTransport, the legacy file transfer solution, with Managed File Transfer (MFT). That is on the global scale. 

    Control-M for Informatica is gaining a lot of popularity, primarily on the financial side of the business. They have a lot of security restrictions that make their jobs very difficult. There are also cost issues for Informatica: anytime they execute a workflow in Informatica, they get billed for it. We are adapting the solution so that it does not run the workflow every half hour or hour, because they pay for each execution, but only when it is needed. We can do a database query and check whether there are new records that need to be processed. If there are no records to be processed, then depending on that output, we either run the Informatica job or leave it and check again in maybe half an hour. We are optimizing and saving money for the customers and ourselves, while reducing the number of executions, jobs, etc.
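
    A minimal sketch of the pre-check described here, using a stand-in database driver: count the unprocessed records and exit with a distinct code when there is nothing to do, so the scheduler can skip the (billed) Informatica workflow for that cycle. The database path, table, and exit-code convention are placeholders, and how the "skip" is wired up in Control-M (conditions, on/do actions, etc.) is deliberately left out.

```python
import sqlite3   # stand-in for the real database driver (Oracle, SQL Server, ...)
import sys

DB_PATH = "staging.db"                                       # placeholder connection target
QUERY = "SELECT COUNT(*) FROM orders WHERE processed = 0"    # placeholder table/filter

NOTHING_TO_DO = 99   # placeholder exit code the job's post-processing could act on

def main() -> int:
    # Assumes the staging table already exists in the placeholder database.
    conn = sqlite3.connect(DB_PATH)
    try:
        (pending,) = conn.execute(QUERY).fetchone()
    finally:
        conn.close()
    if pending == 0:
        print("no new records; skip the Informatica workflow this cycle")
        return NOTHING_TO_DO
    print(f"{pending} records pending; trigger the Informatica workflow")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```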

    We are using it on-premises and have been for many years. We are aware of the new Helix offering, which is a SaaS/cloud offering from BMC, but it is not really ready for enterprise yet, not at our scale. We are doing some cloud, though not the Helix offering. I have installations in the cloud using Azure and AWS. We are not fully functioning there yet. We are waiting for the demand, but we are aware of the cloud opportunities and are making use of them.

    We have been busy upgrading to version 9.0.20 Fix Pack 100 but our production environment is still on 9.0.19 Fix Pack 200.

    How has it helped my organization?

    We use Control-M as part of our DevOps automation toolchains and leverage its “as-code” interfaces for developers. We have found that a lot of the new customers who are developing for cloud prefer to use the API and would like to test for themselves. That is really where Jobs-as-Code comes in. They can test and fail quickly the agile way. We definitely have some customers who are using that.

    We have seen quicker file transfers with more visibility and stability. Because data transfers are part of the Control-M tool, they form part of the normal workflow. We see the value in that.

    If they have ad hoc requirements, then they can theoretically schedule their own file transfers with the Self Service. We are trying to push as much work back to the customers or developers that have that requirement, because they prefer to help themselves, if possible. We try empowering them and enabling them through Control-M, especially for file transfers, because that is a much broader base of the business than just batch scheduling. Typically, SAP batch scheduling involves dedicated teams; with file transfers, the entire business is involved. There are business users, end users, etc. It definitely needs to be as simple and as well managed as possible. They need to manage it themselves, if possible, because our team is not growing but the number of customers, applications, and jobs is growing. We need to hand back some of the responsibility to the customers for them to resolve and action it.

    What is most valuable?

    A new feature, which we deployed about two years ago, is the Managed File Transfer (MFT). We also use Managed File Transfer Enterprise (MFTE) for external transfers of our biggest use cases. 

    Another valuable feature would definitely be the MFT dashboard that is now available in Control-M natively. It is easy to just search for jobs, files, etc. Instead of the customers contacting us to find out what happened, when it happened, and why it happened, they are able to service themselves. This allows us to cut down on operational staff, costs, and time because customers can manage it themselves to a degree.

    The most valuable feature is definitely the Self Service. A couple of years ago, it was available, but not with the features that it has today. There wasn't really uptake on it, although it was available. We have seen a steady growth in the number of users using it and in what they are using it to do. They are using Self Service to schedule by themselves and do monitoring by themselves. They interact with their schedules. Also, Self Service performs well and is very user-friendly and accessible. That is one of the features that we use a lot lately.

    The reporting has definitely improved over the years. We are definitely doing more of that as well. We are definitely seeing more value in reporting on the batch schedules, optimizing it and seeing if we can cut costs. 

    What needs improvement?

    The reporting has improved. It is not where it should be yet, but we have seen improvements. The biggest thing for me is the restrictions regarding templates for reporting. You can't create your report with your own parameters. We have a meeting weekly with BMC and our customer lifecycle architect, and this comes up quite frequently. We have been privileged enough to do work with the developers. They are aware of the requirements regarding reporting and what our customers are asking for.

    What I have found lately about the YouTube videos, specifically, is that they are very simple. Usually, when I watch a video, I also read the manual, instructions, etc. to see if I understand it. I would hope that the interactive sessions, Q&As, or videos could be used to handle the more complex aspects of what they're discussing. An example would be the LDAP authentication for the Enterprise Manager. They typically just go through the steps that are in the documentation. What people looking at those videos are typically looking for is how to do the more complex setup, such as doing it with SSL and distributed Active Directory domains. Things that are not documented. I find those videos helpful for somebody who is too lazy to read the manual, but I expect them to handle more than what is available in the documentation and the more complex situations.

    The high availability that comes from BMC with its supplied Postgres database is very limited. Even using your customer-supplied Postgres database is problematic. We have engaged with them regarding this, but it is difficult. My company doesn't want to do this and BMC doesn't want to do that. We just need to find some middle ground to get the proper high availability.
    We're also moving away, like the rest of the world, from the more expensive offerings, like Oracle. We are trying to use Postgres, which is free. The stability is good. It is just that the high availability configuration is not ideal. It could be better.

    For how long have I used the solution?

    I have been using Control-M for 12 years.

    What do I think about the stability of the solution?

    Control-M is really stable. We have seen that throughout the years. I have had customers who have been running version 6.3 for seven years after support stopped. It has been running for three years straight, without a reboot or restart, doing its job. We have actually had issues with customers who don't want to upgrade. They have said, "This stuff is working perfectly. Just leave it alone because it just doesn't go down." 

    We have a saying in our department as well. When somebody says there is a problem, we say, "It's not Control-M. Check everything else. Check the server, network, and database. It's not Control-M." 99 out of 100 times, we are right. It is either infrastructure or something else, but it is not Control-M.

    What do I think about the scalability of the solution?

    I have never run into any problems scaling, either vertically or horizontally, with Control-M. In each version, it just gets better. I am really happy with that.

    We were one of probably the first companies who bought MFTE, and it was not ready yet. It didn't scale properly. It didn't offer the functionality that the competing tools that we were currently using had. It's grown tremendously because of our input and feedback directly to the developers and BMC. I'm not complaining about it, but it put us back a bit. We have learned not to be a very early adopter. We have seen the same with the cloud. Everybody wants to jump on the cloud, but nobody knows why. They just want to do Cloud. We've made a substantial investment with MFTE. It was a couple of hundred thousand euros, and it was not ready yet for our enterprise requirements.

    Our monitoring team does 24/7 monitoring. They handle the alerts. They check the job flows. They make sure escalations go through and, if tickets need to be logged, that that gets done. They also handle ad hoc requests from customers.

    There is the scheduling team who does the job definitions, updates, etc. 

    There is the administration team, which I'm part of, with administrators who look after the infrastructure, Enterprise Manager, servers, agents, gateways, etc. Recently, we also have a dedicated MFT team that only looks after MFT because of the huge number of customers, requests, and requirements.

    Other customers who use it are really all across the board. We had a presentation last week to our bigger department, which is worldwide and of which we are a part in South Africa. We have noticed about 52 main departments, plus the sub-departments between them. A lot of them sit right across the enterprise. Typically, the most active users would be SAP users who check for output on the jobs running in Control-M. It is just 10 times easier to do it in Control-M than in SAP itself. We also manage to keep the output longer than SAP does. What they can't find in SAP after seven or 14 days, they can usually find with us, e.g., outputs for the jobs or logs.

    There are the MFT users who love being able to see each morning that their file was transferred, how long it took, and how big the file was. A lot of users are using the Self Service function; team leads and operational staff use it most.

    How are customer service and technical support?

    I love support and the support people. It is very good. Because we are quite a mature customer and the whole team has a lot of experience (sometimes more than the support people), if they don't realize the seriousness of a situation, we would not formally escalate but just make our customer lifecycle architect aware by saying, "We are not feeling this case is getting the required personnel on it. We need somebody more senior. We don't have time to cover the basics that the first-line support is trying to deal with. We've been over that." Overall, I would rate the technical support as nine out of 10.

    Which solution did I use previously and why did I switch?

    Previously, we used a big SAP solution, which was not a commercial product and was specifically designed for our company.

    We have recently taken over a mainframe migration as well; the scheduling was on TWS, which is IBM's scheduling software on mainframe z/OS. We moved all of that over to Control-M. That was a combination of SAP jobs, Informatica jobs, database jobs, and normal script jobs. So, we use a bit of everything. We have also used the Automation API a lot for interfacing between Control-M and other middleware tools, but primarily it is SAP and file transfer.

    We use Control-M to integrate file transfers within our application workflows. It integrates with the tools that we are replacing, i.e., Connect:Direct, which is quite a legacy tool, and our old IBM tool, which we have been using for more than 15 years and which has no visibility. With Control-M, you get visibility on your file transfers and how they interact with your batch schedule. Something gets created, it's sent over, and then it gets processed, and Control-M has already been part of the executing, extracting, importing, or processing. Now, with the file transfer, customers can see the entire workflow from the data being generated to it being transferred and processed. This resolves a lot of complexity, because you used to need to contact three different teams to find out if a file arrived and was processed. One tool does all of that now.

    There isn't a lot of new functionality that our previous tools didn't have. It is just re-consolidating all the tools that we need into a single one. That makes it much simpler. There is one team to contact globally for file transfers, and that makes it easy. It provides visibility with its Self Service that wasn't available with Connect:Direct or SecureTransport. Our customers are quite happy to have that. We can also provide reports. 

    SecureTransport competes with MFTE. There isn't a conversion tool for that yet. Connect:Direct simply provides the means for a conversion tool, but it gets integrated into scripts and applications. It's very difficult to migrate or extract that data.

    How was the initial setup?

    The initial setup is straightforward. It changed a lot over the years as well, but in the nicest way. You have minimal downtime with the upgrades on Enterprise Manager as well as the Control-M servers. A lot of preparation is done before the tool is shut down for the upgrade. Our downtime used to be at least an hour for upgrades or migrations. That has typically come down to 10, 15, or 20 minutes, depending on the size of the server. It is definitely more stable and understandable.

    We have also noticed that the exception handling is much better if there are issues. We don't get that many surprises. The errors are understandable. The agent upgrades have zero downtime, so that is just amazing. All the patching and maintenance is centralized. We have migrated our development and integration environments to 9.0.20 in the last month or two. That went very smoothly. We will start with production next week. We have been through this quite a number of times. We came from version 7 to version 9 to versions 9.0.19 and 9.0.20. We do all the upgrades in-house.

    What about the implementation team?

    We do it all ourselves. If we get stuck, we contact BMC. At my previous job, we were a partner for BMC in South Africa, and I was on the support side for BMC. It is only when we need to open tickets for bugs or problems that we contact BMC. Upgrades and migrations, we typically handle in-house.

    There are three people full-time on the administrative side. We have a global setup: Europe, Mexico, America, Africa, and China. We have tons of virtual machines and hundreds and hundreds of agents, and even more that we might host.

    What was our ROI?

    I know we have already budgeted for more tasks. The company is very happy with the performance of our teams, specifically the South African team. We are really doing more with good tools and less people. There is definitely a return on investment, just from the stability and visibility which has improved a lot.

    On the effort side, we have definitely seen a lot of savings. We have some bigger projects that are automating the schedule and removing human intervention. These have reduced department staff/headcount by about 50% where we were able to automate the batch side of it, because our department also offers monitoring and operations as part of our service. We have a dedicated monitoring team. Whatever runs in Control-M is monitored by us and escalated, if needed.

    Departments here have had multiple scheduling tools between the mainframe, distributed systems, and cloud. Control-M brings all of that onto a single pane of glass, so we can see the exact execution on the mainframe, on the distributed systems, and in the cloud, instead of using three or four different tools. Therefore, the complexity of batch monitoring and scheduling has decreased with the standardization on Control-M. That is definitely one of the big advantages that we have seen.

    What's my experience with pricing, setup cost, and licensing?

    It is expensive. We have a lot of customers who complained initially about the costs, because it's not just the licensing, unfortunately; it's the infrastructure, salaries, etc. I like the licensing model. It is pretty straightforward. We are on the task license. I know that we have some really good discounts. Our BMC account manager makes sure that we stay below the license count as well as checking for growth. Overall, it's good. The licensing is simple enough for me. It is a bit expensive. Especially with the cloud coming in, we might see the licensing change in the future, but I'm guessing.

    This is now from my previous years as support for banks and big companies. If it's not enterprise scale, I find that it's too expensive for smaller companies. You really have to be quite big and need to have a dedicated support staff to run it, then you'll be fine. What we've seen at smaller companies, it's too expensive because they want to automate everything. Now, stuff that can literally run once a day for the rest of their lives is costing them $3 a job a day. It becomes too expensive, eventually. They are not seeing the return on investment because it's not business critical. Nobody is going to die or they're going to lose money if that job didn't run exactly at 11 minutes past 4:00. It's definitely for bigger enterprise companies, especially banks or healthcare providers. We have had an instance where Control-M was unavailable due to external factors for 20 minutes and there was a loss of almost a million euros because the solution involved logistics. 

    Which other solutions did I evaluate?

    We have done the usual crontab migration. Everything is in crontab or Windows Scheduler. Typically, we end up with a migration, even if it's from a known tool, where we end up exporting it into Excel and converting it into job definitions with a script. We have been involved in that, but nothing using BMC tools.
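
    As a rough sketch of the kind of conversion script mentioned here: parse simple crontab lines and emit Control-M-style JSON job definitions that could then be reviewed and deployed. It only handles basic five-field entries, the output keys approximate the Automation API format, and real migrations (as the reviewer notes) usually go through a spreadsheet review step in between.

```python
import json

def crontab_to_jobs(crontab_text, host="apphost01", run_as="batchuser"):
    """Convert simple 'min hour dom mon dow command' lines into job definitions.

    Output keys approximate the Control-M Automation API JSON format and should
    be verified against the official reference; host/run_as are placeholders.
    """
    folder = {"MigratedCronJobs": {"Type": "Folder"}}
    for i, line in enumerate(crontab_text.splitlines(), start=1):
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split(None, 5)
        if len(fields) < 6:
            continue   # skip @reboot-style or malformed entries in this sketch
        minute, hour, _dom, _mon, _dow, command = fields
        folder["MigratedCronJobs"][f"CronJob_{i:03d}"] = {
            "Type": "Job:Command",
            "Command": command,
            "Host": host,
            "RunAs": run_as,
            # Simplification: only fixed minute/hour values are mapped here.
            "When": {"FromTime": f"{hour.zfill(2)}{minute.zfill(2)}"},
        }
    return folder

if __name__ == "__main__":
    sample = "15 2 * * * /opt/app/bin/nightly_backup.sh\n"
    print(json.dumps(crontab_to_jobs(sample), indent=2))
```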

    When I joined the company, I first supported them through the local partner. Because we have such a vast array of scheduling tools, they went through a PoC and business case. We evaluated three or four tools, where BMC Control-M was one. Quite soon, because the company was already using Control-M in Africa and China, they were looking for global solutions to see if it really could create change.  

    What it came down to was ease of use, enterprise capability, and the fact that BMC was already in the company with ITSM and a couple of other products. They had a good relationship with us. We consulted with other customers who had used it, as well as references, because it was expensive. It was definitely the most expensive solution of the four. However, we didn't want to go five years down the line and then have to change again because of issues.

    What other advice do I have?

    We have had a very good run with Control-M. I love it.

    With the move to big data and especially with our AWS Cloud presence, we have a data lake. We are in discussions with the analytics teams about how they can utilize Control-M in the cloud for analytics, big data, etc. However, at the moment, it is not a big deal.

    What we have found with Jobs-as-Code is that customers need to understand Control-M better: how the scheduling works, the knowledge around it, its conditions, etc. It took some time for the developers to get used to Control-M, and then to Jobs-as-Code. They are now confident with it. We are presenting twice weekly. We have an open forum for parties interested in Control-M or our department, Enterprise Scheduling and File Transfer, where we have a dedicated session about Jobs-as-Code. We discuss questions such as how other departments are doing it, whether there is a better way to do it, whether they are able to save on the number of jobs, whether jobs can be made rerunnable, or whether, instead of creating 10 jobs, it can be done with five. So, there is not a lot going from Jobs-as-Code directly into production, but we have a couple of parties, especially on the cloud front, who are very interested in it.

    The solution is enterprise scale. If you want to integrate all your applications into one view and get all the functionality across the board, such as file transfer, scheduling, cloud, and on-prem, you can also create your own application integrations, using APIs, for applications that are not currently supported by BMC. For a top-100 enterprise, there isn't a better tool on the market.

    I would rate it as a nine out of 10.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    Sr. Systems Engineer at a financial services firm with 1,001-5,000 employees
    Real User
    Top 10
    Easy to use, extremely stable, and offers excellent technical support
    Pros and Cons
    • "Technical support is very helpful and available 24/7."
    • "While they have a very good reporting facility, the reports that I'm asked to produce, a lot of times aren't necessarily what we need."

    What is our primary use case?

    A lot of the things we've done are just based on our needs, not so much because the product allows you to do it. Basically, I can do everything in Control-M. I mean, we've got plugins for Oracle, SQL, Informatica, and I could go on and on. However, we don't use any of them, as our developers prefer not to. A lot of what they do is make the necessary connections through the batch files themselves.

    It's used for our daily batch. It handles all the batch processes and a lot of our maintenance processes. I would say most of it is file movement of some sort, and a lot of it is daily processing to get data in. Our data warehouse runs through Control-M. The big impetus behind it, when we purchased it, was that the auditors wanted a more robust system and something that they could audit. Control-M gives you everything you need for that.

    How has it helped my organization?

    It allows us to automate a lot of the jobs that used to run manually. Everything is automated. We can automate a lot of different processes using Control-M. You know where a process is at, and you can follow the job flow from one job to the next very easily.

    We used to run a lot of stuff in the AT scheduler and cron, which really didn't meet the needs, especially for the auditors. We've taken that and made the system such that you know immediately if you have a problem with a job string. Our operations department will page it out overnight if we have a problem, and we take care of it. It's like any other system: if it allows you to do what you need to get done, and it's the same every day, you know that you're going to get the same process. It drives the process.

    Like most schedulers, you can bring jobs in many different ways. There are different ways to execute things. One of the things we saw when we were taken over was that they were using a combination of the CA scheduler that they had and the SQL scheduler. Prior to us converting our data warehouse system to Control-M, they were using the Informatica scheduler. None of this met the auditors' requirements. The auditors didn't like it, as everything was spread out on different systems and they couldn't keep track of jobs. Everything is consolidated now. Everything runs off Control-M, and you can follow everything through the entire process. We kick off all SQL jobs using Control-M. They were using SQL to launch plain batch files, which had nothing to do with SQL - they were just scheduling them through SQL.

    What is most valuable?

    The capabilities of auditing have been great. 

    The ease of use is one of its great aspects. It's very easy to use and very easy to pick up. 

    It's got an excellent graphical interface. I haven't seen that in anything else that I've looked at; that said, I haven't looked at many lately.

    I know that in 20 years, I have had probably two problems where I've had to call the company to get immediate assistance from them, where we had a system down or something. Its performance is very reliable.

    It integrates with other applications. You can use PowerShell, you can use Perl, you can use whatever. It doesn't really care. It's just running a process.

    The product scales quite well.

    Technical support is very helpful and available 24/7.

    The stability is excellent.

    What needs improvement?

    I will say that at one time we ran on Solaris and not Windows; however, we were taken over by a company that decided that everything had to be on Windows. We put this in when we were the previous company, and then we were more or less given to the current bank by the FDIC during the 2009 banking crisis. At that point, they wanted us to implement their solution, which was rudimentary at best. It was a CA product that did not meet the needs. I could not convert what we had in Control-M to run in that system at that time.

    While they have a very good reporting facility, a lot of the time the reports that I'm asked to produce aren't necessarily ones it can deliver; they need to be better customized. I haven't been able to produce the right reports through their reporting facility. I was a Perl programmer and a C programmer at one time, and Perl just worked right in there. A lot of our reports were written in Perl, which right now they don't like at all, as Perl is not ideal for our company.

    I can't get to the database tables I want to get to. The database tables they allow me to get to aren't the ones I'm looking for, as, usually, I'm going right into the database, into the raw database, and pulling things out for the reporting I need. I can't do that through their reporting facility, Crystal Reports.

    For how long have I used the solution?

    We've been using the solution for about two decades - roughly 20 years. We started using it around 2000 or 2001.

    What do I think about the stability of the solution?

    We've had issues only twice in 20 years. It is very stable. I will say that they have improved it. Originally, when we put in a Windows version of it, we had problems with the database that they were using at the time, which was a Postgres database. Then, at one point, we decided to go to Solaris and run it on Solaris. We had it on Solaris for six years. In six years, I don't think we ever rebooted the server. It ran for six years without any hiccups, any problems. The Solaris system was rock solid. 

    Now, the problems we run into, if any, are Windows-based problems. For example, if you don't reboot a server once a month - which, thankfully, we do - you can have issues. We have to patch monthly now, so we reboot every month anyway. However, if we went two or three months on Windows without a reboot, we would start seeing problems, and rebooting took care of them.

    That said, that's a Windows problem, not so much a Control-M issue, as we see problems on Windows servers that run for two or three months in any application.

    What do I think about the scalability of the solution?

    Right now, we are running on their small database model. We, at one time, had about 2,500 jobs, and we were on a medium model then. Now, we're down to about 800 jobs a day. It's just a matter of the requirements we have. In terms of scalability, it scales up very nicely. It works very well. You can have multiple servers if you need multiple servers. Currently, we have one Control-M server and one EM server. We used to have two Control-M servers and one EM - EM being the enterprise manager, which is really what's running the system. The Control-M servers basically take care of the current runs, what's currently running on a system. Adding more jobs and adding more resources to it is not a problem.

    It does offer high availability, but we don't use it because we have another solution. We run everything in a virtual environment and take regular snapshots, which are replicated from our production site to our DR site. If we lose the production site, we bring up the latest snapshot at the DR site, and it's up and running within minutes, literally. It's just a matter of going in and saying, "Bring these servers up," and they come up.

    Currently, we've got three schedulers using the solution. They have more or less God rights, although they can't change user permissions. Those three schedulers can do anything with the jobs - delete, add, create, whatever. We have about 10 operators with access as well. The operators have a somewhat reduced role compared to the schedulers: they can bring in jobs, rerun jobs, and kill jobs, but there's a lot they can't do. Then we have about 60 users who are developers, and they're basically read-only; they can see the jobs and see what happens.

    A lot of that has to do with corporate decisions on control. They didn't want the developers to be able to define jobs and items of that nature. They wanted the developers to define a job through a worksheet, and then the schedulers would actually implement it. That's just a matter of policy, and the developers monitor their jobs that way. I'm trying to get a policy change here so that they can at least bring in their own jobs for test - not for production. If they could do that, it would greatly enhance their ability to get testing done. The downside is that you might have a developer who just keeps running a job over and over again, which I've seen happen too.

    Personally, I can do everything in test. In production, I can't do anything except view jobs - I have read-only on everything there, except for the configuration part, where I have full rights. I used to almost be a fourth scheduler at one time, but at this point there's no need. The limits of my job have been redefined several times.

    Overall, the usage of the product in the company is very extensive. There's not a part of our daily business that isn't reliant upon Control-M. If Control-M were down, the company would be at a standstill, literally.

    That said, we likely won't increase usage. We just merged with another organization, and it's debatable how these things go. They have about 5,500 jobs. We used to have a job count like that; however, the business drives what we do.

    How are customer service and technical support?

    The technical support is probably the best I've ever worked with.

    If I need support help from them, if we are down, they get back to me, if not immediately, within an hour. 24/7. And usually, we're up within an hour, after the first contact. They help greatly with planning for upgrades. I need to contact them here in the near future. They have a group called the AMIGO group, that does nothing but migrations and upgrades. I need to get with them to go over my plans for transitioning from the old servers to new servers. They will verify that what I'm doing is the right way to do it. If it's not, they will tell me how to do it, which is an excellent resource. 

    They have a very large knowledge base. It's integrated with everything I've ever had to have it integrate with. Their support's been very good.

    When I call BMC, I get an immediate response. I've had products that I've supported, that I've called companies and been on hold overnight. I've literally gone home for the night and left my phone on my desk, off the hook, on hold, and come in the next morning, and I'm still on hold, listening to the hold message due to the fact that the support hasn't answered yet.

    Which solution did I use previously and why did I switch?

    We have recently merged with a company that uses Tidal, and of course, they want to hang on to theirs. We use Control-M. I've actually used several other scheduling products in the past, however, we've been on Control-M now for over 20 years.

    How was the initial setup?

    I'm actually in the process of doing an implementation right now. I'm replacing our current production system. We're replacing EOS, actually, therefore, I'm doing a straight install of everything on the new servers. It is very straightforward. The install is not really difficult. It's fairly simple if you understand how databases work and whatnot. There's really no problem doing it.

    In my case, I can bring up a Control-M server within hours. I can say that because I've done it: we were not DR-prepared back during Hurricane Sandy, and I had to bring up a production version at our DR site in Cleveland. Within 24 hours, we were up and running. Therefore, if you need it done fast, it can be done. It's just a matter of whether you're willing to put in the work required to do it.

    It's a fairly easy install, really. I personally have never had any training on Control-M; other people in my organization have. That said, I'm the one who put it in, and I got all my information from reading the manuals and working with it directly.

    What's my experience with pricing, setup cost, and licensing?

    I can't speak to what our support costs are. That's out of my realm at this point. At one point, I had an idea, however, I couldn't even tell you what that is anymore. I know that our licensing is based on jobs. We buy licenses based on the number of jobs. Currently, we have about 2,500 licenses. We used to run more jobs than we do right now. We did not get rid of those licenses. 

    It's basically $100 a job, give or take.

    They also don't charge us for items such as the plugins for MFT, which we don't use, although we could. They wouldn't charge us for Oracle, SQL, or Informatica. It's a reporting product. 

    There's no licensing for the server, there's no licensing for the EM server. All that stuff comes as part of the product. It's all-inclusive.

    From what I've seen and heard from the other company about Tidal, that's where they're making their money from - the plugins. Whereas Control-M doesn't charge us. The plugins are basically free for us. I'm sure there is a charge for support every year. I have no idea what that is. I don't get down into that level.

    I just tell them, "Yes, we need this" and then the purchasing staff takes care of the actual details.

    Which other solutions did I evaluate?

    At the time we were looking for a product, I looked at five or six different scheduling packages. By far, at that time, Control-M was leaps and bounds above all the rest of them.

    What other advice do I have?

    We're customers and end-users.

    We're using the latest version of the solution.

    By far, BMC, from what I have seen, is the industry leader and they are the Cadillac of scheduling. I've worked with a lot of different scheduling systems over the years. When I first got into IT, years and years and years ago, as a JCL programmer, basically you had access to the scheduling system and you took care of the jobs. When jobs failed, you would do the restarts on them, do whatever fix needed to be done, and get them restarted, and get them to rerun. That was on a mainframe. 

    I've used Cron, and I've worked with a number of different schedulers. In the Windows world, other than AT scheduler and Control-M, that's about all I've ever used. I did review five different products back when we put this in.

    Having worked with so many products, and with this one for so long, I can advise that new users should follow the installation instructions and notes. They're very simple and very straightforward. I would advise others not to get scared off by the price; initially, the pricing seems rather steep compared to some of the others. However, they all have their pricing quirks, and they're all making money in one way or another. The way they make their money is based on how they license it, and the per-job style actually works out very well.

    I'd rate the product at a perfect ten out of ten. It has been one of the most stable products that I have supported, and I have supported a lot of different products. I've had fewer problems with it than I have with just about anything else I've supported. 

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: I am a real user, and this review is based on my own experience and opinions.
    Richard Meyer - PeerSpot reviewer
    System Engineer at a healthcare company with 10,001+ employees
    Real User
    Gives business users visibility into and control over their jobs, freeing up IT personnel
    Pros and Cons
    • "It gives us the ability to have end-to-end workflows, no matter where they're running."
    • "The stability of Control-M has Not been great. A big thing we've been trying to work on with BMC is observability. Modern applications should be observable and resilient, but we're finding that sometimes Control-M is not very resilient and many times Control-M is not very observable."

    What is our primary use case?

    The major use cases we have are batch processing and MFT. We are heavy users of the MFT plugin.

    How has it helped my organization?

    One of the benefits of Control-M is that it's helping to give business users visibility into and control over their jobs, and freeing up IT personnel to focus on other operations. Here, I'm mainly thinking of MFT. Our MFT end-users did not have access to our prior MFT tools at all, so they couldn't see the jobs. They would just request a job be built and then we would publish job reports so that they could see what was out there. Now, in Control-M, we're able to give them job-control access. We still lock down the building of file transfer jobs, but they now have the ability to see a job and see how it's built. They can run a job and hold a job if they need to.

    But even for some of the batch jobs, we've written some orderable services that are allowing them to run jobs on-demand, jobs that they used to have to log in to a server and go through a menu to do. Our business users definitely have much higher capabilities in our product now.
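
    As a rough illustration of what one of those orderable services can look like, below is a minimal Python sketch that orders a job on demand through the Control-M Automation API REST interface. The endpoint URL, credentials, server, folder, and job names are placeholders, and the exact paths and authentication header should be verified against your Control-M version - this is a sketch, not our production code.

        import requests

        # Placeholder environment details - replace with your own.
        ENDPOINT = "https://controlm.example.com:8443/automation-api"
        USERNAME = "svc_selfservice"
        PASSWORD = "change-me"

        def login(session: requests.Session) -> str:
            """Log in to the Automation API and return a session token."""
            resp = session.post(f"{ENDPOINT}/session/login",
                                json={"username": USERNAME, "password": PASSWORD},
                                verify=False)  # self-signed certificates are common on-prem
            resp.raise_for_status()
            return resp.json()["token"]

        def order_job(session: requests.Session, token: str,
                      ctm_server: str, folder: str, job: str) -> dict:
            """Order (run on demand) a single job from an existing folder."""
            resp = session.post(f"{ENDPOINT}/run/order",
                                headers={"Authorization": f"Bearer {token}"},
                                json={"ctm": ctm_server, "folder": folder, "jobs": job},
                                verify=False)
            resp.raise_for_status()
            return resp.json()  # includes a run ID that can be used to track the ordered job

        if __name__ == "__main__":
            with requests.Session() as s:
                token = login(s)
                print(order_job(s, token, ctm_server="CTM_PROD",
                                folder="FIN_REPORTS", job="DailySalesReport"))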

    And while we are primarily on virtual servers, we are in the process of standing up some agents in the cloud. We have our first agent in AWS up and we're getting ready to do some testing on it. That's pretty critical. There's a really big push within our organization to move into cloud. A lot of our next-gen apps that are going to be replacing the current ones are being built in the cloud. We have that first agent out there, but I assume there are going to be many more to follow as these new applications are stood up in the public cloud. Today we're on-prem, but I definitely envision us moving the entire Control-M stack to the cloud. Eventually, it will be in the cloud and we'll just have a couple of agents on-prem, versus being on-prem and having just a couple of agents in the cloud.

    Control-M has also helped to make it easier to create, integrate, and automate data pipelines across on-premises and cloud technologies. It's due to the ability to orchestrate between workflows that are running in the cloud and workflows that are running on-prem. It gives us the ability to have end-to-end workflows, no matter where they're running.

    What is most valuable?

    The automation is one of the most valuable features.

    What needs improvement?

    New plugins could be tested better. We've had a lot of problems with the MFT plugin. We've been working through a lot of issues with BMC on it.

    The functionality that has existed for long periods is very stable. But the problems with the MFT plugin specifically, and problems we've had with MFT in general, have unfortunately caused the entire stack to be affected enough that our end-users couldn't even log in to the application. 

    I wish we would have known better about how MFT impacts the application as a whole, and I wish they would have done more load testing around that. That seems to be where most of our issues have been. The issues have been so bad sometimes that the entire app goes down, not just MFT.

    For how long have I used the solution?

    I've been using Control-M for about two and a half years.

    What do I think about the stability of the solution?

    The stability of Control-M has not been great. A big thing we've been trying to work on with BMC is observability. Modern applications should be observable and resilient, but we're finding that sometimes Control-M is not very resilient and many times Control-M is not very observable. We're working with BMC to try and figure out how we can externally monitor this application. 

    We are using Dynatrace because of the problems we've had with Control-M. If we stood up Control-M and never had any problems, we probably wouldn't be too worried about being able to observe the processes and the queues and the communication between processes. But because we've had so many problems, it has forced us to dig in. We can't wait for a problem to happen and wait for a week for support to tell us how to fix it. We can't do that in a production environment. We have to know before a problem happens so that we can be proactive and not reactive. That's been a big struggle that we're continuing to work with BMC on.

    What do I think about the scalability of the solution?

    It's pretty scalable. You can stand up a ton of agents and you can stand up a ton of servers, if you need scheduling servers. Scheduling and agents are definitely very scalable.

    There isn't the ability to really scale the EM (Enterprise Manager) a ton, although the GUI can be scaled somewhat. I don't know how much of a need there is to be able to scale the EM. We don't seem to have issues on the EM side, for the most part.

    We're definitely having issues with the gateway between the EM and the scheduling server, but BMC is telling us that it's because we're running too many file transfers on the scheduling server. They say that if we stand up more scheduling servers, that should resolve that issue. We'll see if it does, if we still have any issues after we spread the load of MFT, not only over more agents, but also over more schedulers. If we still have issues after that, I think that would mean you're pretty limited in how you can scale your EM. That is the one thing about which I'm not sure how well it scales.

    How are customer service and support?

    Technical support is very back-and-forth. That's one of my gripes about the support. We open a case, they ask us for logs, we upload logs, and they come back and ask us for something else. 

    At times, there isn't a lot of what I would call working together with them. We do now, but that's because we had a ton of support cases piling up and we started escalating with their internal leadership. Now, there are weekly meetings between our leadership and their leadership and our account managers, as well as weekly meetings with the support team and the dev team, to talk through our cases and any updates on them.

    It took a lot of pushing from our end to get them to work with us. Otherwise, they just asked for logs and then we were waiting for a couple of days for them to look through all the logs and get back to us. We can't be doing that, especially if the issue is a production problem. We can't just upload logs every time we open a case and wait around for two weeks to get an answer.

    Another gripe is that they're very siloed in what they know. Something that I've been asking for for a long time, from BMC, is somebody who can take a look at our environment as a whole, and not just in pieces. Every time we open a case with support, they want to assign it to a specific area. If it's a problem with the agent, then an agent person will look at it. If it's a problem with the EM, then an EM person will look at it. But nobody is looking at the environment as a whole. That's an issue because a lot of our problems, as I've mentioned, with MFT, are impacting the entire environment. It's not just one component. It's the entire environment and how those components relate and how they communicate that have been impacted. Nobody has really looked at the environment as a whole, in support. I think it would benefit BMC to have more experts on the entire application and not have everybody so siloed.

    How would you rate customer service and support?

    Neutral

    How was the initial setup?

    The initial setup was a little complex, due to some of the requirements. It requires the C shell, as it doesn't work with the regular Bash shell. There are some old mainframe requirements that have carried through the product, even though we don't run it on mainframes. For example, the username you use to run it has to be under seven characters long. We had to modify the account we use because the name was too long.

    We're still really trying to get our environment squared away. We started two and a half years ago, but we've got a laundry list of applications that we're migrating out of and we've only completed one of those migrations. We're having to modify our architecture now because of the load that we are running. I'm working with professional services at BMC to review our existing architecture so that they can give us a BMC-supported design recommendation.

    One of the competitors we are migrating from is Broadcom/CA. Broadcom bought a couple of products. They own both AutoSys and Automic, and we are migrating out of both of those solutions. AutoSys has been pretty straightforward to migrate into Control-M because the job configuration is pretty simple. However, the Automic workflows are very complex. They utilize certain features that only Automic offers, things that we can't replicate in Control-M. That is causing a lot of issues and has caused us to put that project on hold for the time being, until we can work through some of the problems that are being presented. We've been migrating Broadcom for at least a year now.

    Some applications are pretty straightforward. MOVEit is an example of one that's a pretty straightforward conversion. However, another tool we have, Diplomat MFT, has a backup file structure that is not what the conversion tool was expecting. We ended up writing a custom Python script to do that conversion for us. The ease of migration really depends on what application you're migrating out of. It could be very complex or very easy.
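
    For illustration only, here is the general shape such a conversion script can take - a minimal sketch that assumes the legacy tool's backup has been exported to a simple CSV of transfers (name, source path, destination host, destination path) and emits Control-M-style Job:FileTransfer definitions as JSON. The CSV layout, connection-profile names, and job attributes are invented for the example; the real Diplomat MFT backup format, and our actual script, are more involved.

        import csv
        import json

        LEGACY_EXPORT = "diplomat_transfers.csv"   # hypothetical export of the old tool's backup
        OUTPUT_JSON = "converted_mft_jobs.json"

        def to_control_m_job(row: dict) -> dict:
            """Map one legacy transfer row to a Control-M-style file-transfer job.
            Attribute names follow the Automation API JSON format as we understand it;
            verify them against your Control-M version before deploying."""
            return {
                "Type": "Job:FileTransfer",
                "Host": "mft-agent-01",                                  # assumed MFT agent host
                "ConnectionProfileSrc": "LOCAL_FS",                      # assumed connection profiles
                "ConnectionProfileDest": f"SFTP_{row['dest_host'].upper()}",
                "FileTransfers": [{"Src": row["src_path"], "Dest": row["dest_path"]}],
            }

        def convert(export_path: str, output_path: str) -> None:
            folder = {"ConvertedMFT": {"Type": "Folder"}}
            with open(export_path, newline="") as fh:
                for row in csv.DictReader(fh):
                    folder["ConvertedMFT"][row["name"]] = to_control_m_job(row)
            with open(output_path, "w") as out:
                json.dump(folder, out, indent=2)

        if __name__ == "__main__":
            convert(LEGACY_EXPORT, OUTPUT_JSON)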

    The migration process is a very high concern. We selected Control-M due to the ability to migrate everything into it and have everything in one tool. If we can't get our migrations completed, then Control-M will just be another tool on top of all the other ones that we have to support.

    What about the implementation team?

    We used VPMA for the deployment. Our experience with them went pretty well. They're definitely very knowledgeable about the product.

    I don't know that they, or really, as I said earlier, even BMC had all the knowledge around how MFT could impact the application as a whole, back when we originally bought this. MFT was very new back then. VPMA did their best and guided us as much as they could, but I just don't think the plugin for MFT, specifically, was very mature yet. There were probably a lot of unknowns there.

    We had a pre-sales team from BMC that helped us in the very beginning, before we worked with VPMA. They were nice, but I wouldn't say they were very knowledgeable. They had a very surface-level knowledge of the application. They didn't know anything that was deep. They would have to find out for us and get back to us.

    What was our ROI?

    It's not my realm, but I would assume Control-M has not helped us realize any savings on renewal costs after switching from Broadcom. The cost of an agent is significantly higher for Control-M than it is for Automic or AutoSys.

    What's my experience with pricing, setup cost, and licensing?

    We are paying way more for Control-M than we've paid for any of our other scheduling tools. We have an inside joke that Control-M is sold as the "Bentley" of schedulers, but we feel that we got a "Pontiac" because it's falling apart half of the time.

    BMC has two licensing models. One is where you pay by job execution and the other is where you pay by endpoints. I'm sure the specifics vary depending on the customer, but we opted to go with endpoint licensing. I'm not sure if that was the best decision, knowing what we know now.

    With endpoint licensing, we pay per server. That means it behooves us to run as many jobs as we can on each of those servers. But we're very much finding that even if we make those servers very large and give them a ton of resources, they're still not able to perform because Control-M doesn't scale very well vertically. If you make the agent bigger, if you double the CPU and RAM, that doesn't necessarily mean you can run twice as many jobs. It's going to choke in other areas. 

    We will see if we end up switching our licensing model. I think the endpoint licensing model we chose is quite a bit more expensive than an equivalent model where we would pay per execution. We would definitely have to change a lot about our environment if we were to change our licensing model from endpoint to execution, because today we give all of our end-users the ability to run jobs on-demand. If we were to change our licensing model to be based on executions, we would probably want to restrict that a little. 

    The way you license is a very large consideration when moving to Control-M.

    What other advice do I have?

    We really haven't taken advantage of some of the features that Control-M offers yet. The main thing I'm thinking of is SLA management. We haven't implemented that yet on a lot of our business-critical workflows because we just lifted and shifted everything into Control-M from the old app. As of today, things are pretty much equal until we are able to implement some of those additional features.

    There are capabilities that Control-M offers that are good and I can see it being a very good product. BMC, as a company, has some maturing it needs to do in a lot of its processes. They have a very good sales team, but a lot of things after that can use some work.

    We definitely haven't bailed on it, but I've heard a little bit, back and forth, from people at BMC that they might not be too upset if they lost us as a customer because we've been having so many problems. We've been on them about helping us get this environment corrected and functioning as we expect it to. But in a year from now, it's possible we could be in a really good place. I'm excited to see where it all goes.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    Sr. Automation Engineer at a computer software company with 1,001-5,000 employees
    Real User
    Saves time, offers great auditing capabilities, and has good automation
    Pros and Cons
    • "It has certainly helped speed things up."
    • "They can improve their interface."

    What is our primary use case?

    I've been with the same company for 22 years. The use case started out truly as a batch processing solution. That was what we originally got it for back in the day to help us automate what was being done manually or being done through homegrown tools or scripts, et cetera. The use cases evolved through the years. Now, we use it to orchestrate workflows that are touching traditional data centers and that are going out to the cloud and bringing it back.

    From one spot, we have a single pane of glass. Like many companies, our systems are getting more complex and more diverse, with cloud and edge computing, containerization, et cetera. However, we have one place where we can go and look and see what's going on. If something happens, we can check what happened and where it happened. Today, we're dependent upon a lot of services and cloud technology that sometimes we don't know the ins and outs of.

    A big challenge is to make sure that we have certain things run daily or on a periodic basis. That really was the driving use case. We had a lot of manual tasks going on and if someone, for example, left on vacation, something may not get done for two or three days, a week or two weeks. This solution takes all that away.

    The main use case was to get away from having to stare at a system or a screen, and just let things run, let the workflows flow, and only be notified if there's something wrong. That was really a big driving use case.

    How has it helped my organization?

    It freed up people to work on exciting work instead of mundane work. No one has to sit around and stare at that screen all day long. No one has to reinvent the wheel for the 50th or 500th time to do tasks like maybe put a file out into an S3 bucket or out into an HDFS Hadoop file store since it's already there. It's already done for them. They just drag, drop, click and they're done. It's freed people up and they can do the exciting work that is really what we should be doing anyway. No one wants to be doing boring work.

    What is most valuable?

    I am a big proponent of the Automation API and Jobs-as-Code; that is Control-M in the DevOps world. It opens up what has traditionally been an operations tool. Developers can jump right in now, which gives them ownership and lets them integrate the existing DevOps tools they already have. That is a huge feature that I just love. 
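
    To make the Jobs-as-Code idea concrete, here is a minimal sketch of a two-job workflow defined as code in the Automation API's JSON format, written as a Python dict and saved to a file that can then be validated and deployed. The server, hosts, commands, and folder name are placeholders, and the attribute set should be checked against your Control-M version.

        import json

        # A small workflow expressed as code. Everything here is a placeholder example.
        workflow = {
            "DemoFolder": {
                "Type": "Folder",
                "ControlmServer": "CTM_PROD",              # assumed Control-M/Server name
                "ExtractData": {
                    "Type": "Job:Command",
                    "Host": "app-host-01",
                    "RunAs": "batchuser",
                    "Command": "/opt/batch/extract_data.sh",
                },
                "LoadWarehouse": {
                    "Type": "Job:Command",
                    "Host": "dwh-host-01",
                    "RunAs": "batchuser",
                    "Command": "/opt/batch/load_warehouse.sh",
                },
                # Run LoadWarehouse only after ExtractData completes successfully.
                "DemoFlow": {"Type": "Flow", "Sequence": ["ExtractData", "LoadWarehouse"]},
            }
        }

        with open("demo_workflow.json", "w") as fh:
            json.dump(workflow, fh, indent=2)

    Because the definition is just a file, it can live in source control next to the application code, which is what gives developers that ownership.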

    There's also Application Integrator. It doesn't matter whether you're trying to integrate with on-premises or off-premises systems, APIs, containers, or serverless functions; it's easy to design. You just design the integration and then it's available instantly, and that's a huge time saver. 

    It's rather easy to create, integrate, and automate data pipelines with Control-M. To give a broad answer, it can be as easy as drag and drop, or it can be as complex as designing the integrations. If you use customization, you can access a data lake that your organization developed. For the typical user out there, on a scale of one to five, with one being easy and five being hard, you're probably looking at a two and a half. For most people, it's very easy, and it's getting easier as it's all web-based nowadays. Alternatively, it can be all code-based.

    I have not explored Python Client too much. I've tinkered with it and that's been the limit of my exploration. Now, with the integrations like AWS, we've made extensive use of it, and it is very easy for anybody to do. Python Client has a lot of great possibilities, especially in the data science arena, however, sadly we have not had an opportunity as of yet to play with it.

    The Control-M interface for creating, monitoring, and ensuring delivery of files as parts of your data pipeline has gotten better. It is not perfect. That said, it’s come a long way over the years. Nowadays, most of it is web-driven. A lot of it can be API driven if you so wish. There's still probably some future work to be done there, however, for the average user that's coming in, starting to use it for the first time, they're going to need a little training and handholding at the beginning for maybe the first week or so. Then you can start setting them free to go out and use it on their own.

    The orchestration of our data pipelines and workflows has been able to give a single point of view too. Management doesn’t care about the bits and pieces. A workflow or a data pipeline could have 100 or 1,000 components behind it, and management does not care about that. Management cares whether the SLA has been met or not. They want that easy-to-see red light or green light. We can provide them with that. The solution drives self-service and it helps. A manager doesn't have to call somebody in IT and wait around for an answer.

    They can immediately get that information for themselves, consume it, and understand that, "Hey, you know what, this data pipeline over here is going to be 15 minutes off our SLA today." Then they can start asking why. What I like about parts of Control-M, such as the Batch SLA Impact capabilities, is that they can start doing some of that analysis themselves - for example, "this is late because the system was down for maintenance for two hours last night." That's really beneficial in today's business world.

    The automation of Control-M has sped up everything. We can integrate directly into existing pipelines and the DevOps teams can get anything integrated with their Jenkins deployments. They don't have to wait for traditional operation functions. This is all built-in. It validates and checks. In some cases, it automatically deploys the agents and deploys the configurations. That's something that years ago you'd have to wait for. The speed of delivery has vastly improved.
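
    As a sketch of how that can plug into a CI pipeline - assuming the Automation API CLI (ctm) is installed on the build agent and an environment has already been configured - a Jenkins step could call a small wrapper like the one below: validate the job definitions with a build, and deploy only if the build passes. The file name and error handling are illustrative.

        import subprocess
        import sys

        DEFINITIONS_FILE = "demo_workflow.json"  # job definitions produced by the developers

        def run_ctm(*args: str) -> None:
            """Run an Automation API CLI command and fail the pipeline step on error."""
            cmd = ["ctm", *args]
            print("Running:", " ".join(cmd))
            result = subprocess.run(cmd, capture_output=True, text=True)
            print(result.stdout)
            if result.returncode != 0:
                print(result.stderr, file=sys.stderr)
                sys.exit(result.returncode)

        if __name__ == "__main__":
            run_ctm("build", DEFINITIONS_FILE)    # validate the definitions first
            run_ctm("deploy", DEFINITIONS_FILE)   # promote them only if validation succeeded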

    Nowadays, auditing is as simple as running a report. If this falls under an auditable category, we can just hit a button and the report is done. Control-M audits everything, even if it is not under the regulatory or audit spotlight. Every process, every movement, and every change is logged by the system. If there's ever a question, you’ll be able to find a why and a when. There’s an audit trail.

    It certainly helped speed processes up. I can eliminate what I call the manual gaps between certain features. I don't have to send an email to somebody to say, "Hey, guess what? That file's ready. Now you can run process X, Y, Z." The system just says "Hey, the file is there, let's go." It's eliminated those gaps between parts of the workflow. It also helped optimize the infrastructure needed as it's like a Tetris Puzzle. I have these ten different workflows that I'm trying to run and before I may have had ten dedicated systems for them. Now I know that I don't need that.

    We use this model all the time. We can run those ten processes on three systems and be just fine. That saves money. The solution is not only speedy, but it also saves money.

    They are doing a great job with continuing to drive the open-source model of it. Five years ago, if you looked for Control-M anywhere, you would not have found it. Today, that model has changed. They're actively publishing on GitHub.

    You can download for free an entire container and run Control-M at home if you want to tinker with it. That was unheard of a few years ago. You can type a query in Google and start to see all sorts of documentation that is now available to the public. The major strides that they have made there are pretty darn good.

    What needs improvement?

    If you want to take it and ramp it up to doing some very heavy-duty integrations, you can find yourself at first dealing with a difficult integration. However, once you get that integration going for maybe a month or so, the next person after you will have less difficulty. That's the power. 

    They can improve their interface. They're going through huge modernization efforts and they're getting there. They're probably 75% there, however, there's still another 25% to go.

    For how long have I used the solution?

    I've been using the solution for 22 years.

    What do I think about the stability of the solution?

    Since it supports business, it has to be stable. It's very stable. We have not had major outages or anything. That's always a good thing, however, like with any solution, its stability is going to depend on how you deploy it and what safeguards you put in place, including high availability and disaster recovery, et cetera. All the hooks for that are in the product, however, it's up to you to decide how you're going to use those hooks.

    What do I think about the scalability of the solution?

    It's highly scalable. You can run five things in it today and easily scale up to run 1,005 things tomorrow. In terms of scalability, there are no issues there.

    How are customer service and support?

    Technical support tends to be very helpful.

    How would you rate customer service and support?

    Positive

    Which solution did I use previously and why did I switch?

    I used to work for an insurance company and I used Computer Associates. It was called CA-7 and CA-11, which are similar tools.

    We tried to use Computer Associates before this, but it didn't support the systems we needed and the integration was next to impossible.

    How was the initial setup?

    I was involved in the deployment and initial setup of the solution right from the beginning.

    We had jobs and workflows running within the first day. That was pretty good. We don't use the Helix model, however, there is a Helix model you can purchase, in which everything's hosted by BMC. You can be up and running literally in hours which is reasonable. There's a learning curve, however, if you do not get some value out of it within two days, you're probably doing something wrong.

    At the time, there were only two of us deploying the solution. Today there are only three of us.

    It's business-wide - everything from data to marketing to finance. Even though it probably wouldn't make sense to anybody else, it touches everything. It's deployed across Windows, Linux, containers, VMs, cloud, et cetera.

    If anybody has a use case or wants to learn more about it, we'll show them. Anybody in our organization can get basic access and can tinker around in an alpha test environment. This includes non-technical people. We have non-IT people that use it.

    If they can self-service and maybe design some parts themselves, that's a huge win right there. We have a very open model of deployment.

    There is occasional patching, and vulnerabilities come out from time to time. Most of the patching nowadays can be automated if you're using the Helix-based solution; a lot of that is handled by BMC.

    What about the implementation team?

    We did not use an integrator, reseller, or consultant for the deployment.

    What's my experience with pricing, setup cost, and licensing?

    I can't speak to the exact licensing costs. 

    Which other solutions did I evaluate?

    Every few years we go through a reevaluation. We'll go through and look at what's on the market and what companies have come up with or released new versions. We'll go through and we'll say, "Okay, let's compare these, what do we need and what are all the tools offered out there?" We do that roughly every five years and it keeps us on our toes.

    The biggest difference as of late is the API and Jobs-as-Code. Control-M is light years ahead of the competitors and what they're offering. Other competitors are starting to get APIs; however, only BMC is working with Jobs-as-Code, and it is in the lead. To my knowledge, they're really one of the only ones that lets you define your entire workflow as code.

    What other advice do I have?

    Control-M is pretty critical to our business as it runs many different business processes every day, and if it wasn't there, we would probably hire many more people, be a lot slower, and be more prone to error.

    We use a hybrid deployment. We have parts in the traditional data center. We have parts in the cloud. We sometimes have parts that live on containers. They only exist for two minutes. It is very much a hybrid mix of goodies with our deployment.

    I'd advise potential new users to examine it today and not think about what it did ten years ago. Control-M is an old product. It has been around since we all used mainframes, however, just because something's been around for a long time, doesn't mean it's a piece of junk or doesn't work with modern technologies. It has adapted and grown with the times. Control-M did cloud-based work before many of us were even talking about the cloud. It's hard to get rid of negative perceptions sometimes, however, the best thing for people to do is to head out to the internet, look it up, and go out to GitHub.

    If you have a technical team, send them out to GitHub. You can download everything in an image or in a container and try it yourself. It doesn't cost you a nickel. 

    I'd rate the solution nine out of ten.

    The biggest advice I can give is to try it out. Don't only believe what the PowerPoints tell you. There's no excuse for not having a deployment up and running within hours. Be willing to think about how it can solve problems in new ways. Sometimes we go looking for a new tool because we have a square problem, and we get upset because all the tools we're looking at only have round solutions. Sometimes the reason they only have round solutions is that that's the proper way to solve the problem. You have to be willing to break down whatever you're trying to do - whatever workflow you're trying to automate or integrate - and take it in pieces.

    If all you want to do is save yourself a lot of money, use Cron, and use Windows Task Scheduler. However, if you want to take your business to the next level and start to get to the point where you can automate to remediate and audit, that's where tools like Control-M come into play.

    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    Subject Matter Expert at a consumer goods company with 10,001+ employees
    Real User
    Workflow dependencies work well, and automated audit reporting helps us sort out issues quickly
    Pros and Cons
    • "Workload Archiving is a very good feature for us. It helps with our customer requirements in terms of reporting and auditing... Previously, when we didn't have any archive server, we managed the data in Control-M with man-made scripts, and we would pull the data for the last 365 days, or three or four months back. Since we installed the archiving, we have been able to pull the data, anytime and anywhere, with just one click."
    • "With the current version update, I'm not sure why we needed a separate database upgrade. Why not put it all in one package? Previously, you could do it either via a manual upgrade or an in-place upgrade but it wasn't separate."

    What is our primary use case?

    I am working with a beauty products company and we are dealing with supply chain issues. Most of the jobs in Control-M are through SAP.

    Right now we have it deployed on-prem, but we are planning to move to the cloud very soon. We are using Control-M Workload Archiving, Control-M Enterprise Manager, Control-M servers, agents, APIs, REST APIs, and Control-M Forecast. We use all the services Control-M provides except Control-M Workload Change Manager.

    How has it helped my organization?

    Since moving to Control-M we have seen a lot of reductions in the manpower needed. For notification, ticketing, and integration, we have different teams. We have Azure teams and some Windows teams. Previously, they were using and managing their own scripts and manually running them. After the migration to Control-M, there were no limitations. Where there are different protocols we can use the APIs and integrate things with Control-M. There are no worries about integrations with Control-M. In UC4 there were lots of limitations because we needed the same protocols to integrate things. With Control-M, there are no such limitations.

    In our current environment, there are three sets of applications. The first, an online application, is dependent on some 45 files that have to be generated on Saturday. Our middleware job is supposed to run once all the 45 files have been generated by SAP jobs. There are sequences running through Control-M: First are the SAP jobs that generate the files in a certain location. Once those files are there, the sequence initiates the middleware that moves the files to the proper IT server. All these process flow dependencies go through Control-M very easily.

    We have also automated daily audit reports through the solution's reporting facility. Through scripting, we get an alert when anything happens in the Control-M environment. An issue might occur with the agent, the process, or the Control-M server. We have everything reported via email. We can easily see what happened on a given day and sort out any issues.
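
    Our reports come out of the reporting facility itself, but as a rough sketch of the scripted-alerting side, something like the following can pull the jobs that ended "Not OK" from the Automation API and mail a summary. The endpoint, credentials, SMTP host, and the status filter are placeholders, and the job-status service parameters should be confirmed for your Control-M version.

        import smtplib
        from email.message import EmailMessage

        import requests

        ENDPOINT = "https://controlm.example.com:8443/automation-api"  # placeholder
        SMTP_HOST = "smtp.example.com"
        TO_ADDR = "controlm-admins@example.com"

        def get_token(session: requests.Session) -> str:
            resp = session.post(f"{ENDPOINT}/session/login",
                                json={"username": "svc_report", "password": "change-me"},
                                verify=False)
            resp.raise_for_status()
            return resp.json()["token"]

        def failed_jobs(session: requests.Session, token: str) -> list:
            """Fetch jobs that ended Not OK (assumed filter on the job-status service)."""
            resp = session.get(f"{ENDPOINT}/run/jobs/status",
                               headers={"Authorization": f"Bearer {token}"},
                               params={"status": "Ended Not OK", "limit": 1000},
                               verify=False)
            resp.raise_for_status()
            return resp.json().get("statuses", [])

        def send_summary(jobs: list) -> None:
            msg = EmailMessage()
            msg["Subject"] = f"Control-M daily audit: {len(jobs)} job(s) ended Not OK"
            msg["From"] = "controlm@example.com"
            msg["To"] = TO_ADDR
            lines = [f"{j.get('name')} on {j.get('ctm')} (order ID {j.get('orderId')})"
                     for j in jobs] or ["No failures today."]
            msg.set_content("\n".join(lines))
            with smtplib.SMTP(SMTP_HOST) as smtp:
                smtp.send_message(msg)

        if __name__ == "__main__":
            with requests.Session() as s:
                send_summary(failed_jobs(s, get_token(s)))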

    As a result of using Control-M we have also seen an improvement in Service Level Operations performance. We have some monitoring tools in Control-M and our service SLAs have definitely improved. We have a ticketing system integrated with it and we can easily monitor the SLAs for tickets generated through Control-M. If the person responsible for a ticket will not handle it in the right amount of time, the ticket will pop up with a message saying it's in danger of breaching the SLA. Our service levels are much higher with Control-M, when compared to other tools.

    What is most valuable?

    Control-M Workload Archiving is a very good feature for us. It helps with our customer requirements in terms of reporting and auditing. We have internal audits every quarter, and every six months we have external audits. During these audits, the auditors get historical data through Control-M. Previously, when we didn't have any archive server, we managed the data in Control-M with man-made scripts, and we would pull the data for the last 365 days, or three or four months back. Since we installed the archiving, we have been able to pull the data, anytime and anywhere, with just one click.

    Control-M gives us a unified view where we can easily define, orchestrate, and monitor all our application workflows and data pipelines. We are mostly using SAP and other business warehouse jobs, and we can easily see the systems through Control-M. It gives us a very good view of geographical data. If I go through the Web Services to show things to my customers, they are very satisfied with the Control-M views. They can check historical data and they can see the current view. They can easily pull these up. We are satisfied with the fact that, with one click, we can see all the applications within one view.

    Our line-of-business personnel use Control-M’s web interface to support their business initiatives. One of our big applications is our JG application, where a user needs a data pipeline and Power BI jobs with refreshed data. Instead of the user having to send a request to our Control-M team, they can use the Web Services directly to check their data. If they're using an iPad or a desktop, they can easily check on it themselves. They're not dependent on the Control-M team directly. We educate users on how to check things and how to pull the reports. It is very easy to use. Also, we don't have 24/7 support within our company. Suppose a user needs something at midnight. They don't have to wait for the Control-M admin team to give them the report. They can directly pull the details.

    What needs improvement?

    With the current version update, I'm not sure why we needed a separate database upgrade. Why not put it all in one package? Previously, you could do it either via a manual upgrade or an in-place upgrade but it wasn't separate. But for the current version, we needed to upgrade the database separately. It meant doubling our tasks to do the upgrade. That is something that needs to be improved.

    For how long have I used the solution?

    I've been using Control-M for the last 17 years. My specialization is in Control-M and I'm very happy and very comfortable with it.

    What do I think about the stability of the solution?

    I would rate its stability highly, compared to other tools in the market right now, such as UC4, and AutoSys. In the past, I have worked with many banks. All these financial companies are using Control-M, and there is a reason: It's due to the stability.

    What do I think about the scalability of the solution?

    I would give the scalability a nine out of 10.

    In our environment right now, out of 20,000 jobs in Control-M, 15,000 are SAP. We are planning to expand our usage of Control-M to Power BI, Business Warehouse, PeopleSoft, and Azure. Those are in our pipeline right now.

    We have about 25,000 users of Control-M on different projects, in the U.S., Japan, India, and Asia Pacific. Some are monitoring programs through Control-M, some are only doing scheduling. Some are responsible for designing, others for the implementation before the licensing. And once this transition team is done, the operations team comes into the picture for monitoring. We have a separate team for integration, as well.

    The number of people we require for day-to-day administration of Control-M depends on the job size and the user requirements. We work in an offshore and onsite model. We have a key administrator over the 20,000 jobs, seven schedulers, and nine people on the monitoring team, and that work is done 24/7. The schedulers and admin work 24/5.

    How are customer service and technical support?

    In a case where we fail to understand an issue by collecting data on our own through our audit reports, we open a case with BMC. BMC always gives us a fast resolution. Their support is very good.

    How was the initial setup?

    The setup of the current version of Control-M, overall, is very easy. The upgrade is in-place. With one click the agent upgrades, the server upgrades. The only point, as I mentioned, with upgrading, is that we needed a separate database update. When we upgrade our Control-M server, the database server should be upgraded at the same time.

    The initial implementation in my current environment was in 2006. When we took over we just upgraded it. After that, we implemented two more Control-M Servers in this environment, as a PoC.

    The amount of time required to implement it depends on the environment we are working with. In this environment, we have two production servers, four QA servers, and two testing servers. We have eight Control-M servers, three Control-M Enterprise Manager servers, and more than 400 agents. It depends on the change process. In our change process, we first need to upgrade our QA and test environments. Once that is done, we can go for the production environment the next day. After that, over the next seven days, we update our Control-M agents. Some of the upgrades require downtime. In four to five hours, we could easily update everything, but it's dependent on the downtime and the customer requirements.

    When we upgraded to version 20, first we implemented it in our QA environment and we tested the new version in our test environment for three to four months. Once we see there are no bugs, we implement it in our production environment. We've seen a lot of bugs and BMC has had to produce some patches that we have had to apply in our environment. That is why we approach it the way we do in a QA environment, and wait for three months, and then go to production.

    When we moved to Control-M, we used the Control-M Conversion Tool. It's a very important tool. It gives us an idea of where we stand. If I'm going to move an old environment to a new environment, it helps us with any errors so that we can rectify them.

    What about the implementation team?

    Back in 2016, when I was working with version 7, I opened a case with BMC and they helped me to upgrade everything. It was a very good experience. They dedicated a resource to us. We gave them our implementation plan, they reviewed it, and they suggested how to remediate some missing steps. We followed their approach and, at the time of cut-over, they assigned a dedicated resource. If there was an issue, we could open a ticket and they would come online and sort it out. The BMC Assisted MIGration Offering (AMIGO) is a very good program.

    What's my experience with pricing, setup cost, and licensing?

    You must accept that BMC licensing can be very confusing. No one can easily understand how they calculate things, whether it is user-based, job-based, or server-based. The calculation is quite tough. How BMC calculates licensing is not easily available anywhere. It's a very tough part for the client at times.

    But BMC is a market leader, so users don't easily go for different vendors. If there's an option to go with Control-M, they will always choose BMC. But for people who find the licensing challenging, they will go with a different vendor.

    For us, the licensing part is managed by a team in the U.S. But what I deal with is that we have to manage our Control-M jobs to a maximum of 30,000, because we have 30,000 licenses. We have 20,000 with fraud detection and 10,000 in non-fraud. There is a BMC utility that can guide you and alert you if the forecast is for an increase beyond the licensing. It will notify us: "Hey, you have a license for 20,000 and the Control-M forecast shows you might need to increase that number in the coming days." So we do some cleanup, some internal housekeeping to remove things and remain under the threshold. Those are some of the things we do as administrators. We try to manage under whatever licensing we have. Through the BMC reporting tool, we can see our peak number of users in a month. BMC charges if you go over a certain peak.
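
    As a trivial illustration of that housekeeping check - everything here is hypothetical, and in practice the numbers come from the BMC reporting tool - a small script can compare daily peak job counts from an exported CSV against the licensed maximum and flag when we are getting close to the threshold.

        import csv

        LICENSED_JOBS = 30_000    # licensed maximum
        WARN_RATIO = 0.9          # start cleaning up at 90% of the license

        # Hypothetical export: one row per day with columns "date" and "peak_jobs".
        with open("monthly_peak_jobs.csv", newline="") as fh:
            for row in csv.DictReader(fh):
                peak = int(row["peak_jobs"])
                if peak > LICENSED_JOBS:
                    print(f"{row['date']}: OVER the license ({peak} > {LICENSED_JOBS})")
                elif peak > LICENSED_JOBS * WARN_RATIO:
                    print(f"{row['date']}: approaching the license ({peak} of {LICENSED_JOBS})")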

    Control-M is very robust. There is no harm to the customer if you choose Control-M every time. But when it comes to licensing, it's very expensive, and sometimes users think twice.

    Which other solutions did I evaluate?

    Previously we were using UC4 for more than 20,000 jobs. But our customers were not very comfortable with the user interface in UC4. Certain things were not appropriate in that tool. Since our decision to migrate to Control-M, our customers have been very satisfied.

    Integration is very easy. When I'm thinking about integrating Control-M with anything I'm not worried about it. I know Control-M will definitely have a way to integrate easily. I have used UC4, AutoSys, and Dollar Universe. But when the requirements include integration, I always think of Control-M, because I know the integration will be very easy. I will never go for any of those other tools.

    What other advice do I have?

    Control-M is very critical for anyone who is using it.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor. The reviewer's company has a business relationship with this vendor other than being a customer: Partner
    Lead Consultant at a media company with 1,001-5,000 employees
    Real User
    Helps us monitor and deliver critical data, but support response to production issues could be improved
    Pros and Cons
    • "We have a team called pro-mon and they monitor all the jobs for us. A single view for them makes it easy for them to monitor things."
    • "With earlier versions, the support was not accurate or delivered in a timely manner. What would happen is that I would be in production mode and I would have an issue and would want to get someone on a call to see what was happening. But they would always say, “Hey, provide the log first and then we'll review and we'll get back to you." I feel that when a customer asks about a production issue, they should jump onto the call to see what is going on, and then collect the logs."

    What is our primary use case?

    Most of my customer's jobs now run on Control-M, mainly on the finance side and for data management. Those are the core applications that we are running. We are using it as a scheduling tool. 

    We have a few other applications that we are migrating to Control-M. Until about two weeks ago we were running on an older version of Control-M, so not many people were interested in migrating to it. But now we are running on an updated, supported version. So more applications should move to it.

    Control-M is deployed on-prem.

    How has it helped my organization?

    Let's say the business wants to run some reports. We give them a console or the Self Service where they can run jobs. That way, they don't have to depend on the IT team with a "Hey, can you run this job?" request, open a ticket, and wait while the IT personnel try to keep to the SLAs. Instead of that, we give them Self Service where they can run their own jobs and see the data instantly.

    For each job we have SLAs and, based on the SLA we define which ones are critical. The most important processes for us include the SFTP process. We have a few files that are very important and are generated every day. They have to be delivered to the business before they come into the office. That is a very critical process. We tried various options but after implementing Control-M we had better results. Another of our critical jobs is what we call our master data management, where we have near real-time data. We have a few SLAs where a job has to be completed within 20 or 30 seconds. That means the data has to be delivered within that amount of time. Using Control-M helps us to monitor and deliver critical data to the business.

    We used to use native schedulers, like cron or the MDM scheduler. Those schedulers were effective, but there were no cross-platform capabilities. With Control-M, we have both types on a single page, and we can see when a file is available and when it's picked up. If I have two different data centers, with Job A running in data center 1 and Job B running in data center 2, there was always a delay of a couple of seconds when we used the native schedulers for moving files and getting alerts. We have tight SLAs, and with Control-M we're able to deliver on time. While both our earlier schedulers and our current one are automated, we have a better solution now.
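
    As a minimal sketch of what that cross-data-center hand-off looks like once both steps are in Control-M (written in the Automation API JSON style as a Python dict - hosts, connection profiles, paths, and commands are placeholders, and attribute names should be verified for your version), the downstream job simply follows the file transfer in one flow, so there is no polling delay between schedulers:

        import json

        pipeline = {
            "CrossDCDelivery": {
                "Type": "Folder",
                "DeliverExtract": {
                    "Type": "Job:FileTransfer",
                    "Host": "mft-agent-01",                 # assumed MFT agent host
                    "ConnectionProfileSrc": "SFTP_DC1",     # data center 1
                    "ConnectionProfileDest": "SFTP_DC2",    # data center 2
                    "FileTransfers": [
                        {"Src": "/outbound/daily_extract.csv",
                         "Dest": "/inbound/daily_extract.csv"}
                    ],
                },
                "LoadExtract": {
                    "Type": "Job:Command",
                    "Host": "dc2-app-01",
                    "RunAs": "batchuser",
                    "Command": "/opt/batch/load_extract.sh",
                },
                # The load job starts as soon as the transfer completes.
                "DeliveryFlow": {"Type": "Flow", "Sequence": ["DeliverExtract", "LoadExtract"]},
            }
        }

        print(json.dumps(pipeline, indent=2))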

    Control-M has also helped to improve our Service Level Operations performance. If I had to take a wild guess, I would say it has improved SLO performance by 20 percent.

    What is most valuable?

    The main reason we came to Control-M was to integrate everything and have it all in a single platform. We use different applications, and integrating them was not possible previously; with Control-M it is. Apart from integration, the main features we rely on are the handling of long-running jobs and SLA alerts. But there is definitely a lot more to explore and work on within Control-M.

    The solution provides us with a unified view where we can easily define, orchestrate, and monitor all application workflows and data pipelines. We have a team called pro-mon and they monitor all the jobs for us. A single view makes it easy for them to monitor things. Control-M comes with a documentation section for each job. As an SME, I put the high-level steps in the job documentation: what to do if a job fails. The monitoring team can read it and do level-one support. Some jobs are very critical and require an immediate call, but for other jobs they can wait, re-run, or read the documentation for guidance. That really helps all our teams. That single view for the monitoring team, where they can see everything in a single application, is important because the business needs all jobs completed within their SLAs. Indirectly, it is helping the business get its data on time.

    Another reason we use Control-M is to integrate file transfers within our application workflows. We have cross-business functionality, where one business generates files and another business wants to use them. We use a lot of MFT and AFT functionality. As a result, Control-M has definitely improved our timelines and SLAs, and we have an easy-to-monitor solution now. Before Control-M, each application team had to monitor its own jobs. Sometimes they would miss something and not know there was a mistake in a job. Once Control-M came into the picture and we had a dedicated team monitoring everything, we were able to provide files to the business on time. The business is very appreciative of the improvements since implementing Control-M; it has improved a lot when it comes to providing files to the business on time.
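
    File-transfer jobs like these can also be defined as code through Control-M's Automation API. The sketch below is only an approximation: the job type and field names follow the general jobs-as-code JSON style but should be checked against the Automation API documentation, and the hosts, connection profiles, and paths are invented placeholders.

        # Sketch of a jobs-as-code style definition for a managed file transfer.
        # Field names approximate Control-M's Automation API format; the hosts,
        # connection profiles, and paths are invented placeholders.
        import json

        transfer_folder = {
            "FinanceTransfers": {
                "Type": "Folder",
                "DailyReportTransfer": {
                    "Type": "Job:FileTransfer",          # assumed job type name
                    "Host": "mft-agent-01",              # hypothetical agent host
                    "ConnectionProfileSrc": "SFTP_SOURCE",
                    "ConnectionProfileDest": "SFTP_TARGET",
                    "FileTransfers": [
                        {
                            "Src": "/outbound/daily_report.csv",
                            "Dest": "/inbound/daily_report.csv"
                        }
                    ]
                }
            }
        }

        with open("finance_transfers.json", "w") as f:
            json.dump(transfer_folder, f, indent=2)

    A definition like this can then be validated and deployed through the Automation API's build and deploy services, so transfers live in version control next to the rest of the workflow definitions.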

    For how long have I used the solution?

    With my current customer, we have been using Control-M since 2017. I personally have been using it for the last four or five years.

    What do I think about the scalability of the solution?

    Now that we are using the supported version, we can leverage a lot of the features. Going forward, it's going to be very actively used by all our business teams, including all the applications teams. We don't have many jobs at the moment, around 200 or 300 jobs, but down the line, in the next six months or year, we are going to double that count.

    It's a good tool, and they keep coming out with new features and improvements on the scalability side. Version 20 may have introduced more features and performance improvements.

    Control-M is running multiple applications for us, including SFTP, MFT, Arkin, Informatica, and Java. There are also a lot of BA jobs and a few OS jobs. We have also integrated some of our reports with Control-M and I'm running them on my local machines. We are planning on expanding Control-M to other applications in the future. That's one of our next steps, to go to applications at the organization level. We are working on it.

    We are not heavily dependent on Control-M as of now, but we are slowly migrating to it. Our users of Control-M are developers and application owners, which puts our number of users in the double digits. There are some business users as well. But it's more the application side and the team leads who are using it. Previously, I worked with a very big financial company where we had thousands of jobs. Everyone was using it there.

    How are customer service and technical support?

    Jesse, my account manager, is very prompt and he answers all my questions in a timely manner.

    We have hardly reached out to the support team. Whenever we would reach out to them when we were running on the older version, they would always say, “Hey, you have to upgrade in order to troubleshoot.” In my experience, the support has not been excellent but it has met expectations. Since upgrading our version, we haven't reached out to the support team.

    With earlier versions, the support was not accurate or delivered in a timely manner. What would happen is that I would be in production mode and I would have an issue and would want to get someone on a call to see what was happening. But they would always say, “Hey, provide the log first and then we'll review and we'll get back to you." I feel that when a customer asks about a production issue, they should jump onto the call to see what is going on, and then collect the logs. At least that would give me hope that the support is there and that they are on top of it. I did not get that kind of support from Control-M.

    It could be that this was just my experience, from a very limited number of tickets. Once or twice we had a production issue and I expected someone to join the call immediately. I know they need a log to see what is going on, but before that they could jump on and see if they can fix it. Sometimes an expert will know what the problem is before seeing the log.

    I do work with support from other vendors' applications as well, and I get a different response from those vendors, so this is something BMC might have to improve.

    Which solution did I use previously and why did I switch?

    We moved from native schedulers to Control-M.

    What about the implementation team?

    We have in-house people who are expert enough to implement Control-M, but due to other engagements, they were not able to do so.

    The initial setup was straightforward. The vendor implemented it for us. We reached out to our account manager from BMC, and BMC sent a certified vendor, Cetan Corp., to our environment and they implemented it for us. Overall, it was a simple installation, a simple environment. Our initial deployment took about three months, end-to-end.

    We recently upgraded and we also used a partner for that, VPMA Global Services. The process took about six months but that was not six months of work every day. The actual working time on it was about one month. The other five months were due to securing hardware, testing things, et cetera.

    When we went with VPMA for the upgrade, we gave them our requirements and how we wanted the implementation to be. They came up with an architecture diagram and we had an internal discussion about it. The VPMA team made their recommendations, with multiple approaches, and we chose the best of them.

    Both partners were recommended by our account manager at BMC.

    I also definitely check the integrated guides and how-to videos; they are very helpful. Products like this may take different approaches, but they have the same types of features, so we had an idea of how to implement it. We know there are best practices, so we went ahead and searched the integrated guides and the how-to videos on YouTube, and we got a lot out of them. They're very helpful for our new people, who can search and go through the how-to videos.

    We don't require many people for the day-to-day administration of Control-M. We spend around one to two hours on Control-M most days. The monitoring team is always watching all the jobs on the screen, but the application owners, who are the admins, spend no more than two to three hours on it per day, unless there is an alert.

    What was our ROI?

    Whatever we have spent has definitely been worth it. At every renewal we evaluate it internally. As a Control-M SME, I have to provide stats on man-hours, the amount we spend on it, stability, and SLAs. Based on these, we have always had a good impression. We have to justify that it is worth the cost, and it is.

    What's my experience with pricing, setup cost, and licensing?

    Initially, our licensing model was based on the number of jobs per day. That caused some issues because we were restricted to a number. So at our renewal time we said, “We want to convert from number of jobs to number of endpoints.” That cost us extra money but it gave us additional capabilities, without worrying about the number of jobs.

    At first we had the standard edition; later on we needed some additional features and paid extra for those.

    What other advice do I have?

    Control-M helps us to proactively monitor things and see what is coming up and what is happening. Based on that, we can take steps for resolution. But I don't think Control-M itself has the ability to proactively fix issues.

    Overall, it's a good automation tool, and it gives us a single view of the customer. I would advise going with a tool like this, though I'm not going to recommend any particular solution; all these tools are very powerful and give you a single view.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    PeerSpot user
    Director at a performing arts company with 5,001-10,000 employees
    Real User
    Top 10
    By using the credentials vault, we don't need to share passwords anymore
    Pros and Cons
    • "Before Control-M, we didn't have a centralized view and could not view what happened in the past to determine what will happen in the future. The Gantt view that we have in Control-M is like a project view. It is nice because we sometimes have some application maintenance that we need to do. So, in a single console, we can hold the jobs for the next hour or two. We can release that job when it is finished. This is a really nice feature that we didn't have before. It is something really simple, but we didn't previously have a console where we could say, "For the next two hours, what are the jobs that we will run? And, hold these jobs not to run." This is really important."
    • "We develop software. More frequently, we are working with microservices and APIs, using our integration tool, MuleSoft. While Control-M is really a good tool to integrate with other tools, it is important for them to continue improving their microservices and API."

    What is our primary use case?

    We are a mixed private and public enterprise, which brings security constraints. Our main business area is the lottery in Portugal; that is the most important business we have. Also, because the money comes from gaming, we need to invest it in the social, health, and real estate areas.

    How has it helped my organization?

    For my current organization, it is a new tool; we are implementing it right now. We have a lot of high-impact jobs running every day and night on a fixed, time-based schedule. For example, some jobs run at one o'clock in the morning and, from historical runs, we know they take six or seven hours to finish. Then we have another cron job in another system at eight o'clock. With Control-M, we can reduce a lot of this time, because when the first job finishes, it will immediately start the job in the other system. At the moment, we do this handoff manually with an operator. Sometimes there are errors because it is manual; it is not robots doing the job. It also takes a long time, and we lose time between jobs when it is not automatic.
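
    As an illustration of the difference, here is a rough jobs-as-code style sketch of that chain. The field names approximate Control-M's Automation API JSON format, and the hosts, commands, and event name are invented, so treat it as a sketch of the idea rather than a working definition.

        # Two chained jobs: the second starts as soon as the first finishes,
        # instead of waiting for a fixed 08:00 cron slot. Field names approximate
        # the Automation API jobs-as-code format; commands and hosts are invented.
        import json

        nightly_chain = {
            "NightlyChain": {
                "Type": "Folder",
                "ExtractJob": {
                    "Type": "Job:Command",
                    "Host": "system-a",                      # hypothetical host
                    "Command": "/opt/app/bin/extract.sh",    # hypothetical command
                    "RunAs": "batchuser",
                    "eventsToAdd": {
                        "Type": "AddEvents",
                        "Events": [{"Event": "EXTRACT_DONE"}]
                    }
                },
                "LoadJob": {
                    "Type": "Job:Command",
                    "Host": "system-b",
                    "Command": "/opt/app/bin/load.sh",
                    "RunAs": "batchuser",
                    "eventsToWaitFor": {
                        "Type": "WaitForEvents",
                        "Events": [{"Event": "EXTRACT_DONE"}]
                    }
                }
            }
        }

        print(json.dumps(nightly_chain, indent=2))

    The point is not the exact syntax but the dependency: the second job is released by the first job's completion event, so the idle hours between a 1:00 a.m. start and an 8:00 a.m. cron slot disappear.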

    Our operators mostly use the web interface. We use the desktop client UI more for planning jobs; if we only want to do monitoring, we use the web interface. As we continue to work from home, only a small number of operators are still on-site. For security purposes, it is important to have the web interface in place, because we don't like installing the client on PCs we don't administer, and we cannot install on laptops without authorization. Being able to access Control-M with only a browser is really important and makes our job easier. We can access Control-M from a laptop, an app, or a mobile device.

    Before Control-M, we didn't have a centralized view and could not view what happened in the past to determine what will happen in the future. The Gantt view that we have in Control-M is like a project view. It is nice because we sometimes have some application maintenance that we need to do. So, in a single console, we can hold the jobs for the next hour or two. We can release that job when it is finished. This is a really nice feature that we didn't have before. It is something really simple, but we didn't previously have a console where we could say, "For the next two hours, what are the jobs that we will run? And, hold these jobs not to run." This is really important.

    We use the Conversion Tool for audit purposes. We have had things working for a long time, but not documented. The Conversion Tool is nice because it helps us understand our jobs, whether they should be in Control-M or not. 

    What is most valuable?

    The most valuable feature for us is Managed File Transfer. We have a lot of file transfers in-house. Every FTP was being done by hand. Managed File Transfer is simply the best thing for us. This is the most used feature.

    The credentials vault is really important. Before Control-M, every operator needed to know the username and password to access a system. With Control-M, we don't need to share passwords anymore. We enter the username and password once, and then we use it without knowing the password.

    Another valuable feature is the amount of integration that Control-M already has. We use the web services, and we are using the SQL and Oracle integrations because we have a huge environment and a lot of applications in-house. Because we have integrations with all these tools, we don't need to give access to the operators. Now, we have everything in a single pane of glass. The operators can see throughout the night what is happening and where, and whether manual intervention is needed.

    One of our most used features is Control-M's library of plugins for orchestrating and monitoring workflows and data. We have a lot of different applications, plugins, and API automation, which are really important for us. We are migrating a tool from Apache, which is Java code, so we can schedule that Java code with the API automation plugin that Control-M provides. We are now starting to operate this way.

    We use the Control-M Role-Based Administration feature, integrated with our Active Directory. We have groups in Active Directory for administrators and operators, and we map those roles directly in Control-M. Role-Based Administration empowers us to decentralize: product teams manage their own application workflow orchestration environments with full autonomy. We divided this by environment: production, non-production, and demo. For each of these environments, we have different roles in Microsoft Active Directory, and those roles are implemented through Control-M Role-Based Administration.
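
    As a simple illustration of that kind of per-environment mapping (all group and role names here are invented; the real assignment is configured in Control-M Role-Based Administration, not in a script):

        # Illustrative mapping of Active Directory groups to Control-M roles,
        # per environment. Names are invented; the real mapping is configured
        # in Control-M Role-Based Administration, not in code.
        AD_GROUP_TO_ROLE = {
            "prod": {
                "CTM-PROD-ADMINS": "Administrator",
                "CTM-PROD-OPERATORS": "Operator",
            },
            "nonprod": {
                "CTM-NONPROD-ADMINS": "Administrator",
                "CTM-NONPROD-OPERATORS": "Operator",
            },
            "demo": {
                "CTM-DEMO-OPERATORS": "Operator",
            },
        }

        def role_for(environment, ad_group):
            """Return the Control-M role an AD group maps to in an environment."""
            return AD_GROUP_TO_ROLE.get(environment, {}).get(ad_group)

        assert role_for("prod", "CTM-PROD-OPERATORS") == "Operator"

    The design choice is that membership is managed in Active Directory, while Control-M only consumes the group-to-role assignment per environment.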

    The use of Role-Based Administration eliminates the need to submit tickets or requests to the Control-M administrator. They don't open tickets and are autonomous when doing their job. From a security posture standpoint, it is important for us because we know that only the people who have credentials can access these environments, doing the job that they have to do.

    We use Control-M Centralized Connection Profiles. We create the connections for the user and password. After that, we don't need to share passwords anymore, which is important for us.

    What needs improvement?

    We develop software. More frequently, we are working with microservices and APIs, using our integration tool, MuleSoft. While Control-M is really a good tool to integrate with other tools, it is important for them to continue improving their microservices and API.

    For how long have I used the solution?

    I have been working with Control-M for more than 10 years. First, I was working in a consulting company, as a consultant, where we implemented Control-M. Now, in the last year, I have been a customer in a huge organization in Portugal. 

    What do I think about the stability of the solution?

    Because of its stability, we can rely on it for jobs that have to run daily. When we need to do an upgrade, it is really important for us not to have any downtime.

    We are always afraid to install the latest version. However, with Control-M, it is really comfortable to move onto the latest version because of the stability. When I worked as a consultant, I never had any problems. Even when we had Control-M in two data centers, if one goes down, then we can run Control-M in another data center. Few software solutions have the stability of Control-M. 

    What do I think about the scalability of the solution?

    We have different areas: real estate, games, social activities, and healthcare. Scalability is really important for us because we have different agents installed per business area; we don't mix them. Also, we always buy our VM servers per business area, so we can scale up however we want, which is really nice to have in Control-M. Critical jobs can run from different servers if something is not working.

    How are customer service and technical support?

    BMC support is an eight out of 10. Like most vendors, they have a centralized, outsourced first line for their service desk, and they always ask the standard questions. After a while, once those engineers know our workflow and understand that we already have some knowledge of Control-M, problems get solved really quickly.

    Which solution did I use previously and why did I switch?

    We really needed a job scheduling tool, and at the end of the day, we bought BMC Control-M. It is for a distributed environment where we have a lot of different systems, operating systems, and applications. Control-M is the application and tool that meets our expectations.

    How was the initial setup?

    The initial setup was straightforward. It is really easy to understand the architecture, and even install it. Based on some internal rules that we have in-house, Control-M fits well with our architecture.

    It took a day to install and a week to implement. After one week, we had some jobs working and were able to get the users to see, control, and monitor the jobs. We had it deployed and working in less than a week for Windows, Linux, and HP-UX operating systems as well as VMS.

    What about the implementation team?

    My principal difficulty implementing it in-house was that people didn't understand what a job scheduling tool could do for us. It took long hours, over many days, to convince our internal colleagues that this was the right tool. With this tool, we no longer needed a lot of consoles, i.e., working 24/7 trying to open every console to understand what is happening. We can have a single tool for all the jobs, applications, and operating systems, and we can monitor and schedule all the jobs. They thought this was rocket science and didn't exist, but this kind of solution has existed for a long time and is really important.

    What was our ROI?

    The use of Centralized Connection Profiles has helped lower our total cost of ownership. Before BMC Control-M, we had different environments with the same users, and the passwords were often even the same across environments. Passwords ended up in emails and chats, and sometimes a password would expire. With Control-M, we changed that. Every environment has an administrator who writes the password directly into Control-M; the person configuring a job only needs to know the username, not the password. With this functionality, the time spent managing credentials has been reduced.
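
    The idea can be sketched as follows. The job type and field names approximate the jobs-as-code format and the profile, script, and host names are invented; the point is that the job only references the connection profile by name and never carries a password.

        # The database password is stored once, centrally, by the environment
        # administrator; job authors only reference the profile by its name.
        # The job type and field names approximate the jobs-as-code format and
        # every name here is invented.
        import json

        sql_job = {
            "Reporting": {
                "Type": "Folder",
                "NightlyReport": {
                    "Type": "Job:Database:SQLScript",         # assumed job type
                    "ConnectionProfile": "ORACLE_REPORTING",  # reference only, no password
                    "SQLScript": "/scripts/nightly_report.sql",
                    "Host": "db-agent-01"
                }
            }
        }

        print(json.dumps(sql_job, indent=2))

    Because the credential lives only in the centralized profile, nothing sensitive travels through emails, chats, or job definitions.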

    It reduces the duration of a lot of our jobs. We no longer need a nightly maintenance window for applications.

    What's my experience with pricing, setup cost, and licensing?

    This is an area where it is a little difficult to work with BMC. The simplest model is to license by job, which is what we have, but they can also license by node. While licensing by job is simple, it might not be the right model for every customer. For us it is fine, because the number of jobs is something measurable, and at the end of the day that is what matters most to us.

    Which other solutions did I evaluate?

    We evaluated other vendors, like CA, but CA was bought by another company and we were a little wary. Our organization always buys through a tender. Our tender had a lot of requirements, and only Control-M could meet them all. Because it was a public tender, we didn't really choose Control-M; we had a huge list of requirements that we really needed for job scheduling, and only BMC could meet it. IBM Tivoli tried to answer, but it didn't meet all our requirements.

    Most tools have a huge GUI where you need to open five to seven windows to get to the parameters, and sometimes not all the parameters are in the GUI. With Control-M, it is three clicks and we have all the information we need. In Control-M, we can see all the parameters for a job, such as a Managed File Transfer job, in one place. It is very intuitive, and we can easily find the parameters to configure.

    What other advice do I have?

    I think that every single company should have Control-M installed, because it is really important and useful for everyone.

    I would rate this solution as a 10 out of 10.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    PeerSpot user
    Buyer's Guide
    Download our free Control-M Report and get advice and tips from experienced pros sharing their opinions.
    Updated: April 2023