SAP Solution Manager and Control-M Admin at a wholesaler/distributor with 10,001+ employees
Real User
Integrates with all our applications, and saves a lot of time and monitoring effort
Pros and Cons
  • "It is an enterprise tool that integrates with all the applications in our organization. It has made our life easier because we don't need to wake up at midnight and do monitoring, etc. It does everything. It also sends precautionary alerts. If a job or activity is running for more than the specified time, it alerts the application team. So, our teams do not need to sit in front of a laptop or any open application to watch the jobs. They can do their other regular activities while Control-M takes care of all the jobs. It notifies them when there is job completion, delay, and error."
  • "We have some plug-ins like BOBJ, and we need a little improvement there. Other than that, it has been pretty good. I haven't seen any issues."

What is our primary use case?

It is an enterprise tool, and it is a critical one. It is used for scheduling all of our enterprise jobs and monitoring them. We have both cloud and on-premise applications, but Control-M is installed only on-premises. We have high availability as well as load balancing servers in the cloud as well as on-premises.

How has it helped my organization?

It is critical for our business. Control-M directly affects our business because all our jobs are integrated into it. Without it, it is very difficult for us to do the monitoring. There is application-level dependency. We have SAP, Logility, and other third-party applications, and then we also have retail applications. We have different types of jobs. SAP handles only SAP-related or ERP-related jobs. In retail, we have stored procedures, and BI has HANA procedures. If Control-M is not there, it would be difficult for application teams to sit in front of the application and wait for a job to finish and then trigger another one. We are a global company, and we have jobs running round the clock. It saves almost half of our time in a day.

It is good in terms of data transfer. We are using the Managed File Transfer plug-in. It is pretty good, and it has good features. In one place, we can see what files have been processed or what jobs have been deleted or failed. We can see everything on the dashboard. If I have to search for a particular file that is missing, I can go there and check. 

It can orchestrate all our workflows, including file transfers, applications, data sources, data pipelines, and infrastructure with a rich library of plug-ins. This functionality is critical from the application point of view.

It has had a positive effect on our organization when creating actionable data. It is pretty good. It is a critical application for us. All our jobs and integration activities are monitored and scheduled through Control-M. We have multiple projects running, and teams are continuously doing the testing in the Control-M. This is the application where they can do all the testing for high-load jobs and other things. It is a critical application for all project teams.

What is most valuable?

Cost-wise, it is good. It is an enterprise tool that integrates with all the applications in our organization. It has made our life easier because we don't need to wake up at midnight and do monitoring, etc. It does everything. It also sends precautionary alerts. If a job or activity is running for more than the specified time, it alerts the application team. So, our teams do not need to sit in front of a laptop or any open application to watch the jobs. They can do their other regular activities while Control-M takes care of all the jobs. It notifies them when there is job completion, delay, and error.
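To make the precautionary-alert idea concrete: the check boils down to comparing a job's elapsed runtime against an expected threshold and notifying the owning team. A minimal stand-alone sketch follows; the SMTP host, addresses, and threshold are placeholders, not our actual Control-M configuration, where this is set on the job itself.

```python
import smtplib
import time
from email.message import EmailMessage

# Placeholder values; in Control-M this is configured on the job, not coded by hand.
SMTP_HOST = "smtp.example.com"
ALERT_TO = "app-team@example.com"
MAX_RUNTIME_SECONDS = 30 * 60  # alert if the job runs longer than 30 minutes

def alert_if_overrunning(job_name: str, started_at: float) -> None:
    """Send a precautionary email if a job has exceeded its expected runtime."""
    elapsed = time.time() - started_at
    if elapsed <= MAX_RUNTIME_SECONDS:
        return
    msg = EmailMessage()
    msg["Subject"] = f"[ALERT] {job_name} has been running for {int(elapsed // 60)} minutes"
    msg["From"] = "scheduler-watchdog@example.com"
    msg["To"] = ALERT_TO
    msg.set_content(f"{job_name} exceeded its expected runtime; please check the job.")
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)
```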

When we migrated to the SAP ERP application, a lot of jobs got created. We had to do all the things manually and monitor round the clock. Control-M has made our life easier. We can now concentrate on our applications and other tasks.

Since we got this product, our life has become easier. We don't require much L1 and L2 monitoring and support. We don't have L1 support at all for the Control-M application, and the L2 application support we do have is minimal.

What needs improvement?

We have some plug-ins like BOBJ, and we need a little improvement there. Other than that, it has been pretty good. I haven't seen any issues.


For how long have I used the solution?

We have been using this solution since 2016.

What do I think about the stability of the solution?

It has been good so far, and I haven't seen many issues in terms of performance.

What do I think about the scalability of the solution?

Its scalability is good. We have more than 100 end-users of this solution.

How are customer service and support?

I would rate them an eight out of ten.

How would you rate customer service and support?

Positive

How was the initial setup?

I was not there when it was purchased and installed. It was already there when I came here. At that time, it was version 8. From 2017 onwards, I've been doing all the upgrades. Currently, we are on version 9.20.

What about the implementation team?

It is updated in-house. Usually, we submit the AMIGO report to BMC for the initial validation. Once they validate and confirm, we do the upgrade. They know what our environment is like, and if there are any issues at the time of upgrade, they easily find out the cause. We also have support from a third party called VPMA. We can take their help as well for critical issues.

In terms of maintenance, there are OS-level updates every month, which are taken care of by the IT team. Application-wise, we do patch fixes when a particular plug-in needs patching.

What's my experience with pricing, setup cost, and licensing?

Cost-wise, it is good. 

What other advice do I have?

I would definitely recommend this solution. Control-M is the place to go if you want to have workflow automation in place. I have previously also worked with the Remedy tool in another organization, and I found it good.

It is pretty good in terms of creating, integrating, and automating data pipelines. If you have all the information, it is a straightforward activity. If it is new functionality, then before integrating Control-M with a third-party application, you need to do some work in terms of configuration.

It is easy to ingest and process data from different platforms. Its setup takes some time, but once the setup is done, it is pretty easy.

We don't use Control-M to deliver analytics for complex data pipelines. We do have analytics, but we have an SAP analytic application called BOBJ BI. We do have a job set up for that. It runs from Control-M, but analytics are shown in the SAP application.

Our cloud usage is not much. From the S3 bucket, we are using the file transfer part from the application perspective, but there is not much integration with cloud applications. We only have the MFT plug-in to communicate with AWS S3. Other than that, there is not much interaction with the cloud from the Control-M application side.
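For the S3 piece specifically, the transfer itself is conceptually just an upload or download against the bucket. A rough stand-alone sketch with boto3 follows; the bucket and paths are placeholders, and in practice the MFT plug-in handles this declaratively inside Control-M.

```python
import boto3

s3 = boto3.client("s3")

# Outbound: push a locally produced file into the bucket (placeholder names).
s3.upload_file("/data/outbound/daily_report.csv", "example-company-bucket",
               "inbound/daily_report.csv")

# Inbound: pull a file from the bucket for downstream processing.
s3.download_file("example-company-bucket", "outbound/vendor_feed.csv",
                 "/data/inbound/vendor_feed.csv")
```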

I would rate it a nine out of ten. It has been good so far. I haven't seen any issue. It is easy to use. I still have a lot to learn about this solution.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
IT Operations Specialist at a retailer with 5,001-10,000 employees
Real User
It's very easy and seamless to get important files transferred in a secure manner
Pros and Cons
  • "The most valuable features are the Advanced File Transfer and the managed file transfer. They make transferring files securely seamless. It's very easy to set up, get deployed, and have it transferred to and from vendors. As long as we can get our firewall rules implemented at a decent time, it's very easy and seamless to get important files transferred in a secure manner."
  • "We've also had a few database bugs within our organization. I think we are migrating to OpenJDK rather than just regular Java and that has since shown some issues with our Control-M instance, timing out and causing our jobs to stop running. We are still working with BMC to fine-tune that and get that resolved."

How has it helped my organization?

Overall, we have a great visual of all of our key business processes, and it gives us a secure way of transferring everything in and out of the business so that if anything were to be intercepted, it would be secure and not compromised.

We transfer financial files to and from Google Cloud, and we use it for the iSeries. We load around 3,000 automated jobs per day, ranging from regular commands for our planning allocations, finance, and data warehouse to Google Cloud workloads. We're still implementing a lot of that, but much of it has been automated, and it allows us to process everything in a timely manner.

We are in the process of implementing the managed file transfer which gives us the dashboard, but we are still fine-tuning that. Overall, it does give us a great picture and helps everything. If there's something delayed, it gives us the opportunity to send out a notification to a team to say that their process is delayed. We get tickets created and have everything sorted in a timely manner.

We use Control-M's web interface. It makes it very easy for us to show users what they need to see and hide what they don't need to see. They can mainly just view the tasks they have; it's pretty well divided up permission-wise.

Control-M integrates file transfers within our application workflows. It has made everything a lot quicker. We've been able to get files transferred to vendors and we've been able to retrieve files from vendors rapidly and securely.

It also streamlines our data and analytics projects. Developers create different types of processes that we implement within Control-M to automate them, and that helps streamline and format certain projects and reports that we send out to executives, which helps out a lot. I don't know the exact extent of it, but I would imagine it has helped our business service delivery.

It has helped us achieve faster issue resolution. With the shouts and notifications we get, we're able to create tickets as soon as a problem surfaces. As soon as we get a job failure, we get an email notification that prompts us to create a ticket, page out the team, and get it resolved within the terms of our SLA.
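The failure-to-ticket step is typically just a call from the notification handler to the ticketing system's API. A minimal sketch follows; the endpoint and payload fields are hypothetical placeholders, not a specific ITSM product's API.

```python
import requests

TICKET_API = "https://itsm.example.com/api/tickets"  # hypothetical endpoint

def open_incident(job_name: str, error_text: str) -> str:
    """Create an incident when a job-failure notification arrives, then page the team."""
    payload = {
        "summary": f"Control-M job failed: {job_name}",
        "description": error_text,
        "priority": "P2",  # placeholder priority mapping
    }
    response = requests.post(TICKET_API, json=payload, timeout=30)
    response.raise_for_status()
    return response.json().get("ticket_id", "")
```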

What is most valuable?

The most valuable features are the Advanced File Transfer and the managed file transfer. They make transferring files securely seamless. It's very easy to set up, get deployed, and have it transferred to and from vendors. As long as we can get our firewall rules implemented at a decent time, it's very easy and seamless to get important files transferred in a secure manner.
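Under the hood, a vendor transfer like this is a push and a pull over SFTP once the firewall rules are in place. A rough stand-alone sketch with paramiko follows; the hostname, account, and key path are placeholders, and in practice the MFT job definition handles this.

```python
import paramiko

HOST, USER, KEYFILE = "sftp.vendor.example.com", "acme_transfer", "/keys/vendor_id_rsa"

def exchange_files(outbound_local: str, outbound_remote: str,
                   inbound_remote: str, inbound_local: str) -> None:
    """Push one file to the vendor and pull one back over SFTP."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username=USER, key_filename=KEYFILE)
    try:
        sftp = client.open_sftp()
        sftp.put(outbound_local, outbound_remote)  # send our file to the vendor
        sftp.get(inbound_remote, inbound_local)    # retrieve the vendor's file
        sftp.close()
    finally:
        client.close()
```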

Control-M has automated critical processes. We run a lot of our backups through Control-M, daily sales reporting, and warehouse initiatives with shipping and planning. There are a bunch of finance processes that go through here that are time-critical. It's made everything more streamlined and secure and it comes through much quicker than doing it manually.

What needs improvement?

We have had a few small bugs with the configuration of the different job types, related to the order of operations. If a job is doing a statement and you try to do a little bit of both, we've noticed it may cause one of them not to work.

We've also had a few database bugs within our organization. I think we are migrating to OpenJDK rather than just regular Java and that has since shown some issues with our Control-M instance, timing out and causing our jobs to stop running. We are still working with BMC to fine-tune that and get that resolved.

I believe the file transfer process does everything it needs to do. I don't believe there's anything that would need to be changed there; with all the features it has, it's pretty robust. Overall, I don't really see many changes that we would need.

For how long have I used the solution?

I have been using Control-M for three to four years. 

What do I think about the stability of the solution?

Other than the database connection issues we've had since we moved from regular Java to OpenJDK, when stability has been hit or miss, it has been fine. We've had a few instances where our jobs just stopped processing, but we're not sure if that's related to the application itself or to something in our environment. Overall, I am personally okay with the way it runs.

What do I think about the scalability of the solution?

We run it on Windows as well as Linux, and we are still working on getting it to our DR site. I believe we are able to process quite a bit through it.

We use it for our iSeries (AS/400). We also use it for Google Cloud, Cognos, ADP, and many custom applications that we run, but we do a lot on the iSeries.

I do not plan to expand it to other applications in the future.

My department consists of eight people, mainly data center analysts, and I'm their manager. A select few developers are also able to get in and view it, but they cannot actually create anything; they can just see what is running.

Between five and 10 users are responsible for the day-to-day administration of Control-M.

How are customer service and technical support?

I had never used Control-M prior to being here, and all I had to learn from were the help guides on the web and the user interface we have. The help and administration guides have been the only way we are able to get questions resolved and to go through support.

Their support is hit or miss. We have had successful sessions with them, and then other ones where fingers get pointed and nothing really gets solved. We have a rep that my manager goes through, and we usually seem to get issues resolved in a timely manner.

What was our ROI?

We have seen ROI. We have fewer people manually running tasks; we put them right into Control-M, and we're able to scale and move a lot of file transfers through it.

What's my experience with pricing, setup cost, and licensing?

It is quite expensive. I believe that, the way we are set up, licensing is either per job that we load or based on the highest number of jobs loaded in a month.

What other advice do I have?

My advice would be to try to utilize as many features as you can. Don't get overly creative with things, because that can just confuse other people. If other users are getting in there, you definitely want a standard workflow for how jobs should be created and organized, and make sure you keep track of what's being changed so that if something were to fail, it's easily traceable.

It's a very robust application. A lot can be sent into it and out of it, and you do not want it to get into the wrong hands, because you can do quite a bit with it.

I would rate Control-M an eight out of ten. 

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Sr. Systems Engineer at a financial services firm with 1,001-5,000 employees
Real User
Easy to use, extremely stable, and offers excellent technical support
Pros and Cons
  • "Technical support is very helpful and available 24/7."
  • "While they have a very good reporting facility, the reports that I'm asked to produce, a lot of times aren't necessarily what we need."

What is our primary use case?

A lot of the things we've done are based on our needs, not so much because the product allows you to do it. Basically, I can do everything in Control-M. We've got plugins for Oracle, SQL, Informatica, and I can go on and on. However, we don't use any of them because our developers prefer not to; they make the necessary connections through the batch files themselves.

It's used for our daily batch. It handles all the batch processes and a lot of our maintenance processes. I would say most of it is file movement of some sort, and a lot of it is daily processing to get data in. Our data warehouse runs through Control-M. The big impetus behind purchasing it was that the auditors wanted a more robust system, something they could audit. Control-M gives you everything you need for that.

How has it helped my organization?

It allows us to automate a lot of the jobs that used to run manually. We can automate many different processes using Control-M, and you can see where each one is at and follow the job flow from one job to the next very easily.

We used to run a lot of things in the AT scheduler and cron, which really didn't meet our needs, especially for the auditors. We've moved that over and built the system so that you know immediately if there is a problem with a job string. Our operations department pages it out overnight if we have a problem, and we take care of it. Like any good system, it lets you do what you need to get done, it's the same every day, and you know you're going to get the same process. It drives the process.

Like most schedulers, you can bring jobs in many different ways, and there are different ways to execute things. When we were taken over, the other company was using a combination of their CA scheduler and the SQL scheduler for a lot of it, and prior to us converting our data warehouse system to Control-M, they were using the Informatica scheduler. None of this satisfied the auditors. They didn't like that everything was spread out across different systems, and they couldn't keep track of jobs. Everything is consolidated now and running off Control-M, so you can follow everything through the entire process. We kick off all SQL jobs using Control-M. They had been using SQL to launch plain batch files that had nothing to do with SQL; they were just scheduling them through SQL.

What is most valuable?

The capabilities of auditing have been great. 

The ease of use is one of its great aspects. It's very easy to use and very easy to pick up. 

It's got an excellent graphical interface. I haven't seen that in anything else that I've looked at, however, that said, I haven't looked at many lately. 

I know that in 20 years, I have had probably two problems where I've had to call the company to get immediate assistance from them, where we had a system down or something. Its performance is very reliable.

It integrates with other applications. You can use PowerShell, you can use Perl, you can use whatever. It doesn't really care. It's just running a process.

The product scales quite well.

Technical support is very helpful and available 24/7.

The stability is excellent.

What needs improvement?

I will say that at one time we ran on Solaris rather than Windows, but we were taken over by a company that decided everything had to be on Windows. We put Control-M in at the previous company, and then we were more or less handed over to the current bank by the FDIC during the 2009 banking crisis. At that point, they wanted us to implement their solution, which was rudimentary at best. It was a CA product that did not meet our needs, and I could not convert what we had in Control-M to run on that system at the time.

While they have a very good reporting facility, the reports I'm asked to produce often aren't what we need; they need to be better customized, and I haven't been able to produce the right reports through their reporting facility. I was a Perl and C programmer at one time, and Perl worked right in there, so a lot of our reports were written in Perl. They don't like that at all now, because Perl isn't ideal for our company.

I can't get to the database tables I want to get to. The database tables they allow me to get to aren't the ones I'm looking for, as, usually, I'm going right into the database, into the raw database, and pulling things out for the reporting I need. I can't do that through their reporting facility, Crystal Reports.

For how long have I used the solution?

We've been using the solution for about 20 years, since around 2000 or 2001.

What do I think about the stability of the solution?

We've had issues only twice in 20 years. It is very stable, and I will say that they have improved it. Originally, when we put in the Windows version, we had problems with the database they were using at the time, which was a Postgres database. Then, at one point, we decided to run it on Solaris. We had it on Solaris for six years, and in that time I don't think we ever rebooted the server. It ran without any hiccups or problems; the Solaris system was rock solid.

Now, any problems we run into are Windows-based problems. For example, if you don't reboot a server once a month, which thankfully we do, you can have issues. We have to patch monthly now, so we reboot every month. If we went two or three months on Windows without a reboot, we would start seeing some problems, and rebooting took care of them.

That said, that's a Windows problem, not so much a Control-M issue, as we see problems on Windows servers that run for two or three months in any application.

What do I think about the scalability of the solution?

Right now, we are running on their small database model. We, at one time, had about 2,500 jobs, and we were on a medium model then. Now, we're down to about 800 jobs a day. It's just a matter of the requirements we have. In terms of scalability, it scales up very nicely. It works very well. You can have multiple servers if you need multiple servers. Currently, we have one Control-M server and one EM server. We used to have two Control-M servers and one EM - EM being the enterprise manager, which is really what's running the system. The Control-M servers basically take care of the current runs, what's currently running on a system. Adding more jobs and adding more resources to it is not a problem.

It does high availability, but we don't use it because we have another solution. We run everything in a virtual environment and take regular snapshots, which are replicated from our production site to our DR site. If we lose the production site, we bring up the latest snapshot at the DR site. It's up and running within minutes, literally; it's just a matter of going in and saying, "Bring these servers up," and they come up.

Currently, we've got three schedulers using the solution. They have more or less god rights, although they can't change user permissions. Those three schedulers can do anything with the jobs: delete, add, create, whatever. We have about 10 operators with access as well, in a somewhat reduced role compared to the schedulers. They can do a lot of it: bring in jobs, rerun jobs, kill jobs. However, there's a lot they can't do. Then we have about 60 users who are developers, and they're basically read-only; they can see the jobs and what happens.

A lot of that has to do with corporate decisions on control. They didn't want the developers to be able to define jobs and items of that nature. They wanted developers to define a job through a worksheet, and then the schedulers would actually implement it. That's just a matter of policy, and the developers monitor their jobs that way. I'm trying to get a policy change so that developers can at least bring in their own jobs for test, not for production. If they could do that, it would greatly enhance their ability to get testing done. The downside is that you might have a developer that just keeps running a job over and over again, which I've seen happen too.

Personally, I can do everything in test. I can't do anything in production at all, except view jobs; I have read-only access on everything in production, except for the configuration part, to which I have full rights. I used to almost be a fourth scheduler at one time, but at this point there's no need. The limits of my job have been redefined several times.

Overall, the usage of the product in the company is very extensive. There's not a part of our daily businesses that's not reliant upon Control-M. If Control-M was done, the company would be at a standstill, literally.

That said, we likely won't increase usage. We just merged with another organization, and it's debatable how these things go. They have about 5,500 jobs. We used to have that many jobs as well; however, the business drives what we do.

How are customer service and technical support?

The technical support is probably the best I've ever worked with.

If we are down and I need support help, they get back to me, if not immediately, then within an hour, 24/7, and we're usually back up within an hour of the first contact. They also help greatly with planning upgrades. They have a group called the AMIGO group that does nothing but migrations and upgrades. I need to get with them in the near future to go over my plans for transitioning from the old servers to the new servers. They will verify that what I'm doing is the right way to do it, and if it's not, they will tell me how to do it, which is an excellent resource.

They have a very large knowledge base. It's integrated with everything I've ever had to have it integrate with. Their support's been very good.

When I call BMC, I get an immediate response. I've supported other products where I've called the company and been on hold overnight. I've literally gone home for the night, left my phone on my desk, off the hook and on hold, and come in the next morning still on hold, listening to the hold message because support hadn't answered yet.

Which solution did I use previously and why did I switch?

We have recently merged with a company that uses Tidal, and of course, they want to hang on to theirs. We use Control-M. I've actually used several other scheduling products in the past, however, we've been on Control-M now for over 20 years.

How was the initial setup?

I'm actually in the process of doing an implementation right now. I'm replacing our current production system; the servers are at EOS, so I'm doing a straight install of everything on the new servers. It is very straightforward. The install is not really difficult; it's fairly simple if you understand how databases work. There's really no problem doing it.

In my case, I can bring up a Control-M server within hours. I only say that because I've done it: we were not DR-prepared back during Hurricane Sandy, and I had to bring up a production version at our DR site in Cleveland. Within 24 hours, we were up and running. So if you need it done fast, it can be done; it's just a matter of whether you're willing to put in the work to do it.

It's a fairly easy install, really. I personally have never had any training on Control-M; other people in my organization have. That said, I'm the one who put it in and read the manual. All of my knowledge came from reading the manuals and working directly with the product.

What's my experience with pricing, setup cost, and licensing?

I can't speak to what our support costs are. That's out of my realm at this point. At one point, I had an idea, however, I couldn't even tell you what that is anymore. I know that our licensing is based on jobs. We buy licenses based on the number of jobs. Currently, we have about 2,500 licenses. We used to run more jobs than we do right now. We did not get rid of those licenses. 

It's basically $100 a job, give or take.

They also don't charge us for items such as the plugins for MFTP, which we don't use, although we could. They wouldn't charge us for Oracle, SQL, or Informatica. It's a reporting product. 

There's no licensing for the server, there's no licensing for the EM server. All that stuff comes as part of the product. It's all-inclusive.

From what I've seen and heard from the other company about Tidal, that's where they're making their money from - the plugins. Whereas Control-M doesn't charge us. The plugins are basically free for us. I'm sure there is a charge for support every year. I have no idea what that is. I don't get down into that level.

I just tell them, "Yes, we need this" and then the purchasing staff takes care of the actual details.

Which other solutions did I evaluate?

At the time we were looking for a product, I looked at five or six different scheduling packages. By far, at that time, Control-M was leaps and bounds above all the rest of them.

What other advice do I have?

We're customers and end-users.

We're using the latest version of the solution.

By far, BMC, from what I have seen, is the industry leader and they are the Cadillac of scheduling. I've worked with a lot of different scheduling systems over the years. When I first got into IT, years and years and years ago, as a JCL programmer, basically you had access to the scheduling system and you took care of the jobs. When jobs failed, you would do the restarts on them, do whatever fix needed to be done, and get them restarted, and get them to rerun. That was on a mainframe. 

I've used Cron, and I've worked with a number of different schedulers. In the Windows world, other than AT scheduler and Control-M, that's about all I've ever used. I did review five different products back when we put this in.

Having worked with so many products, and with this one for so long, I can advise that new users should follow the installation instructions and notes. They're very simple and straightforward. I would also advise others not to be scared off by the price. Initially, the pricing seems rather steep compared to some of the others; however, they all have their pricing quirks, and they're all making money in one way or another. The way they make their money is based on the way they license it, and the per-job style actually works out very well.

I'd rate the product at a perfect ten out of ten. It has been one of the most stable products that I have supported, and I have supported a lot of different products. I've had fewer problems with it than I have with just about anything else I've supported. 

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Ramesh Subudhi - PeerSpot reviewer
Analyst at a financial services firm with 10,001+ employees
Real User
Our batch jobs are automated, so we can check our dependencies with minimal manual intervention
Pros and Cons
  • "Our data transfers have improved using Control-M processes, e.g., our monthly batches. When we used to do things manually, like copying files and reports, we used to take three to four days to complete a batch. However, with the automated file transfers and report sharing, we have been able to complete a batch within two and a half days and our reports are on time to users. So, 30% to 40% of the execution time has been saved."
  • "After we complete FTP jobs, those FTP jobs will be cleared from the Control-M schedule after the noon refresh. So, I struggle to find out where those jobs are saved. Then, we need to request execution of the FTP jobs again. If there could be an option to show the logs, which have been previously completed, that would help us. I can find all other job logs from the server side, but not FTP job logs. Maybe I am missing the feature, or if it is not there, it could be added."

What is our primary use case?

Most of my work goes through Control-M, e.g., all my development work. When it goes to production, it moves to batches. This will be either daily or monthly batches.

There are many applications running in Control-M, e.g., a quantitative risk management ALM application.

Most of our production jobs at the organization level run through Control-M, either as mainframe jobs, Informatica jobs, or QRM software-related jobs. File sharing through FTP jobs and the dependency setups between different software batches also all run through Control-M.

How has it helped my organization?

We use file transfer jobs in our workflows. For example, if I want to share reports with end users in the production shared area, where specific users have access, Control-M makes this very easy as soon as a job is complete. The FTP job copies the report to a defined shared area and triggers an email to the user with a link. As soon as users are notified through email, they can open it and click on the shared link to view the reports.
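Conceptually, that post-job step is a copy to the shared area followed by an email containing the link. A minimal sketch of the same pattern follows; the paths, SMTP host, and addresses are placeholders, not our actual environment.

```python
import shutil
import smtplib
from email.message import EmailMessage
from pathlib import Path

SHARED_AREA = Path("/prod/shared/reports")  # placeholder shared area
SMTP_HOST = "smtp.example.com"              # placeholder mail relay

def publish_report(report_path: str, recipients: list[str]) -> None:
    """Copy a finished report to the shared area and email its location to users."""
    target = SHARED_AREA / Path(report_path).name
    shutil.copy2(report_path, target)

    msg = EmailMessage()
    msg["Subject"] = f"Report available: {target.name}"
    msg["From"] = "batch@example.com"
    msg["To"] = ", ".join(recipients)
    msg.set_content(f"The report has been published to the shared area: {target}")
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)
```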

We have automated critical processes with Control-M. Our report deliveries are now automated. We automated our batch jobs and can check our dependencies through Control-M with minimal manual intervention. This has saved a lot of time and prevented manual mistakes. For example, we used to copy old reports and send them via email, and users would come back to us saying, "These are not this month's reports. These are old reports." After automating these reports with Control-M, there have been no errors at all.

What is most valuable?

Multiple software packages can be coordinated through Control-M, and we can seamlessly monitor them once they go into production after a scheduled daily or monthly deployment. Even though we don't have any privileges to change these jobs, we can monitor them with read access and see how they are being executed. We can also verify their dependencies and see the logs. If there are any failures, we can get the logs from Control-M and fix the issues in the development environment, in cases that need to be addressed as soon as possible. It provides a complete picture of how the batches are running in production.

We have a lot of things that need to be considered, and everything needs to be done one after another. Control-M provides a pictorial representation of job dependencies that even a person without technical knowledge can understand. So, we can say exactly when a job can start and update users on the expected completion time. In case of any delays, we can understand them and then provide a new ETA to the users. Without Control-M, it would be difficult to provide these estimates.
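To make the "one after another" idea concrete outside of Control-M, the same dependency ordering can be sketched with Python's standard library; the job names below are made up for illustration.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical jobs mapped to the jobs they depend on.
dependencies = {
    "extract_positions": set(),
    "load_warehouse": {"extract_positions"},
    "run_risk_calc": {"load_warehouse"},
    "publish_reports": {"run_risk_calc"},
}

# static_order() yields a valid execution order, which is essentially what the
# pictorial dependency view in Control-M conveys to non-technical users.
print(list(TopologicalSorter(dependencies).static_order()))
# ['extract_positions', 'load_warehouse', 'run_risk_calc', 'publish_reports']
```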

We are using the web interface. We are not going through mobile because we are a bank; everything we do is through our laptops, not a mobile device. The web interface supports our business initiatives well. Whenever we want to see updates, we connect to Control-M. We know what needs to be monitored and verify it based on its dependencies. If the batch is still running, we can look at the historical information, then calculate and provide an ETA to users.

What needs improvement?

After we complete FTP jobs, those jobs are cleared from the Control-M schedule after the noon refresh, so I struggle to find out where they are saved, and we need to request execution of the FTP jobs again. If there were an option to show the logs of previously completed jobs, that would help us. I can find all other job logs from the server side, but not FTP job logs. Maybe I am missing the feature, but if it is not there, it could be added.

When integrating different projects through Control-M, sometimes dependencies cannot be identified. 

For how long have I used the solution?

I have been using Control-M for almost six years.

What do I think about the stability of the solution?

I have never faced any issues with stability. It is very good.

10 to 20 people are administering it.

What do I think about the scalability of the solution?

I have never faced any issues with its scalability.

500 to 600 people are actively using Control-M. These are business analysts, team leads, managers, developers, and senior developers. Anyone who is touching the development and production would have access. 

How are customer service and technical support?

Whenever we have issues, they are resolved through our organization's admin.

Which solution did I use previously and why did I switch?

With the integrated file transfer feature, most things are automated. Previously:

  • We used to copy the report, then send manual emails. However, with this feature, we are able to complete tasks with minimal monitoring because they are automated. Users are automatically notified as soon as the reports are complete. 
  • We used to work during the daytime and after business hours, forced to log in and check that the reports were there, or we waited until the next day to copy the reports and send them by email. With this feature, we are less bothered: some reports are generated and emailed to users during the night, and we just go into the office the next morning and confirm that everything has been shared and is okay.

The integrated file transfer feature has saved us a lot of time and manual effort, approximately two to three hours a day. Also, users are notified as soon as the reports are complete, where they used to wait until the next morning. They can check their email on their office-provided mobile, then connect to their laptops and get the reports. So, if they are waiting for the reports, they no longer have to wait until the next morning to receive them, saving about 10 hours of their time.

How was the initial setup?

I was not involved with the initial setup. That was before my time.

What was our ROI?

Our data transfers have improved using Control-M processes, e.g., our monthly batches. When we used to do things manually, like copying files and reports, we used to take three to four days to complete a batch. However, with the automated file transfers and report sharing, we have been able to complete a batch within two and a half days and our reports are on time to users. So, 30% to 40% of the execution time has been saved.

Control-M has helped us achieve faster issue resolution. Whenever we come across any data-related errors, instead of going into the process, we just get the Control-M log. Nearly 50% of our issues are resolved by looking at the Control-M logs. 

Control-M has helped us improve Service Level Operations performance by 30%, because we no longer need to manually copy reports, and email notifications are sent automatically. So, the process has improved a lot.

What other advice do I have?

Organizations looking for seamless integration with different applications can move forward with Control-M. In my experience, Control-M provides a good solution. It also integrates with different applications and software.

At this point, we are not using the solution's streamlining for data and analytics projects.

I would rate it as eight out of 10.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Junior Unix Specialist at Oy Samlink Ab
Real User
Multiple scheduling options allow you to do anything you want, whenever you want, and however you want
Pros and Cons
  • "The multiple scheduling options allow you to do anything you want, whenever you want, and however you want. You can easily be in control when things happen."
  • "The unifying features between Control-M for different platforms need improvement. The scheduling options on the Control-M mainframe jobs are different than they are on our Linux server. There are a few differences here and there."

What is our primary use case?

We are using Control-M mainly to schedule our jobs and also for file transfers. We are now in the process of using Control-M to take some workload off our mainframe. 

We use it mainly for job automation and handling large chunks of data automatically.

We have Informatica workflows, which make up about 50% of all our jobs. Then, we have all kinds of software on Windows and Linux servers. The file transfers are another big thing on Control-M. However, we are mainly using it to automate our in-house scripts, like monitoring and whatever needs to be done.

We mainly use desktop clients. Some users are also on the web. Currently, we don't use the mobile interface at all.

How has it helped my organization?

We have some batch jobs or Informatica workflows that create the files for file transfers. We have those on Control-M, so it is all automated and happens through the conditions.

Files for our customers' daily account and credit card actions are processed by Control-M automation every day. That is pretty much part of the core of our business. Other critical components are some monitoring scripts and health checks on our servers, which are run from Control-M. This has made things easier because we have Batch Impact Manager on Control-M. We can use it to send emails such as, "We haven't received the daily files yet," or, "The daily files are going to be late." Therefore, we have proactive monitoring if things aren't running on schedule.

I don't think it transfers data any faster than before. However, we now have better control, and we can send emails to the correct people directly from Control-M, like, "Hey, this transfer is now complete." In terms of data transfers, if something goes wrong, it is easy to just rerun the file transfer.
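A manual rerun of a failed transfer is effectively a bounded retry. As a small, generic sketch of that pattern (the retry count and delay are arbitrary placeholders):

```python
import time

def run_with_retries(transfer, attempts: int = 3, delay_seconds: int = 60):
    """Re-run a flaky transfer a few times before escalating, mirroring a manual rerun."""
    for attempt in range(1, attempts + 1):
        try:
            return transfer()
        except Exception as exc:  # in practice, catch the specific transfer error
            if attempt == attempts:
                raise
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay_seconds}s")
            time.sleep(delay_seconds)
```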

The Batch Impact Manager has caught cases where a job has been running for a while and may not meet its deadline, for example, when there is a loop somewhere and one job has been stuck for a few hours. In those cases, the Batch Impact Manager notifies us that it is taking quite long. On some days, this is useful for locating issues.

What is most valuable?

Multiple scheduling options allow you to do anything you want, whenever you want, and however you want. You can easily be in control when things happen.
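As a rough illustration of what such a scheduled job definition looks like, here is a Python dict shaped like a Control-M Automation API job with a "When" block. The host, account, script path, and exact field names are from memory and should be verified against BMC's documentation for your version.

```python
# Approximation of an Automation API job definition; treat field names as indicative only.
nightly_load = {
    "NightlyLoad": {
        "Type": "Job:Command",
        "Host": "linux-agent-01",             # placeholder agent host
        "RunAs": "batchuser",                  # placeholder run-as account
        "Command": "/opt/scripts/load_warehouse.sh",
        "When": {
            "WeekDays": ["MON", "TUE", "WED", "THU", "FRI"],
            "FromTime": "0200",
            "ToTime": "0600",
        },
    }
}
```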

Control-M provides us with a unified view where we can easily define, orchestrate, and monitor all our application workflows and data pipelines. This is quite important because I am our Control-M administrator. So, it is pretty important to me personally, but also for the company. It may not yet be quite in the center of our business, but we are clearly using Control-M as our main scheduling program.

What needs improvement?

Since we are using version 9.0.18, the web interface is a bit outdated and doesn't really support all our needs. However, we are migrating to 9.0.20, which should give us a lot more options, even in the web interface.

The unifying features between Control-M for different platforms need improvement. The scheduling options on the Control-M mainframe jobs are different than they are on our Linux server. There are a few differences here and there.

There are compatibility-related issues between versions, but I think the latest fix pack has that covered. BMC has been doing a pretty good job with this.

For how long have I used the solution?

I have been using Control-M for two and a half years.

What do I think about the stability of the solution?

The stability is pretty good. We haven't had any issues with Control-M being unstable in the last two years. They are up and running 24/7.

One person is the minimum needed for day-to-day administration of Control-M. We have three admins, who are also our SFTP and file transfer team. Someone decided that we should be the Control-M admins, so all three of us went through the admin classes.

What do I think about the scalability of the solution?

Scaling has been pretty simple and a straightforward process. We just recently got the Control-M Workload Change Manager, which is an additional plugin to the main software. That installation was also quite easy. We got it up and running pretty quickly.

We have about 10 people using Control-M actively, who are system specialists and business intelligence specialists. We have three admins, then we have some batch job designers from the mainframe team using Control-M. We have also trained some of our Informatica people so they can monitor their own workflows and create new jobs. They can basically do whatever they need to do by themselves. 

How are customer service and technical support?

I would rate their technical support as five out of five. They have been really helpful and knowledgeable. Even though there have been some cases where support has originally said, "Well, we don't know for now," they have asked for data and provided us with a solution pretty much every time we have had any issues. 

If they don't have a solution on hand, they take it to the lab. We communicate with them and the lab, and then everything works out pretty well. Even if there is a big issue, which isn't very common, they have just taken it and said, "We will see. We will take it to the lab and test."

The interface guide and YouTube videos have been somewhat useful. However, there is too much data in there. When you try to search something, you get too many search results that weren't exactly what you were looking for.

Which solution did I use previously and why did I switch?

I don't think anything has changed that much. We used to have CA-7 before Control-M, and now Control-M is just kind of taking over. So, not much change happened; it is just new software to do the old job.

We have benefited from Control-M. It is much easier to use and a bit more versatile than CA-7. 

I personally don't use CA-7 because it is located on the mainframe, and I'm not a mainframe guy.

How was the initial setup?

I wasn't involved in the initial setup of Control-M.

What about the implementation team?

We are currently in the process of upgrading Control-M into a new version. We have been working closely with BMC's technical people. 

What was our ROI?

So far, I think it has been good. No one has been talking about getting rid of Control-M. It is more like we are increasing our Control-M usage, if anything.

Control-M has improved our service levels in pretty much every aspect. Now, we can see Control-M's estimates of when a certain job will be completed. They become pretty accurate once a job has been running for a week or two, so it can predict quite well when a certain job will be ready. If a customer asks us, "When are we going to receive our file?" I can check in Control-M, then say, "I would say around..." whatever time it shows and let them know.

Which other solutions did I evaluate?

We have the CA-7 on the mainframe, and I have seen it being used along with Control-M. Control-M seems to offer a much better user interface, mainly because it is graphic and not on the black screen of a mainframe session.

I don't think our data analysts are currently using Control-M. We do have Informatica software in use, which is some sort of data analyst software.

What other advice do I have?

Always make sure that you have at least double checked everything, because Control-M does everything you tell it to do and exactly as you tell it. Therefore, make sure you are giving the right orders.

Working with Control-M has been pretty complex, but that has been mainly due to our corporate policies since we are located in Finland and in the banking sector. So, there are hundreds of things that we had to consider. While it has been a complex process, it has been more because of our corporate policies rather than Control-M. Once we decided everything, and everything was approved, just taking Control-M into use has been a pretty straightforward process.

Definitely take the scheduler course provided by BMC. That was hugely helpful for all of us. Trying to learn Control-M on your own will be a tough path to walk.

We have Control-M on the mainframe. As the mainframe will be taken down in a few years' time, we have to replace the mainframe scheduling agent with something else. That will be Control-M.

Our dev teams are running their own fields. Once they are ready, they go through systems to store into production, then we can automate it. However, during DevOps and other testing phases, we may not use Control-M at all.

I would rate Control-M as a nine out of 10.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
IT - VP at a financial services firm with 10,001+ employees
Real User
We have a better picture of our auditability
Pros and Cons
  • "We have a better picture of our auditability. When someone comes to us and asks about sources, such as "How did the deltas occur?", we can provide answers quickly, or at least more quickly than we used to. We are actually sure of the information that we provide, where before it was like, "Hmm, I think it comes from over there. Let me double check, but it gets really convoluted over here and I think that is where it comes from." Now, if it is within the Control-M environment, there is a straightforward answer that we can provide with confidence."
  • "The community and the networking that goes on within that community need improvement. We want to be able to reach out to an SME and say, "Hey, we are doing it this way. Does that make sense?" Ideally, they come back and say, "Yes, it does make sense to do it that way. However, if you want to do it this way, then it is a little more efficient." We understand that one solution framework doesn't fit everybody. Depending on the breadth of the data and how broad it is, you may have different models for one over the other."

What is our primary use case?

It is controlling our workflows, ingesting data, and then putting it up into our database platforms. In turn, those are consumed by our internal clients.

We do integrate Control-M Python Client and cloud data service integrations with some of our cloud providers. We have pipelines going out to the public cloud and some pipelines that are internal.
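For the API side of these integrations, a typical interaction is to authenticate against the Automation API endpoint and then query or run jobs from a script. A minimal sketch with requests follows; the endpoint paths, parameters, and response fields follow the Automation API as I recall them and should be checked against the documentation for your version.

```python
import requests

AAPI = "https://controlm.example.com:8443/automation-api"  # placeholder endpoint
CREDS = {"username": "apiuser", "password": "secret"}       # placeholder credentials

session = requests.Session()
login = session.post(f"{AAPI}/session/login", json=CREDS, timeout=30)
login.raise_for_status()
session.headers["Authorization"] = f"Bearer {login.json()['token']}"

# Ask for the status of the ingestion jobs feeding the cloud pipelines.
status = session.get(f"{AAPI}/run/jobs/status",
                     params={"jobname": "ingest*", "limit": 25}, timeout=30)
for job in status.json().get("statuses", []):
    print(job.get("name"), job.get("status"))
```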

We have public and private cloud channels as well as on-prem. The expectation for most large financial institutions is that we will get 99.9% to the public cloud eventually. We want everything to be in OpEx as opposed to CapEx. We don't want data centers. We just want access to our data and to be able to turn it into information, which in turn, turns it into actionable items. Ideally, we would love to not support any on-prem or hybrid solutions, having everything be public.

How has it helped my organization?

Control-M has improved our visibility and streamlining. We have better clarity into data flows. We can resolve issues faster by not trying to reverse engineer what pipeline the infraction may have come through. We are not completely there yet, but we have better clarity and visibility. 

We have a better picture of our auditability. When someone comes to us and asks about sources, such as "How did the deltas occur?", we can provide answers quickly, or at least more quickly than we used to. We are actually sure of the information that we provide, where before it was like, "Hmm, I think it comes from over there. Let me double check, but it gets really convoluted over here and I think that is where it comes from." Now, if it is within the Control-M environment, there is a straightforward answer that we can provide with confidence.

Our audit preparation process is faster. When questions come in about flow, data, or sources, we don't have to try to reverse engineer anything anymore. We are able to go straight to Control-M and find out what the flow is or what happened. The visibility is there. We can see the endpoint and answer questions such as, "What is the reverse flow on it? Where did it come in? Where did that data flow come from?" So, it is not a spaghetti mess anymore. This makes auditability easier. We are able to provide answers more quickly, which in turn makes the audit process quicker.

Control-M has improved our business service delivery speed. It is more reliable and has accelerated our release schedules. We are also working on testing standards, and it has shortened the window for getting things live, not necessarily to market, but live.

Control-M is critical to our business. If the support ends, we are at risk in some of our critical flows. We have redundancy around it that has been purposely built. We do that with all of our solutions. That way, we are not tied into one specific vendor, then if something happens tomorrow, we don't have a fire drill. We have things in place, but to a certain extent, there is heavy reliance on this solution.

What is most valuable?

The most valuable feature is the Self Service tool. They have metrics in place almost all across the pipeline, which is really nice. 

What needs improvement?

We are not yet really a power user of it. You can take as many training classes as you need, but it is not until you are working with a subject-matter expert (SME) on it that you can find out how you can really make this tool sing. My engineers know how to work Control-M. However, if I ask them, "Oh, is this the most efficient way of doing it?" They may not be able to say, "Yes." It is doing what we want it to do. That is nice and okay, but is it the most efficient, effective way? So, we are not there yet.

For how long have I used the solution?

I have been using it for about four years.

What do I think about the stability of the solution?

The platform is good. We haven't had any major outages. The stability is there.

What do I think about the scalability of the solution?

We really haven't pushed it to any of its limits. No scalability concerns have come up for what we are doing.

If you came to me, saying, "Hey, I was looking at Control-M, but it has some issues." I am going to sit there, and go, "Tell me what the issue is." Right now, we are not using the far reaches of whatever cloud providers are out there. Control-M does well with the major providers.

How are customer service and support?

The community is not as robust as some of our other tools that were replaced. The problem was the other tools that we were using didn't do everything that Control-M is now able to do, like monitoring and the entire pipeline flow.

The community and the networking that goes on within that community need improvement. We want to be able to reach out to an SME and say, "Hey, we are doing it this way. Does that make sense?" Ideally, they come back and say, "Yes, it does make sense to do it that way. However, if you want to do it this way, then it is a little more efficient." We understand that one solution framework doesn't fit everybody. Depending on the breadth of the data and how broad it is, you may have different models for one over the other.

I would rate the technical support as seven or eight out of 10.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

We had a patchwork set of solutions in place that were getting the job done. The problem with that was we had a lot of SMEs within certain verticals. Therefore, there wasn't one overall picture. Every time we went from one step to another step, we had to start talking to another person to figure out what was going on. So, we were trying to bring everything together under one solution with Control-M.

We are able to have a better picture of our data consumption, e.g., what files or data are brought in. Previously, we would ingest data at different points. The question that would always come back to us would be, "Where did this data come from?" Then, we would always have to reverse engineer and keep some documentation on it, but the documentation would be outdated. Someone would change the pipeline and forget to change the documentation. With Control-M, we can see everything in one location. To a certain extent, it takes the place of documentation.

I am an engineer by trade. I have been doing this for over 30 years. I know that it is nice when someone puts together a document describing the environment, but as soon as that document is saved, it is outdated.

We don't throw another tool into the toolbox just because it is a nice pretty tool. We try to figure out what the benefits are. Ideally, in our world, we try to reduce the number of tools because I don't need 50 different screwdrivers in my tool kit. I make sure that I have a flathead and a Phillips, but I don't need 50 screwdrivers. Here, we brought in this solution and it replaced some existing solutions. Now, my engineers don't need to know X number of products. They only need to know half of X number of products.

What about the implementation team?

The tool was vetted by another group before making it available to the organization and putting it into our toolbox. Then, when it was available, we looked to leverage it.

What's my experience with pricing, setup cost, and licensing?

One of the restrictions that we had was with some of the licensing, and I don't have any insight into the financial side of the product. I don't know what the licensing on the product is, but we don't have an unlimited enterprise license. So, there might be a limitation on either the cost of the licensing or the number of seats.

What other advice do I have?

There is always a learning curve any time you are using a new product. Our engineers who are using Control-M are kind of happy with it. There really are no negatives to its learning curve. I am always wary of new products, since each one is another thing that someone needs to learn, but there are now other products that we don't use because of Control-M. What I would not be open to is bringing in another product where we need our engineers to learn how to work it and make it efficient while also supporting other products already in our environment. So, I like that we can get rid of three or four products and replace them with a single product. As long as the learning curve is not too steep, that is an advantage to me.

We are looking into using Control-M to deliver analytics for complex data, i.e., having the solution do either machine learning or complex analytics on top of the data flow. While we do some analytics, it is not yet to the extent that we really want.

I would rate this solution as a high seven or low eight (out of 10).

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Architect at a financial services firm with 1,001-5,000 employees
Real User
Provides a holistic view of jobs, a nice interface, and offers lots of plugins
Pros and Cons
  • "The Control-M interface is good for creating, monitoring, and ensuring the delivery of files as part of our data pipeline. There's a wealth of information in both the full client, as well as the web interface that they have. Both are very easy to use and provide all the necessary material to understand how to do various tasks. The help feature is very useful and informative and everything is very easy to understand."
  • "Some of the documentation could use some improvement, however, it gets you from point A to point B pretty quickly to get the solution in place."

What is our primary use case?

We primarily use the solution for automation, orchestrating and automating the workloads, and being able to schedule tasks. Prior to Control-M, we were running jobs manually: there was either a scheduled task in Windows Task Scheduler, or we'd have a script laid out that someone would have to run through manually on a daily basis.

We learned about Control-M and felt that it could take over that process and have it automated, while also providing some monitoring and notifications so that if something did fail, we could easily be notified and keep track of it.

How has it helped my organization?

It provides a holistic view of jobs that are scheduled to run. We haven't done full production on it yet. Hopefully, we'll be in production by July or August this year. That said, so far from what we can see, it's going to free up some time for certain staff that has been running these tasks manually overnight. Now, if someone gets notified of an issue, then they can address the issue. In the long run, it'll free up some time and resources to focus on other tasks. 

What is most valuable?

I like the interface, including how I can see everything and how I can put the jobs together. Depending on experience, I can either use the GUI or the command line to create jobs based on JSON scripts. It provides that flexibility for someone who has no experience using Control-M as well as for someone who's a full-blown developer and can get very complex with creating these jobs. Generally, it provides a good interface for everyone at different levels of experience.

Control-M doesn't really process data as far as I can tell; it orchestrates other scripts. From what I understand, Control-M doesn't ingest or analyze any data itself. It's a tool that helps with the processing of data on different platforms. I can tell it to run a script on one server to send the data over to another SQL Server or a different platform, Power BI for example, then run a script on Power BI so that it ingests the data when it gets there and does what it needs to do. Once that's finished, I can send it to another platform to put a dashboard together based on when that data is available.
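
To make that concrete, here is a minimal sketch of the kind of JSON job definitions the command-line workflow uses, loosely based on the Control-M Automation API format. Everything in it is hypothetical: the folder, job names, hosts, user, script paths, and the "SalesDataReady" event are placeholders, and exact field names can vary by version, so treat it as an illustration rather than a definitive definition. The first job runs an extract script on a database server and raises an event; the second job waits for that event and then refreshes the BI side.

    {
      "NightlyLoadFolder": {
        "Type": "Folder",
        "ExtractSalesData": {
          "Type": "Job:Command",
          "Host": "sql-server-01",
          "RunAs": "ctmagent",
          "Command": "powershell -File E:\\scripts\\extract_sales.ps1",
          "eventsToAdd": {
            "Type": "AddEvents",
            "Events": [{"Event": "SalesDataReady"}]
          }
        },
        "RefreshPowerBI": {
          "Type": "Job:Command",
          "Host": "bi-server-01",
          "RunAs": "ctmagent",
          "Command": "powershell -File E:\\scripts\\refresh_powerbi.ps1",
          "eventsToWaitFor": {
            "Type": "WaitForEvents",
            "Events": [{"Event": "SalesDataReady"}]
          }
        }
      }
    }

A definition like this can typically be validated and pushed from the command line (for example, with the Automation API's ctm build and ctm deploy commands), while the same flow can be assembled visually in the GUI, which is the flexibility described above.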

Once one understands the process of how it functions, it's pretty simple and straightforward to create, integrate, and automate the pipelines. There is a learning curve to understand how it all works, all the components, and all the requirements for parameters and different options. However, it's pretty simple once someone has a basic understanding of how it all works.

The Control-M interface is good for creating, monitoring, and ensuring the delivery of files as part of our data pipeline. There's a wealth of information in both the full client, as well as the web interface that they have. Both are very easy to use and provide all the necessary material to understand how to do various tasks. The help feature is very useful and informative and everything is very easy to understand.

It’s great that Control-M orchestrates all our workflows, including file transfers, applications, data sources, data pipelines, and infrastructure with plugins. There are a lot of plugins, and we haven't used all of them yet. Primarily, we've only used the file transfer plugin, the Azure file service, and Azure Functions, and the developers have used those to put the various tasks and workloads in place. While we haven't fully utilized everything in Control-M yet, we're learning how to use the various functionalities and transitioning from our legacy scripts and data sources.

What needs improvement?

Some of the documentation could use some improvement, however, it gets you from point A to point B pretty quickly to get the solution in place.

For how long have I used the solution?

I've been using the solution for almost a year. 

What do I think about the stability of the solution?

It seems stable. I haven't rolled the solution out to a very large environment yet. The solution we're working on right now seems to be working fine. All the issues we've seen have to do with us figuring out connectivity between Control-M and the cloud services, however, I haven't had any experiences with issues around stability with Control-M.

What do I think about the scalability of the solution?

Right now, it's a small deployment and we have it in four environments. We have it in our dev, QA, UAT, and production environments. Right now, there are two application teams that are using Control-M, however, we have another two or three teams that are looking to get onboarded.

It's pretty scalable. I haven't done a deep dive into its scalability, and we haven't identified a need yet to scale out. It seems pretty scalable, but I can't speak from personal experience yet.

How are customer service and support?

It was a challenge to get the direction on how Control-M should be implemented. As we learned about new requirements from the customer, implementing those with help from the engineers at BMC was hard. The third-party contractors were one issue, however, when I escalated it to our customer representative, he was able to get me in touch with a dedicated BMC engineer and she was able to give me the information I needed and provided the context and direction on the best approaches. I wasn't able to use the third-party engineer that was assigned to us, however, the internal resource was a great partnership to help move this along.

How would you rate customer service and support?

Neutral

Which solution did I use previously and why did I switch?

We were using Microsoft and internal tools. We used the basic Windows tools that were built in.

We went with this product to centralize the deployment and to centralize the management of all of the workloads.

How was the initial setup?

Some of the installation components were really complex. I'm more on the infrastructure side of Control-M: I deploy it and then get it ready for functional use so that the application developers, script developers, and workload developers can easily access it. It took me three weeks to figure out the requirements for getting the SSL certificates, as the documentation wasn't really clear on what those requirements were. Once we figured it out, it was simple; however, the support staff couldn't give me the right information to understand what was required.

It seemed like there was a gap in expectations on what was required for certificates. In terms of the installation overall, it wasn't clear what each variable or what each configuration point was referring to until we were well versed with how everything functioned. Then we were able to say, "Oh, this is what that field meant and this is what was required here." However, during the installation process, there was very limited information on what was being asked at each configuration point.

In terms of strategy, there was a challenge with the customer. I was the third or fourth resource that was brought onto the project. The first three people that handled it, internally and externally, had trouble figuring out what the expectations were. I was handed the baton at the last moment. I had to tie up loose ends and try to get this up and running for the CIO before he started to send up red flags to BMC.

What about the implementation team?

We had an integrator; however, setting up the timing with the integrator was a challenge. What I got from my company and the general expectations weren't clear. When I did get clarification, I wasn't able to get ahold of the contractor, since he required a week or two of lead time. We then ran behind because of the lack of information I got. Setting up the time and requirements was a challenge.

I'm also a contractor working for a customer. Being a third party, trying to work with another third party with minimal information from the client, was just a challenge all around.

What's my experience with pricing, setup cost, and licensing?

There was another team handling the pricing. I'm not sure of the exact costs. 

Which other solutions did I evaluate?

Our customer chose this solution. 

What other advice do I have?

We do not use the Control-M Python client and cloud data service integrations with AWS and GCP and we do not use Control-M to deliver analytics for complex data pipelines yet.

We haven't gone into production yet, so we haven't rolled this out to all our customers. We're still testing the features and we'll be starting the UAT in two to three weeks.

Right now, we're still in the early stages of rolling everything out. We've gone through the testing in our development environment and in QA to make sure things are good. Now, we're testing performance in UAT internally, and then we'll have customer validation within a few weeks before we go into production.

The solution will play a very critical role in day-to-day operations. However, it'll be at least two months before it becomes critical. Right now, it's still being implemented and evaluated.

It is pretty flexible with various cloud solutions, working with different cloud technologies and platforms. I would say potential users should take a look at it. It does provide a lot of flexibility, especially with the application and integration component that it has. The developers seem to really be able to get what they need out of the API or the application into an integrated product or feature set.

Before installing Control-M, have a sit down with the Control-M solutions engineer and make sure you share with them all of the details of what you'd like to accomplish before deploying the solution. My client just said, "We want this" and they didn't give us the details about what they were looking for. We ended up having to redesign a few features, as those items were not clarified as part of the installation. When I was brought on board, the customer didn't mention they wanted HA, so that came later. At that point, we had to reinstall and add more servers.

The person who signed the contract was focused on MFTE, which is the enterprise managed file transfer tool. However, the architecture team later decided not to use that and to go with another tool. Given that decision, the client could have gone with the SaaS version of Control-M instead of the on-premises version and saved a lot of time, money, and hassle on deploying the on-premises infrastructure. So my advice to others is to make sure that the needs and the functional usage of the tool are identified clearly before purchasing or implementing it.

I'd rate this tool ten out of ten. It does what it says it does. 

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Microsoft Azure
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
System Engineering Manager at a marketing services firm with 10,001+ employees
Real User
Provides a single pane of glass through the dashboard to determine if a backup is successful
Pros and Cons
  • "My organization has been able to script scheduled jobs in Control-M to potentially replace legacy products that are at end of life or end of service. The previous backup applications that were being used for specific files, folders, or applications were no longer being supported, therefore being able to use Control-M to replace that has been very valuable."
  • "The infrastructure updates could use improvement. Some of the previous updates that we have run to get to version nineteen were troublesome. So, a more seamless upgrade path for the infrastructure components would be useful. I don't know if they have replaced that in version 20 or if version 20 has an easier path, but I would like to see the upgrade from one version to the next version be a little smoother."

What is our primary use case?

We use it as a scheduling tool. We use it for infrastructure backups and running scheduled tasks, but nothing in regards to data analytics.

It is an infrastructure process behind the scenes: custom backups and custom file migrations.

How has it helped my organization?

We leverage Control-M for backups, which is a critical process that we have integrated. This allows teams that rely on the backups to have a single pane of glass through the dashboard to determine if their backup is successful. It allows email alerts or triggers if something fails or we need to do manual intervention.

My organization has been able to script scheduled jobs in Control-M to potentially replace legacy products that are at end of life or end of service. The previous backup applications that were being used for specific files, folders, or applications were no longer being supported, therefore being able to use Control-M to replace that has been very valuable.

We rely on Control-M for automation. Anything that previously would have been a manual or legacy effort, Control-M has been able to replace.

What is most valuable?

The scheduler allows you to pretty much run anything from anywhere. It is very convenient. The sensor reporting that the scheduler gives you can monitor hundreds of jobs that could potentially be running in a given hour.

All the scheduled tasks are available in a dashboard or workflow view that different teams leverage. This is important and great. Having the ability to have a dashboard or workflow allows for easier troubleshooting. We also have alerting set up through email triggers, which are very helpful.

We leverage it for file transfer. We don't necessarily have application workflows dependent on those, but we do use Control-M for the migration of files. The visibility of a successful transfer is very useful, e.g., the ability to report on it or see whether the job succeeded or failed in the dashboard. You have an alert that triggers on a failure, so failure handling is automated. The Control-M job can retry that file migration a number of times based on logic that you have programmed into the job, and avoiding manual intervention is useful.
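
As a rough illustration of the retry-and-notify logic described here, a file transfer job defined in the Automation API's JSON format can carry its own rerun and notification settings. Everything below is hypothetical: the connection profile names, paths, email address, and job names are placeholders, and the rerun and mail field names are approximations of the documented schema rather than an exact copy, so verify them against the Automation API documentation for your version before using anything like this.

    {
      "FileMoveFolder": {
        "Type": "Folder",
        "MoveDailyExtract": {
          "Type": "Job:FileTransfer",
          "ConnectionProfileSrc": "SFTP_SOURCE",
          "ConnectionProfileDest": "SFTP_TARGET",
          "FileTransfers": [
            {
              "Src": "/outbound/daily_extract.csv",
              "Dest": "/inbound/daily_extract.csv"
            }
          ],
          "Rerun": {
            "Times": "3",
            "Every": "10",
            "Units": "Minutes"
          },
          "IfTransferFails": {
            "Type": "If",
            "CompletionStatus": "NOTOK",
            "NotifyOps": {
              "Type": "Mail",
              "To": "ops-team@example.com",
              "Subject": "MoveDailyExtract failed after retries"
            }
          }
        }
      }
    }

The idea is the one described above: the job retries the transfer a few times on its own, and only if it still fails does anyone get an email, so manual intervention stays the exception rather than the routine.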

The alerts are helpful and can contribute to faster issue resolution in the event of an issue.

What needs improvement?

The infrastructure updates could use improvement. Some of the previous updates that we have run to get to version nineteen were troublesome. So, a more seamless upgrade path for the infrastructure components would be useful. I don't know if they have replaced that in version 20 or if version 20 has an easier path, but I would like to see the upgrade from one version to the next version be a little smoother.

For how long have I used the solution?

I have been using it for about five years.

What do I think about the stability of the solution?

The platform has been great. I don't think we have had any downtime besides our upgrade process.

What do I think about the scalability of the solution?

The scheduling process has been able to handle almost everything that we have asked it to do. It seems to be able to run pretty much anything from anywhere within our environment.

Which solution did I use previously and why did I switch?

This solution was a new integration/installation done before my involvement.

The application was a part of the infrastructure when I joined. We have been able to add automations for components that were otherwise manual. 

How was the initial setup?

The upgrades are a bit complex. The last time we did an upgrade, it took several hours.

What about the implementation team?

The upgrade was planned. We ran into an issue, and then we had to reach out to support. They were quick to respond, but the resolution did take several hours. They did a good job, and the issue was resolved in a timely manner during our upgrade window. Their service was an eight or nine out of 10 as far as issue resolution goes. To be a 10 out of 10, I would have liked something prescheduled; if we could have had support personnel available for the upgrade procedure, it would have been helpful. So, it was just the time element.

What was our ROI?

The product is helpful for its automation components.

What other advice do I have?

It is worth evaluating.

Control-M is mainly an infrastructure tool that we use for scheduled tasks. The IT teams and most of the operations teams are the ones who use it. I would estimate about 10 people, but the management of the application is centralized.

The big lesson learned: reach out to support when you are using the product and trying to do something that you could reimagine.

We don't have any data analytics in Control-M.

We don't have developer integration with Control-M at this point.

Control-M is doing a fantastic job for what we use it for. The product is a nine out of 10.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user