
Read reviews of Control-M alternatives and competitors

Sr System Engineer at a financial services firm with 5,001-10,000 employees
Real User
Top 10
Alerts when things are falling behind schedule, or something unexpectedly fails, enable us to jump in and address an issue
Pros and Cons
  • "The first, big thing that we got out of using Tidal Workload Automation was having a centralized view of the status of all of our batch processes across all these systems... We can look into the schedule at any given time and see if things are running on track or if they are falling behind. We can also see if something failed."
  • "Their software installation and update process could use some improvements. I'm pretty sure they're working on that, but that's definitely an area where it could be streamlined a lot. There's still a lot of manual work that you have to do with the schedule when you deploy masters or do the agents."

What is our primary use case?

We use it to manage our batch processing. For us, it came in as a replacement for a lot of different systems running crontab. In our case it's primarily for Unix/Linux systems that don't have their own mechanism for kicking off all these batch processes. It's the coordinator of all of our background processes and batch jobs that are running overnight and during the day.

We use it to kick off custom Unix/Linux scripts that will launch our application processes. It's almost entirely Windows and Linux shell scripts that it's kicking off.
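Most schedulers, Tidal included, judge a job by its process exit code, so a common convention for these wrapper scripts (a generic sketch, not anything from this review; the script path is hypothetical) is to make sure any failure surfaces as a non-zero exit the scheduler can alert on:

```python
#!/usr/bin/env python3
"""Hypothetical wrapper for a batch step. Any scheduler that watches
exit codes will treat a non-zero exit as a failed job and can alert."""
import subprocess
import sys

def main() -> int:
    # The actual batch work: here, a placeholder shell script path.
    result = subprocess.run(
        ["/opt/batch/nightly_extract.sh"],
        capture_output=True,
        text=True,
    )
    # Echo output so the scheduler captures it in the job log.
    sys.stdout.write(result.stdout)
    sys.stderr.write(result.stderr)
    return result.returncode  # non-zero propagates as "job failed"

if __name__ == "__main__":
    sys.exit(main())
```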

How has it helped my organization?

For administrators, the alerting has been a big plus, in addition to having a place to go and look at the status. They can be notified when there's something happening in a schedule, like things are falling behind schedule, or something unexpectedly fails. It definitely helps speed up the time to jump in and address an issue and get things back on track.

It has also given us a framework for standardizing a lot of our processes. Before we had all these things in Tidal, there were so many custom services and applications written. Tidal has given us a way to say, "Here's a standard way for you to get your jobs scheduled and automated." It hasn't necessarily enforced it, but it has given people an opportunity to say, "Oh, if I use the tool and if I set up my jobs to be able to run in the scheduler, it will be that much easier for me to get this delivered to production, or to test it and validate it." It has helped us put a framework around how developers are going to get their application code deployed. It's not really pushing the code, but it has encouraged some consistency in how they design their processes.

It would be really hard to quantify how much staff time it has saved, but for sure, before that initial move into the solution, some things would take forever. It was just complete spaghetti going through dozens of boxes with different crontabs trying to figure out: "Okay, I had an incident in the middle of the night. What ran, what didn't run? What ran but didn't complete successfully?" and those kinds of things. Tidal has resulted in a huge gain there. I don't think there's any way I could quantify how much it's simplified those outage scenarios. 

And even a planned maintenance was just as hard as an outage before we had Tidal. Now, with a scheduler, we can schedule a big maintenance that's going to require a lot of people to be on hand, one where time is of the essence. The more efficiently we can adjust a schedule for an off-hours maintenance and essentially disrupt what our typical schedule is, the more it helps us with those maintenance procedures. We know in advance that we have the capability to move jobs earlier and to move jobs later so that they're outside of the maintenance window and that we're not going to conflict with anything. When we're done with our maintenance, we're able to just press a button and let everything run and go.

Tidal has definitely reduced weekend and overtime hours. In our environment, there's no way to eliminate those hours, but that's nothing to do with Tidal. That's our own design. 

Our team does the majority of the work with the scheduler. It gives us the ability to do a lot of the scheduling tasks pretty quickly, so that the developers or business folks who are making requests don't need to deal with it. It gives us the leverage to make what they feel is a bigger change to the schedule, and to knock it out really quickly. They don't have to code something or make changes to handle it. We can do a lot of those adjustments from the scheduler itself.

The solution has enabled us to do more in terms of job capacity because, in the past, we had all these different crontabs running around out there. There was really no good way for people to condense jobs together, so that one started as soon as the previous one finished, unless they customized every process flow or job flow into a script. Doing so was essentially a custom program or process that they'd have to create for each one, and that's pretty difficult to manage. With the scheduler, we can squeeze those jobs together with their native process runtimes and say, "Okay, we're going to run through steps 1 to 10, allow those things to run in a sequence, and get them done in the shortest window possible." It has definitely helped with that.
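To make the "squeeze the jobs together" idea concrete, here is a minimal sketch (script names are hypothetical) of a dependency chain in which each step starts the moment its predecessor succeeds, rather than at a guessed crontab start time:

```python
#!/usr/bin/env python3
"""Minimal sequential job chain: each step runs as soon as the
previous one succeeds; a failure stops the chain, much as a scheduler
would hold downstream jobs and alert."""
import subprocess
import sys

STEPS = [
    "/opt/batch/step01_extract.sh",   # hypothetical step scripts
    "/opt/batch/step02_transform.sh",
    "/opt/batch/step03_load.sh",
]

for step in STEPS:
    print(f"starting {step}")
    rc = subprocess.run([step]).returncode
    if rc != 0:
        sys.exit(f"{step} failed with exit code {rc}; chain stopped")
print("all steps completed")
```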

Our environment is really different now compared to what it was when we started with Tidal all those years ago, but there's really no way we could have sustained that old model without the functionality that's in the scheduler to get our schedule done quickly. As our company has grown, it's been difficult for us to find maintenance windows or quiet periods. Every minute that we can save reduces the time an overnight batch process impacts daytime business users. The quicker we can get things completed, the better it is for the user experience and our environment.

What is most valuable?

The first, big thing that we got out of using Tidal Workload Automation was having a centralized view of the status of all of our batch processes across all these systems. We're not a big environment compared to some of their customers, but these are all business-critical processes that we're running and there are at least 100 different systems in our environment. To manage all these processes, it gives us a single point of view. We can look into the schedule at any given time and see if things are running on track or if they are falling behind. We can also see if something failed. The big thing is having that visibility into everything.

We use it for cross-platform and cross-application workloads, although they're not that different from each other. A lot of our workloads are similar, but they're technically different platforms and applications. We have some different OS's, but they're all Unix or Linux systems that are running the same sort of back-end technology. In our world, internally, they're different platforms. It gives us a really simple view into everything that's happening. 

I've been using it for a long time, so to me, it's a pretty intuitive way to, at a glance, look at how things are progressing in the day for the batch schedule. I don't know if that would necessarily be the case for a new user. To me it's intuitive and that is what helped us choose it over some other scheduling technologies in the past. It seemed like the most intuitive way to look at a lot of different batch processes running on lots of different systems.

As far as its ability to allow admins and users to see the information relevant to them, the interface is good, once you have access to it. We have had a little bit of an issue with some browser compatibility, but other than that, it's been a good tool for people to come in and look at where their process stands from a business point of view. They do have to have a little bit of familiarity with what it is that they're looking for, the programs in the back-end. This is nothing to do with Tidal, but our technology environment is a bit hard to digest early on. Things can be a little bit difficult to navigate in our technology stack, at times. Tidal helps those users who are new to it get a view of: "Here's the thing that I'm interested in. I know the program name, but I don't know when it runs, or how long it takes." Without having to get into the back-end of our technology, it does give them a way to look at what's happening in the schedule.

What needs improvement?

Their software installation and update process could use some improvements. I'm pretty sure they're working on that, but that's definitely an area where it could be streamlined a lot. There's still a lot of manual work that you have to do with the schedule when you deploy masters or do the agents. 

The other thing is that the performance of the web interface has not been great. It's feedback I get quite a bit, that the web interface can be sluggish at times. We've got to recycle it to get it to be more responsive. We brought up this issue a while ago. A lot of what we may be dealing with is that we are running on an older version. A lot of the performance stuff, I suspect, has been corrected in the later versions. We are running on 6.2.1 but they have got 6.3.5 out there now.

As for stuff we'd like to have, I'd love to see the database back-end support PostgreSQL or MySQL. Right now the choices are Microsoft SQL Server or Oracle.

For how long have I used the solution?

I've used Tidal Workload Automation for about 15 years.

What do I think about the stability of the solution?

It's been rock solid for us. We've had it for 15 years and I have really never had to make support calls to either Cisco or Tidal. The only times I ever really have to contact them are when we do our renewals or we migrate to a new version and we have to get a different license key.

What do I think about the scalability of the solution?

I don't think we've ever pushed a limit of the schedulers, the masters. We haven't really had any kind of scalability issue with regard to the scheduler or the agents. The only thing that we've run into as far as scalability goes would maybe be the web interface, which can get pretty slow at times, so we've got to cycle it. The web client is just sluggish and has an issue where that performance degrades over time. That's why we do the recycle and we notice it helps quite a bit to recover it.

How are customer service and technical support?

I really don't have to make support calls almost ever.

I'll ask a question sometimes, and they've been great. They've been very responsive. I haven't even had to do that for quite a while now. We set up our current implementation when they were still with Cisco. 

It was a little bit difficult, when Tidal was part of Cisco, to get to the Tidal software engineers, who are now their own entity. It's definitely gotten a lot better now that they're not part of Cisco. I can just call in. They know who I am and what I'm asking for right off the bat. When it was with Cisco, there was a whole triage system you had to get through, and a lot of people at Cisco didn't even know what the product was or that it existed.

Which solution did I use previously and why did I switch?

We only had crontab on a bunch of Unix systems. We looked into Tidal because we were having so many missed processes. Our environment is so much bigger and more complicated now compared to 15 years ago. But even back then, having things in crontab almost made it easier for issues to occur, because they were all arbitrarily set to run at different times, by different users, on different systems. If there was some sort of conflict or collision, there was really no way to even regulate the fact that there were too many processes running at a given time.

It actually helped prevent some issues then, and now we have so many things cranking through Tidal. Getting all this to work in crontab would be impossible.
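A classic crontab-era workaround for exactly that collision problem is a per-job lock file, sketched below (the lock path and script are hypothetical); a scheduler's built-in concurrency limits remove the need for this plumbing in every script:

```python
#!/usr/bin/env python3
"""Lock-file guard so a cron job cannot overlap a still-running copy
of itself. Unix-only (uses fcntl)."""
import fcntl
import subprocess
import sys

LOCK_PATH = "/var/lock/nightly_extract.lock"  # hypothetical lock file

with open(LOCK_PATH, "w") as lock:
    try:
        # Non-blocking exclusive lock; fails if another run holds it.
        fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        sys.exit("previous run still in progress; skipping this run")
    subprocess.run(["/opt/batch/nightly_extract.sh"], check=False)
```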

How was the initial setup?

Installing is not terribly complex. I don't have experience with other scheduler products, so I can't compare it to them, but it does have more manual install steps than some other software in general. For instance, there isn't an RPM installer. We use a lot of Red Hat in our environment. We can use RPMs for our Unix platforms and our Linux platforms. It would be nice if it was just packaged like that, so you could run the install or do the configure, perhaps with a few prompts. It's not far from that. It does have a shell script that runs, which isn't too different. But it would be nice to run updates for our scheduler along with all the other OS updates that we do in our environment.

If you know what you are doing, you can get through the deployment easily in under an hour. I don't even know if it would take that long. If you have access to create your database and you already have your OS environment provisioned, the install and setup is really not very time-consuming. There are just a few manual steps you need to do, here and there, to configure it. But it's definitely doable in an hour.

Assuming someone has access to do each of the steps that they need to do, one person could definitely do the install. I've done it in a VM lab and definitely knocked it out in under an hour. As long as you can create your database, create your database users, and run the software install, it's definitely a one-person job.

In terms of an implementation strategy, we've really stuck with one model. There's not a lot of leeway there. Essentially, you are going to have three master servers, a client manager, and you're going to have a database somewhere. The only difference might be the choice of operating systems or whether you're going to run on a VM or a physical server. But that's pretty removed from Tidal itself. There isn't a whole lot of variation there.

When it comes to a learning curve for Tidal, I've been using it a long time, so it's pretty intuitive to me. New users need to get their bearings and to know how they can filter, and what they need to filter on to answer the questions they have. It takes them two or three times of logging in and working with it. Sometimes we provide some guidance on best practices to find their program. It can be a bit overwhelming. I don't think Tidal necessarily makes it hard, but it's just the nature of all these processes running and the things that are there. Tidal helps with it, but it doesn't keep it from being a complicated thing to try and follow and to try to understand.

What was our ROI?

Tidal Workload Automation is a no-brainer for us, given the importance of the processes that we have. The cost for coordinating, managing, and getting all these things to complete, while warning us when things are not running on time, to me, makes it a no-brainer. 

I do not know how to quantify our ROI. We get everything that we pay for in the product, and there are even features that we do not use.

What's my experience with pricing, setup cost, and licensing?

Another advantage of Tidal is that it is a pretty affordable scheduler tool that lets us do a lot. You get a lot of bang for the buck. It has always seemed pretty reasonable to me.

The licensing model is hugely flexible. In fact, sometimes we get a little bit lost on which model we should go with. Over time, it has adjusted and changed. But the current model that we have is to run with enterprise license agreements. We do not have to worry about how many agents we add and remove. That has been the easiest for us.

They have options to do one-, two-, or three-year renewals. You can space out your renewals or do things like an enterprise license agreement. You can dial into, "Hey, I just want to run this many hosts." They cover a lot of options for you. It may not make sense for a smaller shop to run an enterprise agreement. They might just want to run five agents. In their case, having that option is huge.

Given that there are no costs for upgrades and other enhancements, it is really easy to budget for Tidal. We have not had any issues.

Which other solutions did I evaluate?

When we did the initial implementation, we did a full product comparison. We looked at the top four and did a comparison of the features of what seemed like the best products at the time. Over the years, I've reached out to other vendors just to get an idea of what other features are out there in the product space. We have never really found anything that had a compelling advantage over Tidal Workload Automation that made us want to switch. It has been really stable and has definitely gotten the work done for us.

We looked at CA's AutoSys at the time, but CA has so many schedulers now that it's hard to say exactly which one that is today. IBM had Tivoli Workload Scheduler, at the time. Since then, we have had someone from ISC reach out a fair amount. We looked a little bit at Control-M from BMC Software as well. JAMS was another one that popped up.

Tidal is familiar. We know how it works and what it is doing. It also remains quite accessible. One person could sit down, deploy it, do the install, get it up and running, and then it is just a matter of setting up the agents and the workload. I have not looked at the other products in so long now that it is not even relevant today, but BMC and a couple of other schedulers were overly complex, or their user interfaces just were not intuitive enough for our users.

What other advice do I have?

The big thing I would say to someone who is deploying this new, aside from having a naming standard and the structure, would be to get their security groups right, up-front. That is a pretty big one. Set your owners and who your users are going to be. Think about how you are going to structure it from a user point of view.

We have two core systems here. One is our loan origination system and the other allocates and directs leads, and they both rely on Tidal heavily. If the scheduler were to shut down for some reason and we couldn't run it, it would have a huge impact on our business. Thankfully, that's not a scenario that we encounter, but we really rely on it to drive so many of these business processes. In terms of increasing our usage of it, other business areas have started to take some interest in it, but we haven't made a dedicated effort to get, for example, our SQL Server systems to be managed by the scheduler, or to do things with Amazon. We haven't really had anyone driving that effort.

In our environment, one person, me, maintains the Tidal software. That's more an organizational question of how many people do you want to have who are capable of supporting it. We have a team of six people, all systems engineers. They're not all as up-to-speed on it as I am, but if I gave them my notes for doing the install, I'm sure they could all do it.

The number of users of Tidal, in our organization, depends on the definition of "users." It touches things that impact every user in our organization. But with respect to users of the interface who log in and use it, it's only about a dozen people. Aside from the system engineers, the next biggest users would be developers or program engineers. They are people involved in researching or updating a task, procedure, or process, and they want to know what the scheduled processes are and when they run. They are also looking at what the rules are for running and how long it takes. Sometimes business analysts will be involved in that as well.

Tidal is a nine out of 10. It would be a 10 if we didn't have some performance struggles with the web interface.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Peter Birksmith
Senior System Analyst at an insurance company with 5,001-10,000 employees
Real User
Top 5Leaderboard
Native API calls are very good and very easy, enabling us to tie in to a large range of solutions, including Tableau and ServiceNow
Pros and Cons
  • "The most valuable feature is its stability. We've only had very minor issues and generally they have happened because someone has applied a patch on a Windows operating system and it has caused some grief. We've actually been able to resolve those issues quite quickly with ActiveBatch. In all the time that I've had use of ActiveBatch, it hasn't failed completely once. Uptime is almost 100 percent."
  • "A nice thing to have would be the ability to comfortably pass variables from one job to another. That was one of the things that I found difficult."

What is our primary use case?

We have roughly 8,000 jobs that run every day and they manage anything from SAS to Python to PowerShell to batch, Cognos, and Tableau. We run a lot of plans that involve a lot of constraints requiring them to look at other jobs that have to run before they do. Some of these plans are fairly complicated and others are reasonably simple.

We also pull information from SharePoint and load that data into Greenplum, which is our main database. SharePoint provides the CSV file and we then move it across to Linux, which is where our main agent is that actually loads into the Greenplum environment.

Source systems acquire data that goes into Greenplum. There are a number of materialized views that get populated, and that populating is done through ActiveBatch. ActiveBatch then triggers the Tableau refresh so that the reports that pull from those tables in Greenplum are updated. That means that, from just after source acquisition through to the final Tableau report, ActiveBatch is quite involved in the process of moving data.
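As a rough sketch of that flow's shape (hosts, paths, and the gpload control file below are placeholders, not the reviewer's real ones), the moving parts chain together like this:

```python
#!/usr/bin/env python3
"""Rough shape of the SharePoint -> Greenplum -> Tableau flow described
above. Hosts, paths, and commands are illustrative placeholders."""
import subprocess

def fetch_csv_from_sharepoint(dest: str) -> None:
    # In practice this would use SharePoint's API or a synced share;
    # shown here as a simple copy from a mounted location.
    subprocess.run(["cp", "/mnt/sharepoint/export.csv", dest], check=True)

def ship_to_linux_agent(src: str) -> None:
    # Move the file to the Linux host where the Greenplum loader runs.
    subprocess.run(["scp", src, "etl@gp-loader:/data/incoming/"], check=True)

def load_into_greenplum() -> None:
    # gpload is Greenplum's bulk loader; the YAML control file defines
    # the target table and format (contents not shown here).
    subprocess.run(
        ["ssh", "etl@gp-loader", "gpload", "-f", "/data/jobs/export_load.yml"],
        check=True,
    )

if __name__ == "__main__":
    fetch_csv_from_sharepoint("/tmp/export.csv")
    ship_to_linux_agent("/tmp/export.csv")
    load_into_greenplum()
    # The scheduler would then trigger the Tableau extract refresh as a
    # downstream job, per the flow described above.
```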

We have 19 agents if you include the Linux environment, and 23 if you count the dev environments. It's huge.

It's on-prem. We manage the agents and the scheduler on a combination of Windows and Linux.

How has it helped my organization?

We have some critical processes in ActiveBatch that go to finance and to the auditors in our organization. Those processes are highly critical because that allows us to trade. If those reports don't get to them, we get penalized by the government or by APRA or by some financial institutions. ActiveBatch, in this particular case, is absolutely critical for getting those reports out.

We have SLAs requiring us to get reports out by a certain time of day, or by a certain day of the month by a certain time. We're judged on whether those reports go out. ActiveBatch, being as stable as it is, is only impacted by external factors like the network and database performance. Otherwise, we are quite comfortable with the way ActiveBatch is able to handle these jobs without our having to look at them.

Because the connections between ActiveBatch and other tools are automated, it gives us more time to do other things, and more interesting things. If something goes wrong, we can go back and have a look in the logs that are produced and that explain what's going on, and we can then repair it. It's an enabler, and it provides us with more time to get on with other jobs. It's something that's critical and it runs by itself and we're really happy it does that. We have that time available because we're not actually manually babysitting processes.

It provides a central automation hub for scheduling and monitoring, bringing everything together under a single pane of glass, absolutely. There is finance, sales, marketing. Pretty much every department has a job that we deal with. It's quite heavily integrated into our whole stack. As an insurance company, our major events department, for example, is critical because every time there's a storm or a hail event or a cyclone somewhere, those reports must get out in a timely manner. I can't think of any department that isn't impacted by ActiveBatch, running some report for them.

The single pane of glass helps the DataOps team manage all of the processes that are supported by ActiveBatch as the main scheduling tool. We've created a dashboard which pulls information from ActiveBatch, information that we can share with the organization. They can look at jobs and the schedules and, if necessary, run their own jobs from that point. It's like the lungs of our company.

Overall, it has helped to improve workflow completion times by 70 to 80 percent, easily. Once you've built a job, it just runs and no one has to concern themselves with it doing what it's doing. They will get the notification or the file or the email that says it's processed and they move on with their day.

In addition, we had a guy who was spending seven hours in a week to extract, compile, and then export information into a CSV file, and then another few hours to get it transferred to another department. We were able to build a PowerShell script, with a query that could easily be updated, that was automated through ActiveBatch. It takes 10 minutes to run. What that guy was doing in hours, we are now doing within minutes.
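The original was a PowerShell script; purely as an illustration of the pattern, here is the same query-to-CSV job sketched in Python, with a stand-in database and a placeholder query:

```python
#!/usr/bin/env python3
"""Sketch of the query-to-CSV job described above (the real one was a
PowerShell script). The driver, path, and query are placeholders."""
import csv
import sqlite3  # stand-in for the real database driver

QUERY = "SELECT id, name, amount FROM weekly_report"  # easily updated

def export_to_csv(db_path: str, out_path: str) -> None:
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(QUERY)
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            # Header row from the cursor's column metadata.
            writer.writerow([col[0] for col in cur.description])
            writer.writerows(cur)
    finally:
        conn.close()

if __name__ == "__main__":
    export_to_csv("/data/reports.db", "/data/outgoing/weekly_report.csv")
```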

What is most valuable?

One of the valuable features is the ability to tie things in using API calls. The native integrations and REST API adapter for orchestrating the entire tech stack are really good and really easy. We have a product called ServiceNow, which is a call tracking system. If a problem occurs, ActiveBatch will send an API call into ServiceNow, and it will raise a ticket to say that there's a problem. That gives us an auditing process. We're also using API calls for Tableau and we're also using some API calls for SharePoint. We tie ActiveBatch into a lot of different applications.
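The failure-to-ticket pattern described here maps onto ServiceNow's REST Table API. A hedged sketch, with the instance name, credentials, and field values all as placeholders:

```python
#!/usr/bin/env python3
"""Sketch of raising a ServiceNow incident when a job fails, via the
Table API. Instance, credentials, and field values are placeholders."""
import requests

def raise_incident(job_name: str, error_text: str) -> str:
    url = "https://example.service-now.com/api/now/table/incident"
    payload = {
        "short_description": f"ActiveBatch job failed: {job_name}",
        "description": error_text,
        "urgency": "2",
    }
    resp = requests.post(
        url,
        json=payload,
        auth=("svc_activebatch", "********"),  # placeholder credentials
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    # The created incident's number gives the audit trail mentioned above.
    return resp.json()["result"]["number"]

if __name__ == "__main__":
    print(raise_incident("nightly_load", "exit code 1: table lock timeout"))
```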

Also, the overall ease of use is brilliant. It's easy to pick up. We can get a newbie up and running within a day, using ActiveBatch. It's not to the extent where that person will know some of the more complicated issues, but in terms of being able to build a job and export or run the job, it's within a couple of hours. Within a day, people are quite comfortable with the application. We've just signed an agreement with ActiveBatch which gives us all the education materials now. That means we'll be applying more advanced features. It's really good as far as ease of use goes.

We use the solution across all sorts of organizational branches. It's used for SAS and SAP, which is finance. We have fraud and Salesforce, which is for the sales group. It's also used with marketing and major events because, when there's a storm, we need to know what's going on. We also have the ability to pull from external sources, meaning external vendors such as Guidewire. So ActiveBatch is widely utilized and probably more widely utilized than the executives realize. It's well embedded in our company.

What needs improvement?

We are moving to version 12 soon, and I believe that interface is going to be more of a "webbie" look and feel, but I can only comment on version 11 which is what we have. 

A nice thing to have would be the ability to comfortably pass variables from one job to another. That was one of the things that I found difficult. Other than that, it's all good.

For how long have I used the solution?

I've been with this company for almost 10 years and it was already here before I arrived.

What do I think about the stability of the solution?

The most valuable feature is its stability. We've only had very minor issues and generally they have happened because someone has applied a patch on a Windows operating system and it has caused some grief. We've actually been able to resolve those issues quite quickly with ActiveBatch. In all the time that I've had use of ActiveBatch, it hasn't failed completely once. Uptime is almost 100 percent.

With those 8,000 jobs that run in a 16-hour period, the majority of the time we're spending about an hour of the day with ActiveBatch, repairing problems. There are issues where we have to re-run a job because of it exceeding its runtime. Or when a job fails, even though the alert goes out to the end user, we still have to tap the user on the shoulder and say, "Did you look at this alert? We've got a problem here, can you please fix it?" Other than that, it pretty much runs itself. Overall, ActiveBatch saves us a huge amount of time, being as stable as it is.

If we were having to repair everything, on an ongoing basis, we would be spending more than five or six hours a day, so we are saving at least five to six hours a day by using this tool. The improvement to the business is quite substantial. People aren't having to manually do anything that would normally take them two or three hours to do. Those things are being done within a matter of minutes and then passed on. And those five or six hours are just for us in our department. You can multiply that by the number of people who would normally have done something manually and who now have it done through ActiveBatch in minutes.

We're looking at more than a 98 percent success rate for uptime and for running jobs. The only times something falls over, it's not due to ActiveBatch itself; rather, it's due to problems with either the network, the database, or the developers.

What do I think about the scalability of the solution?

The scalability is brilliant. We've got 23 machines. We have redundancy integrated into this environment. 

If a server goes down, we can turn that queue off and re-queue those jobs to another server, while we get a new image spun up and restarted. In that situation, the delay is in getting the IT guys to spin up the image. If we could get an image spun up when it failed, it would be a matter of five or 10 minutes to be back in business with that server. As it is, once the IT guys do spin it up, we kick off from there.

The main interface is used by about 12 people. The dashboard that we've built on top of it is probably used by 70 to 80 people. But the number of people it affects is in the thousands across the entire organization.

It's heavily utilized across a number of departments in the organization and they really do rely on ActiveBatch to stay up and stable and to provide their reporting mechanisms.

How are customer service and technical support?

We've had a couple of issues where we've had to log a defect with ActiveBatch. But the guys at ActiveBatch are really responsive. We had things fixed in 24 hours, and they're in a different time zone. The response time is exceptional. This is one of the few vendors that I can say is highly responsive and that shows a level of commitment that I don't think many other organizations show.

Which solution did I use previously and why did I switch?

ActiveBatch replaced Windows Scheduler and cron jobs that had been running on some servers. There was also another scheduling tool that popped up somewhere, but that data was moved into ActiveBatch. The scheduling from Cognos was also moved into ActiveBatch because it was more convenient, and some of the Tableau scheduling was moved into ActiveBatch as well.

How was the initial setup?

The initial setup was straightforward. It's super-easy to install and super-easy to set up. Even on the Linux box, it was really easy to install and set up and run. There was no real complexity in the installation process.

Most of the time with setup or upgrades is spent testing. We usually deploy agents within 20 minutes. The scheduler and the database might take an hour and a half, but because the agents are on virtual machines, we have an image and we just spin that image up. If something goes wrong, we can just spin up a new image and get that agent started straight away. In terms of testing, when we do disaster recovery, we redeploy to a disaster recovery environment and then we test that the connections are working, the jobs are running, and that there are no problems. That's where most of the time is spent, not in the deployment itself.

We usually have two people involved in the process, one who is the primary and one who is the secondary. And then we have a couple of people on standby. The primary does the installation and the secondary is looking over their shoulder for learning purposes. Then we have a few people on the IT side in case there is a problem with the operating system or the network that we have to deal with, but they're not involved until there's a problem. The DBA is also on-call just in case there's an issue with the database.

Maintenance-wise, it's only if something happens that we go and look. We have a job that looks at the health of the database that ActiveBatch uses. It's pretty much all automated, so it looks after itself. We have another job that pings the servers to make sure that all the ports that it needs are running and open. We also have jobs that look at the network latency so that if the network latency is beyond a certain point, it notifies IT and us. It also looks at the operating system and the actual directories. Unless we schedule it for an upgrade, which we do every six months, we don't look at maintenance for that six months unless there's a problem.
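The port and latency checks described here can be as simple as the following generic sketch (the targets, threshold, and alert hook are placeholders):

```python
#!/usr/bin/env python3
"""Generic sketch of the self-monitoring jobs described above: check
that required ports answer and measure connection latency, alerting
past a threshold. Hosts, ports, and the alert hook are placeholders."""
import socket
import time

CHECKS = [("scheduler01", 443), ("gp-db01", 5432)]  # hypothetical targets
LATENCY_THRESHOLD_MS = 250.0

def alert(message: str) -> None:
    # Stand-in for the real notification (email, ticket, etc.).
    print(f"ALERT: {message}")

def check(host: str, port: int) -> None:
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=5):
            elapsed_ms = (time.monotonic() - start) * 1000
    except OSError as exc:
        alert(f"{host}:{port} unreachable: {exc}")
        return
    if elapsed_ms > LATENCY_THRESHOLD_MS:
        alert(f"{host}:{port} slow: {elapsed_ms:.0f} ms")

if __name__ == "__main__":
    for host, port in CHECKS:
        check(host, port)
```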

What was our ROI?

It pays for itself because it gives the DataOps team more time to be involved in other projects. It allows the organization to move forward without having to worry about doing anything manually. ActiveBatch is performing a huge service to the organization in terms of reducing the number of man-hours required to do manual tasks.

What's my experience with pricing, setup cost, and licensing?

If you compare ActiveBatch licensing to Control-M, you're looking at $50,000 as opposed to millions.

Which other solutions did I evaluate?

ActiveBatch isn't the only scheduling tool that we have. There's also a product called Control-M, but Control-M is a lot more expensive and mostly manages mainframe workloads. ActiveBatch is at a very modest price for running a very complex process.

We can expand ActiveBatch more readily than Control-M because, with Control-M, you pay for X number of runs in a run book. If you want to extend that run book, they want half-a-million dollars, or more, for 500 jobs. We can expand ActiveBatch. We could go to 10,000 jobs and it wouldn't cost us any more. It's only if we were to add more agents to load balance that we would be charged any more, and it wouldn't be anywhere near what Control-M charges.

I've mainly been involved with ActiveBatch and it's hard to compare another vendor when there hasn't been a vendor to compare against. As far as performance is concerned, Control-M and ActiveBatch are on par, but they're not the same because Control-M is really just moving files and running programs on mainframes, whereas we're running against Windows and Linux environments.

The other one that's being utilized at the moment is Apache Airflow, but that's more for the developers because they like to be able to program the backend, rather than to use a frontend interface. We've been looking at how that works, but we haven't seen it to be very stable for a production environment. You can't compare Airflow with ActiveBatch, in effect.

What other advice do I have?

My advice would be to jump on it straight away. With the ease of installation, the expandability or scalability of the product across multiple servers with different agents, the ability to not only use Windows but Linux as well, and the fact that you can build complex plans that have multiple constraints, multiple types of scheduling, and multiple types of alert mechanisms, it's highly expandable. You're going to have a lot of fun with it.

It's highly flexible and easy to use. In terms of what we can do, we still haven't gone to the Nth degree to find what we can't do with ActiveBatch. It's incredibly flexible. We're running shell scripts that run Python scripts. We've got PowerShell scripts and batch scripts. We tie into different applications. We still haven't exhausted the potential of ActiveBatch. That's what I've learned.

Predictability is something that is out of the control of ActiveBatch. We can set a job to run against a database, but it's really going to be the network or the database that will impact ActiveBatch. ActiveBatch will continue to run. There is an average run time that we look at, but if the network has high latency or the database is under load, the time will increase. ActiveBatch will continue to run as normal. The frequency of ActiveBatch failing is quite rare.

We use the ActiveBatch interface up to a certain point, and then we start looking at running Python and shell scripts. That's why we have the Linux agent. We call a shell script which runs a Python script that does some manipulation and passes that information back. And then there are a number of plans that manipulate the process.

In this particular plan, the CSV file is created and it's dropped into a file location. ActiveBatch is polling for that location. It sees that file. Then a Python script runs and creates an MD5 hash. When you download a file from the internet, there's an alphanumeric checksum that indicates whether that file is valid or not. In the same way, an MD5 hash is generated on the file and, when it's moved to another location, another MD5 hash is generated to determine whether there was a change in that file when it moved from A to B. It's a validation to make sure that no data was corrupted during the movement from where the file was dropped to where the file landed. Once it has been validated, the file is moved into another location where it's uploaded into the Greenplum database, and a notification is sent to whoever was involved in that particular process. It's quite involved.
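That MD5 validation step reduces to hashing the file before and after the move and comparing, roughly like this (paths are placeholders):

```python
#!/usr/bin/env python3
"""Minimal version of the MD5 validation described above: hash the
file before and after the move and compare, so corruption in transit
is caught before the Greenplum load. Paths are placeholders."""
import hashlib
import shutil

def md5_of(path: str) -> str:
    h = hashlib.md5()
    with open(path, "rb") as f:
        # Read in 1 MB chunks so large files don't exhaust memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

src = "/data/incoming/export.csv"
dst = "/data/validated/export.csv"

before = md5_of(src)
shutil.copy2(src, dst)
after = md5_of(dst)

if before != after:
    raise RuntimeError(f"checksum mismatch: {before} != {after}")
print("file validated; safe to load into Greenplum")
```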

If a job fails, we have set it to wait for a few minutes and to then re-run. If that fails, we can trigger another job to continue on in that process flow, if the failed job isn't critical. Some of the plans are quite complicated and have a certain amount of logic involved, but that enables us to navigate around problems that might otherwise need a developer's assistance, if it doesn't affect the overall plan process. As long as there are no constraints involved that require the next job to run, and it can move around that job and continue on, that's how we set it up.
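That retry-then-continue logic looks roughly like the following sketch, with hypothetical job commands and a five-minute wait standing in for the configured delay:

```python
#!/usr/bin/env python3
"""Sketch of the retry-then-continue logic described above: wait a few
minutes, re-run the failed job, and if it still fails and isn't
critical, let the next job in the flow run anyway. Commands are
placeholders."""
import subprocess
import time

def run(cmd: list[str]) -> bool:
    return subprocess.run(cmd).returncode == 0

JOB = ["/opt/jobs/refresh_extract.sh"]
NEXT_JOB = ["/opt/jobs/publish_report.sh"]
CRITICAL = False  # non-critical: the flow may continue past a failure

if not run(JOB):
    time.sleep(300)  # wait five minutes before the retry
    if not run(JOB) and CRITICAL:
        raise SystemExit("critical job failed twice; holding the plan")
# Non-critical failure (or success): continue the process flow.
run(NEXT_JOB)
```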

We're looking forward to version 12 to see how that goes as well. We've also mirrored the database, the backend database that ActiveBatch uses. We have a failover process which was just recently installed. If one database fails, we can switch over immediately to the other database in real time.

Overall, we're really comfortable with how ActiveBatch is performing and with what it's doing.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Yvette Carpenter
Technical Operations Manager at a financial services firm with 501-1,000 employees
Real User
Top 10
Enabled us to consolidate jobs run by many tools into one solution, but there are some scenarios we haven't been able to automate
Pros and Cons
  • "Our company is based on data. Everything we do is data-driven, so it has been very valuable having one place where we can process all of the data and do batch schedules with chunks of data."
  • "JAMS handles exceptions fairly well but there are some areas where it might improve a little bit. It has to do with being able to automatically handle exceptions, out-of-the-box, rather than having to code them."

What is our primary use case?

We started with basic tasks because we were bringing things over from Windows Task Scheduler. We didn't have a whole lot of dependencies at that point. We have gotten much more detailed in our scheduling requirements since. We use what are currently called JAMS Setups, which in the new version are called Sequence Jobs, quite a bit, especially for our enterprise data analytics team. We do some pretty complex scheduling scenarios.

We also use it for holiday calendars that impact our scheduling and for multiple regular scenarios, such as dependencies on a file or another job or another Setup. 

Overall, we use it for basic, normal enterprise-scheduling solutions.

How has it helped my organization?

We've been able to automate a lot of processes that were done manually before. We're not a huge company, and we're a fairly new company, so a lot of things were being done before in Task Scheduler or in a homegrown solution called Batch Nucleus. They were also in cron and in SAS. They were all over the place. Being able to consolidate all of that into this one enterprise scheduling solution allows us to put dependencies on different jobs between different systems. It also allows us to monitor everything from one place and gives us the ability to do some exception handling. We have unlimited licensing with JAMS and we have hundreds of environments that we have agents on and do testing on. Having one location that we can monitor everything from, and handle all the exceptions from, is critical.

We've automated our critical processes, which used to be done manually through an external product and that means we don't have to worry quite so much about manual, human error.

Because we have gone from a lot of manual processes to automated processes with JAMS, we have been able to free up IT staff time. We're not spending 30 minutes doing something manually that JAMS can do in five minutes. It has freed up IT resources, but it has also sped up our processing times. For just the Technical Operations Center team that I manage, it has saved about 20 hours a week.

JAMS has also helped eliminate “data slack” across our applications. All of our enterprise data analytics is done through JAMS, so being able to access things like Teradata, Hadoop, and Snowflake cloud solutions for data integration is important. Our company is based on data. Everything we do is data-driven, so it has been very valuable having one place where we can process all of the data and do batch schedules with chunks of data. It's been a good tool for that. Having current data ready to go when our users need it is extremely critical because we are a FinTech company. We have to be able to pull data instantaneously to make decisions. Otherwise, our customer base is reduced and there are also compliance issues. We have both financial and legal obligations to our partner companies, so that data has to be up-to-date and ready to go when they request it.

What is most valuable?

I've used a lot of the other scheduling packages in the past. The most valuable feature of JAMS is the ease of being able to update parameters on-the-fly. Also, their monitoring and historical views are pretty robust.

We are also able to go into a job that is inside of a Setup and say, "Turn this one off for a while," by using the Except clause.

Another useful functionality is being able to pass parameters and variables between different jobs, and different steps in a job, or a Setup.

What needs improvement?

JAMS handles exceptions fairly well but there are some areas where it might improve a little bit. It has to do with being able to automatically handle exceptions, out-of-the-box, rather than having to code them. I'd also like to be able to do different things, based on what the actual exception is. In our current version, there's a placeholder where you should be able to do some things along those lines, but we've never actually been able to get it to work. I've seen in the 7.x versions that that has been fixed.
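The out-of-the-box behavior being asked for, different recovery actions keyed to the exception type, is illustrated generically below (this is not JAMS code; the exception classes and actions are invented for the example):

```python
#!/usr/bin/env python3
"""Generic illustration of dispatching recovery actions on the
exception type, rather than one catch-all handler. Exception classes
and actions are invented for the example."""

class TransientNetworkError(Exception): ...
class DataValidationError(Exception): ...

def recover(exc: Exception) -> str:
    # Choose the action based on what the actual exception is.
    if isinstance(exc, TransientNetworkError):
        return "requeue the job with a delay"
    if isinstance(exc, DataValidationError):
        return "hold the job and notify the data team"
    return "page the on-call engineer"

try:
    raise TransientNetworkError("connection reset by peer")
except Exception as exc:
    print(recover(exc))
```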

In terms of automation, there are some scenarios that we're still working on trying to automate and we just haven't been able to find an applicable solution through JAMS for those yet. I'm excited to see, once we get to that point, if we can do those things in the newer version.

For how long have I used the solution?

I started using JAMS in June of 2016. I was in charge of taking all of our disparate scheduling systems and converting everything into the JAMS scheduling package. I have used it from the ground up.

Right now we're on-prem, but we are going to want to go to the cloud sometime next year.

What do I think about the stability of the solution?

In the five years that I have worked on JAMS, I have never had it crash.

The fat client on your machine, for the 6.5 version, is not really reliable. It can slow down, and it can hang so that you have to restart it. But with JAMS itself, the only issues we've had were when we didn't get the license key updated on time. For the most part, JAMS has been a very steady, reliable tool.

What do I think about the scalability of the solution?

Because we have unlimited licensing, it has been extremely scalable for us. We can put agents on whatever servers and environments that we need to, fairly quickly and easily. We now have that set up as an automated process. So it's extremely scalable, based on the pricing model and how many agents you're allowed.

How are customer service and support?

Technical support is an area in which JAMS has come a long way. When I first started with them, they didn't have any kind of training. The way it worked was that if we had a question, we would call their support team and there might be some back-and-forth trying to figure out how to get what we needed. But they now have JAMS University where you can go to a boot camp and learn more about the product. 

And their support is pretty good and pretty responsive. They get back to you fairly quickly and they usually have a good solution to whatever your issue is. And while they have generally been responsive, there have been several times when getting an answer has taken several weeks, instead of being able to get a really quick answer. I would rate JAMS support at seven out of 10, but I wouldn't give more than an eight for the support for any product that I've worked with. That makes a seven a high mark, for me.

How would you rate customer service and support?

Neutral

How was the initial setup?

We spun off from another company, and that other company used Control-M. When we went our own way, we didn't bring Control-M with us. The scheduling solutions that we were using before were Task Scheduler, a homegrown solution, and SQL Server Agent jobs, things that aren't necessarily true enterprise scheduling solutions.

In our migration to JAMS, we had to refactor some of the code, but that's because of the way that it was coded before. SQL Server Agent and Task Scheduler were pretty easy to migrate because there is actually a conversion routine where you can log in to a machine from JAMS and just say, "Go pull the job and convert it." It would automatically convert it, and we would just have to do some cleanup. That part was easy. But when it came to some of our other stuff, we pretty much had to build it from scratch.

I was the only person working on the migration back then, so it took about a year and a half to get everything over, but a lot of that was because we were having to go find things that were being scheduled on these other boxes. Some 80 percent of it was done within the first four to six months.

What's my experience with pricing, setup cost, and licensing?

JAMS is close to the lower end of the pricing models for enterprise scheduling solutions. They are much cheaper than Control-M, as well as some other products that I've used.

I also don't know of another solution where you can actually get true, unlimited licensing, where you can have as many instances and as many agents as you want. That has been a godsend for us because we have environments that we spin up and take down on-demand. There are times when we have hundreds of environments going at one time. Having that lower-cost model has been really good for us, while still being able to get the functionality that we need from the tool.

Maintenance and additional features are all included in the yearly cost, and that cost is still much cheaper than what you would pay for maintenance for another product.

Which other solutions did I evaluate?

The one that I had used most recently, and the longest, was BMC Control-M. It is an extremely robust product that has the ability to do some things that our current version of JAMS cannot do. For example, Control-M has the ability to truly diagram out what the flow looks like, from within the tool. My understanding, after having talked to my scheduling analyst, is that that feature is coming up in a future version of JAMS, which is cool.

Control-M also has the ability to do batch impact analysis, and to put a job at the end of a job flow that says that if anything in the job flow breaks, provide an alert. JAMS has the functionality to do that in the current version, but you have to code it. If you want to say, "If this job fails, I want this other job to run to fix it, and then come back and do this other job," you have to code it. But I believe, again, in the newer versions, it's easier to do that type of flow by using Sequence Jobs. That's the biggest area where I felt JAMS really needed to improve, in automatically handling issues, and they've come a long way.

Control-M enables you to send different types of notifications based on the output, which is also a feature that's coming up in the 7.0 version of JAMS.

JAMS has taken quite a few of the recommendations that we gave them and has built them into their newer versions of JAMS. It has been an exciting journey for us to be able to have a lot of input into how the product works.

What other advice do I have?

I'm really excited that we're trying to upgrade to the 7.x version, because it's so much better. But it's a huge change to go from the 6.0 version to the 7.0 version. The tool looks completely different. It works differently, with different ways to do things, so there is a big learning curve. Since our developers build their own jobs in the lower-level environments, it's going to be a big learning curve for our entire company to start using the most current version.

We've defined our complex scheduling scenarios the way that JAMS works in our current version, but in the future version that's going to be much easier. That version has the ability to create multiple schedules on the same job, instead of having multiple jobs with different schedules doing the same thing.

In terms of the upgrade process, we have multiple instances, including development, stage, and production. We've been trying to build a test environment and we have been doing a lot of our tests there. For our actual cut-over and conversion to the newest version, we are being told that we can actually upgrade in-place, instead of having to do a conversion of our database. We're going to take a two- to three-week freeze on any scheduling updates and on adding anything new. Then we'll convert our development instance and train all of our developers on how to use it and what the differences are. We'll let them test. Then we'll upgrade our stage environment and let them test on that. As soon as all of that looks good, we'll do an upgrade of our production system.

We will be working with HelpSystems on the upgrade when we get a little bit closer to it. At this point we're still trying to figure out exactly when we're going to be able to do it. But we have asked them multiple questions and gotten a lot of good feedback from them.

In terms of saving time when troubleshooting stalled jobs, JAMS could do that. But we don't have all of our code set to send the output from a job back to JAMS. So in a lot of instances, we're still having to dig into the system, like Informatica, to get that log back and find out what's wrong. That is something that we, as a company, need to improve. It's not a lack of functionality on the part of JAMS.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Marcos L. Domingos
Support Analyst, Lead at Sonda IT
Real User
Top 5Leaderboard
Good integration, responsive support, and a short learning curve
Pros and Cons
  • "The monitoring and troubleshooting features are rich and with the dashboards and other features, automation work is made easier."
  • "In most of the packages available, it took time to study and gain knowledge of the features and resources due to poor documentation."

What is our primary use case?

We have several corporate solutions that need to be integrated with our ITSM products.

Automic facilitates integration to ensure the correct execution of ITIL process workflows. The first was to create service offerings for provisioning virtual machines in both private and public cloud environments.

The creation of virtual machines had to take into account the current change, release, and configuration processes. Our service catalog needed to make service offerings available; these involved diverse operating systems, different hardware configurations, and some functioning web services.

An approval process was also included, in which the main steps were the registration and closing of change tickets, as well as the registration and deactivation of the generated configuration items.
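In outline, that workflow has roughly the following shape; every function in this sketch is a placeholder standing in for the corresponding Automic/ITSM integration step:

```python
#!/usr/bin/env python3
"""Outline of the provisioning workflow: approval gate, change ticket,
VM provisioning, CI registration, ticket closure. All functions are
stand-ins for the real integrations."""

def request_approval(offering: str) -> bool:
    return True  # placeholder for the catalog's approval process

def open_change_ticket(summary: str) -> str:
    print(f"change opened: {summary}")
    return "CHG0001"  # placeholder ticket number

def provision_vm(os_image: str, size: str) -> str:
    print(f"provisioning {size} VM from {os_image}")
    return "vm-42"  # placeholder VM identifier

def register_configuration_item(vm_id: str) -> None:
    print(f"CI registered for {vm_id}")

def close_change_ticket(ticket_id: str) -> None:
    print(f"change closed: {ticket_id}")

def fulfill_offering(offering: str, os_image: str, size: str) -> None:
    if not request_approval(offering):
        return  # the approval step gates everything below
    ticket = open_change_ticket(f"Provision VM for {offering}")
    vm_id = provision_vm(os_image, size)
    register_configuration_item(vm_id)
    close_change_ticket(ticket)

if __name__ == "__main__":
    fulfill_offering("Linux web server", "rhel8", "medium")
```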

How has it helped my organization?

We have several solutions, but we opted for Automic, following the natural evolution of CA Process Automation. The fact that it has native integration with ITSM solutions was also an important factor in the decision. Given the various possible integrations, all of them need to be in accordance with the ITIL process. This would facilitate the steps that would follow in implementing a DevOps culture in the organization.

The service support is acceptable and response times are fast, whether from the manufacturer itself or from partners.

What is most valuable?

Because we have several solutions to be integrated into the ITSM processes, the ability to have multiple client areas is important: it helps in organizing the integrations and automations.

We have other products from Broadcom (CA Technologies) in operation, and integration with these products was very smooth. It integrates easily with CA Service Desk Manager, CA Service Catalog, CA Process Automation, and more.

Add to that the various packages and plugins available in the Marketplace, and the learning curve to get automation working was very short.

What needs improvement?

We found that some Action Packs and plugins have no documentation, or documentation that is incomplete or of poor quality. For most of the available packages, it took time to study and gain knowledge of the features and resources due to poor documentation. This time could be reduced if the documentation were more complete.

If the documentation is not well built, there will always be extra time spent on testing, and some of these gaps generate doubts that turn a simple job into something complex. With project schedules already in progress, that makes it difficult to set deadlines, even when they are padded.

For how long have I used the solution?

I have been using Automic Workload Automation for two years.

What do I think about the stability of the solution?

The product demonstrates excellent stability, regardless of whether the installation has high availability or not. We have not seen any problems in this regard.

What do I think about the scalability of the solution?

It is a product that scales easily, although the high-availability version adds a small amount of extra complexity.

How are customer service and technical support?

The most experienced support team is international. Some professionals from the time of implementation are no longer available. The service responds quickly to questions and, so far, we have had no difficulties in this regard.

Which solution did I use previously and why did I switch?

We have also used CA Process Automation, MS System Center Orchestrator, and Control-M.

How was the initial setup?

Complexity only arises when installing for high availability. Support from the manufacturer was required.

What about the implementation team?

It was through a Broadcom partner. The team was very experienced in the product, including with use cases in a company in the same area in which our organization operates.

What was our ROI?

We haven't had enough time to calculate ROI yet, but we are optimistic.

What's my experience with pricing, setup cost, and licensing?

There are different licensing fees for cases where high availability is important. There is also a certain complexity in this type of installation.

Which other solutions did I evaluate?

We only evaluated Broadcom (CA Technologies) products.

What other advice do I have?

Overall, the user experience is extremely good. The monitoring and troubleshooting features are rich and with the dashboards and other features, automation work is made easier.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Salvador Reyna
IT Manager at a manufacturing company with 10,001+ employees
Real User
Top 20
Workload automation that helps track productivity and progress across platforms
Pros and Cons
  • "The technical support is great, the product is easy-to-use, and it is stable."
  • "It is missing some features and can improve in areas where the competition is somewhat better like linking job dependencies."

What is our primary use case?

The tool is managed from offshore by another company. The primary use for it in our company is providing support to our main client, a large beverage company that needs the product to manage operations. I act as the support interface between the client and the offshore team, a role I have had since I took this position as a global manager for workload scaling. I found out about this product and how we could put it to use. We have been running it ourselves and with our client for a long time. It is also a part of our company's application or solution set.

The use cases for my workload purposes have to do with my applications. It works fine for scaling jobs and can interface with other systems. So it does what I need it to do.  

What is most valuable?

The interface for the applications team is really the most valuable part of the product in my opinion.  

What needs improvement?

The interface for the operator is not so good. I do not think it is as complete as something like Control-M by BMC Software (named for former Shell executives Scott Boulette, John J. Moores, and Dan Cloer). A few other things could be better, like the scheduler and the linking between jobs and dependencies.

For how long have I used the solution?

I have been working with it for the last five years, between the client, the provider, and my company.

What do I think about the stability of the solution?

This product is quite stable. There are no issues within the application or with the tool itself becoming unstable.  

What do I think about the scalability of the solution?

The scalability is actually quite fine. I think that right now we have around 100 to 150 users that have jobs running on it.  

The offshore team is made up of about five guys that mainly take care of the maintenance tasks. At this moment, we do not actually have any plans to scale our usage. Maybe in the coming two years, we might have to. We are planning to upgrade or migrate to another tool depending on what is best for our situation at that time.  

How are customer service and technical support?

I think the technical support is great. They have been helpful when we needed something resolved.  

Which solution did I use previously and why did I switch?

Before working with IBM Tivoli Workload Automation, I worked with Control-M from BMC.

The main differences and advantages of Control-M are mostly to do with the operator interface. The console that the operator is using is quite a bit better in Control-M rather than Tivoli, and so is the way to schedule and make the relationships between jobs.  

How was the initial setup?

I think the product is generally easy-to-use and that includes my experience with the setup.  

What other advice do I have?

The advice I would give to others considering Workload Automation depends on the necessity and the reality of their requirements, and on the complexity of the jobs. Depending on that, it may be worthwhile to use Tivoli because it is a good tool. It is a good application for running workload tracking.

On a scale from one to ten, where one is the worst and ten is the best, I would rate this product between an eight and a ten, depending on who is using it and for what reason. I think it is quite good, so I think it deserves a nine.

Additional features I would like to see included in the next release to improve and make it a ten would just be the two things I mentioned that Control-M does somewhat better for now. The interface for the operator should be improved, and the way to create relationships and dependency between the jobs can be better.  

Disclosure: I am a real user, and this review is based on my own experience and opinions.