Tidal Administrator at a retailer with 5,001-10,000 employees
Real User
Gives us the ability to see everything across our scheduling universe, without having to access multiple systems
Pros and Cons
  • "The feature that I find to be valuable, as I'm working with other folks, is the ability to cross-schedule across platforms, and the flexibility that comes with that."
  • "From a management standpoint, when using the solution for cross-platform, cross-application workloads, I've never had a problem with the application. It's very interactive, especially with the different security levels that they offer."
  • "For the most part, the drill-down and the logging are really good. But if we take an Informatica job, for example: We have the ability, and the operators have the ability, to actually drill down and see, at a session level, where the failure is. There is, unfortunately, no way to extract that into an actual output email or failure email. It's not that that information is not available, but extracting it into an email would be a nice-to-have."

What is our primary use case?

We're running jobs on a global scale. Being a global company, we're running scheduled jobs and ad hoc jobs across different regions. Jobs cover backend processing, financials, and the like. We're running on an SAP ERP system and we're also running Informatica for data warehouse. We're running BusinessObjects web reports as well as a lot of straight Windows and Unix command-line things. We run FTP processing, PGP encryption processing, and data services jobs. We're running about seven or eight of the different adapter types that Tidal has available.

We have it on-prem. Both our test and production environments are on fault-tolerant setups.

How has it helped my organization?

When I started here, they had already been on Tidal for about five years. So I'm not really sure where they were before Tidal. They did a lot of mainframe things in the past. From what I've heard from people here from the "old school," once they globalized and got everything into Tidal, the ability to see everything across the scheduling universe was a huge improvement. They didn't have to give different people different access to different systems and check four or five things, just to make sure something was running correctly.

The solution helped to reduce weekend and overtime hours. We run a 24/7 support model. Regarding the Tidal application, the one thing that we try to explain to anybody, from a support or monitoring standpoint, is that jobs are triggered through Tidal, but they don't physically run in Tidal. So if we have, hypothetically, an SAP job failure, it's not a Tidal failure, it's an SAP failure. It goes right to SAP support, which saves time. In the environment I came from, they didn't have that mentality. If, hypothetically, an ERP job failed, they'd call the Tidal person first instead of ERP support. That type of understanding, as a whole, really helps from a support standpoint. The admins don't get a lot of calls unless there's an actual issue with the Tidal application itself.

In the time I've been here, we've definitely increased staff availability. From a business standpoint, we've started utilizing file monitors more, for what they call "file events" within the application. In the past, when an end-user would drop a file in SAP, for example, they'd contact our operations team, or send an email saying, "Run this job." In many cases there isn't a real need for that. We've implemented a lot of file events that will only run jobs if they need to run, when a file is available. Along the same lines, we had processes in SAP that didn't create a file, and there were other jobs downstream that would be hanging out and waiting for a file that never showed up. So not just from a staff availability point of view, but in terms of resource availability, it has definitely improved things a lot. From an operator standpoint, I would estimate Tidal is saving us 15 to 20 hours per week, just in manual interaction with inserting jobs on request, since a lot of that was implemented at our end.
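
To make the file-event idea concrete, here is a minimal sketch in Python of the pattern described above: only launch a job when the expected file actually shows up, and skip it otherwise so nothing downstream hangs. The path, job name, and helper functions are hypothetical placeholders, not Tidal's actual configuration.

```python
import time
from pathlib import Path

def run_job(name):
    """Stand-in for whatever actually launches the downstream job (hypothetical)."""
    print(f"Launching {name}")

def wait_for_file(path, timeout_minutes=60, poll_seconds=30):
    """Poll for an inbound file; return True if it appears, False on timeout."""
    deadline = time.time() + timeout_minutes * 60
    target = Path(path)
    while time.time() < deadline:
        if target.exists():
            return True
        time.sleep(poll_seconds)
    return False

# Only launch the SAP load if the end user's file actually arrived; otherwise
# skip it so nothing downstream sits waiting for output that will never show up.
if wait_for_file("inbound/customer_upload.csv", timeout_minutes=120):
    run_job("SAP_CUSTOMER_LOAD")
else:
    print("No file arrived - skipping SAP_CUSTOMER_LOAD and its downstream jobs")
```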

Regarding job counts, we're pushing over seven million a year. That varies, obviously, depending on request jobs and other things. There are some processes that we shut down for year-end processing, so they stop running for a week or two. But from an expansion standpoint, we are constantly looking to see where else we can use Tidal, for new applications that are coming online or for things that people are running on their own where they haven't even thought about Tidal's scheduling. In 2019, we did 7.7 million jobs. In 2018, we were at 7.1 million. In 2017, we were at 6.1 million. So with Tidal we're adding on the order of half a million to a million jobs per year.

What is most valuable?

The feature that I find to be valuable, as I'm working with other folks, is the ability to cross-schedule across platforms, and the flexibility that comes with that. I'm kind of biased, as I've only used Tidal. I haven't used CA or IBM or any of the other scheduling platforms that are available on the market.

From a management standpoint, when using the solution for cross-platform, cross-application workloads, I've never had a problem with the application. It's very interactive, especially with the different security levels that they offer. We have two or three operators who are at a certain level where they can actually rerun jobs. If they fail, they don't actually have to get ahold of a Tidal administrator. The only thing they don't have access to is changing the master settings on the jobs. That flexibility of access is a big plus.

We do have a few developers who will actually set up processes within Tidal, but only in the test systems. They get a little bit more access that way, but they obviously have to have training prior to that, from me, on how to properly schedule things in Tidal. So the security and flexibility are valuable features.

They have a lot of pre-set stuff, but you can actually create something like: "Run the third Wednesday of every third month on a blue moon," going to the extreme. Their scheduling functionality is advanced enough that we can create a lot of different kinds of customizations, based not only on a regular calendar year, but on fiscal calendars and regional calendars. We have jobs that process files for our EU operation, and when they have a bank holiday over there we don't need to run the job. We can tie those jobs to the local European bank-holiday calendars so they don't run on those days.
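
The kind of regional-calendar logic described above can be pictured with a small sketch. This is only an illustration, with a made-up holiday list and job check, not how Tidal's calendars are actually defined.

```python
from datetime import date

# Hypothetical regional holiday list - in practice this would come from whatever
# fiscal/regional calendars the scheduler maintains.
EU_BANK_HOLIDAYS_2024 = {
    date(2024, 1, 1),    # New Year's Day
    date(2024, 3, 29),   # Good Friday
    date(2024, 4, 1),    # Easter Monday
    date(2024, 12, 25),  # Christmas Day
    date(2024, 12, 26),  # Boxing Day
}

def should_run_eu_file_job(run_date: date) -> bool:
    """Skip the EU file-processing job on weekends and local bank holidays."""
    if run_date.weekday() >= 5:           # Saturday or Sunday
        return False
    return run_date not in EU_BANK_HOLIDAYS_2024

print(should_run_eu_file_job(date(2024, 4, 1)))   # False - Easter Monday
print(should_run_eu_file_job(date(2024, 4, 2)))   # True  - normal business day
```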

The solution also enables admins and users to see the information that is relevant to them. The admins have super-user access, so they can actually adjust and transport different jobs from test to prod. The operators, meanwhile, can adjust a job that's already scheduled if they need to, based on direction from support. They can change this variable, or change this setting, or change this text. But they don't have the access to actually change the master copy of that job. So a one-off change is literally just that: a one-off change to the next compiled schedule. Otherwise, the job is going to run as it's normally set up.

Another good thing that Tidal has is in regard to the history retention of job failures. Whereas our SAP ERP system usually has an eight-day history retention for jobs, Tidal can actually go back longer than that. So if somebody says, "Hey, why did this job fail three weeks ago?" we can bring up the failure message, which is something they can't do directly in SAP.

What needs improvement?

For the most part, the drill-down and the logging are really good. But if we take an Informatica job, for example: We have the ability, and the operators have the ability, to actually drill down and see, at a session level, where the failure is. There is, unfortunately, no way to extract that into an actual output email or failure email. It's not that that information is not available, but extracting it into an email would be a nice-to-have. It's minor, but it would definitely be a help. In the grand scheme of things though, you can drill down to session-level failures and get that error message to provide to support. 

Another thing has to do with job events. A job event triggers when a job completes. It sends an email or reruns a job. Right now — and I've even talked to Tidal about this — it will run all the events at the same time. It doesn't provide the logic to say, "I want this job to rerun five times. If it fails on the fifth time, then send an email: 'Out for Failure.'"
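
A sketch of the retry-then-alert behavior being asked for here might look like the following. The job launcher and email helper are stand-ins, not an existing Tidal job event; the point is only the logic of rerunning several times and emailing after the final failure.

```python
import random
import time

def run_job(name):
    """Stand-in for the real job launcher (hypothetical); returns True on success."""
    return random.random() > 0.7   # simulate a job that fails more often than not

def send_email(subject, body):
    """Stand-in for the real email action (hypothetical)."""
    print(f"EMAIL: {subject}\n{body}")

def rerun_with_final_alert(job_name, max_attempts=5, wait_seconds=300):
    """Rerun a failed job up to max_attempts times; only email after the final failure."""
    for attempt in range(1, max_attempts + 1):
        if run_job(job_name):
            print(f"{job_name} succeeded on attempt {attempt}")
            return True
        if attempt < max_attempts:
            time.sleep(wait_seconds)   # give the target system a chance to recover
    send_email(f"Out for Failure: {job_name}",
               f"{job_name} failed {max_attempts} consecutive times.")
    return False

rerun_with_final_alert("INFORMATICA_DAILY_LOAD", max_attempts=5, wait_seconds=1)
```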

The only other thing I would like to see is an easy way to flag jobs running longer than a certain percentage of the estimated time they should take. Right now, you can hard code in a max expected run-time and you can trigger a notification off of that. The unfortunate thing is, in a consumer product-related business such as ours, Q3 and Q4 jobs are going to run longer. So you can't really put a hard-coded expected run-time, because that's going to fluctuate. So it would be useful if we could specify something like "Flag this job if it runs 25 percent longer than estimated," which the solution does track for 30 or 35 days. That's what they usually recommend, out-of-the-box, for keeping track of history.
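
The "25 percent longer than estimated" idea can be expressed as a simple check against a rolling window of run-time history. This is an illustrative sketch with made-up numbers, not a feature Tidal currently exposes.

```python
from statistics import mean

def flag_long_runner(history_minutes, current_minutes, threshold_pct=25, window=30):
    """Flag a job if its current run exceeds its recent average by threshold_pct."""
    recent = history_minutes[-window:]   # e.g. the last ~30 runs kept in history
    if not recent:
        return False
    estimate = mean(recent)
    return current_minutes > estimate * (1 + threshold_pct / 100)

# Made-up history: a job that normally takes about 40 minutes.
history = [38, 41, 39, 42, 40, 37, 43, 39, 41, 40]
print(flag_long_runner(history, current_minutes=44))  # False - within 25% of ~40
print(flag_long_runner(history, current_minutes=55))  # True  - seasonal spike worth a look
```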


For how long have I used the solution?

I have been using Tidal for about 13 years. I used it for about eight years at my previous company and then I came over to this company.

What do I think about the stability of the solution?

I came on about four-and-a-half years ago here and Tidal has been really solid. The high availability and the fault monitoring they use are very good. I can think of only two times, in the last four-and-a-half years, where we've actually had to fail over for one reason or another. And the bottom line was that it wasn't even a Tidal issue; it was something to do with patching. One of the patches from Microsoft was a little funky. From a stability and support standpoint, this is a rock-solid app, in my opinion.

It's very stable, especially for those who utilize what they call Fault Monitor or Fault Tolerance. When we do patching, the jobs, in and of themselves, automatically fail over from our primary to our backup. There might be a slight disconnect in the web UI that the operators use, but that maybe lasts a minute because of the cut-over time. But it picks up all of the backend PIDs, and the jobs just pick up where they left off. From a stability standpoint, this is a really good product.

What do I think about the scalability of the solution?

From what I've seen, the scalability is very good. There are companies that I know that run millions of jobs a day. I've been through some user groups that have some people running nine different instances of Tidal, and they're running a lot of different things. So, the 7.7 million a year we run here, coming from where I was beforehand where we were running about 400,000 a year, seems like a lot. But we're still a small fish in the barrel compared to how other Tidal customers are using it.

So the scalability is phenomenal. We're always looking for that next hook and working on trying to tie into other things. We're keeping our versions updated as much as we can, in regard to OS compatibility. Take Informatica, as an example: We're making sure that we're as up-to-date as we can be with the versions that are out on the market.

Which solution did I use previously and why did I switch?

In my previous company we used the Lawson ERP's internal job scheduler. There were Windows tasks that we had to check on. They were running a lot of VB6 stuff. In my current company, I came onboard years after they had already cut over to Tidal. I know they had some mainframe stuff in the past, but I don't think they converted from something like CA to Tidal. Tidal was their first choice.

How was the initial setup?

I came in at the tail end of the initial setup when I first started with Tidal back in '07. The decision on the application had been made before I took the scheduler/Tidal admin position. In terms of the actual setup, I was on the periphery. Once it was set up, I got more involved. But I have been involved since then with the system upgrades and version upgrades.

Upgrades seem to be fairly straightforward. When it comes to hotfixes and partial, mid-version updates, it's pretty simple. You don't have to call the vendor in. When it comes to versioning upgrades, like when we'll go from 6.3 to 6.5 in a couple of years, we do utilize a third-party vendor to come in and assist, because they do a lot of backend database cleanup and scrubbing. We're running in a SQL database for Tidal, and I know just enough SQL to get me in trouble. So we do rely, especially because this is such an enterprise-based application here, on having a third-party come in and take over the upgrade part of it. We work in conjunction with them, making sure jobs are set and that the copies are good.

As for the learning curve, a lot of it depends on the individual's knowledge of the particular systems. Windows is fairly straightforward. If you know some Unix commands, you can help set them up really easily within the application, when you're setting up a job to run from the Unix command line. If you don't know SAP or whatever the ERP system of the company is, at least a little bit — enough so that you can navigate through it — there might be a little bit of a learning curve. But it's really not as big as one might think. Take the SAP ERP as an example. I came from a Lawson background. I came into the SAP environment here, which I was totally unfamiliar with. But within about a month, I was able to set up SAP jobs without an issue.

There are some little things involved in understanding how to set up jobs if you want to override certain variant settings. Learning to do that, and making people feel comfortable doing that, was probably the biggest learning curve.

The other thing is understanding using API hooks within Tidal to other processes. That's one thing they could improve on as far as their training materials go. I've talked about that with them during the past couple of user calls that I've been involved in. At this point it's still a little rough, but hopefully that will get better as time goes on.

The amount of training a new user needs in Tidal depends on the level they're at. We have a training program in place for our operators who do a lot of the manual reporting and failures, running jobs on request, etc. We'll start them with inquiry-only access so they can see everything that's happening, but they can't act on it. That way they can get a feel for the application. We'll give them that for about a week or so, and they'll work hand-in-hand with an operator who's been onsite and using the application. Then we can roll them out to a test version with test-operator access, for another week or so. By that time, they're through four weeks of Tidal acclimation and they're good to go with everything. Because of the operators' schedule (they work a four-on, three-off rotation, so it's not like they're working five eight-hour days of straight Tidal), plus all the other things that are on their plate for their job requirements, they're not going to see every single potential issue that could come up. But they have a pretty good grasp at the end of that time.

We'll usually get a feel from not only the trainee, but also the person who is working with them, about how they are doing and if they feel that they're ready to start doing stuff in production. Generally, within a month, they're up and running as an operator, in both test and prod environments.

Developers are a different story because of all the different things that they have access to regarding scheduling and building schedules. We haven't brought on a lot of developers since I've been here. It would probably take a good two to three weeks for developer training, if someone wanted to know how to set up a job in Tidal. We'd really try to hand-feed them little things, so they don't inadvertently schedule a job, or an entire job group that runs hundreds of jobs, which could really bog things down from a systems standpoint.

What about the implementation team?

The partner we use is a Tidal partner called BLUEHOUSE. They've always been very helpful and very flexible in terms of scheduling. The way we do it here is we'll have them come onsite to update our test system. We'll bring that up online and run that on the new version for two months or so. Then they'll come back and we'll do the production update. The whole time onsite, between test and prod together, is about four or five days. But they do a lot of the prep work for production, while we're doing the test upgrade. When we're ready to go to the production, they're only here for a day or a day-and-a-half at the most for the production cut-over. When it comes to initial support right after the fact, they're very receptive to fielding the questions.

What was our ROI?

I would say we have seen a return on investment by going with Tidal, and not only because of the volume of jobs we're running, but because of the variation of jobs that we're running. It gives us the ability to manually adjust processes on-the-fly, and having that visibility and quick reaction to failures has been a big plus for us.

Which other solutions did I evaluate?

At my previous company they looked at IBM, CA, and one other solution. The reason my old company went with Tidal back then, was that it was the only one that offered integration with Lawson.

What other advice do I have?

As with any product you're looking at, first of all, don't get pigeonholed into it. Don't have a laser-focus on an individual product. But with Tidal, especially now that they're rebuilding the customer base, reach out and work with their salespeople, and network with current users. One thing I found, especially being on some of the network boards — they used to have a Yahoo Group for Tidal — people aren't afraid to say, "Hey, this works great and this doesn't." I'll be the first to tell you what works great and what still needs some work. And now that Tidal has put its own forum together, the company is monitoring and responding to concerns and questions a lot quicker than they used to when they were under Cisco's umbrella.

The biggest lesson I've learned from using Tidal is that it's always growing. In user calls that we've had since Tidal went back to its own environment, they're really looking to rebuild and invest in the application, and make sure that things are up to date and validated. They're working on making sure they're as current as they can be with certain connections. 

It's like they have a renewed vision since Tidal was divested from Cisco. They seem to have a real yearning to get back into the way things used to be in the pre-Cisco days. I'm not trying to knock Cisco, but it is what it is, because I worked with Tidal before Cisco acquired the product. Now with the STA Group and a lot of the older Tidal developers and folks "back in the saddle," there seems to be a renewed interest in rebuilding, making it a lot easier, and opening up a lot more process availability for users and customers.

We've got a handful of developers, five or six people, who actually have the ability to create jobs in our test system. We have a team of six operators who have access to Tidal as well. They do the 24-hour monitoring and ad hoc jobs, etc. And we have two Tidal admins. We do have some other folks who have inquiry access into our production system. We'll give people who might be developers in our test system view-only access to prod. Overall we have 15 to 20 people who have access to the system, with varying security levels. I'm responsible for maintenance, upgrades, job migration, and production. I also work with people who don't have access to Tidal, helping them get jobs set up properly, and I make sure we get the email notifications correct.

For what we're using it for, and what we have, it's very good.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Application Engineer at Columbia Sportswear
Real User
Top 20
Scheduling across multiple applications gives a holistic view
Pros and Cons
  • "Thinking of all the people involved in checking jobs on a daily basis, manually running jobs or auditing them through standalone tools, and trying to connect them. We have saved hundreds of hours weekly, which is substantial."
  • "I'm still hoping with Explorer to be able to see end-to-end job streams. That's not really something that's easy to see today in the web client. However, I haven't worked with Explorer yet. One of the things that we have found frustrating is not being able to see an end-to-end job stream across multiple applications within Tidal. We use jobs for that right now, but I have high hopes that we'll be able to see that in Explorer."

What is our primary use case?

We use Tidal to run jobs across multiple application platforms, such as SAP, ECC, PDN, and Informatica, as well as jobs that run in Azure cloud. We also use it for several warehouse management jobs with OS/400 and AS/400 connectors. We have a lot of different types of connectors, and we are bringing all these jobs into Tidal so we can set up dependencies between jobs; e.g., an SAP job and an OS/400 job may be dependent on each other in some way, allowing a cross-platform job flow.
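
In miniature, that kind of cross-platform dependency is just "run B only after A completes." Here is a hedged sketch, with hypothetical stand-in functions rather than real adapter calls:

```python
def sap_extract_finished():
    """Stand-in for a check against the upstream SAP job (hypothetical)."""
    return True

def run_os400_replenishment():
    """Stand-in for the dependent OS/400 warehouse job (hypothetical)."""
    print("Running OS/400 replenishment job")

# The warehouse job only runs once the SAP job it depends on has completed.
if sap_extract_finished():
    run_os400_replenishment()
else:
    print("Holding OS/400 job - upstream SAP extract not complete yet")
```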

We are currently on the most recent version.

How has it helped my organization?

We are using it for cross-platform workloads. That is probably the biggest reason that we are using it. The solution is generally good. Over the years, we have needed to do our own learning about how to manage it in terms of understanding dependencies and successors, then setting up times and so forth. However, this is the type of stuff you would have to learn with any scheduling app. We find it to be really useful. I'm hoping with the Explorer tool that they'll have better reporting so we can do some full cross-platform job stream reporting that they haven't really done much in the past. Therefore, we should be able to see some of that. In terms of managing it, I find it very useful other than the learning curve.

We use cross-platform management for so many things. We use it a lot for our warehouse management replenishment type things, to and from SAP. Once we implemented our job stream flow, things get sorted out of house for delivery and can be updated in SAP (and vice versa). Having the job stream has been helpful. Also, having it all automated makes a difference to replenishment.

We use the ability to let admins and users see the information relevant to them, specifically in our production environment. We can, but don't always, limit someone to only seeing the data that they need to see, so they are not overwhelmed by other data. We do allow most of our users to see all the other data, just for information and to understand the environment. However, you can begin to narrow in on what you need if you're using policies and work groups correctly. Depending on how we use it, especially in production, it lets users do only what they should be doing in production. They should only be managing their jobs, possibly see other jobs, and understand if there is a delay upstream which could be impacting them. They won't be able to manage those jobs; they need to contact the right people who understand those jobs to manage them. The solution lets them work within their lanes and do the work correctly without having a negative impact upstream or, hopefully, downstream.

There is an awareness that we are scheduling across the multiple applications and understanding that all applications don't live in their own silos. There is an impact across the organization. It gives us that holistic awareness, in general.

In the past couple of years, I have done education and we have leveraged creating alerts that go to the right people. It has allowed us to do that. Therefore, I don't get alerts for something that I shouldn't be dealing with. Now, people who own the jobs get the alerts and they can figure out if there is a problem with the application that they need to work with or if it is something with Tidal. Then, if necessary, they can elevate it up to me. Fortunately, that doesn't happen as much anymore, which makes me very happy. It gives us the alerts in time so we can handle things ideally before they become critical, and hopefully, we're doing our jobs so the right people are contacted.

What is most valuable?

I love the "where used by" feature where you can find out where a particular job action, job event, or even a connector is being used. That is really good. 

I've seen a lot of improvements in the logging. It has become more useful. 

I'm looking forward to working with Explorer and Repository. I haven't had time to implement those yet, but I'm pretty excited about both of those tools. 

We get a lot of use out of variables within Tidal to help schedule jobs, help track things, create alerting, etc. I find those variables have a lot of use.

What needs improvement?

The solution's drill-down functionality, which lets admins investigate data or processes, varies depending on what we are looking at. In some places, it is better than others, and it is getting a lot better overall. In the five years that I've been supporting this solution, I've seen them get much better at allowing us to get more detailed information in the logs and job activity.

I'm still hoping with Explorer to be able to see end-to-end job streams. That's not really something that's easy to see today in the web client. However, I haven't worked with Explorer yet. One of the things that we have found frustrating is not being able to see an end-to-end job stream across multiple applications within Tidal. We use jobs for that right now, but I have high hopes that we'll be able to see that in Explorer.

The reporting piece needs improvement. They are working to improve it but this is the piece that they can continue to work on. By reporting, I mean things like end-to-end job streams, historical reporting over the long-term, and forecasting. Those are some areas that I've expressed to them where they need to up their game.

We have the transport functionality where you move ops from one system to another. Right now, it's a manual process. I would love to have more automated transports. Then, I'd love to be able to tie this into our ITSM system so we can have change approvals; once a change is approved, the transports would automatically happen.
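
What tying transports to change approvals could look like, in very rough terms, is sketched below. The ITSM lookup and transport call are hypothetical placeholders; this is a wish-list illustration, not an existing integration.

```python
def change_is_approved(change_id):
    """Stand-in for a status lookup in the ITSM system (hypothetical)."""
    return change_id.endswith("-APPROVED")   # pretend approval is encoded in the ID

def transport_job_definitions(change_id, source_env, target_env):
    """Stand-in for the transport step that is manual today (hypothetical)."""
    print(f"Transporting definitions for {change_id}: {source_env} -> {target_env}")

def transport_if_approved(change_id):
    """Only promote job definitions from QA to production once the change is approved."""
    if not change_is_approved(change_id):
        print(f"{change_id} is not approved yet - leaving the QA definitions in place")
        return False
    transport_job_definitions(change_id, source_env="QA", target_env="PROD")
    return True

transport_if_approved("CHG0012345-APPROVED")
transport_if_approved("CHG0012346")
```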

For how long have I used the solution?

It feels like forever. We have had it at Columbia Sportswear for seven years. I have been supporting it for five years.

What do I think about the stability of the solution?

The stability has gotten a lot better. Every time that they level a version up, there are a few months where it is a little rocky, especially because they are trying to make some real changes on the back-end. Sometimes, I'm guilty of being a bit too cutting edge with the patches that I put in place. I have learned to hold back a little and give it a couple of months. Usually by that time, they have worked out the bugs and things are pretty stable. I would say this about any system.

I'm the only one who supports Tidal, then I pull in a dev person. There is usually one person involved with setting up the VMs. However, they have that automated so it is just a request for a standard set of servers. They just push a button and the servers are built. When we get to where there is QA testing, we're usually trying to align that with a lot of other QA testing. Therefore, people are naturally testing the system as they would with any other work that they are doing. Essentially, this is all of our schedulers, which are 15 to 20 consistently. I'm not asking them to do anything that they are not already doing, except tell me if there are problems.

I have a very loose backup person but I'm very motivated not to get calls on the weekends or vacation, which is why we built in our alerting systems. We try to keep them strong, so before anything gets to me, it's been vetted by the people who can solve the problem if it is job-specific. If Tidal itself goes down though, I'm the one who gets called because I'm the one who can fix it.

What do I think about the scalability of the solution?

Tidal does a good job. We periodically have them do a performance review every six to nine months by sending them our logs. I open a ticket, then send them a bunch of logs. They take a look at them and we do any necessary tuning. We have discovered over the years, going from a small to medium to high-medium organization, that Tidal is very responsive in terms of helping us figure out how to tune systems so we have the best performance. It can handle very large scale organizations job-wise. It is just how you tune your servers, and they're very willing to help with that. The best thing that a person can do is work with Tidal support to find out exactly what is necessary on the back-end to have their system scaled out correctly. It can be done. We run about 8,000 jobs in production, but I know there are some systems which run tens of thousands of jobs of production. We haven't hit a scalability issue at all.

Regularly, 20 to 30 people use it in our organization on a week by week basis. We have about 100 users in the system. Their roles are developing, creating jobs, QA, testing job scenarios, events, and actions; everything around developing a job or job stream. Then, we have our service desk people who do the transports from QA into production. There are about four people who do this.

In production, people from each scheduling team are responsible for the health of their jobs, which can include if there are issues with the jobs running, maintenance that they have planned, setting those jobs on hold, asking me to put an outage on an adapter, rerunning jobs, or disabling/enabling jobs. It is general job development and job management.

How are customer service and technical support?

The standard tech support at Tidal is very good. You can call or open a ticket if you get stuck on something. They are usually quick with an answer, or at least quick to respond to you with more information. When I have gotten stuck, I have always been able to get help and get out of it. I once spent eight hours on a weekend call with one poor guy.

The reality is you will always have issues that you have to escalate. That is just the world that we live in. 90 percent of the time, I have had a very good experience and gotten what I needed. I have been able to get support people on the phone. If we find something and they haven't seen it, they are good at pulling in development. They are good at saying, "Okay, this is new. We will put it into development." Now, with their new website where you can see your tickets and track things, they make it a lot easier. If you have a bug that is in development, you can track where it is and when it will probably be released. Now, there's a lot of transparency that makes it comforting to know your stuff is being worked on. These are improvements that they made as they moved away from Cisco.

When it was supported by Cisco, it was okay but it wasn't as good. Since Tidal broke away from Cisco two years ago, that was when we saw the most improvements in terms of things that we had been asking for and the delivery on them. 

Which solution did I use previously and why did I switch?

I think we had a variety of solutions that were sort of stitched together.

How was the initial setup?

Its setup is around mid-level complexity. You need to do a little reading to understand how Tidal works. You need to understand things like connectors and the whole fault tolerant environment, but the data is all there to get to.

Whenever we are moving to a new operating system, I work with my infrastructure team to get new VMs built up in the right OS. I start to set them up with all the things that I need in order to build Tidal. At this point, I usually get a demo license from Tidal as I'm doing the build. This way, I can build and test but not take up a license. Then, when I'm ready to go live, I always go live in development first to QA, then production. So, I have a cut-over from the old system to the new system, then we migrate our database over. I work with my DBAs to do that. Then, I do testing in development to make sure everything is right, doing the same thing in QA. I also do more rigorous testing with the schedulers, then eventually it goes into production. It is about six weeks from development to production.

The migration to the cloud has been an extensive project. It is going generally well. A lot of what was running in the Informatica environment has now been shifted over into the Azure environment over the last couple of years. That is where some of the migration has been occurring.

What about the implementation team?

The initial setup was done by somebody else who no longer works with the company. Since then, we have moved to new operating systems over the years. These are always new systems that we build up, then migrate from the old system to the new system. I've set this up several times, so systems that we are currently running are the ones that I've set up.

What was our ROI?

Thinking of all the people involved in checking jobs on a daily basis, manually running jobs or auditing them through standalone tools, and trying to connect them, we have saved hundreds of hours weekly, which is substantial.

I am able to create something predictable and manageable in such a way that we know we will get alerted if there's a problem and know how jobs are going to run. People can see and manage their jobs on a daily basis without having to talk to me about them. The return on investment is in the scope of jobs: the management of jobs is not something that is handled by one team. It can be parsed out to the schedulers who know and understand those jobs, so they can have some control over them, and then I don't have to worry about all the different job streams. I just have to look from above and be able to help make sure that the system itself works.

What's my experience with pricing, setup cost, and licensing?

Our yearly licensing costs are between $10,000 to $20,000. They have always been reasonable with us. I like that non-production licensing is about half the cost of production licensing. Licensing is by adapter typically. We have had scenarios where we have had to take an adapter from one environment to another, and they've allowed us to do that. They have made it a very reasonable process. There's definitely a feeling that they will work with you.

Budgeting is pretty predictable. They changed their model last time, which is why I'm not sure exactly how much it ended up costing. I know that our licensing guy did make a decision to license us in such a way that now we have a lot more flexibility based on adding VMs that can connect to Tidal and run jobs. So, it's not a problem to budget for it. 

Which other solutions did I evaluate?

We have on occasion looked at other options simply just to be aware of what is out there. We don't plan to change anything right now that I'm aware of simply because we don't have the time or budget. I'm not even sure we have the need. Every once in a while, we do look around because it's useful to go out, compare, and ensure that it's still something that fits our needs.

What other advice do I have?

Depending on how you will roll it out, engage the people who will be managing the jobs earlier in the process so they are aware and can help plan how Tidal is used across the environment. That is something that I wish the people who had rolled it out had done. I don't know if that was even a consideration back then. There were definitely things that I would love to change about how we do our scheduling, which are just so baked in at this point that it would be such a large change. Also, make sure that you engage and use Tidal's resources. They have some great resources and know what they are doing. Work with them, as they can help you figure out how to use this tool.

There are ways that it makes life more convenient in terms of ensuring the right people get alerted for issues. We are able to see job health, jobs over a couple of days, and have some predictability, but not as much as I would like to see in terms of forecasting. If we were to stop using it, we would go to something similar simply because it's so useful to have an overall scheduling application.

I have developed some training specifically for the learning curve. The basic job stuff is pretty quick, especially because we have a lot of people who can be leaned on. When you start drilling down into things like using variables or more ad hoc type settings, the learning curve is a little higher. However, we have a lot of people using those features or settings who help each other with learning them. While it's not incredibly steep, there is a learning curve. I do one- to two-hour sessions, which are either classroom-led or recorded. That is usually enough for most people to get started. Sometimes, people will come back with more questions, which I totally encourage. Then, if they start to get into some of the deeper things, like ad hoc variables, I have additional sessions that they can attend. These are usually about an hour long and get them going down the right path. I know that Tidal has developed some training, but I had put some things in place before they did, as I wanted to train everybody so they could do their job and not have to talk to me.

The biggest lesson that I have learned from using Tidal is to train people. Make sure that the people who manage jobs understand what they are doing and are educated to the best of your ability. That has been one of my key takeaways from this. Also, don't go to the latest patch when it first comes out.

There is a lot of power within Tidal, probably a lot that we're not even using today in terms of managing jobs as well as how we can set up alerting. Also, they have great support, so I can usually get what I need.

It's pretty extensively used right now. We might shift some of our job scheduling to more on demand, then still leverage Tidal for more of the batch scheduling. At least for now, we will be using it as we are continuing to have systems added in. I even have a ticket open because we have an adapter that we just added in that is not quite working right, potentially due to me not understanding the adapter. Therefore, we're continuing to add job streams, but it will always be dependent on what applications we are adding.

Two years ago, I would have given it a six (out of 10). Today, I will give it a nine (out of 10).

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Production Control Analyst at a healthcare company with 1,001-5,000 employees
Real User
Enables me to construct groupings with dependencies that automatically allow jobs to run in the proper sequence
Pros and Cons
  • "We had a number of different schedulers in this organization and we've been porting everything that was running out of these other, unrelated schedulers into this scheduler. That has afforded us the ability to set up direct dependencies between processes that couldn't talk to one another before. Over the 15 years, we've definitely gained a lot from that. What had been manual controls have become automated controls..."
  • "From an administrative point of view, I wouldn't give really high marks to the solution. I actually entertained getting the JAWS application at one point. One of the shortcomings with the scheduler is the reporting capabilities. At least at the time, JAWS was the best that they had for a third-party integration. I think they've got things in the pipeline to help alleviate that gap."

What is our primary use case?

I have three installs of Tidal: production, qual and dev. I have a portfolio of 12,000 unique job definitions in production, 13,500 definitions in qual, and about 8,000 in dev.

The Tidal adapters I use are for Windows and Linux agents, as well as Informatica, Cognos, and mSQL.

How has it helped my organization?

With the portfolio of jobs that we're talking about, it's continuing to grow. There is way more work being added to the system than there is work that is being retired from it. That's just the way the animal works. It's been able to handle, perfectly fine, the complexity of the interrelationships between the processes.

We actually ported off of Maestro. Maestro was the scheduler that we were using, enterprise-wide, and it was very inefficiently used when I got here. When we came up on Tidal, we didn't convert anything. We built all of the definitions that exist in Tidal. So over the 15 years, that portfolio has grown.

As a whole, we're trying to automate as many things as we can to alleviate the manual processes. One of the things that Tidal has helped us with, because it is cross-platform: We had a number of different schedulers in this organization and we've been porting everything that was running out of these other, unrelated schedulers into this scheduler. That has afforded us the ability to set up direct dependencies between processes that couldn't talk to one another before. Over the 15 years, we've definitely gained a lot from that. What had been manual controls have become automated controls, by using this tool to replace a number of schedulers.

What is most valuable?

The automation aspect of the solution is the most important. I'm able to construct groupings that have dependencies which automatically allow the proper jobs to run in the proper sequence. That's the strongest selling point of any scheduler.

As for the solution's ability to enable admins and users to see the information relevant to them, the security model that I use is fairly simple and straightforward. For developers and other folks, an inquiry-type access is more suitable for the production environment. I've added functionality for people in both the qual and the dev environments, based on their roles. But I haven't restricted anything, meaning that anyone who has an account can see everything. There is a lot of flexibility in the way that things can be configured with Tidal. You could restrict it down to the point of people only seeing those things that are applicable to them specifically. I found that that would be too restrictive, and result in a lot of overhead to manage. So I went with a much simpler model, but the flexibility is there.

There are certain things I can put in play, triggering events based on statuses. For instance, if I have a certain job type where a number of the jobs are going to "waiting on resource" in the middle of the night, I can configure alerts so that I can assess those and then determine if I have to raise the job limits on some of those resources, to make sure that we're not having things held up unnecessarily. By the same token, if we're having long-running processes, I may want to tailor that down so we don't have so many processes running concurrently. There's some flexibility in that. I haven't had to rely on it a lot, but there are some features there that can be tapped into.
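
As an illustration of that "waiting on resource" assessment, the sketch below counts how many jobs are queued behind each resource and flags the ones that might be worth a higher job limit. The status snapshot is made-up sample data, not output from Tidal.

```python
from collections import Counter

def jobs_waiting_on_resource(job_statuses, threshold=10):
    """Return resources with at least `threshold` jobs stacked up behind them."""
    waiting = Counter(
        job["resource"] for job in job_statuses if job["status"] == "Waiting on Resource"
    )
    return {resource: count for resource, count in waiting.items() if count >= threshold}

# Made-up overnight snapshot: 12 jobs queued behind the Informatica resource.
snapshot = (
    [{"status": "Waiting on Resource", "resource": "INFA_POOL"}] * 12
    + [{"status": "Running", "resource": "SAP_POOL"}] * 5
)
for resource, count in jobs_waiting_on_resource(snapshot).items():
    print(f"{count} jobs waiting on {resource} - consider raising its job limit")
```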

What needs improvement?

From an administrative point of view, I wouldn't give really high marks to the solution. I actually entertained getting the JAWS application at one point. One of the shortcomings with the scheduler is the reporting capabilities. At least at the time, JAWS was the best that they had for a third-party integration. I think they've got things in the pipeline to help alleviate that gap.

Also, one of the things I'm concerned about is that, with the security we have, there's a hazard that somebody could go in and accidentally delete a master grouping of definitions out of Tidal. Right now, I don't have an easy way to recover from that. It looks like a couple of things that are in the pipeline with Tidal are going to allow for that kind of recovery. There should eventually be a replacement for the Transporter tool. That sounds like it's going to have the capability of doing copies out of Tidal. If I scheduled that once a week, it would give me a copy of definitions out of Tidal. If it turned out that one of the operators, who had the rights, accidentally deleted a grouping of definitions, I would have something that listed definitions that I could go back to and recover.

For how long have I used the solution?

I've been using Tidal Workload Automation for about 15 years.

What do I think about the stability of the solution?

The stability has been fine.

In fact, we're going back to using the master and the fault monitor. We had it disabled for some time, but we've gone back to setting it up with the fault monitor and the master, and the backup. There was a problem with it. There was some kind of a fault status that kept getting triggered. The network person who was in charge convinced us to disable the redundancy that we had set up, and we've just recently gone back to it. And it's been working fine.

What do I think about the scalability of the solution?

We haven't hit any roadblocks with volume, but I think we've been sized properly too, behind the scenes, with each upgrade that we've done. It's been scaling fine. That's the bottom line.

There are systems out there that are larger than ours. We try to get to the user conference, here in Boston once a year, to do some comparisons to other organizations and the way they're using the tools. It's an information-sharing session.

Whenever we go for an upgrade, we look for an assessment of whether we need to provide more horsepower or not. If any of the configuration has to change, we watch that carefully with each upgrade. There's a formula that Tidal provides on whether you should have a small, medium, or large installation, based on the number of definitions that you have. They help with calibrating that.

We consider Tidal to be an enterprise scheduling application, so any new process that comes along is first looked at to see if it can be run from Tidal, whether that would involve purchasing another adapter or whatever else would make it work from here. We want it to be an automated function as opposed to being run manually and not integrated with the scheduler.

How are customer service and technical support?

The technical support is much improved. That's over the course of 15 years. Tidal has gone to great lengths, with the transition to STA, to strengthen its support capabilities and also strengthen the relationships it has with its clients. STA seems very interested in trying to focus on a direction, advertise that direction, and make the current clients comfortable. That, in turn, will help them take on new clients.

Which solution did I use previously and why did I switch?

As I mentioned, we came off of Maestro. Back in 2004 or 2005, when we were looking at schedulers, Tidal was one of the solutions we demoed. Universally, we all decided that Tidal seemed to be the better candidate.

How was the initial setup?

The setup was pretty flexible. We had to come up with our own ways of deciding how to group things and what our naming convention would be. 

When we first came up on the product, one of the issues that we noted was that the default sort for all of the jobs was alphabetical. That complicates the ability of the operators to visualize the order jobs should run in. To overcome that, we came up with a naming convention that puts a prefix on all of the job names with a number. So when we create our groupings, within a grouping it will list the jobs in the order that they run. Half of Tidal's clients wanted to see things alphabetically listed and half wanted to see them listed numerically, in the order that they run. The vendor wasn't willing to modify the product to give the user a choice of one order or the other.
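
The effect of that numeric-prefix convention is easy to see in a few lines: a plain alphabetical sort of the prefixed names comes out in run order. The job names below are invented for illustration.

```python
# With a zero-padded numeric prefix, the scheduler's default alphabetical sort
# happens to list the jobs in the order they run.
jobs = [
    "030_CLAIMS_EXTRACT",
    "010_CLEAR_STAGING",
    "020_LOAD_INBOUND_FILES",
    "040_PUBLISH_REPORTS",
]
for name in sorted(jobs):   # alphabetical order == run order, thanks to the prefix
    print(name)
```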

I don't remember the original installation taking that long. It took us a while to actually build all of the job definitions. That was a lot of work. It was done within about a week. Once the equipment had been spec'ed out we had an onsite install here in the computer facility.

We've had to train a number of new operators and I don't think it's been a terribly big learning curve for them to understand how it works. The developers, in fact, self-trained in their environments and they seem to be able to maneuver fairly well. There are times I have to explain things here and there, some ways of handling things that are more a matter of convention. Those are things they have learned over time. But they seem to do all right with it. There isn't that much of a learning curve.

The only people who need to have the training would be the operations staff. I think there was a beginner's and intermediate course that we originally took, when we came up on the product. And then we learned things as we went. 

One of the things that would be beneficial though would be some training that incorporates best practices. You can go through the manual and it will tell you, "This feature does this," and, "these are the parameters that you need to put in," and then the delimiters, but it doesn't necessarily tell you the best use case for certain functionality. I've had a few people mention to me "Oh, you shouldn't do this, and you shouldn't do that." Well, where does it say that in the book? It doesn't. And that's the problem. There's a little difference between an instructional manual that gives you the nuts and bolts of how to do things, and something that's more tailored to best practices, or recommendations of things you should not do. And some of that has to do with the architecture behind the scenes. Users wouldn't necessarily know that unless there was some documentation expressing it.

What was our ROI?

I don't really have metrics for ROI. It's more of a feeling because we've been able to consolidate from all these separate scheduling products into this one scheduling tool, allowing us to have direct dependencies between things. That's an efficiency in itself, but I don't have any statistics to support the number of hours saved and the number of dollars saved. Overall, it has improved our business model with automation.

What's my experience with pricing, setup cost, and licensing?

My experience was that it was very difficult to figure out the licensing cost on an annual basis. I don't know if they've changed the model, but I remember it would take a month to reconcile if we were being billed the proper amount because it was based on the number of CPUs; if they were test CPUs or production CPUs. I recall, and this was probably five years ago, that it was very difficult to reconcile the annual statement with what we had, and to verify that they were components we were using.

Our ability to budget for the solution is a fairly easy aspect of it. One of the difficulties that I have internally has to do with the specialized adapters. I don't think it's well known within my company that I can't just snap my fingers and get an adapter. There's a cost associated with it and the license key has to be updated after we've made the outright purchase of it. I don't think there's familiarity, within our company, of budgeting for the coming year if it involves these additional Tidal components. That's nothing to do with Tidal. That's just an internal struggle.

Which other solutions did I evaluate?

There were five solutions we looked at in total. Two were ruled out right away. When we went to do demos with the remaining three, the third one couldn't even do the demo, so it came down to Maestro and Tidal.

What other advice do I have?

One piece of advice I would have is that if you get into a product, try to keep it upgraded. It's to your benefit, support-wise to be, maybe not on the cutting or the bleeding edge, but close to the current version. That's been a pain point for Tidal, to try and get their clients up to speed.

Stay on the latest version because of the functionality. It's not only relevant to just this tool, but to many IT tools. It's just like the next generation of laptops that are coming out; they're coming out more quickly. The same thing is happening with the functionality that is being added to all of these products, including the scheduling application. It's important to go through the pains of staying up to date.

It's been a good product. We could have done a lot worse. This is a heck of a lot easier to use than some of the other schedulers that I've used in the past. But, then again, it's been proven as a solution, as well. Other solutions are all moving targets. Everybody is making changes in their products. At the time that we made the selection of Tidal, it was definitely constructed a lot better. It was easier to use than the other option.

In terms of the number of users in our organization, I honestly wouldn't mind if everybody in the company had an account to log into Tidal with inquiry access. But I think we've got around 300 accounts set up in each instance. They could be used by managers, developers, operators, and all the other IT folks who have accounts.

For deployment and maintenance of Tidal, since we run a 24/7 staff, we're talking about eight people, plus three or four other people who are part of production control and/or an IT server ops-type function, because you need that level of support as well from time to time. So we have twelve or so people, in one capacity or another, maintaining Tidal.

I would give Tidal a solid eight out of 10. 

Which deployment model are you using for this solution?

Private Cloud
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor. The reviewer's company has a business relationship with this vendor other than being a customer: Partner.
PeerSpot user
Lead Control Analyst at Central States Funds
Real User
Enables us to verify and to send out notices that a given step has started or finished
Pros and Cons
  • "One of the most useful features is being able to set up a schedule and create dependencies. The calendar can kick off processes at certain times, based on dependencies that you specify, like time, or whether another process has finished. Dependencies are the most useful thing."
  • "We've had some quirky stuff happen on an occasional basis where a job does not take off. For example, a job we expected to be finished by 3:00 a.m. is sitting there and not executing when we come in in the morning. We have to go all the way back to the dependencies and then we can see that one of the dependencies has become unscheduled, for some reason. No changes were made to the schedule but this prerequisite job has, all of a sudden, become unscheduled. I have brought this up with Tidal's support but they have never had an answer for it."

What is our primary use case?

We use Tidal extensively to run our health and welfare claims processing throughout the day. That's the reason we got Tidal back in 2011. We receive 15,000 to 20,000 claims a day and we use Tidal to process the whole thing, all the way through to creating checks at the end of the day.

Since 2011, we've expanded it to other applications and other processes: mostly reports, and files that come in electronically from other companies that feed other applications. And in a roundabout way, what we use Tidal for is to execute the applications to load whatever needs to be done on those applications.

The transfer function we used to do with Tidal has been switched over to another software product called Cleo. And that is run by our network team. That way they can control all the information that comes in and out of our building. They can put secure FTP on it, encrypt and decrypt the information, and set password protections. Cleo has its own scheduler, like Tidal, but they don't use it. They let Tidal execute the Cleo commands to bring the data in and Tidal will execute any application programs after that.

Overall we run 1,100 to 1,200 steps every day, depending on the day of the week. I call them "steps," but each one is really made up of multiple steps. Before you get to the actual processing of a program there might be a move, a copy, or a delete when we're clearing out folders, using DOS commands. We then move data around to certain directories so that either the TriZetto software that we use, or any internal programs we've written in VBS, .NET, Oracle, or MS SQL stored procedures, can find that data.
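
To give a rough idea of what one of those pre-processing "steps" amounts to, here is a minimal sketch in Python (the folder names, file pattern, and the choice of a script rather than raw DOS commands are all illustrative, not our actual setup):

```python
# Illustrative only: the kind of move/copy/delete "step" described above.
# Folder names and the file pattern are made up for the example.
import shutil
from pathlib import Path

INBOUND = Path(r"\\fileserver\inbound")   # hypothetical landing folder
STAGING = Path(r"\\fileserver\staging")   # hypothetical folder downstream programs read

def stage_daily_files() -> int:
    """Clear the staging folder, then move today's inbound files into it."""
    moved = 0
    for old in STAGING.glob("*.dat"):      # clearing out folders
        old.unlink()
    for incoming in INBOUND.glob("*.dat"): # move data where the next job can find it
        shutil.move(str(incoming), str(STAGING / incoming.name))
        moved += 1
    return moved

if __name__ == "__main__":
    # A failure here (an exception) is what the scheduler's dependency keys off.
    print(f"Staged {stage_daily_files()} file(s)")
```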

We're also starting to use this new MDM application which captures addresses from various databases, verifies they are correct, and pulls them together into one database. After all of our nightly processing, we have Tidal kick off the main MDM master so that all those addresses are in sync.

Tidal sits on its own database and then it talks, through agents, to the other applications.

How has it helped my organization?

People who are on the Client Manager were complaining about response issues. It's never been proven that a batch job is causing the issue, but they do find that so many things are hitting the database at the same time that they shut down whatever batch job is running at the time. We've now been able to move our schedules around so that those jobs just run at night when everybody's off the system.

Also, after a while using Tidal, it started to reduce weekend hours because we don't have to watch it constantly on the weekend. The only time we're really busy on a weekend now is when there is a major upgrade going on, as we usually do those on a Saturday or Sunday. Other than that, it's very quiet on the weekend. It has reduced overtime by 80 to 90 percent.

As of right now, the only time we really have overtime is planned overtime. Once a month, our network team applies the Microsoft security patching, so we have to pick a day, once a month to hold everything in the schedule. They then apply their security patches to all the Windows Servers. They bring the applications back up and we have to do a quick, sample test to make sure Tidal is okay. We then run a few jobs to make sure other things are okay and the business users have to check their applications and their data. At that point we turn the schedule back on for the weekend. It sounds like a lot but it only takes about an hour. Where we used to have two or three hours of overtime a week, now it's down to one hour a month.

In addition, our number of jobs has been growing steadily. We do about 1,100 to 1,200 jobs a day. We could go further but we have never really tested how many jobs we could do.

What is most valuable?

One of the most useful features is being able to set up a schedule and create dependencies. The calendar can kick off processes at certain times, based on dependencies that you specify, like time, or whether another process has finished. Dependencies are the most useful thing.

You can also verify that a step is finished. And some of our departments are really interested when something has started. You can send out an email saying this step has launched or this step finished normally and, obviously, we always have it notifying us when something goes wrong.

It's also very useful to do repeating steps. If you need to do something multiple times throughout the day, it's very easy to just copy that group of steps or jobs and continually process the same thing each time. And you can always have one dependent on the other.

Tidal is also helpful because, once you set a schedule, you can keep an eye on it. You can kind of have "bookmarks" where it can tell you when this step is done and that step is finished, and you know that the schedule is moving forward and nothing has been stopped yet.

What needs improvement?

We've had some quirky stuff happen on an occasional basis where a job does not take off. For example, a job we expected to be finished by 3:00 a.m. is sitting there and not executing when we come in in the morning. We have to go all the way back to the dependencies and then we can see that one of the dependencies has become unscheduled, for some reason. No changes were made to the schedule but this prerequisite job has, all of a sudden, become unscheduled. I have brought this up with Tidal's support but they have never had an answer for it. It would be helpful to be notified ahead of time when something is going to stop the schedule, even if we don't necessarily know what's causing it.

But the main area for improvement is reporting. A lot of our managers would like to have metrics shown in graphs for the products they keep track of. The reporting part of Tidal isn't very useful. When you use the report function, you can't bring that data into an Excel spreadsheet. I understand in the new release they have something called Explorer which is a new reporting feature. I think they acquired a product to handle reporting functions, but we haven't gotten it yet.

For how long have I used the solution?

We've been using Tidal since 2011.

What do I think about the stability of the solution?

Tidal has been pretty stable. We've had these little quirks, but they are mostly just minor bugs that crop up every once in a while. For instance, you might have to click on something twice or click off of something, like a tab, and then click back on it and it will bring up the screen. But other than that, it's been pretty stable.

What do I think about the scalability of the solution?

The scalability is pretty good. We've used Tidal only for our main application, which is our health and welfare system. We do a lot of reports off of that, but we don't use it in any other areas.

We've never scaled it extensively across too many different platforms. The only thing we have right now is a SQL Server platform and an Oracle Database that we go against. We're only in one location.

I don't see us expanding our use of it for now. We're pretty stable.

How are customer service and technical support?

I haven't really dealt with Tidal support too much. The only time I really dealt with tech support was when we were doing an upgrade to a new release, to find out what release we needed to have the agents at and whether it was compatible with other releases of SQL Server. The Tidal database itself is on a SQL Server release — I think it's at 2012 right now — and it can go up to 2016, but other applications are at different SQL Server levels. We had to check with them to see if it was all compatible.

They were very good in responding.

Which solution did I use previously and why did I switch?

We had two mainframes running all of our applications. We were using CA products. Our health application was ClaimFacts, from TriZetto, but they were dropping support for the mainframe product and everybody had to switch to Facets. We were running both products at the same time while we were transitioning to Facets. We had to run ClaimFacts, the mainframe version, for about a year or so because, if somebody has a claim they have a year to report that claim and another six months to make adjustments on their claim. So our old mainframe product had to be kept until all that faded away. 

Then everything went into PC, server-oriented applications. We got Tidal because the company, TriZetto, used Tidal to run their stuff. So we brought it in and we started setting up our whole batch schedule.

How was the initial setup?

I wasn't privy to the technical part of the initial setup, but I think it was pretty straightforward. We just needed to know where to place the agents so that they could connect, and we had to set up a file share so that a DOS command run from a Tidal job would have access to whatever servers it needed to reach. Once those were all set up it was pretty stable.

Our deployment took about a month. But we were using the product for the first time. So we were setting up jobs for the first time. Some things were kept out of Tidal until they were ready to be moved in. They were run by developers or the application people, manually. It took about six to eight months to get everything on Tidal. There are so many icons and buttons and things that they had to press on to run something on a desktop and we had to convert that all into executable commands for Tidal in the schedule.

That approach was planned. The initial plan was to get the batch processing of claims in first. That was pretty smooth. There were hiccups every now and then but it was not that bad. While that was going on, all the in-house stuff was done in the periphery on a person's desktop. Those things were set up afterward.

The learning curve is at least one to two weeks, if you teach a person, full-time, how to run the schedule and how to set everything up. It depends on what knowledge they need to have to run a schedule. If it was just a matter of running jobs, it would take less than a week. But if they're constantly being asked questions on what this or that job does, it will take a person longer to get a feel for what all the applications do.

I came from a programming background when I started running these jobs and setting up the schedule, so I had fairly extensive knowledge of what all the applications do. But take a person who is just out of the computer room: all he knows is how to run a Computer Associates schedule. He knows the timelines and the flow of everything, but he doesn't know exactly what the applications do. They would need at least a few days to find out what the major applications or major steps in a daily job schedule are. If some of those steps are very critical to run, they need to be pointed out, so the person knows which are critical and which ones can be held or bypassed. It takes time to get used to the processing.

What about the implementation team?

We used a Cisco consultant for installation. There were four people involved in the installation. We had the consultant working with our network people, and we had a technical support person who made sure all the libraries were in place to set up for Tidal. And there was me and another person getting all the schedules together.

The only time we've used a third-party is when we were doing a major upgrade of Tidal from 3.1 to 5.2, back in 2015.

The third-party we used was Synertech. Our experience wasn't too good with the consultant they gave us. He was very gruff and it wasn't a pleasant experience. We didn't ask him to come back. But the actual conversion to the new release went well. We used the consultant to show us the technical part of upgrading to 5.2. We also wanted to use him to train one of our new people in Tidal functions, but it never got that far.

What's my experience with pricing, setup cost, and licensing?

I'm not in the financial end, so I don't know what our licensing costs are.

I know that Tidal integrates with a product called JAWS Workload Analytics, which will analyze your schedule, give you graphics and reports, tell you where your logjams are, and analyze all the data going in and out. We asked what the price is on that and it was about $200,000.

Which other solutions did I evaluate?

There was one option back then, but by the time they wanted to come in for a demo, we had already decided to use Tidal.

What other advice do I have?

The biggest advice I can give is to test Tidal first. Run the whole schedule, whatever you're putting in. Run everything you can and test Tidal before you bring it over to production.

The trickiest thing to do is to change a schedule during the day. Once you associate a job with the calendar, and then somebody comes by and says, "Hey, I want to put these six steps in, and we need to run that today," you don't realize that, because you put it on the calendar, it's on a schedule. You could be making changes and kicking off things inadvertently. You can't change something during the day without stopping the scheduler or putting it on pause. We might have done that once in all these years — paused the whole scheduler or paused job launching, made the change, and then turned it back on.

You may think that you can change something during the day and let it go, but then you realize, "Oh, this thing took off." And you realize that because you put that job on the schedule, it picked up the scheduling requirements it inherited from another group of jobs and it will take off on you. That's probably the trickiest part of the learning curve.

When we brought Tidal in there were six people who were taught how to use it but five of them have retired. I'm the last one. About three years ago I had to train another person who came from the mainframe computer room after he took the job as a Tidal scheduler. I got him up to speed. The two of us run the schedule during the day. There are no other users. There are a few application programmers or developers who just want to have Tidal available so they can see what's going on, and we give them inquiry access. But nobody else has any authority to change anything or to set up anything.

Overall, it does what it's supposed to do.

I always get into arguments with the management staff here. They always claim something happened in Tidal and I say that Tidal doesn't process anything. It's a scheduler and it just launches jobs on the servers. If there's a system hung up somewhere, it's not Tidal. Stop. It is the actual program. Whatever processing has been launched by Tidal is the issue, not Tidal itself. I finally convinced them of that. Just because Tidal launched something doesn't mean it has touched anything or changed any data. It just goes to a server and launches a process. A person with the right authority can do the same thing from their desktop.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Sr. Platform Engineer at a computer software company with 10,001+ employees
Real User
Eliminated Saturday work hours with event-driven jobs
Pros and Cons
  • "The job dependency is something that you cannot have in a regular, simple cron job or simple scheduler dependency. The event-driven jobs are core for us, as we really need that. Therefore, we really need Tidal with its ability to run thousands of jobs per day."
  • "It takes a lot of time to learn the product. I have admins and developers who are working on the products for the last three to four years and still don't know all the functionalities. Tidal has really great things about it, but people are focused on their day-to-day job and the solution is not intuitive."

What is our primary use case?

We are mainly using it for triggering data jobs. It does a lot of ETL work and data movement from source systems into Hadoop. We use it because it has the capability of dependency triggering or dependency running. That's the main idea behind it. It also helps us to centralize and organize jobs across the organization.

We use Tidal to run our Hadoop backup system, SAP HANA, and SAP BusinessObjects. We also trigger a lot of jobs into SnapLogic, Salesforce, ServiceNow, Workday, and Tableau, along with a couple of dashboards. We run a couple of batches from our Unix and Windows machines: the stuff that the developers are working on and want to run in Tidal. But SAP is the main thing.

The main goal is to use Tidal for managing and monitoring cross-platform, cross-application workloads. The ability to manage those loads is what they do well. I can put a job to run in SAP, and once the job ends successfully, I can run that job in Hadoop. Or, I can run that job in Salesforce.

How has it helped my organization?

Tidal enables admins and users to see the information relevant to them. We have something like 50 different teams working on our Tidal platform. We segregate between them using work groups. When a user logs into Tidal, they only see what they have permission for, not other projects. Data engineers are the users of this solution.

A user is someone who comes into Tidal, creates and develops their job, and then triggers it or sets up a schedule. An admin is someone who keeps the lights on, making sure the platform is up and running. Admins maintain and configure the solution and do the upgrades.

If I just want to monitor my jobs, that is something the solution does really well, because there is a constant job-activity view where you can log in and see what has happened every day and every minute. That is pretty good. An admin can drill down to processes and data, but I don't think they are doing that.

The solution has helped to eliminate weekend hours. In the past, we had to schedule a job every Saturday and someone had to log in and run it. Now, Tidal has the capability of event-driven jobs. For example, if a job is failing, we can do something about it. Or, if a job completes abnormally, we can rerun it. All of these features help us avoid coming into the office on a Saturday; we don't need a person to handle those weekend activities. They also thought a lot about outages in the product. You can set up an outage on an adapter or connection, to say, "Between these hours on the weekend, I don't want to trigger any jobs." That works very well.
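
Conceptually, the rerun-on-abnormal-completion behavior boils down to something like the sketch below. In practice this is configured inside Tidal rather than hand-coded; the command, retry count, and back-off are hypothetical and only illustrate the idea:

```python
# Conceptual sketch only: what a "rerun if completed abnormally" job event
# amounts to. In Tidal this is a configured job event, not a script.
import subprocess
import time

JOB_COMMAND = ["/opt/batch/load_nightly_extract.sh"]  # hypothetical job command
MAX_RERUNS = 2                                        # assumed retry limit

def run_with_rerun() -> int:
    for attempt in range(1 + MAX_RERUNS):
        result = subprocess.run(JOB_COMMAND)
        if result.returncode == 0:
            return 0              # completed normally
        time.sleep(60)            # back off before the automatic rerun
    return result.returncode      # still abnormal: this is where an alert would fire

if __name__ == "__main__":
    raise SystemExit(run_with_rerun())
```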

What is most valuable?

The job dependency is something that you cannot have in a regular, simple cron job or simple scheduler dependency. The event-driven jobs are core for us, as we really need that. Therefore, we really need Tidal with its ability to run thousands of jobs per day.

What needs improvement?

We started to deploy Azure, and it's still not fully baked. We are struggling with it. It is not something that has worked out-of-the-box. We haven't installed Tidal in the public or private cloud. We have a problem with security. While we can install the entire platform in the cloud to handle separate work or an entity, if we want to centralize it, then it's a little difficult.

They don't have good reporting capabilities. From the user perspective, I have 6,000 jobs running per day, and I would like to track them to know exactly what is going on. E.g. if a manager asks me, "Can you bring me this data or can you do a dashboard or report?" I need to take a lot of actions in order to do that. It's not easy to compute that data.

We are now testing version 6.5. The speed of this console is much better than 6.2, where the speed has not been sufficient for me. 

Most of my users are doing a customer-service review these days. So, we are asking the customers what they think about Tidal and what the vendor needs to improve. The number one thing we are exploring is the user experience (UX). The product has a lot of features, which is great. On the other hand, the user experience is a bit dated. It is hard to find what you're looking for, and the UX is not intuitive for all users. So, if I'm a user, it might take me some time to find where things are.

It takes a lot of time to learn the product. I have admins and developers who are working on the products for the last three to four years and still don't know all the functionalities. Tidal has really great things about it, but people are focused on their day-to-day job and the solution is not intuitive.

We have internal training where we do two weeks of training for three hours each day. So it's approximately 30 hours of training. I cannot say after that users know everything. It takes about six months to ramp up on Tidal to be really good and professional.

For how long have I used the solution?

I have been using it for the last three years.

What do I think about the stability of the solution?

In version 6.5, it is very stable and works quickly. The UI works quickly too, and the services load pretty fast. If one of the servers reboots, they have a high-availability layout, meaning there are two of each component of the product. If one of them goes down, the other one kicks in and starts to work. I really like that idea.

In terms of room for improvement, if one of the masters goes down, the other one takes a minute to start. While that is not a big deal, when you commit to four nines, one minute is huge. So, I'm pushing the vendor all the time to do better on this.

They still haven't implemented load-balancing-oriented thinking. If I have two Client Managers, I cannot put them behind a load balancer. Or rather, I can put them there, but the load balancer will never have a health check. Building health checks is something everyone is doing, and they don't have health checks on the clients for load balancing. Maybe this will come in the future; I submitted a request for a health check for load balancers.
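
To illustrate what such a health check would do, here is a minimal external probe sketch, the kind of stopgap script a load balancer could call. The Client Manager URL, port, and timeout below are assumptions, not Tidal defaults:

```python
# Minimal sketch of an external health probe for a Client Manager node.
# The URL and the 5-second timeout are assumptions, not Tidal settings.
import sys
import urllib.request

CLIENT_MANAGER_URL = "http://tidal-cm-01.example.com:8080/client/"  # hypothetical

def healthy(url: str, timeout: float = 5.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:
        return False

if __name__ == "__main__":
    # Exit 0 when reachable, 1 otherwise, so a load balancer check can key off it.
    sys.exit(0 if healthy(CLIENT_MANAGER_URL) else 1)
```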

In version 6.2, they had a lot of problems. One of them was Oracle support. We are using Oracle RAC and high availability on Oracle. If one of the databases suddenly went down, the entire system would crash. In version 6.5, we have tested this. They have done significant work and it's working perfectly: it doesn't crash and runs continuously without any issues. From this perspective, I am very happy with the new version.

What do I think about the scalability of the solution?

If you have enough memory, it is scalable. We are running 20,000 jobs. We just increased our memory. It scales really well. 

How are customer service and technical support?

The North American technical support is very good. They go the extra mile for you all the time, and we are very happy with them. We have had some problems in the past with the Asian support during IST hours, when it is nighttime in North America. However, I think it's getting better. Overall, I'm very happy.

Which solution did I use previously and why did I switch?

We used local solutions, like scheduling for each platform, such as SAP Scheduler, SnapLogic scheduler, and cron jobs. We didn't have a centralized place.

How was the initial setup?

I was the architect of the initial setup. The initial setup was complex; it's not easy. They have a lot of settings and configuration that need to be done. There are a lot of small things that vary from environment to environment, and they fail to consider every situation. 

The deployment takes a couple of days.

With our first environment, we tested it in a sandbox. I let my admin play with it to see how it behaved and what are the downsides. Then, we created a document. While I know that they have a document for installation, every time that we go to install, we are finding new issues.

I'm behind a firewall and we are in a limited environment. Our infrastructure is built differently from what they probably tested in their environment, so it's a bit different from what I need to install. I first put it in the sandbox to see all the issues we will face, document step-by-step what we did, and then I go and do it in stage. Stage is the place where the developers come in and develop their jobs. Once they are ready, we move the jobs into production.

Stage is really almost production. If stage wasn't available, the developers could not work or deliver. We see if it works for at least three weeks. If we don't have issues during that time, then we deploy to production.

They do a better job in version 6.5, which we are testing now.

What was our ROI?

We have seen return on investment.

What's my experience with pricing, setup cost, and licensing?

BMC is really expensive. The other solutions are about the same price. I think Tidal is even cheaper than the others, such as CA, Stonebranch, and JAMS.

Our licensing model for Tidal is on an annual basis. It is very good and works well for us. Tidal's licensing is very transparent and simple. It lets you know, for the amount you use, that's the price that you pay. So, we buy X number of licenses, and we know that this is where we are. I'm very happy with that. I saw the licensing modules on other platforms, and I didn't like them. Other companies and solutions would calculate the connections, adapters, and instances. I think that's the reason that BMC was pretty expensive: They just didn't understand what our needs are.

The solution has no hidden costs. It helps me to plan forward into the future. I know that I can add another 100 or a thousand jobs, and that's how much it will cost me today.

Which other solutions did I evaluate?

We did evaluate other schedulers. This was the best solution.

I was not the one who selected it in the first place. I was the one who asked to evaluate a replacement at some point. There was a time when Cisco was the owner and we felt like Cisco was not delivering the product like we wanted. We sought to move to a new solution and assessed different solutions: BMC, CA, Stonebranch, and JAMS. We installed all of them, running all our tests. It took us six months to do our evaluation. Eventually, we found out that they are very similar from the infrastructure side. I could not see any advantage using the other solutions.

We discovered that we are good with Tidal and what we have. Then, a new company acquired Tidal from Cisco and they promised a lot of things to be better. We felt that the solution was going to a better place. So, we decided to wait and see how much they invest on the stability. We have been happy with the results. They are really focused on the customer and our pain. They are trying to remediate everything that we have issues with. Therefore, we decided to stay with them for now.

What other advice do I have?

Don't be afraid. Just do it. You will enjoy the features of it. It is a great tool.

You need to test Tidal many times. It's not straightforward. You need to test and learn it. 

We have an issue that is probably not unique to this platform: I have five guys who handle it, and that's costly for us. We would like to see the platform be more automated or straightforward. I would like to not need to hire so many people just to administer and maintain the platform.

Our capacity has increased in terms of the number of jobs and integrations, but that is a natural thing. I don't think it's related to the solution. When you start to develop jobs, then year by year the number of jobs grow because the organization is growing. 

I'm very happy with the product, but it's not a fully baked product. It requires babysitting. I have worked on other solutions and know what is out there. It takes time for us to install, upgrade, and test because there are so many components to the product. If you make one little mistake, you can break the system.

I would rate the solution as an eight (out of 10).

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Data Platforms Operations Lead Managed Hosting at a marketing services firm with 1,001-5,000 employees
Real User
Dashboards enable tier-one people to monitor multiple jobs and alert when things fail, helping our reliability and in managing SLAs
Pros and Cons
  • "Tidal helps administrators and users to see the information that is relevant to them in that single pane of glass. They can see jobs running, they can see job history, and they can see job progression. If you look at alternatives like Airflow and clouds, you'd have to design your own UI to monitor the progress of the different jobs that you've created in Airflow. So Tidal is huge for us."
  • "One area for improvement is the command-line interface and the API to bulk-load jobs. It's a little bit kludgy, but we still manage without it. They're working on it and it's getting better all the time. In addition, the documentation for their API for creating jobs needs to be updated. It's a bit of a learning curve."

What is our primary use case?

Our use of Tidal is mostly file-event driven. We use it to manage our ingestion, processing, and loading of data. Tidal has a hook and it runs ETL for us. It runs jobs and SQL on some of our database appliances, like IIAS (the new version of Netezza) and Teradata.

We have a file gateway that receives a file and drops it in a location. That file event picks it up and drops it over to the ETL tool. The ETL tool will run and aggregate a number of source files and turn it into a properly formatted input file. That file then goes through data hygiene and data analysis. Then it goes through a matching process. It is then put back out and runs an ETL process to stick it into a SQL database. And then there are a number of jobs that are run in the SQL database to manipulate that file.
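
As a rough outline of that flow, the sketch below stands in for the chain of Tidal jobs. The directory, file pattern, and function names are made up for illustration and are not our actual configuration:

```python
# Illustrative outline of the file-event pipeline described above.
# Each stub function stands in for a Tidal job; names and paths are invented.
from pathlib import Path

GATEWAY_DROP = Path("/data/gateway/inbound")   # where the file gateway drops files

def run_etl_aggregation(raw: Path): ...        # aggregate source files into one input file
def run_data_hygiene(formatted): ...           # data hygiene and analysis
def run_matching(clean): ...                   # matching process
def load_into_sql(matched): ...                # ETL into the SQL database, then SQL jobs

def on_file_event(raw_file: Path) -> None:
    """What the chain of Tidal jobs does once the file event fires."""
    formatted = run_etl_aggregation(raw_file)
    clean = run_data_hygiene(formatted)
    matched = run_matching(clean)
    load_into_sql(matched)

if __name__ == "__main__":
    for f in GATEWAY_DROP.glob("*.csv"):
        on_file_event(f)
```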

We don't have a lot of calendared events or scheduled windows.

We have a central location for Tidal in our data center, and then we have client-hosted solutions where we run smaller instances of Tidal, and those are in the cloud. We use AWS, Azure, and GCP.

How has it helped my organization?

It reduces our administrative costs. As much as people are in a DevOps model, we can create dashboards for tier-one people to monitor multiple jobs and then alert or call when things fail. It helps us with reliability and managing SLAs.

It has also helped to reduce weekend and overtime hours due to the fact that you can have a single person manage multiple jobs. If we didn't have the single pane of glass and that visibility, people would have to manually look at logs to determine the progress of a job. So it reduces headcount. But when you run 24 by seven and 365 you still have people working weekends.

We run 70,000 Tidal jobs a day. It would take a mountain of people months to run that many jobs manually.

What is most valuable?

What we find most useful from the operations side is that it provides a single pane of glass for managing that workstream. It also alerts us on failed jobs, so it's our monitoring and management tool for those workstreams. 

Tidal helps administrators and users to see the information that is relevant to them in that single pane of glass. They can see jobs running, they can see job history, and they can see job progression. If you look at alternatives like Airflow and clouds, you'd have to design your own UI to monitor the progress of the different jobs that you've created in Airflow. So Tidal is huge for us.

Most of our stuff is private clouds. We haven't had an issue with its support for private cloud or its migration to the cloud. In our scenarios, we run the masters here and we reach out to agents that are running in the cloud. We also use it to kick off command-line utilities for loading data into BLOB storage and S3 buckets. We use the SFTP utility to move files around.
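
For example, the command-line hand-off to object storage or SFTP is roughly the following sketch, assuming Python with boto3 and paramiko (generic libraries, not Tidal components); the bucket, host, credentials, and paths are placeholders:

```python
# Minimal sketch of pushing an output file to S3 and to an SFTP drop.
# Bucket, host, credentials, and paths are placeholders, not real values.
import boto3
import paramiko

def push_to_s3(local_path: str, bucket: str, key: str) -> None:
    s3 = boto3.client("s3")
    s3.upload_file(local_path, bucket, key)   # credentials come from the environment

def push_to_sftp(local_path: str, host: str, user: str,
                 password: str, remote_path: str) -> None:
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host, username=user, password=password)
    try:
        sftp = ssh.open_sftp()
        sftp.put(local_path, remote_path)
    finally:
        ssh.close()

if __name__ == "__main__":
    push_to_s3("out/daily_extract.csv", "example-bucket", "loads/daily_extract.csv")
    push_to_sftp("out/daily_extract.csv", "sftp.example.com",
                 "svc_user", "***", "/inbound/daily_extract.csv")
```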

What needs improvement?

One area for improvement is the command-line interface and the API to bulk-load jobs. It's a little bit kludgy, but we still manage without it. They're working on it and it's getting better all the time. In addition, the documentation for their API for creating jobs needs to be updated. It has a bit of a learning curve.

We also wish there were search functionality for assigning actions to events, and users to workgroups.

Finally, the S3 data mover jobs are still a little buggy.

For how long have I used the solution?

I've been using Tidal Workload Automation for about 14 to 15 years.

What do I think about the stability of the solution?

After the 6.2 release, the stability became awesome. With 6.6.1 it was a little bit difficult, but everything after that has been solid.

What do I think about the scalability of the solution?

Scaling is easy. You could run these in VMs. We happen to have physical boxes. 

We haven't scaled it out, such as creating a remote master. In instances where we thought we may have to kick off jobs from our Maryland data center or jobs in our Denver data center, over MPLS, we thought we would have issues but we didn't have any issues. We were fine. We've been able to run things centrally.

The databases scale the way SQL scales, either by giving it more memory or more CPU.

As we have brought on clients we've grown over the years. We have a tendency to overbuy for the Client Managers. Our Client Managers are coming up on four years now. In 2021 we'll likely do a tech refresh. We'll stand it up with another version of Tidal and we'll do the migration onto the new platform. At that time we'll look at scaling up the boxes a little bit. You can put a lot more workload, a lot more Tidal jobs, on these without having to increase CPU or memory.

How are customer service and technical support?

Their tech support is awesome. We've had Tidal for a long time. We had Tidal when it was Tidal, and then when it was purchased by Cisco. During the time that it was purchased by Cisco, support was lacking. But now that it's part of the STA, it's back to being awesome.

Which solution did I use previously and why did I switch?

We were using a home-grown solution. It was a cron job manager. It didn't do file events very well; it had to monitor CIS logs. It was tough to schedule tasks. It was purpose-built, so it didn't have a SQL adapter. It didn't have the ability to run on Netezza and things like that.

We switched because programmatically building the enhancements that come out-of-the-box with Tidal was just too costly. It would have taken too much time.

How was the initial setup?

We've retooled our environment three times since we first installed it. Our last one was easy, a piece of cake. The ones prior to that were not so good. 

When Tidal was sold to Cisco, and they introduced the concept of a Client Manager, a type of web interface, there was a time when going from one version to another was not good. Now that Tidal is back with the STA Group, our upgrades are much easier.

With our last upgrade, we stood up a whole other set of servers — our servers were old — as well as a database. From the time we got the servers installed, loaded Tidal, and did our initial database export, so we could do testing, it took two to three weeks. It was a piece of cake. And then we did extensive testing.

In terms of the solution's learning curve, from an operations standpoint, teaching people how to search and manage jobs, and start and stop them, put jobs on hold and kill them, we can get someone up to speed in less than a week. For developers, it's a little bit more lengthy. There have been several instances where we have a Tidal developer, a subject matter expert — we've only had one or two of them — who has been able to train multiple people and make them serviceable. We've been doing it for 14 years, so we don't use Tidal training. We've created our own training documentation to get them up to speed for how we use Tidal. We can get them up to speed very quickly. I know people who have joined the company and who are writing and creating Tidal jobs two weeks or three weeks later.

What was our ROI?

For ROI we'd have to figure out how many man-hours we're saving with Tidal versus not having it, or versus having one of the other automation tools. We've grown up with it; I can't imagine being without it. Back in 2016, when we looked at possibly switching over to another solution, there wasn't a clear path to migrate to any of the other tools. We literally run our whole enterprise on this, so if Tidal goes down, the world stops.

We feel we're getting a pretty good deal with Tidal. It's supporting $600 to $700 million in revenue.

What's my experience with pricing, setup cost, and licensing?

The licensing model's flexibility is awesome. The way it's licensed for us is per master and then per agent. We have an enterprise agreement, so we have unlimited agents, and we have it on 500 devices.

I don't know how it could be easier to budget for Tidal, given that there are no costs for upgrades and other enhancements. There are increases over time, but unless you add functionality, such as buying other adapters, it's very easy to manage costs for maintenance and the like.

In terms of the hardware that we purchased — VMs and storage and networking, and the VMs' SQL licensing — it was a little bit below $200,000. That doesn't include licensing.

The hardware list includes:

  • a SQL cluster
  • a utility server that we use to migrate jobs from dev to prod
  • two masters in dev
  • a fault manager in both dev and prod
  • three Client Managers in dev and two Client Managers in prod
  • for each of those Client Managers we have a database
  • 11 VMs
  • 12 physical boxes.

So we've got a pretty big environment.

Which other solutions did I evaluate?

There have been a couple of times that we have looked at competitors, especially when we saw that Cisco wasn't really investing time or money into it. It wasn't clear to us if Cisco was going to continue to invest in Tidal. So we went out and looked at the market and did evaluations. 

We looked at Automic or UC4. We looked at BMC Control-M. Stonebranch was actually interesting, back in 2016.

What it came down to was that Automic was tough because it was changing hands on a regular basis. Stonebranch was more in our price range, but Tidal's price for the way that we use it was cheaper. When we started looking at what it would take to migrate from one to the other, there was no ROI.

The way we evaluated things was we looked at our use cases and ranked them from one to ten, and then looked at costs. All of Automic, Stonebranch, and BMC would do what we wanted them to do. I'm sure, if we had dug a little deeper, we'd have found the little idiosyncrasies between them. But the cost of those products and the cost of migration were just too much.

We started seeing how Cisco was propping it up a little bit more, right before they sold it to STA. And when STA bought it, they assured us that they would start making improvements. We stopped our analysis of other solutions there.

What other advice do I have?

Tidal's drill-down functionality is one of those things where you get out of it what you put into it. If you program jobs as fire-and-forget, there isn't much drill-down to them. If you put in result codes and things like that, or if you use the adapter instead of using the agent to kick off the SSRS package in SQL, then you can drill down.
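
As a concrete illustration of the result-code idea, a wrapper like the hedged sketch below returns distinct exit codes so the scheduler can show why a step failed, not just that it failed; the command and code numbering are invented for the example:

```python
# Sketch of a wrapper that returns distinct exit codes so the scheduler's
# drill-down shows *why* a step failed. Command and codes are illustrative only.
import subprocess
import sys

EXIT_OK = 0
EXIT_DATA_MISSING = 10    # input data never arrived
EXIT_REPORT_FAILED = 20   # the report/package itself errored

def main() -> int:
    result = subprocess.run(
        ["/opt/reports/run_report_package.sh", "DailySummary"],  # hypothetical command
        capture_output=True, text=True,
    )
    if "input file not found" in result.stderr.lower():
        return EXIT_DATA_MISSING
    if result.returncode != 0:
        return EXIT_REPORT_FAILED
    return EXIT_OK

if __name__ == "__main__":
    sys.exit(main())
```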

We have about 100 users using Tidal in our organization. They are anywhere from developers to operations people to administrators. There are only a couple of administrators. There's a bunch of operators because we use this to run 24/7, 365 for 20 or 30 customers. For each of them there may be a couple of operations people and a couple of developers. As for maintenance, we patch our boxes, our masters, our Client Managers, and our databases every month, and it takes one person.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Tidal Administrator at Devon Energy
Real User
Has the ability to support multiple platforms
Pros and Cons
  • "With the varied features in the varied adapters provided, we use Tidal Enterprise Scheduler because we want everything to be scheduled in one place. Tidal provides that for us with its tools and varying platforms in our organization. Tidal provides all the connectors to the platforms. This is very useful because we don't want to look for another scheduler for scheduling certain jobs. We don't want to look at those schedules manually between platforms."
  • "With the client, we have had certain issues. The user interface for Tidal is a little slow. A lot of people would love this tool if they had a faster user interface. The drill-down functionality should be much quicker than what it is pulling out now. If I fill out some data, then it takes awhile to get that data back onto the screen. It's not as fast as we were expecting."

What is our primary use case?

We use it to call multiple source systems, such as Informatica workflows, Unix scripts, Windows scripts, PowerShell, batch files, and a few SAP web programs. We use it for certain file events and monitors. Tidal, by itself, can't monitor, but we create a script and job for that, then schedule it in Tidal. We use Tidal for multi-purposes.
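
A hedged sketch of what such a monitor script can look like is below; the directory and filename pattern are examples, not our real configuration:

```python
# Sketch of a file-monitor job: exit 0 if the expected file has arrived,
# non-zero otherwise, so dependent Tidal jobs run only when the file exists.
# The path and filename pattern are examples only.
import sys
from pathlib import Path

WATCH_DIR = Path("/data/incoming")   # hypothetical watch folder
PATTERN = "claims_*.txt"             # hypothetical expected file pattern

def file_present() -> bool:
    return any(WATCH_DIR.glob(PATTERN))

if __name__ == "__main__":
    sys.exit(0 if file_present() else 1)
```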

We use Tidal for our SQL Server, where we call from Tidal any procedures, statements, SQLs, or jobs. We also call a few HANA Stored Procedures from it. As of today, Tidal doesn't have an adapter, but we have some internal scripts which call Stored Procedures from Tidal.
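
Those internal scripts amount to something like the following minimal sketch, assuming SAP's hdbcli Python client; the host, port, credentials, and procedure name are placeholders:

```python
# Minimal sketch of calling a HANA stored procedure from a script that a
# Tidal job can run. Connection details and the procedure name are placeholders.
from hdbcli import dbapi  # SAP HANA Python client

def call_procedure() -> None:
    conn = dbapi.connect(
        address="hana-host.example.com",  # placeholder host
        port=30015,                       # placeholder instance port
        user="BATCH_USER",
        password="***",
    )
    try:
        cur = conn.cursor()
        cur.execute("CALL MY_SCHEMA.REFRESH_DAILY_SNAPSHOT()")  # placeholder procedure
        conn.commit()
    finally:
        conn.close()

if __name__ == "__main__":
    call_procedure()
```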

We run around 2,000 to 3,000 jobs per day.

The infrastructure is in Azure.

How has it helped my organization?

Tidal enables the administrators and users to see the information that is relevant to them. 

We do have a logs tab that we go through. The errors point us in the right direction where we need to troubleshoot our issues. Depending on the issue, remediation does not take too long. 

With the varied features in the varied adapters provided, we use Tidal Enterprise Scheduler because we want everything to be scheduled in one place. Tidal provides that for us with its tools and varying platforms in our organization. Tidal provides all the connectors to the platforms. This is very useful because we don't want to look for another scheduler for scheduling certain jobs. We don't want to look at those schedules manually between platforms. With Tidal, we just need to maintain the dependency, ensure the job is on the platform, and make sure the predecessor runs. We just set this in Tidal and forget about it.

Sometimes, it does reduce overtime hours. It's not a full-blown automation tool, but we usually set up monitoring. In the olden days, people used to do this with shell scripts, cron jobs, etc. Now, we use Tidal and have a call-in mechanism that is triggered from it. So, we do use Tidal for certain automations.

What is most valuable?

Tidal's most valuable features are the adapters, like the Informatica and SQL Server adapters. They have managed adapters for most platforms, so we can have integrations running on multiple platforms. That is a valuable feature that Tidal provides compared to other schedulers. What's beneficial for us is that it calls jobs and programs on SAP, and processes on Informatica, Windows boxes, and SQL Server. Tidal has expanded the platforms that it can support. 

Tidal provides usable information from the logs, its user interface, and Client Manager.

What needs improvement?

The HANA adapter is not available today. If I need to call a procedure in HANA right now, I don't think Tidal has any adapters. I know that we do not have a ServiceNow adapter either, but I believe they will be coming out with a new release.

With the client, we have had certain issues. The user interface for Tidal is a little slow. A lot of people would love this tool if it had a faster user interface. The drill-down functionality should be much quicker than it is now. If I fill out some data, it takes a while to get that data back onto the screen. It's not as fast as we were expecting. 

I would like to see improvement in terms of performance, meaning that it triggers jobs at the right time. If Tidal improves their performance with the client, that will be really useful for people who are developers and doing call/production support of jobs. 

We are looking for a cloud offering from the STA Group. We keep hearing from STA Group that this is in discussion on their end. We are also looking at a SaaS offering that other customers are using.

For how long have I used the solution?

Seven years.

What do I think about the stability of the solution?

It is stable. We don't have any issues with the Enterprise Scheduler. It never goes down.

Very few people are required to maintain and administer Tidal because it is very stable. Right now, we need people to administer it when migrating stuff within Tidal because that needs to be done manually. We are a team of four because we are spread across multiple geographic locations, but we do other stuff too. E.g., while I am a Tidal Administrator, we also support other platforms.

What do I think about the scalability of the solution?

There are close to 10 other teams using Tidal, and I'm not sure how extensively they are using it as of now. People login to Tidal when they need to check the status of their jobs. When it comes to developers, there are close to 20 users. We do have business folks who use Tidal when they just want to monitor or operate their jobs.

We are still expanding day by day. We do get requests to create new jobs, and the developers will take care of those. We receive those requests once in a while. We are still expanding but it will not be a drastic increase.

How are customer service and technical support?

Very few issues take long to remediate and we create support cases for those issues. 

The technical support used to be somewhat bad when we were with Cisco. We used to get slow responses. It is better now with the responses we are getting from STA Group. I would like to see more in terms of STA support. If they could provide a knowledge base to customers, that would be really useful. Most other vendors have their own knowledge base. E.g., if a certain customer has an issue, they place the solution in the knowledge base so a new case won't need to be created for it. Instead, customers can look at the knowledge base to determine if the issue has happened before. They can search for it, and if a solution is available, they try to implement it. If not, they create a case with the support team.

If STA can have a knowledge base, that would be useful to a lot of customers because most issues are probably repeated across multiple customers and organizations, not just our organization. We might be using the same version, but the same issue can occur with the same version anywhere. So instead of us creating cases and waiting for them, if their technical support resolved an issue on this particular version and the resolution is already available to look at, that will be useful.

Tidal switched hands from Cisco to STA Group. I have been taking the quarterly seminars or webinars from STA Group. We are looking forward to the new version that they will make available sometime in Q1, probably in February or March. We are looking forward to it because STA Group is already aware that a lot of customers complained that the client responds slowly. They are aware of that and have made some big changes. We are waiting for that new release to see how it behaves. It is good that the solution changed hands, since Cisco is a giant and Tidal was just one part of their business. Now, STA Group has dedicated teams who are working on developing this tool, adding new features, etc. 

We do not use Tidal support for private or public clouds.

Which solution did I use previously and why did I switch?

This is my first scheduler. I used to send jobs to the Control-M team, but that was with my previous organization.

When I started working for my current organization, Tidal was already available. My team was supposed to support Tidal too.

How was the initial setup?

I was not here for the initial setup. We have been partners with Tidal for a long time, close to 10 years.

I was part of multiple upgrades we did within our organization that were fairly simple. 

For an upgrade, we go to the support site and get the documentation. That documentation is useful; we do not need to go back to the support team asking for more details, as we usually get valid documentation. We just need to follow their steps. Following the steps takes around 30 minutes, and then we won't release it to other employees without doing our own validations. Overall, the upgrade takes an hour.

The implementation is straightforward. It is whatever is provided in the documentation. They do provide two ways to do it. So, we choose one way to do it. We copy whatever files are required manually because we want to make sure of what we are copying. We want to make sure we have all the backups available before we do stuff.

What about the implementation team?

Before implementation, it is better to get with the Tidal support guys. They usually assess the features the organization wants to use and provide the specs to use based upon how many jobs need to be configured. So, it is better to work with support, and if they provide certain specs, it is always better to go with those specs.

We had this issue when we were doing some upgrades. We moved our infrastructure from one place to another because we thought we could reduce the specs, and then we had issues. So, it is better to go with whatever specs the vendor provides.

What was our ROI?

The time that it saves my staff is not huge, maybe four hours a week.

It has helped our organization by having one scheduler, instead of multiple schedulers, and having the resources to support dependencies. It saves monetary resources as well as people's time. We don't want people to look at the screens on multiple platforms and say, "Okay, this job is done. Go trigger another job."

The TCO is okay, but not out-of-the-box.

What's my experience with pricing, setup cost, and licensing?

Right now, we are in a good position with the licensing model that we have with the Tidal vendor, so we won't have any issues even if we double our current production. Initially, Tidal provided us some specs where, if you have this number of jobs, you come under this category. They usually provide a range of jobs, from 2,000 to 10,000, and you can use those specs for your infrastructure. Whether you have 2,000 or 8,000 jobs, Tidal should support it.

This solution is a bit expensive in the current world where everybody is trying to cut down on certain things. 

Transparency regarding cost is okay. There were a few changes that happened because of the move from Cisco to STA Group when we renewed our contract.

What other advice do I have?

Our platforms do have dependencies, but not within a single job. We do have two different jobs dependent on each other, where one may run on Windows while the other runs on Unix or our SQL Server. The jobs do not pass data between them; one is simply dependent on the other.

They are increasing capacity. However, we probably are not using it because we don't have a requirement, and sometimes it's expensive.

The learning curve is easy. I don't think it's complex. I never heard back from my developers that it is complex. They always complain about the performance of the client. Other than that, they usually say the documentation or help available is fairly useful for them.

The training needed is minimal because Tidal is straightforward. It takes a couple days of training. Of course, with any new tool, you need to read certain documentation. Anybody who is doing the training can't provide every detail of that particular tool, but people can get the feel of the tool pretty quickly.

The best thing with Tidal is its ability to support multiple platforms.

I would rate the product as an eight (out of 10).

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Sr System Engineer at a financial services firm with 5,001-10,000 employees
Real User
Alerts when things are falling behind schedule, or something unexpectedly fails, enable us to jump in and address an issue
Pros and Cons
  • "The first, big thing that we got out of using Tidal Workload Automation was having a centralized view of the status of all of our batch processes across all these systems... We can look into the schedule at any given time and see if things are running on track or if they are falling behind. We can also see if something failed."
  • "Their software installation and update process could use some improvements. I'm pretty sure they're working on that, but that's definitely an area where it could be streamlined a lot. There's still a lot of manual work that you have to do with the schedule when you deploy masters or do the agents."

What is our primary use case?

We use it to manage our batch processing. For us, it came in as a replacement for a lot of different systems running crontab. In our case it's primarily for Unix/Linux systems that don't have their own mechanism for kicking off all these batch processes. It's the coordinator of all of our background processes and batch jobs that are running overnight and during the day.

We use it to kick off custom Unix/Linux scripts that will launch our application processes. It's almost entirely Windows and Linux shell scripts that it's kicking off.

How has it helped my organization?

For administrators, the alerting has been a big plus, in addition to having a place to go and look at the status. They can be notified when there's something happening in a schedule, like things are falling behind schedule, or something unexpectedly fails. It definitely helps speed up the time to jump in and address an issue and get things back on track.

It has also given us a framework for standardizing a lot of our processes. Before we had all these things in Tidal, there were so many custom services and applications written. Tidal has given us a way to say, "Here's a standard way for you to get your jobs scheduled and automated." It hasn't necessarily enforced it, but it has given people an opportunity to say, "Oh, if I use the tool and if I set up my jobs to be able to run in the scheduler, it will be that much easier for me to get this delivered to production, or to test it and validate it." It has helped us put a framework around how developers are going to get their application code deployed. It's not really pushing the code, but it has encouraged some consistency in how they design their processes.

It would be really hard to quantify how much staff time it has saved, but for sure, before that initial move into the solution, some things would take forever. It was just complete spaghetti going through dozens of boxes with different crontabs trying to figure out: "Okay, I had an incident in the middle of the night. What ran, what didn't run? What ran but didn't complete successfully?" and those kinds of things. Tidal has resulted in a huge gain there. I don't think there's any way I could quantify how much it's simplified those outage scenarios. 

And even a planned maintenance was just as hard as an outage before we had Tidal. Now, with a scheduler, we can schedule a big maintenance that's going to require a lot of people to be on hand, one where time is of the essence. The more efficiently we can adjust a schedule for an off-hours maintenance and essentially disrupt what our typical schedule is, the more it helps us with those maintenance procedures. We know in advance that we have the capability to move jobs earlier and to move jobs later so that they're outside of the maintenance window and that we're not going to conflict with anything. When we're done with our maintenance, we're able to just press a button and let everything run and go.

Tidal has definitely reduced weekend and overtime hours. In our environment, there's no way to eliminate those hours, but that's nothing to do with Tidal. That's our own design. 

Our team does the majority of the work with the scheduler. It gives us the ability to do a lot of the scheduling tasks pretty quickly, so that the developers or business folks who are making requests don't need to deal with it. It gives us the leverage to make what they feel is a bigger change to the schedule, and to knock it out really quickly. They don't have to code something or make changes to handle it. We can do a lot of those adjustments from the scheduler itself.

The solution has enabled us to do more in terms of job capacity because, in the past, we had all these different crontabs running around out there. There was really no good way for people to condense jobs together, running each one as soon as the previous one finished, unless they customized every process flow or job flow into a script. Doing so was essentially a custom program or process they'd have to create for each one, and that's pretty difficult to manage. With the scheduler, we can squeeze those jobs together with their native process runtimes and say, "Okay, we're going to run through steps 1 to 10, allow those things to run in a sequence, and get them done in the shortest window possible." It has definitely helped with that.

Our environment is really different now compared to what it was when we started with Tidal all those years ago, but there's really no way we could have sustained that old model without the functionality in the scheduler to get our schedule done quickly. As our company has grown, it has been difficult for us to find maintenance windows or quiet periods. Every minute that we can save reduces the time an overnight batch process impacts daytime business users. The quicker we can get things completed, the better it is for the user experience and our environment.

What is most valuable?

The first big thing that we got out of using Tidal Workload Automation was having a centralized view of the status of all of our batch processes across all these systems. We're not a big environment compared to some of their customers, but these are all business-critical processes, and there are at least 100 different systems in our environment. To manage all these processes, it gives us a single point of view. We can look into the schedule at any given time and see whether things are running on track or falling behind. We can also see if something failed. The big thing is having that visibility into everything.

We use it for cross-platform and cross-application workloads, although they're not that different from each other. A lot of our workloads are similar, but they're technically different platforms and applications. We have some different OS's, but they're all Unix or Linux systems that are running the same sort of back-end technology. In our world, internally, they're different platforms. It gives us a really simple view into everything that's happening. 

I've been using it for a long time, so to me it's a pretty intuitive way to see, at a glance, how the batch schedule is progressing through the day. I don't know if that would necessarily be the case for a new user. To me it's intuitive, and that is what helped us choose it over some other scheduling technologies in the past. It seemed like the most intuitive way to look at a lot of different batch processes running on lots of different systems.

As far as its ability to allow admins and users to see the information relevant to them, the interface is good, once you have access to it. We have had a little bit of an issue with some browser compatibility, but other than that, it's been a good tool for people to come in and see where their process stands from a business point of view. They do have to have a little bit of familiarity with what they're looking for, the programs on the back-end. This is nothing to do with Tidal, but our technology environment is a bit hard to digest early on, and things can be a little difficult to navigate in our technology stack at times. Tidal helps users who are new to it to get a view of: "Here's the thing that I'm interested in. I know the program name, but I don't know when it runs, or how long it takes." Without having to get into the back-end of our technology, it gives them a way to see what's happening in the schedule.

What needs improvement?

Their software installation and update process could use some improvement. I'm pretty sure they're working on that, but it's definitely an area that could be streamlined a lot. There's still a lot of manual work you have to do with the scheduler when you deploy masters or agents.

The other thing is that the performance of the web interface has not been great. It's feedback I get quite a bit: the web interface can be sluggish at times, and we've got to recycle it to get it to be more responsive. We brought up this issue a while ago. A lot of what we may be dealing with is that we are running an older version; I suspect a lot of the performance issues have been corrected in later releases. We are running 6.2.1, but they have 6.3.5 out now.

As for things we'd like to have, I'd love to see the database back-end support PostgreSQL or MySQL. Right now the choices are Microsoft SQL Server or Oracle.

For how long have I used the solution?

I've used Tidal Workload Automation for about 15 years.

What do I think about the stability of the solution?

It's been rock solid for us. We've had it for 15 years and I have really never had to make support calls to either Cisco or Tidal. The only times I ever really have to contact them are when we do our renewals or we migrate to a new version and we have to get a different license key.

What do I think about the scalability of the solution?

I don't think we've ever pushed the limits of the schedulers, the masters. We haven't really had any kind of scalability issue with the scheduler or the agents. The only thing we've run into, as far as scalability goes, is the web interface, which can get pretty slow at times, so we have to cycle it. The web client is just sluggish, and its performance degrades over time. That's why we do the recycle, and we notice it helps quite a bit.

How are customer service and technical support?

I really don't have to make support calls almost ever.

I'll ask a question sometimes, and they've been great. They've been very responsive. I haven't even had to do that for quite a while now. We set up our current implementation when they were still with Cisco. 

It was a little bit difficult with Cisco to get to the Tidal software engineers, who are now their own entity. It's definitely gotten a lot better now that they're not part of Cisco. I can just call in, and they know who I am and what I'm asking for right off the bat. When it was with Cisco, there was a whole triage system you had to get through, and a lot of people at Cisco didn't even know the product existed.

Which solution did I use previously and why did I switch?

We only had crontab on a bunch of Unix systems. We looked into Tidal because we were having so many missed processes. Our environment is so much bigger and more complicated now compared to 15 years ago, but even back then, having things in crontab almost made it easier for issues to occur, because jobs were arbitrarily set to run at different times, under different users, on different systems. If there was some sort of conflict or collision, there was really no way to even regulate the fact that there were too many processes running at a given time.

It actually helped prevent some issues then, and now we have so many things cranking through Tidal that getting all of this to work in crontab would be impossible.

How was the initial setup?

Installing is not terribly complex. I don't have experience with other scheduler products, so I can't compare it to them, but it does have more manual install steps than some other software in general. For instance, there isn't an RPM installer. We use a lot of Red Hat in our environment, and we can use RPMs for our Unix and Linux platforms. It would be nice if it was packaged like that, so you could run the install or the configure, perhaps with a few prompts. It's not far from that; it does have a shell script that runs, which isn't too different. But it would be nice to run updates for our scheduler along with all the other OS updates that we do in our environment.

If you know what you are doing, you can get through the deployment easily in under an hour. I don't even know if it would take that long. If you have access to create your database and you already have your OS environment provisioned, the install and setup is really not very time-consuming. There are just a few manual steps you need to do, here and there, to configure it. But it's definitely doable in an hour.

Assuming someone has access to do each of the steps that they need to do, one person could definitely do the install. I've done it in a VM lab and definitely knocked it out in under an hour. As long as you can create your database, create your database users, and run the software install, it's definitely a one-person job.

In terms of an implementation strategy, we've really stuck with one model. There's not a lot of leeway there. Essentially, you are going to have three master servers, a client manager, and you're going to have a database somewhere. The only difference might be the choice of operating systems or whether you're going to run on a VM or a physical server. But that's pretty removed from Tidal itself. There isn't a whole lot of variation there.

When it comes to a learning curve for Tidal, I've been using it a long time, so it's pretty intuitive to me. New users need to get their bearings and learn how they can filter, and what they need to filter on, to answer the questions they have. It takes them two or three sessions of logging in and working with it. Sometimes we provide some guidance on best practices for finding their program. It can be a bit overwhelming. I don't think Tidal necessarily makes it hard; it's just the nature of all these processes running and the things that are there. Tidal helps, but it doesn't keep it from being a complicated thing to follow and understand.

What was our ROI?

Tidal Workload Automation is a no-brainer for us, given the importance of the processes that we have. What it costs to coordinate, manage, and get all these things to complete, while warning us when things are not running on time, makes it a no-brainer to me.

I do not know how to quantify our ROI. We get everything that we pay for in the product, and there are even features that we do not use.

What's my experience with pricing, setup cost, and licensing?

Another advantage of Tidal is that it is a pretty affordable scheduler tool that lets us do a lot. You get a lot of bang for the buck. It has always seemed pretty reasonable to me.

The licensing model is hugely flexible. In fact, sometimes we get a little bit lost on which model we should go with. Over time, it has adjusted and changed, but the current model we have is an enterprise license agreement. We do not have to worry about how many agents we add and remove. That has been the easiest for us.

They have options to do one-, two-, or three-year renewals. You can space out your renewals or do things like an enterprise license agreement. You can dial into, "Hey, I just want to run this many hosts." They cover a lot of options for you. It may not make sense for a smaller shop to run an enterprise agreement. They might just want to run five agents. In their case, having that option is huge.

Given that there are no costs for upgrades and other enhancements, it is really easy to budget for Tidal. We have not had any issues.

Which other solutions did I evaluate?

When we did the initial implementation, we did a full product comparison. We looked at the top four and did a comparison of the features of what seemed like the best products at the time. Over the years, I've reached out to other vendors just to get an idea of what other features are out there in the product space. We have never really found anything that had a compelling advantage over Tidal Workload Automation that made us want to switch. It has been really stable and has definitely gotten the work done for us.

We looked at CA's AutoSys at the time, but CA has so many schedulers now that it's hard to say exactly which one that corresponds to today. IBM had Tivoli Workload Scheduler at the time. Since then, we have had someone from ISC reach out a fair amount. We also looked a little bit at Control-M from BMC Software. JAMS was another one that popped up.

Tidal is familiar. We know how it works and what it is doing. It has also stayed quite accessible: one person could sit down, deploy it, do the install, get it up and running, and then it is just a matter of setting up the agents and the workload. I have not looked at the other products in so long that the comparison is not even relevant today, but BMC and a couple of other schedulers were overly complex, or their user interfaces just were not intuitive enough for our users.

What other advice do I have?

The big thing I would say to someone who is deploying this new, aside from having a naming standard and a structure, would be to get their security groups right up-front. That is a pretty big one. Set your owners and who your users are going to be, and think about how you are going to structure it from a user point of view.

We have two core systems here. One is our loan origination system and the other allocates and directs leads, and they both rely on Tidal heavily. If the scheduler were to shut down for some reason and we couldn't run it, it would have a huge impact on our business. Thankfully, that's not a scenario we encounter, but we really rely on it to drive so many of these business processes. In terms of increasing our usage, other business areas have started to take some interest in it, but we haven't made a dedicated effort to get, for example, our SQL Server systems managed by the scheduler, or to do things with Amazon. We haven't really had anyone driving that effort.

In our environment, one person, me, maintains the Tidal software. It's more an organizational question of how many people you want to have who are capable of supporting it. We have a team of six people, all systems engineers. They're not all as up to speed on it as I am, but if I gave them my notes for doing the install, I'm sure they could all do it.

The number of users of Tidal in our organization depends on the definition of "users." It touches things that impact every user in our organization, but with respect to people who log in and use the interface, it's only about a dozen. Aside from the system engineers, the next biggest users would be developers or program engineers. They are people involved in researching or updating a task within a procedure or process, and they want to know what the scheduled processes are, when they run, what the rules are for running them, and how long they take. Sometimes business analysts are involved in that as well.

I rate Tidal a nine out of 10. I would say it's a 10 if we didn't have some performance struggles with the web interface.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.