What is our primary use case?
We use Tidal to run jobs across multiple application platforms, such as SAP, ECC, PDN, and Informatica, as well as jobs that run in the Azure cloud. We also use it for several warehouse management jobs with OS/400 and AS/400 connectors. Because we have so many different types of connectors, we bring all these jobs into Tidal so we can set up dependencies between them, e.g., an SAP job and an OS/400 job may depend on each other in some way, allowing a cross-platform job flow.
We are currently on the most recent version.
How has it helped my organization?
We are using it for cross-platform workloads. That is probably the biggest reason that we are using it. The solution is generally good. Over the years, we have needed to do our own learning about how to manage it in terms of understanding dependencies and successors, then setting up times and so forth. However, this is the type of stuff you would have to learn with any scheduling app. We find it to be really useful. I'm hoping with the Explorer tool that they'll have better reporting so we can do some full cross-platform job stream reporting that they haven't really done much in the past. Therefore, we should be able to see some of that. In terms of managing it, I find it very useful other than the learning curve.
We use cross-platform management for so many things. We use it a lot for our warehouse management replenishment-type work, to and from SAP. Once we implemented our job stream flow, things get sorted out of house for delivery and can be updated in SAP (and vice versa). Having the job stream has been helpful. Also, having it all automated makes a difference to replenishment.
We use the ability to enable admins and users to see the information relevant to them specifically in our production environment. We can, but don't always, limit someone to only seeing data that they need to see. Then, they are not overwhelmed by other data. We do allow most of our users to see all the other data just for information and to understand the environment. However, you can begin to narrow in on what you need, if you're using policies and work groups correctly. Depending on how we use it, especially in production, it lets users only be able to do what they should be doing in production. They should only be managing their jobs, possibly see other jobs, and understand if there is a delay upstream which could be impacting them. They won't be able to manage those jobs. They need to contact the right people who understand those jobs to manage them. The solution lets them work within their lanes and do the work correctly without having a negative impact upstream, and hopefully, not downstream.
There is an awareness that we are scheduling across the multiple applications and understanding that all applications don't live in their own silos. There is an impact across the organization. It gives us that holistic awareness, in general.
In the past couple of years, I have done training and we have leveraged the ability to create alerts that go to the right people. Therefore, I don't get alerts for something that I shouldn't be dealing with. Now, the people who own the jobs get the alerts, and they can figure out if there is a problem with the application that they need to work with or if it is something with Tidal. Then, if necessary, they can escalate it to me. Fortunately, that doesn't happen as much anymore, which makes me very happy. It gives us the alerts in time so we can handle things ideally before they become critical, and hopefully, we're doing our jobs so the right people are contacted.
What is most valuable?
I love the "where used by" feature where you can find out where a particular job action, job event, or even a connector is being used. That is really good.
I've seen a lot of improvements in the logging. It has become more useful.
I'm looking forward to working with Explorer and Repository. I haven't had time to implement those yet, but I'm pretty excited about both of those tools.
We get a lot of use out of variables within Tidal to help schedule jobs, help track things, create alerting, etc. I find those variables have a lot of use.
What needs improvement?
The solution’s drill-down functionality, so admins can investigate data or processes, depends on what we are looking at. In some places, it is better than others and getting a lot better. In the five years that I've been supporting this solution, I've seen them get much better at allowing us to get more detailed information in the logs and job activity.
I'm still hoping with Explorer to be able to see end-to-end job streams. That's not really something that's easy to see today in the web client. However, I haven't worked with Explorer yet. One of the things that we have found frustrating is not being able to see an end-to-end job stream across multiple applications within Tidal. We use jobs for that right now, but I have high hopes that we'll be able to see that in Explorer.
The reporting piece needs improvement. They are working to improve it but this is the piece that they can continue to work on. By reporting, I mean things like end-to-end job streams, historical reporting over the long-term, and forecasting. Those are some areas that I've expressed to them where they need to up their game.
We have the transport functionality where you move objects from one system to another. Right now, it's a manual process. I would love to have more automated transports. Then, I'd love to be able to tie this into our ITSM system so we can have change approvals; once a change is approved, the transport would happen automatically.
For how long have I used the solution?
It feels like forever. We have had it at Columbia Sportswear for seven years. I have been supporting it for five years.
What do I think about the stability of the solution?
The stability has gotten a lot better. Every time that they release a new version, there are a few months where it is a little rocky, especially because they are trying to make some real changes on the back-end. Sometimes, I'm guilty of being a bit too cutting edge with the patches that I put in place. I have learned to hold back a little and give it a couple of months. Usually by that time, they have worked out the bugs and things are pretty stable. I would say this about any system.
I'm the only one who supports Tidal, then I pull in a dev person. There is usually one person involved with setting up the VMs. However, they have that automated, so it is just a request for a standard set of servers. They push a button and the servers are built. When we get to QA testing, we're usually trying to align it with a lot of other QA testing. Therefore, people are naturally testing the system as they would with any other work that they are doing. Essentially, this covers all of our schedulers, which is consistently 15 to 20 people. I'm not asking them to do anything that they are not already doing, except tell me if there are problems.
I have a very loose backup person, but I'm very motivated not to get calls on weekends or vacation, which is why we built our alerting systems. We try to keep them strong, so before anything gets to me, it's been vetted by the people who can solve the problem if it is job-specific. If Tidal itself goes down, though, I'm the one who gets called because I'm the one who can fix it.
What do I think about the scalability of the solution?
Tidal does a good job. We periodically have them do a performance review every six to nine months by sending them our logs. I open a ticket, then send them a bunch of logs. They take a look at them and we do any necessary tuning. We have discovered over the years, going from a small to medium to high-medium organization, that Tidal is very responsive in terms of helping us figure out how to tune systems so we have the best performance. It can handle very large scale organizations job-wise. It is just how you tune your servers, and they're very willing to help with that. The best thing that a person can do is work with Tidal support to find out exactly what is necessary on the back-end to have their system scaled out correctly. It can be done. We run about 8,000 jobs in production, but I know there are some systems which run tens of thousands of jobs of production. We haven't hit a scalability issue at all.
Regularly, 20 to 30 people use it in our organization on a week-by-week basis. We have about 100 users in the system. Their roles cover developing and creating jobs, QA, and testing job scenarios, events, and actions; everything around developing a job or job stream. Then, we have our service desk people who do the transports from QA into production. There are about four people who do this.
In production, people from each scheduling team are responsible for the health of their jobs, which can include if there are issues with the jobs running, maintenance that they have planned, setting those jobs on hold, asking me to put an outage on an adapter, rerunning jobs, or disabling/enabling jobs. It is general job development and job management.
How are customer service and technical support?
The standard tech support at Tidal is very good. You can call or open a ticket if you get stuck on something. They are usually quick with an answer, or at least quick to respond with more information. When I have gotten stuck, I have always been able to get help and get out of it. I once spent eight hours on a weekend call with one poor guy.
The reality is you will always have issues that you have to escalate. That is just the world that we live in. 90 percent of the time, I have had a very good experience and gotten what I needed. I have been able to get support people on the phone. If we find something they haven't seen, they are good at pulling in development. They are good at saying, "Okay, this is new. We will put it into development." Now, with their new website where you can see your tickets and track things, they make it a lot easier. If you have a bug that is in development, you can track where it is and when it will probably be released. There's a lot of transparency now, which makes it comforting to know your stuff is being worked on. These are improvements that they made as they moved away from Cisco.
When it was supported by Cisco, it was okay but it wasn't as good. Since Tidal broke away from Cisco two years ago, that was when we saw the most improvements in terms of things that we had been asking for and the delivery on them.
Which solution did I use previously and why did I switch?
I think we had a variety of solutions that were sort of stitched together.
How was the initial setup?
Its setup is around mid-level complexity. You need to do a little reading to understand how Tidal works. You need to understand things like connectors and the whole fault tolerant environment, but the data is all there to get to.
Whenever we are moving to a new operating system, I work with my infrastructure team to get new VMs built up in the right OS. I start to set them up with all the things that I need in order to build Tidal. At this point, I usually get a demo license from Tidal as I'm doing the build. This way, I can build and test but not take up a license. Then, when I'm ready to go live, I always go live in development first to QA, then production. So, I have a cut-over from the old system to the new system, then we migrate our database over. I work with my DBAs to do that. Then, I do testing in development to make sure everything is right, doing the same thing in QA. I also do more rigorous testing with the schedulers, then eventually it goes into production. It is about six weeks from development to production.
The migration to the cloud has been an extensive project. It is going generally well. A lot of what was running in the Informatica environment has now been shifted over into the Azure environment over the last couple of years. That is where some of the migration has been occurring.
What about the implementation team?
The initial setup was done by somebody else who no longer works with the company. Since then, we have moved to new operating systems over the years. These are always new systems that we build up, then migrate from the old system to the new system. I've set this up several times, so systems that we are currently running are the ones that I've set up.
What was our ROI?
Considering all the people who were involved in checking jobs on a daily basis, manually running jobs or auditing them through standalone tools, and trying to connect them, we have saved hundreds of hours weekly, which is substantial.
I am able to create something predictable and manageable in such a way that we know that we will get alerted if there's a problem and know how jobs are going to run. People can see and manage their jobs on a daily basis without having to talk to me about them. The return on investment is scope of jobs, making it so the management of jobs is not something that is handled by one team. It can be parsed out to the schedulers who know and understand those jobs so they can have some control over them, then I don't have to worry about all the different jobs streams. I just have to look from above and be able to help make sure that the system itself works.
What's my experience with pricing, setup cost, and licensing?
Our yearly licensing costs are between $10,000 to $20,000. They have always been reasonable with us. I like that non-production licensing is about half the cost of production licensing. Licensing is by adapter typically. We have had scenarios where we have had to take an adapter from one environment to another, and they've allowed us to do that. They have made it a very reasonable process. There's definitely a feeling that they will work with you.
Budgeting is pretty predictable. They changed their model last time, which is why I'm not sure exactly how much it ended up costing. I know that our licensing guy did make a decision to license us in such a way that now we have a lot more flexibility based on adding VMs that can connect to Tidal and run jobs. So, it's not a problem to budget for it.
Which other solutions did I evaluate?
We have on occasion looked at other options simply just to be aware of what is out there. We don't plan to change anything right now that I'm aware of simply because we don't have the time or budget. I'm not even sure we have the need. Every once in a while, we do look around because it's useful to go out, compare, and ensure that it's still something that fits our needs.
What other advice do I have?
Depending on how you will roll it out, engage the people who will be managing the jobs early in the process so they are aware and can help plan how Tidal is used across the environment. That is something I wish the people who rolled it out here had done. I don't know if that was even a consideration back then. There are definitely things that I would love to change about how we do our scheduling, but they are so baked in at this point that it would be a very large change. Also, make sure that you engage and use Tidal's resources. They have some great resources and know what they are doing. Work with them, as they can help you figure out how to use this tool.
There are ways that it makes life more convenient in terms of ensuring the right people get alerted for issues. We are able to see job health, jobs over a couple of days, and have some predictability, but not as much as I would like to see in terms of forecasting. If we were to stop using it, we would go to something similar simply because it's so useful to have an overall scheduling application.
I have developed some training specifically for the learning curve. The basic job stuff is pretty quick, especially because we have a lot of people who can be leaned on. When you start drilling down into things like using variables or more ad hoc settings, the learning curve is a little higher. However, we have a lot of people using those features who help each other learn them. While it's not incredibly steep, there is a learning curve. I do one- to two-hour sessions, which are either classroom-led or recorded. That is usually enough for most people to get started. Sometimes, people will come back with more questions, which I totally encourage. Then, if they start to get into some of the deeper things, like ad hoc variables, I have additional sessions that they can attend. These are usually about an hour long and get them going down the right path. I know that Tidal has developed some training, but I had put some material in place before they did, as I wanted to train everybody so they could do their job and not have to talk to me.
The biggest lesson that I have learnt from using Tidal is to train people. Make sure that the people who manage jobs understand what they are doing and are educated to the best of your ability. That has been one of my key takeaways. Also, don't go to the latest patch when it first comes out.
There is a lot of power within Tidal, probably a lot that we're not even using today in terms of managing jobs as well as how we can set up alerting. Also, they have great support, so I can usually get what I need.
It's pretty extensively used right now. We might shift some of our job scheduling to be more on-demand, then still leverage Tidal for more of the batch scheduling. At least for now, we will keep using it as we continue to add systems. I even have a ticket open because we just added an adapter that is not quite working right, potentially because I don't fully understand the adapter. Therefore, we're continuing to add job streams, but it will always depend on what applications we are adding.
Two years ago, I would have given it a six (out of 10). Today, I will give it a nine (out of 10).
Which deployment model are you using for this solution?
On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.