it_user779256 - PeerSpot reviewer
Solutions Architect at American Express
Real User
Allows me to generate and manage synthetic data, but the interface could be better
Pros and Cons
  • "It allows us to create a testing environment that is repeatable. And we can manage the data so that our testing becomes automated, everything from actually performing the testing to also evaluating the results."

    What is our primary use case?

    Generate synthetic test data.

    It has performed fine. It provides us with the capabilities that we were anticipating.

    How has it helped my organization?

    It allows us to create a testing environment that is repeatable. And we can manage the data so that our testing becomes automated, everything from actually performing the testing to also evaluating the results. We can automate that process. Plus, we're no longer using production data.

    What is most valuable?

    1. I am able to maintain metadata information based on the structures, and 
    2. I am able to generate and manage synthetic data from those structures.
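
The two points above describe metadata-driven generation: keep structure definitions separate, then synthesize values from them. Below is a minimal sketch of that idea; the schema format and column names are invented for illustration and are not TDM's actual metadata model.

```python
import random
import string

# Hypothetical column metadata, illustrating the idea of keeping structure
# definitions separate from the generated values (not TDM's real format).
SCHEMA = {
    "account_id": {"type": "int", "min": 100000, "max": 999999},
    "status":     {"type": "choice", "values": ["OPEN", "CLOSED", "FROZEN"]},
    "ref_code":   {"type": "string", "length": 8},
}

def generate_row(schema, rng):
    """Generate one synthetic row from the metadata definition."""
    row = {}
    for column, spec in schema.items():
        if spec["type"] == "int":
            row[column] = rng.randint(spec["min"], spec["max"])
        elif spec["type"] == "choice":
            row[column] = rng.choice(spec["values"])
        elif spec["type"] == "string":
            row[column] = "".join(
                rng.choices(string.ascii_uppercase, k=spec["length"]))
    return row

rng = random.Random(42)  # seeded, so the generated test data is repeatable
rows = [generate_row(SCHEMA, rng) for _ in range(5)]
```

Seeding the generator is what makes the resulting test environment repeatable, which is exactly the property the reviewer values.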

    What needs improvement?

    The interface, based on our unique test case - because ours is an extremely unusual platform - could be better. We have to do multiple steps just to create a single output. We understand that, because we are a niche architecture, it's not high on their list, but eventually we're hoping it becomes integrated and seamless.

    As noted in my answer on "initial setup", I would like to see that I don't have to do three steps; rather, that it's all integrated into one. Plus, I'd like to know more about their API, because I want to be able to call it directly, passing in specific information so that I can tune the results to my specific needs for that test case, and to be able to do it for multiple messages in one call.
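
The batch API being wished for here does not exist in this form, so as a sketch only, here is what a request body for "multiple messages in one call" could look like. Every field name and the test-case label are assumptions, not TDM's API.

```python
import json

def build_batch_request(test_case, messages):
    """Assemble one request body covering multiple messages
    (hypothetical schema; any real TDM API would differ)."""
    return {
        "testCase": test_case,
        "messages": [
            # each message carries its own type plus per-message overrides,
            # which is how the caller would "tune the results" per test case
            {"messageType": m["type"], "overrides": m.get("overrides", {})}
            for m in messages
        ],
    }

body = build_batch_request(
    "niche-platform-regression",
    [
        {"type": "payment", "overrides": {"currency": "USD"}},
        {"type": "refund"},
    ],
)
payload = json.dumps(body)
```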

    Buyer's Guide
    Broadcom Test Data Manager
    June 2025
    Learn what your peers think about Broadcom Test Data Manager. Get advice and tips from experienced pros sharing their opinions. Updated: June 2025.
    857,028 professionals have used our research since 2012.

    What do I think about the stability of the solution?

    Stability is fine. It's stable. It doesn't crash or anything like that, because it's just a utility that we use to generate data. Once we generate the data, we capture it and maintain it. We don't use the tool to continually generate data; we only generate it for the specific test case, and then don't generate it again. But it gives us the ability to handle all the various combinations of variables; that's the big part.

    What do I think about the scalability of the solution?

    For our platform, scalability probably isn't really an issue. We're not planning on using it the way it was intended, because we're not going to use it for continually generating more data. We want to generate only specific output that we will then maintain separately and reuse. So, the only time we will generate anything is when there is a different test case needed, a different condition that we need to be able to create. So, scalability is not an issue.

    How are customer service and support?

    Tech support is great. We've had a couple of in-house training sessions. It's coming along fine. We're at a point now where we're trying to leverage some other tools, like Agile Designer, to start managing the knowledge we're starting to capture, so that we can then begin automating the construction of this component with Agile Designer as well.

    Which solution did I use previously and why did I switch?

    We didn't have a previous solution.

    How was the initial setup?

    The truth is that I was involved in the setup, but they didn't listen to me. "They" are other people in the company I work for. It wasn't CA that did anything right or wrong; it was that the people who decided how to set it up didn't understand. So we're struggling with that, and we will probably transition over. Right now we have it installed on laptops, and it shouldn't be. It should be server-based. We should have a central point where we can maintain everything.

    So, the setup is fairly straightforward, except for the fact that there are three steps that we have to go through. We have to do a pre-setup, a pre-process, then we can do our generation of our information, and then there's a post-process that we have to perform, only because of the unique characteristics of our platform.
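
The complaint about three separate steps is essentially a pipeline-chaining problem: expose one entry point that runs pre-process, generation, and post-process in order. A toy sketch of that wrapping, where the three step functions are placeholders rather than TDM operations:

```python
def pre_process(raw):
    # placeholder for the platform-specific pre-setup/pre-process step
    return raw.strip()

def generate(prepared):
    # placeholder for the actual data-generation step
    return f"record:{prepared}"

def post_process(output):
    # placeholder for the platform-specific post-process step
    return output.upper()

def run_pipeline(raw, steps=(pre_process, generate, post_process)):
    """Chain all three steps behind a single call, so callers
    see one integrated operation instead of three manual ones."""
    result = raw
    for step in steps:
        result = step(result)
    return result
```

Wrapping the steps this way is a common stopgap while waiting for the vendor to integrate them natively.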

    Which other solutions did I evaluate?

    In addition to CA Test Data Manager, we evaluated IBM InfoSphere Optim. Those were the two products that were available to our company at the time when I proposed the idea of using it in this way.

    We chose CA because they had the capability of doing relationship mapping between data variables.

    What other advice do I have?

    The most important criterion when selecting a vendor is support. And obviously it comes down to: Do they offer the capabilities I'm interested in at a reasonable price, with good support?

    I rate it at seven out of 10 because of those three steps I have to go through. If they get rid of those, make it one step, and do these other things, I'd give it a solid nine. Nothing's perfect.

    For my use, based on the products out there that I have researched, this is the best one.

    Disclosure: My company does not have a business relationship with this vendor other than being a customer.
    PeerSpot user
    it_user778692 - PeerSpot reviewer
    Software Developer Engineer at a financial services firm with 5,001-10,000 employees
    Real User
    It saves us time from generating the same amount of data in real-time
    Pros and Cons
    • "It saves us time from generating the same amount of data in real-time.​"
    • "We had one user that was primarily working on it, who probably spent a few months initially setting it up."

    What is our primary use case?

    What we do is generate data that we use to create things like socials (Social Security numbers). We can also virtualize the bureaus that the software calls out to, so we can get a good response back that we can simulate.
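
Generating "socials" means producing well-formed but fictitious Social Security numbers. The sketch below enforces the published SSN format rules (area not 000, 666, or 900-999; group not 00; serial not 0000); this reflects general SSN formatting conventions, not anything TDM-specific, and it guarantees format validity only, not that a number is unassigned in real life.

```python
import random

def synthetic_ssn(rng):
    """Generate a well-formed but fictitious SSN string.
    Excludes invalid area codes (000, 666, 900-999),
    group 00, and serial 0000."""
    while True:
        area = rng.randint(1, 899)  # 900-999 never valid for SSNs
        if area != 666:
            break
    group = rng.randint(1, 99)
    serial = rng.randint(1, 9999)
    return f"{area:03d}-{group:02d}-{serial:04d}"

rng = random.Random(7)  # seeded for repeatable test data
ssns = [synthetic_ssn(rng) for _ in range(100)]
```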

    How has it helped my organization?

    It is just easy to manage from what I have seen so far.

    What is most valuable?

    It saves us time from generating the same amount of data in real-time.

    What needs improvement?

    There are some known bugs that I have found, but I think those are known issues. I think those issues just need to be worked out.

    For how long have I used the solution?

    Less than one year.

    What do I think about the stability of the solution?

    So far, so good.

    What do I think about the scalability of the solution?

    So far it has worked for our enterprise services, and we are pretty large. So, I would say it is fairly scalable at the moment.

    How was the initial setup?

    Initial setup was pretty straightforward. We had one user who was primarily working on it. She probably spent a few months initially setting it up, but that is just because we did not know the product at first and were working out all the kinks to make it work with our environment.

    What other advice do I have?

    I would say trial it out.

    Disclosure: My company does not have a business relationship with this vendor other than being a customer.
    PeerSpot user
    PrabhakarDas - PeerSpot reviewer
    PrabhakarDas, Senior Technology Architect at a tech services company with 10,001+ employees
    Real User

    I have been working with CA TDM since my Capgemini days, when it used to be Grid Tools and Ankur Seth was the only Indian SPOC. Later, I started my journey with CA TDM and did a few PoCs for BFSI, Telecom, and ENU clients, which came out successful and were opted in as part of their transformation roadmaps. In the meantime, I continued my journey in delivery areas with CA TDM and other market-standard tool suites, and found CA to be more flexible, oriented toward client-issue patches, and a problem solver.

    Network Engineer at a financial services firm with 1,001-5,000 employees
    Real User
    It scales very well to our network and we have a very large network

    What is our primary use case?

    Monitoring network devices using SNMP. It works very well. 

    How has it helped my organization?

    • Scalability
    • The ability to have multiple pieces of information on the same screen. 

    What is most valuable?

    • The flexibility
    • The ability to view the data the way we want it. 

    What needs improvement?

    More data visualization. Given the way that we are looking at data, we want to be able to see it in different ways. So, we are looking to expand the visualization of that data.

    What do I think about the stability of the solution?

    It is very stable. We have had issues, but we have worked through those issues with CA, and they have been successfully resolved. 

    What do I think about the scalability of the solution?

    It scales very well to our network, and we have a very large network. Finding a solution that can actually monitor all the devices and interfaces is difficult; this product has been able to do that.

    How are customer service and technical support?

    Technical support is very good. They have performed to our expectations.

    Which solution did I use previously and why did I switch?

    We were previously using a different solution, however CA purchased that solution.

    How was the initial setup?

    Due to our environment, it was complex. The product itself is simple. 

    Which other solutions did I evaluate?

    SevOne.

    What other advice do I have?

    I would recommend this solution.

    Most important criteria when selecting a vendor: 

    • Stability
    • The size of the company
    • The ability to respond to our needs and meet our needs. 
    • The breadth of software that they have available for what we are looking to do.
    Disclosure: My company does not have a business relationship with this vendor other than being a customer.
    PeerSpot user
    it_user558576 - PeerSpot reviewer
    Engagement Manager at a tech services company with 5,001-10,000 employees
    MSP
    Synthetic data generation is outstanding

    What is most valuable?

    The synthetic data generation. It generates the data. By default you get the data, but then you have to modify the data. There, I find that it does that amazingly well. I have not seen that feature done as capably in other tools.

    How has it helped my organization?

    Test data is a very, very important thing because there are a lot of challenges around it right now and it's very complex. Creating test data right from scratch is going to be complex. But with Data Finder's synthetic data generation, the whole copy is created and then, on top of it, all the data manipulation is done.

    What needs improvement?

    I can't think of anything at this point in time.

    What do I think about the stability of the solution?

    Stability is very good. In my role as an Engagement Manager, I don't, on a day to day, use the tool. My team does that. But I have not heard any complaints.

    What do I think about the scalability of the solution?

    Scalability is good as well.

    How are customer service and technical support?

    I haven't used tech support personally, but my team does. I have actually been to the CA office in Scottsdale many times. The support is very good because we are, in many ways, a partner with CA.

    Which solution did I use previously and why did I switch?

    We were using our own tool, DDC2 which is a homegrown tool, as well as in some areas IBM Optim.

    How was the initial setup?

    I was not involved in the initial setup.

    What other advice do I have?

    In terms of advice it depends on what you need. Based on our experience we have seen this is a very good tool. Especially when you need to get the bulk data and make changes to it on the fly to do testing. This is the tool that you can use.

    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    PeerSpot user
    it_user752190 - PeerSpot reviewer
    Senior System Engineer at a comms service provider with 10,001+ employees
    Vendor
    Can mask data according to your needs and statistical distribution
    Pros and Cons
    • "The whole process is done by functions which are compiled on the source environment itself. Normally, you take the data from the source, you manage them - for example, mask them - and then you load this masked data into the destination. With this solution, it's completely different. On the source environment, there are functions compiled inside the environment, which means they are amazingly fast and, on the source environment, data are masked already. So when you take them, you already take masked data from the source. So you can copy them, even with an unencrypted pipe."
    • "We are using a specific database. We are not using Oracle or SQL, Microsoft. We are using Teradata. There are some things that they don't have in their software. For example, when delivering data, they are not delivering them in the fastest possible way. There are some things which are faster."

    What is our primary use case?

    Data masking, exactly what this tool is created for. We are going to use it for the incorporation into test or development environments.

    We are managing a lot of customer data, and the idea is not to have to approve or grant a lot of permissions to read all this data. We need to mask the data, but we still need to work with it, which means that developers need access to a lot of data.

    We needed a tool where the data provided to developers would be easy to use and anonymized. This is probably the one and only tool with so many sophisticated features. We need those features for masking/anonymizing data with statistical distribution and for preparation of test/dev data (a lot of data).

    How has it helped my organization?

    This tool is super fast and it has solved many of our issues. It is also much better than many other solutions which are on the market. We've already tested different ones, but this one looks the best currently.

    We can deliver, first, securely; second, safely; and third, without extra permissions. We don't need to go through a whole procedure so that developers have permission to access production data. It's not needed anymore. And it will work with production data because it's almost the same data but, of course, not real. The structure of the data is the same and the context of the data is the same but the values are different.

    The features are very technical and are definitely what we need. We've got some rules, especially from security, from compliance, but we need to take care of our customer data, very securely, and subtly. There is no other product that gives you these opportunities.

    What is most valuable?

    • Masking of data. 
    • There are lots of filters, templates, vocabularies, and functions (which are very fast) to mask data according to your needs and statistical distribution, too.

    The functionality of this tool is something that changed our work. We need to manage the data, and for developers to work on actual data. On the other hand, you don't want to give this data to the developers, because it is customer data that developers shouldn't see. This tool can deliver an environment which is safe for developers. Developers can work on a big amount of data, proper data, actual data, but despite the fact that it is actual, it is not true, because it is masked. For the developer, it's absolutely proper because instead of a customer's date of birth, he's got a different date of birth, which means it's actual data but not the exact data; it's already masked.

    The whole process is done by functions which are compiled on the source environment itself. Normally, you take the data from the source, you manage them - for example, mask them - and then you load this masked data into the destination. With this solution, it's completely different.

    On the source environment, there are functions compiled inside the environment, which means they are amazingly fast and, on the source environment, data are masked already. So when you take them, you already take masked data from the source. So you can copy them, even with an unencrypted pipe.

    These are two pros you cannot find anywhere. Most tools - for example, Informatica - are taking data as they are, in the original, not masked form, then on the Informatica server you need to mask them, and then you're sending them to the destination. Here, in TDM, you already take masked data.
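
The point about masking at the source, before data moves, can be illustrated with a deterministic masking function that the export step applies up front, so the pipe only ever carries masked values. This is a conceptual sketch, not TDM's implementation; the hash-based date shift is an invented example of a repeatable mask.

```python
import datetime
import hashlib

def mask_date_of_birth(dob, secret="demo-secret"):
    """Shift a date of birth by a value derived from a keyed hash,
    so masked output is realistic-looking yet repeatable."""
    digest = hashlib.sha256((secret + dob.isoformat()).encode()).digest()
    shift_days = digest[0] % 365 + 1  # a 1..365 day shift
    return dob - datetime.timedelta(days=shift_days)

def export_from_source(records):
    """The source masks before anything leaves it; downstream copies
    (even over an unencrypted pipe) only ever see masked data."""
    return [{**r, "dob": mask_date_of_birth(r["dob"])} for r in records]

rows = [{"customer": "c1", "dob": datetime.date(1980, 5, 17)}]
masked = export_from_source(rows)
```

Determinism matters here: the same input always masks to the same output, which preserves referential consistency across extracts.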

    What needs improvement?

    If you want to automate something, you need to figure it out yourself. There is no easy way (the software is Windows-only). I am missing a lot of terminal tools, or an API for the software.

    The software is working on Windows and, from some perspectives, that might be a problem. From our perspective, it is a problem because we need to have a different team to deploy for our Windows machines. This is a con from our perspective. Not a big one, but still.

    They have already improved this product since our testing of it, so it may be that the following no longer applies.

    The interface is definitely one you need to get used to. It's not like a current interface which is really clear, easy to check. It's like from those days, some time ago, an interface that you need to get to know.

    Also, we are using a specific database. We are not using Oracle or SQL, Microsoft. We are using Teradata. There are some things that they don't have in their software. For example, when delivering data, they are not delivering them in the fastest possible way. There are some things which are faster.

    We asked CA if there would be any possibility to implement our suggestions and they promised us they would but I haven't seen this product for some time. Maybe they are already implemented. The requests were very specifically related to the product we have, Teradata. This was one of the real issues. 

    Overall, there was not much, in fact, to improve.

    For how long have I used the solution?

    Less than one year.

    What do I think about the stability of the solution?

    We didn't face any issues with stability.

    The only problems we had, and asked CA to solve, were some very deep things related to our products. These were not core issues, in fact. It was, "We would like to have this because it's faster, or that because it's more robust or valuable."

    What do I think about the scalability of the solution?

    I cannot answer, because we only did a PoC, so I have no idea how it will work if there are a couple of designers working with the tool.

    Still, I don't see any kind of issues because there will be only a few people working with the design of masking and the rest will be done on the scripting level, so it's possible we won't see it at all. 

    How are customer service and technical support?

    During the PoC we had a support person from CA assigned to us who helped in any way we needed.

    Which solution did I use previously and why did I switch?

    We didn't use any other solution; we simply needed to have this implemented and tried to figure it out. We looked at the market for what we could use. TDM was our very first choice.

    How was the initial setup?

    I didn't do the setup by myself, it was done by a person from CA. It didn't look hard. It looked pretty straightforward, even with configuration of the back-end database.

    Which other solutions did I evaluate?

    After doing our PoC we tried to figure out if there was any other solution which might fit. We tried and, from my perspective, because I was responsible for the whole project, there was no solution we might use in the same way or in a similar way. This product exactly fits our compliance and security very tightly, which is important.

    There aren't any real competitors on the market. I think they simply found a niche and they started to develop it. We really tried, there are many options out there, but there are some features only specific to this product and there are features you might need, if you, for example, work for a big organization. And these features aren't in any other product.

    There are many solutions for masking data, there are even very basic Python modules you can use for masking data but you need to take data from the source, you need to mask them, and you need to deliver the data to the destination. If you have a big organization like ours, and you have to copy one terabyte of data, it will take hours. With this solution, this terabyte is done in a couple of minutes.
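
The "basic Python modules" approach contrasted above really does move the data twice: once out of the source and once into the destination, with masking on a host in the middle. A toy version of that three-step pipeline (the masking rule here is a trivial placeholder):

```python
def extract(source):
    # step 1: full copy of the data out of the source system
    return list(source)

def mask(records):
    # step 2: masking performed on an intermediate host
    return [{**r, "name": "MASKED"} for r in records]

def load(records, destination):
    # step 3: a second full copy, into the destination
    destination.extend(records)
    return destination

source = [{"name": "Alice"}, {"name": "Bob"}]
dest = load(mask(extract(source)), [])
```

At terabyte scale, the two full copies plus the middle hop are exactly where the hours go, which is the reviewer's argument for masking at the source instead.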

    What other advice do I have?

    We did a proof of concept with TDM to see if the solution fits our needs. We did it for a couple of months, did some testing, did some analysis, and tried to determine if it fit our way of working. Now we are going to implement it in production.

    If there is a big amount of data to mask and you need to deliver it conveniently, pretty easily, there is no other solution. Configuration is easy. It's built slightly differently, the design is slightly different than any other tool, but the delivery of the masked data is much smoother than in any other solution. You don't need to use something like a stepping stone. You don't need to copy data to some place, then mask it, and then send it, because you copy data which is already masked. Data is masked on the fly, before they are copied to the destination. You don't need anything like a server in the middle. In my opinion, this is the biggest feature this software has.

    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    PeerSpot user
    it_user572823 - PeerSpot reviewer
    AVP Quality Assurance at GM Financial
    Video Review
    Real User
    Gives you confidence in data that you're creating and keeps you out of the SOX arena, because there's no production data within that environment.

    What is most valuable?

    Test Data Manager allows you to do synthetic data generation. It gives you a high level of confidence in your data that you're creating. It also keeps you out of the SOX arena, because there's no production data within that environment. The more that you can put in controls and keep your data clean, the better off you are. There are some laws coming into effect in the next year or so that are going to really scrutinize production data being in the lower environments.

    How has it helped my organization?

    We have certain aspects of our data that we have to self-generate. The VIN number is one that we have to generate and we have to be able to generate on the fly. TDM allows us to generate that VIN number based upon whether it's a truck, car, etc. We're in the car, auto loan business.
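
VIN generation has one publicly verifiable piece: the check digit in position 9 defined for North American VINs. The reviewer's truck/car business rules are their own, so only the standard checksum calculation is sketched here.

```python
# Transliteration table and weights from the North American VIN
# check-digit scheme (the 9th character of a 17-character VIN).
TRANSLIT = {c: v for c, v in zip("ABCDEFGH", range(1, 9))}
TRANSLIT.update(zip("JKLMN", range(1, 6)))
TRANSLIT.update({"P": 7, "R": 9})
TRANSLIT.update(zip("STUVWXYZ", range(2, 10)))
TRANSLIT.update({str(d): d for d in range(10)})

WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

def vin_check_digit(vin17):
    """Compute the check digit for a 17-character VIN.
    Position 9 (the check digit itself) carries weight 0,
    so the full VIN can be passed in directly."""
    total = sum(TRANSLIT[ch] * w for ch, w in zip(vin17, WEIGHTS))
    remainder = total % 11
    return "X" if remainder == 10 else str(remainder)
```

A generator like the reviewer describes would assemble the WMI, descriptor, and serial portions from its own rules, then call something like this to make the result pass validation.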

    What needs improvement?

    I would probably like to see improvement in the ease of the rule use. I think sometimes it gets a little cumbersome setting up some of the rules. I'd like to be able to see a rule inside of a rule inside of a rule; kind of an iterative process.

    What do I think about the stability of the solution?

    TDM has been around for a couple of years. I used it at my previous company, as well. It's been really stable. It's a tool that probably doesn't get utilized fully. We intend on taking that, partnering it with the SV solution and being able to generate the data for the service virtualization aspect.

    What do I think about the scalability of the solution?

    Scalability is similar along the SV lines; it's relatively easy to scale. It's a matter of how you want to set up your data distribution.

    How are customer service and technical support?

    We were very pleased with the technical support.

    Which solution did I use previously and why did I switch?

    When you have to generate the amount of loan volume that we need - 50 states, various tax laws, etc. - I needed a solution with which I could produce quality data that fits the target testing we need, any extra test cases, etc. We're more concentrated on being very succinct in the delivery and the time frame that we need to get the testing done in.

    I used CA in my previous company. I have a prior working relationship with them.

    How was the initial setup?

    The initial setup was done internally. Obviously, the instructions that were online when we downloaded it, we were able to follow those and get the installation done. We did have a couple of calls into the technical solution support area, and they were able to resolve them fairly quickly.

    What other advice do I have?

    From my experience with synthetic generation, a lot of times generating synthetic data can be cumbersome. With TDM and some of its rules aspects, you can generate the data and have your rules in place so that you know your data is going to be very consistent. When we want a particular loan to come through with a particular credit score, we can generate the data. We can select and generate the data out of TDM, and it will create a data file for my front-end script, through using DevTest.

    I also push the service virtualization record to respond to the request of the loan, hitting the credit bureau, returning a certain credit score, which then gets us within that target zone for that loan we're looking for, to trigger a rule.

    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    PeerSpot user
    it_user572907 - PeerSpot reviewer
    Senior Specialist at Cox Automotive
    Video Review
    Vendor
    The data masking is a powerful aspect of the tool and I have found the best success in the data generation features.

    What is most valuable?

    A lot of people, when they first started looking at the tool, started immediately jumping in and looking at the data masking, the data subsetting that it can do, and it works fantastically to help with the compliance issues for masking their data. That's a very powerful aspect of the tool.

    But the part I found the best success in is actually the data generation features. In really investing into that concept of generating data from the get-go, we can get rid of any of those concerns right off the bat, since we know it's all made-up data in the first place.

    We can fulfill the request of any team to very succinct and specific requirements for them each time. When I look at it as a whole, it's that data generation aspect that really is the big win for me.

    How has it helped my organization?

    When I look at the return on investment, there are not only huge financial gains. In fact, when I recently ran the numbers, we had about $1.1 million in savings on just the financials from 2016 alone. What it came down to is, when we started creating our data using Test Data Manager, we reduced our hours used by about 11,800 in 2016. That's real time. That's a significant, tangible benefit to the company.

    When you think about it, that's somewhere around six employees that you've now saved; let alone, you have the chance to focus on all the different testing features, instead of having them worrying about where they're going to get their test data from.
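
The figures above can be sanity-checked with simple arithmetic, assuming roughly 2,000 working hours per employee per year (that divisor is an assumption, not from the review).

```python
# Back-of-the-envelope check of the quoted savings figures.
hours_saved = 11_800      # hours no longer spent building test data (2016)
savings_usd = 1_100_000   # reported financial savings for the same year

# 11,800 / 2,000 is about 5.9, i.e. roughly six full-time employees,
# matching the "somewhere around six employees" estimate above.
full_time_equivalents = hours_saved / 2_000

# The implied fully-loaded cost per saved hour comes out near $93.
implied_hourly_rate = savings_usd / hours_saved
```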

    What needs improvement?

    It's cool that right now with this tool, they're doing a lot of things to continuously improve it. I think Test Data Management as a strategy across the whole organization, has really picked up a lot of momentum, and CA’s been intelligent to say, "We have a really great product here, and we can continue to evolve it."

    Right now, they're taking everything from a desktop client and moving it into a web portal. I think there's going to be a lot of flexibility in that. If I were to pick one thing that I am hoping they improve on: it is a great database tool, but I'm not always sure about its programmatic abilities. More specifically, it's great in terms of referential integrity across multiple systems and multiple tables, but I do find a couple of limitations every now and then because of trying to maintain that referential integrity; I have to go in and manually break things when I want to.

    For how long have I used the solution?

    I've been using it for about two-and-a-half years at my current position, and I've actually been familiar with the tool for about the last five or six years.

    What do I think about the stability of the solution?

    The stability is wonderful on it. I don't think that, at any point, have I had a showstopper issue with the application. It's never caused any major issues with our systems, and I will give credit where credit's due. Even right now, as they continue to enhance the tool, it has still stayed wonderfully stable through that process, and everyone on CA’s side has been there to support on any kind of small bug or enhancement that might come up along the way.

    What do I think about the scalability of the solution?

    It has scaled tremendously. Especially, again, I don't want to harp back too much on it, but when you start looking at data generation, your options are endless in the way you want to incorporate that into your environment.

    I have my manual testers utilizing this to create data on the fly at any moment. I have my automation users, who are going through a little bit more of it, getting daily builds sent to them. I have my performance guys sending requests in for hundreds of thousands of records at any given time, which might have taken them two weeks to build out before and which I can now do in a couple of hours. It ties in with our pipelines out to production.

    It's a wonderful tool when it comes to the scalability.

    How are customer service and technical support?

    Any time that I've had something that I question and said, "Could this potentially be a bug," or even better, "I would love this possible enhancement", it's been a quick phone call away or an email. They respond immediately, every single time, and they communicate with me, look at what our use case is on the solutions, and then come up with an answer for me, typically on the spot. It's great.

    Which solution did I use previously and why did I switch?

    We knew we needed to invest in a new solution because our company was dealing with a lot of transformations. Not only do we still have a large root in our legacy systems, which are iSeries, DB2-type systems, but we have tons and tons of applications that have been built on a much larger scale in the past 40 years, since the original solutions were rolled out. Not only did we have a legacy transition occurring within our own company, but we also changed the way that our teams were built out. We went from teams that were a waterfall, iterative, top-down approach to a much more agile shop.

    When you look at the two things together, any data solution that we were using before, maybe manual hands on keyboards, or automated scripts for it, just weren't going to cut it anymore. They weren't fast enough, and able to react enough. We started looking at it and realized that Test Data Manager by CA was the tool that could actually help to evolve that process for us.

    When selecting a vendor, I want someone I'm going to have some kind of personal relationship with. I realize we can't always have that with everyone we work with, but CA has done a wonderful job of continuously reaching out and asking, "How are you doing? How are you using our product? How do you plan on using it? Here's what we're considering doing; would that work for you?" They've been a wonderful partner in communicating the road map of where this is all going.

    How was the initial setup?

    It's a great package. It's a plug-and-play kind of system, so it gets up and running well on its own, and when they send out releases, it's as simple as loading the new release.

    What's neat is that if something in an extension of the system needs to be upgraded, some of the repositories and things like that, it's smart enough to let you know. It shuts itself down, takes care of the upgrade itself, and then rebuilds everything.

    Which other solutions did I evaluate?

    We evaluated other options when we first brought it in. We looked at a couple of the others. The reason that we ended up choosing Test Data Manager was that it was stronger, at the time at least, in its AS/400 abilities, which is what all of our legacy systems are built on. It was much more advanced than anything else that we were seeing on the market.

    What other advice do I have?

    It's not a rating I give often, but I do give this a perfect score. We've been able to solve all the data issues we had when we first brought it in, and it has expanded everything we can do as we look to the future of where we want to go: its tie-ins with service virtualization, and the way we can build out our environments in ways we'd never considered before. It's a much more dynamic world that we can react to a lot faster, and I attribute almost all of that to Test Data Manager.

    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    it_user572886 - PeerSpot reviewer
    Client Partner at a financial services firm with 11-50 employees
    Video Review
    Real User
    Provides a centralized view of the test data and how efficiently you can use it across business units.

    What is most valuable?

    The most important feature I see is having a centralized view of the test data and how efficiently you can use it across different business units: from generating the data you need, to using it repeatedly, to building on top of the base data you create. TDM is very, very efficient.

    How has it helped my organization?

    My company is now going through a process to become more agile, which was basically the theme of a recent CA conference I attended. As we go through that agile journey, certain building blocks need to be in place. Test data is very important across the whole product lifecycle: how good the data is, and how efficiently you can run those test cases again and again, repeatedly. The features TDM offers fit right into what we are looking for on that journey.

    What needs improvement?

    One thing we would like to see is how quickly it can be used like a SaaS product: you just plug in the incoming data we have from different sources, and it integrates and generates the test data quickly. That quickness is something that can be improved.

    If plugins can be developed very quickly, that will help companies like us, because we have hundreds of data sources.
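    A plugin layer of the kind being asked for here usually comes down to a registry that maps a source type to a reader function. The following is a hypothetical sketch under that assumption, not anything TDM actually exposes: each new data source plugs in with one decorated function, while the rest of the pipeline uses a single entry point.

    ```python
    from typing import Callable, Dict, Iterator

    # Registry of data-source readers: source type -> function that streams rows.
    READERS: Dict[str, Callable[[str], Iterator[dict]]] = {}

    def register(source_type: str):
        """Decorator: registers a reader so a new source plugs in with one function."""
        def wrap(fn: Callable[[str], Iterator[dict]]):
            READERS[source_type] = fn
            return fn
        return wrap

    @register("csv")
    def read_csv(path: str) -> Iterator[dict]:
        """Example plugin: stream rows from a CSV file as dicts."""
        import csv
        with open(path, newline="") as fh:
            yield from csv.DictReader(fh)

    def load(source_type: str, locator: str) -> Iterator[dict]:
        """Single entry point: look up the registered reader and stream its rows."""
        try:
            reader = READERS[source_type]
        except KeyError:
            raise ValueError(f"no plugin registered for {source_type!r}")
        return reader(locator)
    ```

    With a design like this, supporting a hundredth data source is one new decorated reader rather than a change to the core generator, which is what would make "hundreds of data sources" tractable.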

    What do I think about the stability of the solution?

    Looking at the use cases we have seen and some of the customer testimonials, I think it shows the journey the product has gone through and its quality. In some of the proofs of concept we have done, working with the technology people, we can see the stability of the product and how useful it can be to our company.

    What do I think about the scalability of the solution?

    One of the main features of TDM is that it can scale from a small organization up to a very big one. In our company, that is everything; it is because of that very feature, scalability, that we are considering TDM.

    How are customer service and technical support?

    We have not actually used technical support, because we had gone through some of the proof of concepts, as I’ve mentioned. We were already working with some of the Test Data Management group. We are in the process of finalizing the last couple of products that we are looking at and TDM definitely is at the top.

    Which solution did I use previously and why did I switch?

    We were previously using several products, including some in-house ones. From my previous experience working with CA, I knew some of the products they offer. During the RFP process, as new contracts were signed and old contracts were renewed, it definitely came to mind, and that's how we turned to CA and started this engagement.

    How was the initial setup?

    We are here to buy it, as I've mentioned, but there was an initial setup to do the proofs of concept. I was involved in it with some of their technology people.

    It was easy, I think, because of the experts who were there. They know how to interact with people who don't know the product, and as our product knowledge evolved, they took us through the journey. It was very easy to interact with them.

    Which other solutions did I evaluate?

    There were other vendors on our shortlist and, as I've mentioned, scalability was one of the main reasons, because we are growing faster than ever; the data is growing by terabytes and terabytes. Secondly, if you look at the market, the rating for TDM is very good. It was a unanimous decision that TDM should be one of the final products we go for.

    What other advice do I have?

    Use this product with a proof of concept for your organization. The product is very agile and can fit into small and big organizations, so don't be afraid of that. It has a lot of features, and those features are how the product works for you. It can work for a smaller organization and for a very large one; the scalability is there. Just try it as a proof of concept and you'll see the power of TDM.

    If we are able to get that ability to plug in from all of those different sources (maybe it is already there, and we just have to see it in my company), that will definitely make it 100% the product we're looking for.

    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    Buyer's Guide
    Download our free Broadcom Test Data Manager Report and get advice and tips from experienced pros sharing their opinions.
    Updated: June 2025