When it comes to use cases for Broadcom Test Data Manager, clients may use it for data masking. From a service provider perspective, many products in the market provide the same data masking, data subsetting, and data virtualization services. From a technical side, I consider it a superficial application. In bigger solutions, if this application gets stuck, it is because we did not design the relationships between tables at the database layer; we designed those relationships at the application layer, which makes data retrieval between the database and the application much faster and more useful. Any data masking application needs to read those table relationships to build an ERD in the background. The best approach to data masking, data subsetting, and data virtualization is to work in the database directly, reading a value from a table and transforming it in place; that is useful and fast, and it reuses the same database infrastructure and servers. Any application in the market that works differently is a superficial solution, useful only in small and medium-sized companies, not in large organizations such as telecoms, banks, or financial institutions. My point of view is based on working with many applications that perform similar functions: they read tables from the database, read the values, and generate another set of values in a different schema, which is not useful because it requires a lot of operations. The effort to read the relationships between tables and build an ERD also demands expertise to ensure the relationships are correct. So why build this relationship model at all when I can read the table directly in the database and transform the values into another table for masking? The last feature I will mention is synthetic data generation.
Test data management consultant at Tech Mahindra Limited
Real User
May 12, 2023
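As a rough illustration of the in-database, transform-in-place approach this reviewer prefers, a minimal Python sketch might look like the following. The table and column names ("customers", "email") are hypothetical, and sqlite3 stands in for whatever RDBMS a real deployment would use; the point is only that the masking transform runs where the data lives, rather than extracting values into a separate schema.

```python
import hashlib
import sqlite3

def mask_email(value: str) -> str:
    # Deterministic masking: the same input always yields the same
    # placeholder, which preserves joins on the masked column.
    digest = hashlib.sha256(value.encode("utf-8")).hexdigest()[:12]
    return f"user_{digest}@masked.example"

conn = sqlite3.connect(":memory:")  # stand-in for a copy-of-production database
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO customers (email) VALUES ('alice@example.com')")

# Expose the transform to SQL and run it where the data lives --
# no extraction into a staging schema, no rebuilt ERD.
conn.create_function("mask_email", 1, mask_email)
conn.execute("UPDATE customers SET email = mask_email(email)")
print(conn.execute("SELECT email FROM customers").fetchone()[0])
```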
Broadcom Test Data Manager is used for test data management requirements, along with data generation, data masking, data subsetting, and service virtualization. Also, the solution can be connected to your automation tool.
IT Specialist at a financial services firm with 1,001-5,000 employees
Real User
Jan 8, 2019
We use it for data generation for performance testing and other test cases. We also use data masking and data profiling for functional testing. Data masking was one of the important aims in our procurement of this tool because we have sensitive data in production that we have to mask before using it in a testing environment. Our real concern is masking, and we are still learning about this subject.
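Data profiling in this context usually means scanning column values for patterns that look like sensitive data before deciding what to mask. A minimal, hypothetical sketch follows; the regexes and sample rows are illustrative only and are not the tool's own logic.

```python
import re

# Hypothetical detectors; a real profiler would use far more patterns,
# plus dictionaries and value statistics.
PATTERNS = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "phone": re.compile(r"^\+?\d[\d\- ]{7,14}\d$"),
}

def profile_column(values):
    """Return the share of non-null values matching each sensitive pattern."""
    hits = {name: 0 for name in PATTERNS}
    total = 0
    for v in values:
        if v is None:
            continue
        total += 1
        for name, pattern in PATTERNS.items():
            if pattern.match(str(v)):
                hits[name] += 1
    return {name: n / total for name, n in hits.items()} if total else {}

sample = ["alice@example.com", "bob@example.org", "n/a"]
print(profile_column(sample))  # -> email matches ~0.67, phone 0.0
```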
TDM is something people do all the time; it is not something you do from scratch. Every client has a different scenario, and there are a lot of use cases, but a couple of them are common everywhere. One is data that does not exist in production: how do you create that data? Synthetic data creation is a use case challenge that is common across the board. In addition, the people who do the testing are often not very conversant with the back end or with the different types of databases, mainframes, and so on, and most of the time they do not write very good SQL to find the data they are going to test with. So data mining is a major concern in most places. The use cases are diverse; you cannot point to many common things and say this will work or this will not. Every place, even though it is a TDM scenario, is different. Some places have very good documentation, so you can start directly with extraction, masking, and loading, but in most places that is not possible because the documentation is not there. There are multiple use cases, and one size does not fit all. In the testing cycle, when there is a need for test data management tools, we use CA TDM to put together the data feed.
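As a rough sketch of the synthetic-data use case described above: when the data a test needs does not exist in production, it can be generated from rules instead. The field names and value pools below are hypothetical; in CA TDM this would be driven from a data model rather than hand-written code.

```python
import random
import string

def synthetic_customer(seed: int) -> dict:
    rng = random.Random(seed)  # seeded so test runs are repeatable
    account = "".join(rng.choices(string.digits, k=10))
    return {
        "customer_id": seed,
        "account_number": account,
        "segment": rng.choice(["retail", "corporate", "sme"]),
        "balance": round(rng.uniform(0, 100_000), 2),
    }

# Generate a batch of rows for a test data feed.
rows = [synthetic_customer(i) for i in range(1, 101)]
print(rows[0])
```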
We make heavy use of the features for creating data models and for test matching. We also established the subsetting and cloning process, based on the client's requirements. Creating data models and using the test matching features fit our purpose; we use most of the test matching features in our testing processes. The integration with App Test is also something we use heavily.
Broadcom Test Data Manager focuses on creating, managing, and provisioning test data for software testing, ensuring data privacy and compliance, and supporting testing across multiple environments efficiently.
Broadcom Test Data Manager integrates with databases, automates data generation, and enhances test data quality and reliability, leading to improved testing accuracy and faster release cycles. It includes a self-service portal, ease of use, and capabilities for data generation,...
We use the solution to manage test cases.
We use it for enterprise-level solutions.