The main use case clients have for Broadcom Test Data Manager is data masking.
From a service provider's perspective, many products on the market offer the same data masking, data subsetting, and data virtualization services. From a technical standpoint, the application's approach is superficial. In our larger solution, the application got stuck because we did not define the relationships between tables at the database layer; we designed those relationships at the application layer instead, which makes moving data between the database and the application much faster and more practical for us. Any data masking application needs to read those relationships to build the ERD it works from, so in our architecture it has nothing to read. The best approach to data masking, data subsetting, and data virtualization is to work in the database directly: read a value from a table, transform it in place, and reuse the same database infrastructure and server specifications. That is both useful and fast.
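As a minimal sketch of why this matters (using SQLite purely as a stand-in database, with hypothetical table names), the snippet below shows that when relationships live only in the application layer, a tool that queries the database catalog finds no foreign keys to build an ERD from:

```python
import sqlite3

# In-memory database as a stand-in; table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    -- The link from orders.customer_id to customers.id exists only in
    -- application code: no FOREIGN KEY is declared at the database layer.
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
""")

# A masking tool that relies on catalog metadata sees no relationships at all.
fks = conn.execute("PRAGMA foreign_key_list(orders)").fetchall()
print(fks)  # [] -- empty, so the tool cannot reconstruct the ERD automatically
```

In that situation, every relationship has to be declared to the tool by hand, which is exactly the effort described above.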
In my view, any application on the market that works this way is only suitable for small and medium-sized companies, not large enterprises such as telecoms, banks, or financial institutions. This opinion is based on experience with many applications that perform similar functions: they read tables from the database, read the values, and generate another set of values in a different schema, which is inefficient because it requires many operations. The effort of reading the relationships between tables and building an ERD also demands expertise to ensure the relationships are correct. So why go to the trouble of rebuilding those relationships when I can read a table directly in the database and transform the values into another table for masking, as the sketch below shows?
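Here is a minimal sketch of that direct approach (again using SQLite as a stand-in, with hypothetical names): the values are read and masked inside the database in a single set-based statement, with no per-row round trips to an external application:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT);
    INSERT INTO customers VALUES (1, 'alice@example.com'), (2, 'bob@example.com');
    CREATE TABLE customers_masked (id INTEGER PRIMARY KEY, email TEXT);
""")

# One in-database statement: read each value and write the transformed
# (masked) result directly into another table on the same server.
conn.execute("""
    INSERT INTO customers_masked (id, email)
    SELECT id, substr(email, 1, 1) || '***@masked.example'
    FROM customers
""")
print(conn.execute("SELECT * FROM customers_masked").fetchall())
# [(1, 'a***@masked.example'), (2, 'b***@masked.example')]
```

The same pattern applies in any production RDBMS: the transformation runs where the data already lives, using the database's own compute.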
The last feature worth mentioning is synthetic data generation.