My company predominantly uses Delphix because, for large-scale data masking in the terabyte range, we need an engine that performs well on NVMe storage. My company also needs a master-slave architecture, like the one offered under big data frameworks such as Hadoop. Speed and the performance the tool offers are paramount to my company. When I looked at the topological architecture of how Informatica does data masking, I saw that it is slightly different from the big data framework.
When it comes to unstructured and structured data, if Delphix can meet the requirements for most structured data, it would be good, especially if it can include the data scanning part. The tool is useful for data masking. If I point the tool at a repository and it scans all the data to figure out whether there is any kind of PII/NPI data, I think that would make the tool full and complete.
I have experience with Delphix.
Stability-wise, I rate the solution an eight out of ten.
My company has around 500 users of the tool.
Scalability-wise, I rate the solution an eight and a half out of ten.
I rate the technical support a seven out of ten.
My company tried various other products during one of the PoC phases before finalizing Delphix for large-scale usage.
My company purchased Delphix since it met our budgetary requirements. The product is not cheap, but I can say that it is a good product for the functionalities it provides. I rate the product price seven to eight on a scale of one to ten, where one is a low price and ten is a high price.
The performance offered by the product, its price, and its ability to handle structured and unstructured data were good when compared to the other products in the market.
Delphix is actually meant for data masking. Speaking about how my company uses the tool for DevOps workflows, we have a Jenkins pipeline, and we write all the scripts, after which the onboarding is completed in Delphix. In our company, we have a service process between the producers and the consumers. Once the onboarding is done, all the rules and data masking are taken care of, which is a one-time deal unless the data structure changes. We used to go to Collibra and get the schema to ensure that the requested fields were there in the schema, after which we would read the data specific to that catalog and then mask it. The script gets triggered by Jenkins.
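The workflow described above can be sketched in a few lines of Python. This is only an illustration under assumptions: `validate_fields`, `mask_value`, and `mask_records` are hypothetical helper names, and the hash-based masking stands in for whatever rules the real Delphix onboarding applies; it is not the actual Delphix or Collibra API.

```python
# Sketch of the catalog-driven masking step (hypothetical names, not a
# Delphix/Collibra API): validate requested fields against the catalog
# schema, then mask PII/NPI fields before releasing data to consumers.
import hashlib

def validate_fields(schema_fields, requested_fields):
    """Ensure every field a consumer requested exists in the catalog schema."""
    missing = set(requested_fields) - set(schema_fields)
    if missing:
        raise ValueError(f"Fields not in schema: {sorted(missing)}")

def mask_value(value):
    """Deterministically mask a sensitive value with an irreversible hash."""
    return hashlib.sha256(str(value).encode()).hexdigest()[:12]

def mask_records(records, pii_fields):
    """Return copies of the records with the PII/NPI fields masked."""
    return [
        {k: mask_value(v) if k in pii_fields else v for k, v in r.items()}
        for r in records
    ]

# Example run: schema pulled from the catalog, fields requested downstream.
schema = ["customer_id", "name", "ssn", "balance"]
requested = ["name", "ssn", "balance"]
validate_fields(schema, requested)

records = [{"customer_id": 1, "name": "Alice",
            "ssn": "123-45-6789", "balance": 100}]
masked = mask_records(records, pii_fields={"name", "ssn"})
```

In a Jenkins-triggered setup, a script like this would run once per catalog refresh; deterministic masking keeps joins between masked tables consistent across the lower environments.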
I have no doubts about how the solution provides and ensures data compliance and security. My company has been using Delphix for quite some time now. Initially, we didn't have to deal with any big data. After we had a requirement to deal with big data as part of our migration to the cloud, we had tons and tons of data coming in, owing to which we had to deploy a big-data-style cluster with Delphix. My company has unstructured, structured, and some semi-structured data to mask, but we are able to deal with only some of it, not all. I have no problems with relational databases. With semi-structured data, I do have problems, though not with all of it, including data in Word documents and PDFs.
Delphix has been the most valuable in improving data operations, predominantly in the area of masking data from higher environments to lower environments.
Delphix's virtual data infrastructure has impacted our company's data management processes really well, especially in the area of testing in data science and wherever we need to build models while taking data from a higher environment to a lower environment. Maintaining the data masking posture, following industry regulations, and working in compliance with security guidelines helped us tremendously to fit the tool into our architecture framework.
My company has an agreement with Delphix to take care of the maintenance of the product. One person from my company will take care of the maintenance.
I recommend the product to other people who want to use it.
I rate the tool an eight out of ten.