What is our primary use case?
We have many use cases.
We have a use case to understand our metadata: where it is and where our authoritative data systems are. We need to understand the data systems that we have. We also need to link our data models to these data systems so that we know which data models support which database applications. We're also linking our business data models to our physical implementations so that our data governance team drives our data and our understanding of it. That is one use case for the Metadata Manager. Another is the creation of automated reports that show the changes made in production after a production release.
Our use cases for the Mapping Manager are around understanding where our data movement is happening and how our data is being transformed as it moves. We want automated data lineage capabilities at the system, database, environment, table, and column levels, as well as automated impact analysis. If someone needs to make a change to a specific column in a specific database, what downstream applications or databases will be impacted? Whom do we have to contact to tell them that we're making changes?
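To illustrate the idea behind column-level impact analysis (this is a conceptual sketch with invented column names, not erwin's implementation), lineage edges form a directed graph, and impact analysis is just a downstream traversal of that graph:

```python
from collections import defaultdict, deque

# Hypothetical column-level lineage edges (source -> targets); the
# fully qualified names below are invented for illustration.
lineage = defaultdict(list)
for src, tgt in [
    ("crm.customers.customer_id", "dw.dim_customer.customer_id"),
    ("dw.dim_customer.customer_id", "mart.claims_summary.customer_id"),
    ("dw.dim_customer.customer_id", "mart.billing.customer_id"),
]:
    lineage[src].append(tgt)

def downstream_impact(column):
    """Walk the lineage graph to find every column affected by a change."""
    impacted, queue = set(), deque([column])
    while queue:
        for tgt in lineage.get(queue.popleft(), []):
            if tgt not in impacted:
                impacted.add(tgt)
                queue.append(tgt)
    return impacted

print(downstream_impact("crm.customers.customer_id"))
# -> all three downstream columns, telling you whom to notify
```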
When thinking about the Mapping Manager, we have another use case where we want to understand not only the data design of a mapping but its actual implementation. We want to understand, from a data design standpoint, the data lineage that's in the data model, as well as the data lineage in a source-to-target mapping document. But we also want to understand the as-implemented data lineage, which lives in our Informatica workflows and jobs. So we want to automatically ingest our Informatica jobs and create mapping documents from them, so that we have the as-designed data lineage as well as the as-implemented data lineage.
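Conceptually, comparing as-designed and as-implemented lineage boils down to diffing two sets of source-to-target pairs. A minimal sketch with invented column names, not the tool's actual mechanism:

```python
# Hypothetical as-designed mapping (from the mapping document) versus
# as-implemented mapping (e.g., parsed from an Informatica workflow).
designed = {
    ("stage.member.member_id", "dw.member.member_id"),
    ("stage.member.birth_dt", "dw.member.birth_date"),
}
implemented = {
    ("stage.member.member_id", "dw.member.member_id"),
    ("stage.member.dob", "dw.member.birth_date"),
}

print("Designed but not implemented:", designed - implemented)
print("Implemented but not designed:", implemented - designed)
```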
In addition, with regard to our data literacy, we want to understand our business terminology and the definitions of our business terms. That information drives not only our data modeling but also our understanding of the data in our datastores, which are cataloged in the Metadata Manager. This further helps us understand what we're mapping in our source-to-target mapping documents in the Mapping Manager. We want to associate our physical columns and our data model information with our business glossary. Taking that a step further to code sets, we also need to understand the data itself: if we have a specific code set, will we see those specific codes in a given database, or will we see different codes that we have to map to the governed code set?
That's where the Codeset Manager comes into play for us, because we need to understand what our governed code sets are. We need to be able to automatically map our code sets to our business terminology, which is automatically linked to our physical tables and columns. That, in turn, automatically links in the code set values, or the crosswalks that were created when a data asset does not have all of the conforming values that are in the governed code set.
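As a rough illustration of what a crosswalk does (the codes below are invented, and this is not the Codeset Manager's API), a source system's local codes get translated onto the governed code set:

```python
# Hypothetical governed code set and a crosswalk from one source
# system's local codes onto it.
governed_codes = {"M": "Male", "F": "Female", "U": "Unknown"}
crosswalk = {"1": "M", "2": "F", "9": "U"}  # source code -> governed code

def conform(source_code):
    """Map a source value onto the governed code set via the crosswalk."""
    governed = crosswalk.get(source_code)
    if governed is None:
        raise ValueError(f"Unmapped source code: {source_code!r}")
    return governed

assert conform("1") == "M"                      # conforms to the governed set
assert governed_codes[conform("9")] == "Unknown"
```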
We also have reporting use cases. We create a lot of reports. We have reports to understand who the Data Intelligence Suite users are, when they last logged in, and the work they're doing, and for automatically reassigning work from one person to another. We also need automated reports that look at our mappings and help us understand where our gaps are: where we need a governed code set that we don't already have. And we're creating data dictionary reports, because we want to understand very specific information about our data models, our datastores, and our business data models, as well as the delivery data models.
We are currently using the following modules:
- Resource Manager
- Metadata Manager
- Mapping Manager
- Codeset Manager
- Reference Data Manager
- Business Glossary Manager
How has it helped my organization?
One of the ways this is helping to improve our delivery is through the increased understanding of what the data is, so that we're not mapping incorrect data from a source to a target.
We also have an additional understanding of where our best data is. For example, when you think of the HL7 FHIR work and the need to map customer data to a specific FHIR profile, we need to understand where our best data is, as well as the definition of the data, so that we are mapping the correct data. Healthcare interoperability requires us to provide customers with the data they request, when they request it. There are multiple levels of complexity in doing that work. The Data Intelligence Suite is helping us manage and document all of those complexities to ensure that we are delivering the right data to the customer when they request it.
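To illustrate one layer of that complexity, here is a minimal, hypothetical sketch of mapping an internal customer record onto a FHIR R4 Patient resource; the source field names are invented, and this is not how the Data Intelligence Suite performs the mapping:

```python
# Invented internal customer record; field names are placeholders.
customer = {"cust_id": "12345", "last_nm": "Smith", "first_nm": "Ann",
            "birth_dt": "1980-04-02"}

# Minimal FHIR R4 Patient resource built from that record.
patient = {
    "resourceType": "Patient",
    "identifier": [{"system": "urn:example:customer-id",
                    "value": customer["cust_id"]}],
    "name": [{"family": customer["last_nm"],
              "given": [customer["first_nm"]]}],
    "birthDate": customer["birth_dt"],  # FHIR date: YYYY-MM-DD
}

print(patient["resourceType"], patient["birthDate"])
```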
erwin DI also provides us with a real-time, understandable data pipeline. One of the use cases we didn't talk about is that we set up batch jobs to automate the metadata ingestion, so that we always have up-to-date and accurate metadata. That saves us a great deal because we always know where our metadata is and what our data is, versus having to spend weeks hunting down information. For example, if we needed to make a change to a datastore and we needed to understand the other datastores that depend on that data, we would know that at a moment's notice. It's not delayed by a month. It's not a case of someone either having to manually look through Excel spreadsheet mapping documents or needing to get a new degree in a software tool such as Informatica, DataStage, or Ab Initio, or even learning to read Python. We always know where our data is, and anybody can look that up, whether they're a business person who doesn't know anything about Informatica, or a developer who knows everything about creating data movement jobs in Informatica but doesn't understand the business terminology or the data being used in the tool.
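As a rough illustration of what such a batch ingestion job does conceptually (erwin configures this through its Smart Data Connectors, not through code like this), here is a minimal Python sketch that harvests table and column metadata from a database's own catalog; sqlite3 merely stands in for a real DBMS driver:

```python
import sqlite3

def harvest_metadata(db_path):
    """Pull (table, column, declared type) from a database's own catalog.

    sqlite3 is a stand-in; a real connector would query, e.g., the
    DBMS's information_schema instead.
    """
    conn = sqlite3.connect(db_path)
    rows = []
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'").fetchall()
    for (table,) in tables:
        # Each PRAGMA row is (cid, name, type, notnull, dflt_value, pk).
        for col in conn.execute(f"PRAGMA table_info('{table}')"):
            rows.append((table, col[1], col[2]))
    conn.close()
    return rows

# Run this from a scheduler (cron, Airflow, etc.) so the catalog
# stays current without anyone hunting down metadata by hand.
print(harvest_metadata("example.db"))
```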
The solution also automates critical areas of our data governance and data management infrastructure. The data management is, obviously, key in understanding where the data is and what the data is. And the governance can be done at multiple levels. You have the governance of the code sets versus the governance of the business terms and the definitions of those business terms. You have the governance of the business data models and how those business data models are driving the physical implementation of the actual databases. And, of course, you have the governance of the mapping to make sure that source-to-target mapping is done and is being shared across the company.
In terms of how this affects the quality and speed of delivery of data, I did use-case studies before we brought the Data Intelligence Suite into our company. Some of those use cases included impact-analysis research taking between six and 16 weeks to figure out where data is or where an impact would be. Having the mapping documents drive the data lineage and impact analysis in the Data Intelligence Suite means that investigation into impact analysis takes minutes instead of weeks. Understanding what the data is, is critical to any company. And being able to find that information with the click of a button, versus having to request access to shared drives, Confluence, SharePoint, Alation, and anywhere else that metadata could be, is a notable difference. Having to ask people, "Do you have this information?" versus being able to go and find it yourself saves incredible amounts of time. And it enables everyone, whether a business person, a designer, a data architect, a data modeler, or a developer. Everyone is able to use the tool, and that is extremely important, because you need a tool that is user-friendly, intuitive, and easily understood, no matter your technical capabilities.
The solution's production-related capabilities have also been very helpful to us. This includes creating release reports, so that we know what production looked like prior to an implementation versus what it looks like afterward. It helps with understanding any new data movement that was implemented versus what existed previously. Those are the production implementations that are key for us right now.
Another aspect is that the solution’s data cataloging, data literacy, and automation have been extremely important in helping people understand what the data is so that they use it correctly. That happens at all levels.
The responsiveness of the tool has been fantastic. The amount of time it takes to do the work has decreased significantly. If you were creating a mapping document, especially in an Excel spreadsheet, you would have to manually type in every single piece of information: the name of the system, the name of the table, the name of the column, the data type, the length of the column. Any information that you needed to put into a source-to-target mapping document would have to be entered manually.
Especially within the Mapping Manager, the ability to automatically create the mapping document through drag-and-drop of the metadata in the system catalog, within the Metadata Manager, results in savings on the order of days or weeks. When you drag and drop the information from the metadata catalog into the mapping document, the majority of the document is filled out, and the only thing you have to enter manually is the information about the type of data movement or transformation you're going to perform on the data. Even some of that is automated, or could be automated. You're talking about significant time savings.
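Conceptually, that drag-and-drop is the difference between hand-typing every attribute and having the catalog supply them. A minimal, hypothetical sketch with invented names, bearing no relation to erwin's internals:

```python
# Hypothetical metadata catalog keyed by fully qualified column name.
catalog = {
    "stage.member.dob": {"system": "stage", "table": "member",
                         "column": "dob", "type": "VARCHAR", "length": 10},
    "dw.member.birth_date": {"system": "dw", "table": "member",
                             "column": "birth_date", "type": "DATE",
                             "length": None},
}

def mapping_row(source_key, target_key, rule="TBD"):
    """Everything except the transformation rule comes from the catalog."""
    return {"source": catalog[source_key],
            "target": catalog[target_key],
            "transformation": rule}

row = mapping_row("stage.member.dob", "dw.member.birth_date",
                  rule="CAST to DATE, format YYYY-MM-DD")
print(row)
```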
And because you have all of the information right there in the tool, you don't have to look in different places to gain an understanding of the data you're working with. All of the information is right there, which is another time savings. It's like one-stop shopping. You can either go to seven stores to get everything you want, or you can go to one store. Which option would you choose? Most people would prefer to go to one store. And that's what the Data Intelligence Suite gives you: one place.
I can say, in general, that a large number of hours are saved, depending on the work being done, because of the automation capabilities and the ability to instantly understand what your data is and where it is. We are working on creating metrics. For example, we have one metric where it previously took someone hours of research to understand what the data is and where it is and to map it to a business term, versus less than two minutes to map 600 physical columns to a business term.
What is most valuable?
We are looking forward to using the AI match capability. We are using several Smart Data Connectors, as well as the Reporting Manager and the Workflow Manager.
We are customizing our own installation of the erwin Data Intelligence Suite by adding, as extended properties, fields that do not already exist and that are of value to us, as well as by changing the user-defined fields that are in the Data Intelligence Suite. We're renaming them so that we can put very specific information into those user-defined properties.
The customization and the ability to add information is extremely valuable to us because there is no tool on the market that is going to be able to accommodate, out-of-the-box, everything that every customer will use. Being able to tailor the tool to meet our needs, and add additional metadata, is very valuable to us.
Also, in terms of the solution's integrated data catalog and data literacy when it comes to mapping, profiling, and automated lineage analysis, it is incredibly important to have that business glossary and understand what the data is — the definitions of the data — so that you use it correctly. You can't move data when you don't understand what it is. You can't merge data with other data unless you know what it is or how to use it. Those business definitions help us with all of that: with the mapping and with being able to plan the movement of one data element into another data element. The data lineage, understanding where the data is and how it moves, is very critical.
What needs improvement?
The metadata ingestion is very nice because of the ability to automate it. It would be nice to be able to do this ingestion, or set it up, from one place, instead of having to set it up separately for every data asset that is ingested.
erwin has been a fantastic partner with regard to our suggestions for enhancements, and that's why I'm having difficulty thinking of areas for improvement of the solution. They are delivering enhancements that we've requested in every release.
For how long have I used the solution?
We've been using erwin Data Intelligence for Data Governance for 19 months.
What do I think about the stability of the solution?
We're very impressed with the stability.
What do I think about the scalability of the solution?
We find it to be very scalable. We currently have it connected to, and pulling metadata in from, four different database types. We have it connected to automatically ingest mapping information from Informatica, and we are importing different types of metadata that are captured in Excel spreadsheets. The tool is able not only to ingest all of this information, but to present it in a usable fashion.
We have two different types of users. We have the people who are using the Data Intelligence Suite back-end, which is where the actual work is done. We have over 50 users there. And on the Business User Portal, which is the read-only access to the work that's being done in the back-end, we have over 100 users. Everyone who sees the tool wants to use it, so the desire for adoption is incredibly high.
How are customer service and technical support?
The technical and customer support are outstanding. erwin honestly goes above and beyond to ensure the success of its customers. Their people are available as needed to assist with the implementations and the upgrades.
They are also very willing to listen to enhancement requests and understand what the business or technical impact of the request is. They have been incredibly responsive with the inclusion of enhancement requests. I couldn't ask for more. They're really an example of the highest level of customer service that anyone could provide.
Which solution did I use previously and why did I switch?
We have had multiple other tools and our metadata is currently fractured across multiple tools because we haven't had a good integration point for all of our information. erwin Data Intelligence Suite gives us that one, fantastic, single point of integration. That means we do not have to remain fractured across other tools, but also we don't need to reinvent the wheel and recreate a new system to contain all of our metadata. We have an opportunity to have it in a single place, working with it from a technical standpoint, governing it from a business standpoint, and integrating both the business and technical knowledge in a single location.
The tools we replaced were homegrown tools that made information available in a very manual fashion. We have replaced Excel spreadsheets as our mapping documentation. And we are replacing many different data-sharing sites by having all of our information, our metadata, in a single location.
How was the initial setup?
We found the initial setup to be very straightforward. The user manuals are very clear for the users who are doing the work. And whenever there was a need for assistance with the implementation of the back-end database or the software, erwin was just a phone call away and has always been available to answer any questions or assist as needed. They're just fantastic partners.
It took us about a day when it was first set up, and now it's just a matter of a couple of hours when we do upgrades to the software.
In terms of our implementation strategy, we have segregation of duties within our company. We have one team that is responsible for delivery, a separate team that is responsible for production support, another team that is responsible for the creation of the database behind the tool, and another team that is responsible for the installation of the software. It's the coordination of the different people who are supporting the tool that takes the most effort.
There are eight people maintaining the solution, because of the segregation of duties. We have a primary and a backup, within each of the four teams, who are doing the delivery or support.
What was our ROI?
We have absolutely seen a return on our investment with Data Intelligence so far. There has been an increase in delivery speed and a resulting decrease in project costs. The decrease in the time needed to find the information you need to do your job, versus the much larger amount of time needed to research without erwin, has been invaluable.
What's my experience with pricing, setup cost, and licensing?
The one thing you want to make sure of is that you have enough licenses to cover not only the people who will be using the tool but also the people who will be administering and supporting it. That was something we did not know ahead of time: the number of support licenses we would need.
Which other solutions did I evaluate?
Other vendors' tools either do not have all of the capabilities that the Data Intelligence Suite has or are trying to match them, but they are more complex to use or lack the fast performance of the Data Intelligence Suite.
There are many tools available for business term management, code set management, and data lineage, as well as for metadata and mapping capabilities.
Collibra was on the market prior to the Data Intelligence Suite, but since erwin's acquisition of the Data Intelligence Suite, erwin has brought their software along faster and incorporated more useful capabilities than some of the other vendor products. And some of the other products are limited because they have per-server costs, where erwin Data Intelligence Suite has not had that kind of cost. It can connect to the systems where the metadata resides and is able to ingest that metadata without additional costs.
The user-friendliness of the erwin tool made it much easier for users to adopt, and to want to adopt, because it was easier to ramp up on, utilize, and understand compared to the other tools we looked at. Another difference was the completeness of the erwin tool, versus having to work with tools that have some of the capabilities but not all of them. It was that "one-stop shopping" versus having to go to multiple tools.
What other advice do I have?
erwin currently supports two implementations of this product: one on a SQL Server database and the other on an Oracle Database. It seems that the SQL Server implementation may have fewer complications than the Oracle one. We chose to implement on an Oracle Database because we also have the erwin Data Modeler and Web Portal products in-house, which have been set up on Oracle Databases for many years. At times, the Oracle Database installation has caused hiccups that wouldn't necessarily have occurred if we had used SQL Server.
We are not currently using the forward engineering capabilities of the Data Intelligence Suite. We do use erwin Data Modeler for forward engineering the data definition language that is used to change the actual databases where the data resides. We are currently using the Informatica reverse Smart Data Connector so that we can understand what is in Informatica jobs that may not have been designed with, or may not have, a source-to-target mapping document; that's as opposed to a developer creating data movement without any documentation to support it. We look forward to potentially using the capability to create Informatica jobs, or other types of jobs, based on the mapping work, so that we can further automate our work and decrease our delivery time and cost while increasing the accuracy of our delivery.
We've learned several lessons from using the erwin Data Intelligence Suite. One lesson is around adoption: there will be better adoption through ease of use. We have another product in-house, and the largest complaint about it is that it's extremely difficult to use. The ease of use of the Data Intelligence Suite has significantly improved our adoption rate.
Also, having all of the information in one place has significantly improved our adoption and people's desire to use the tool, rather than looking here, there, and everywhere for their information. The automated data lineage and impact analysis driven from the mapping documents are astounding, reducing impact-analysis research from six to 16 weeks down to minutes, because it's a couple of clicks with a mouse. Having all of the information in one place also improves our knowledge of where our data is and what it is, so that we can use it in the best possible ways.
Which deployment model are you using for this solution?
On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.