
erwin Data Intelligence by Quest Overview

erwin Data Intelligence by Quest is the #4 ranked solution in Data Governance tools. PeerSpot users give erwin Data Intelligence by Quest an average rating of 8.6 out of 10. erwin Data Intelligence by Quest is most commonly compared to Microsoft Purview. erwin Data Intelligence by Quest is popular among the large enterprise segment, which accounts for 76% of users researching this solution on PeerSpot. The top industry researching this solution is computer software, accounting for 26% of all views.

What is erwin Data Intelligence by Quest?

The erwin Data Intelligence Suite (erwin DI) combines data catalog and data literacy capabilities for greater awareness of and access to available data assets, guidance on their use, and guardrails to ensure data policies and best practices are followed. Automatically harvest, transform and feed metadata from a wide array of data sources, operational processes, business applications and data models into a central data catalog. Then make it accessible and understandable within context via role-based views. This complete metadata-driven approach to data governance facilitates greater IT and business collaboration.

erwin Data Intelligence by Quest was previously known as erwin DG, erwin Data Governance.

erwin Data Intelligence by Quest Customers

Oracle, Infosys, GSK, Toyota Motor Sales, HSBC

erwin Data Intelligence by Quest Pricing Advice

What users are saying about erwin Data Intelligence by Quest pricing:
  • "Smart Data Connectors have some costs, and then there are user-based licenses. We spend roughly $150,000 per year on the solution. It is a yearly subscription license that basically includes the cost for Smart Data Connectors and user-based licenses. We have around 30 data stewards who maintain definitions, and then we have five IT users who basically maintain the overall solution. It is not a SaaS kind of operation, and there is an infrastructure cost to host this solution, which is our regular AWS hosting cost."
  • "The licensing cost was very affordable at the time of purchase. It has since been taken over by erwin, then Quest. The tool has gotten a bit more costly, but they are adding more features very quickly."
  • "The solution is aggressively priced."
  • "We operate on a yearly subscription and because it is an enterprise license we only have one. It is not dependent on the number of users."
erwin Data Intelligence by Quest Reviews

    Tracy Hautenen Kriel - PeerSpot reviewer
    Architecture Sr. Manager, Data Design & Metadata Mgmt at an insurance company with 10,001+ employees
    Real User
    Top 5 Leaderboard
    We always know where our data and metadata are, versus having to spend weeks hunting down information
    Pros and Cons
    • "The data management is, obviously, key in understanding where the data is and what the data is. And the governance can be done at multiple levels. You have the governance of the code sets versus the governance of the business terms and the definitions of those business terms. You have the governance of the business data models and how those business data models are driving the physical implementation of the actual databases. And, of course, you have the governance of the mapping to make sure that source-to-target mapping is done and is being shared across the company."
    • "We always know where our data is, and anybody can look that up, whether they're a business person who doesn't know anything about Informatica, or a developer who knows everything about creating data movement jobs in Informatica, but who does not understand the business terminology or the data that is being used in the tool."
    • "The metadata ingestion is very nice because of the ability to automate it. It would be nice to be able to do this ingestion, or set it up, from one place, instead of having to set it up separately for every data asset that is ingested."
    • "We chose to implement on an Oracle Database because we also had the erwin Data Modeler and Web Portal products in-house, which have been set up on Oracle Databases for many years. Sometimes the Oracle Database installation has caused some hiccups that wouldn't necessarily have been caused if we had used SQL Server."

    What is our primary use case?

    We have many use cases.

    We have a use case to understand our metadata, understand where it is, and understand where our authoritative data systems are. We need to understand the data systems that we have. We also need to link the data models that we have to these data systems so that we know which data models are supporting which database applications. We're also linking our business data models to our physical implementation so that our data governance team is driving our data and our understanding of our data. That is one use case for the Metadata Manager. Another is the creation of automated reports that will show the changes that are made in production after a production release.

    Our use cases for the Mapping Manager are around understanding where our data movement is happening and how our data is being transformed as it's moved. We want automated data lineage capabilities at the system, database, environment, table, and column levels, as well as automated impact analysis. If someone needs to make a change to a specific column in a specific database, what downstream applications or databases will be impacted? Whom do we have to contact to tell them that we're making changes?
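
    To make the impact-analysis idea concrete, here is a minimal, hypothetical sketch — none of this reflects erwin DI's internals, and the lineage edges and column names are invented. Conceptually, column-level impact analysis is downstream reachability over a directed lineage graph built from source-to-target mappings:

    ```python
    # Hypothetical sketch: lineage as a directed graph, impact analysis as
    # downstream reachability. All system/table/column names are invented.
    from collections import defaultdict, deque

    lineage = defaultdict(list)
    edges = [
        ("crm.customer.cust_id", "staging.customer.customer_id"),
        ("staging.customer.customer_id", "warehouse.dim_customer.customer_key"),
        ("warehouse.dim_customer.customer_key", "mart.sales_report.customer_key"),
    ]
    for src, tgt in edges:
        lineage[src].append(tgt)

    def impacted(column):
        """Return every downstream column reachable from `column`."""
        seen, queue, result = {column}, deque([column]), []
        while queue:
            for nxt in lineage[queue.popleft()]:
                if nxt not in seen:
                    seen.add(nxt)
                    result.append(nxt)
                    queue.append(nxt)
        return result

    print(impacted("crm.customer.cust_id"))
    # ['staging.customer.customer_id', 'warehouse.dim_customer.customer_key',
    #  'mart.sales_report.customer_key']
    ```

    The answer to "who do we have to contact" is then just the owners of whatever `impacted()` returns.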

    When thinking about the Mapping Manager, we do have another use case where we want to understand not only the data design of the mapping, but the actual implementations of the mapping. We want to understand, from a data design standpoint, the data lineage that's in the data model, as well as the data lineage in a source-to-target mapping document. But we also want to understand the as-implemented data lineage, which comes in our Informatica workflows and jobs. So we want to automatically ingest our Informatica jobs and create mapping documents from those jobs so that we have the as-designed data lineage, as well as the as-implemented data lineage.

    In addition, with regard to our data literacy, we want to understand our business terminology and the definitions of our business terms. That information drives not only our data modeling, but it drives our understanding of the data that is in our datastores, which are cataloged in the Metadata Manager. This further helps us to understand what we're mapping in our source-to-target mapping documents in the Mapping Manager. We want to associate our physical columns and our data model information with our business glossary. But taking that a step further, when you think about code sets, we also need to understand the data. So if we have a specific code set, we need to understand if we are going to see those specific codes in that database, or if we are going to see different codes that we have to map to the governed code set.

    That's where the Codeset Manager comes into play for us because we need to understand what our governed code sets are. And we need to understand and automatically be able to map our code sets to our business terminology, which is automatically linked to our physical tables and columns. And that automatically links the code set values or the crosswalks that were created when we have a data asset that does not have all of the conforming values that are in the governed code set. 
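
    As a toy illustration of the crosswalk idea the reviewer describes — a governed code set plus a per-asset translation table for non-conforming values — consider this sketch; the codes are invented, not taken from the review:

    ```python
    # Governed code set: the enterprise-standard codes and their meanings.
    governed_gender = {"M": "Male", "F": "Female", "U": "Unknown"}

    # Crosswalk for one data asset whose local codes don't conform.
    crosswalk = {"1": "M", "2": "F", "9": "U"}

    def to_governed(source_value):
        """Translate a local code to the governed code set; unmapped values become 'U'."""
        return crosswalk.get(str(source_value).strip(), "U")

    assert to_governed("1") == "M"
    assert governed_gender[to_governed("2")] == "Female"
    assert to_governed("X") == "U"  # non-conforming value falls back to Unknown
    ```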

    We also have reporting use cases. We create a lot of reports. We have reports to understand who the Data Intelligence Suite users are, when they last logged in, the work that they're doing, and for automatically assigning work from one person to another person. We also need automated reports that look at our mappings and help us understand where our gaps are, where we need a code set that we don't already have a governed code set for. And we're also creating data dictionary reports, because we want to understand very specific information about our data models, our datastores, and our business data models, as well as the delivery data models.

    We are currently using the

    • Resource Manager
    • Metadata Manager
    • Mapping Manager
    • Codeset Manager
    • Reference Data Manager
    • Business Glossary Manager.

    How has it helped my organization?

    One of the ways this is helping to improve our delivery is through the increased understanding of what the data is, so that we're not mapping incorrect data from a source to a target. 

    We also have additional understanding of where our best data is. For example, when you think of the HL7 FHIR work and the need to map customer data to a specific FHIR profile, we need to understand where our best data is, as well as the definition of the data, so that we are mapping the correct data. Healthcare interoperability requires us to provide the customer with the data they request, when they request it. There are multiple levels of complexity in doing that work. The Data Intelligence Suite is helping us to manage and document all of those complexities to ensure that we are delivering the right data to the customer when they request it.

    erwin DI also provides us with a real-time, understandable data pipeline. One of the use cases that we didn't talk about is that we set up batch jobs to automate the metadata ingestion, so that we always have up-to-date and accurate metadata. It saves us a great deal because we always know where our metadata is, and what our data is, versus having to spend weeks hunting down information. For example, if we needed to make a change to a datastore, and we needed to understand the other datastores that are dependent on that data, we know that at a moment's notice. It's not delayed by a month. It's not a case of someone either having to manually look through Excel spreadsheet mapping documents or needing to get a new degree in a software tool such as Informatica or DataStage or Ab Initio, or even reading Python. We always know where our data is, and anybody can look that up, whether they're a business person who doesn't know anything about Informatica, or a developer who knows everything about creating data movement jobs in Informatica, but who does not understand the business terminology or the data that is being used in the tool.
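
    The review doesn't show how the batch ingestion is configured, but the general pattern — a scheduled job that reads a source system's catalog views and upserts the results into a central metadata store — can be sketched as follows. SQLite stands in for any DB-API driver here, and the catalog schema is an assumption for illustration:

    ```python
    import sqlite3  # stand-in for any DB-API 2.0 driver (Oracle, SQL Server, ...)

    def harvest_columns(conn):
        """Read table/column metadata from the source system's catalog views."""
        sql = """
            SELECT m.name, p.name, p.type
            FROM sqlite_master AS m, pragma_table_info(m.name) AS p
            WHERE m.type = 'table'
        """
        return conn.execute(sql).fetchall()

    source = sqlite3.connect(":memory:")
    source.execute("CREATE TABLE customer (cust_id INTEGER, full_name TEXT)")

    catalog = sqlite3.connect(":memory:")
    catalog.execute("""CREATE TABLE metadata_catalog (
        table_name TEXT, column_name TEXT, data_type TEXT,
        PRIMARY KEY (table_name, column_name))""")

    # Upsert so nightly reruns keep the catalog current rather than duplicating rows.
    catalog.executemany(
        """INSERT INTO metadata_catalog VALUES (?, ?, ?)
           ON CONFLICT(table_name, column_name)
           DO UPDATE SET data_type = excluded.data_type""",
        harvest_columns(source),
    )
    print(catalog.execute("SELECT * FROM metadata_catalog").fetchall())
    # [('customer', 'cust_id', 'INTEGER'), ('customer', 'full_name', 'TEXT')]
    ```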

    The solution also automates critical areas of our data governance and data management infrastructure. The data management is, obviously, key in understanding where the data is and what the data is. And the governance can be done at multiple levels. You have the governance of the code sets versus the governance of the business terms and the definitions of those business terms. You have the governance of the business data models and how those business data models are driving the physical implementation of the actual databases. And, of course, you have the governance of the mapping to make sure that source-to-target mapping is done and is being shared across the company.

    In terms of how this affects the quality and speed of delivery of data, I did use-case studies before we brought the Data Intelligence Suite into our company. Some of those use cases included research into impact analysis taking between six and 16 weeks to figure out where data is, or where an impact would be. Having the mapping documents drive the data lineage and impact analysis in the Data Intelligence Suite means that data investigation into impact analysis takes minutes instead of weeks. The understanding of what the data is, is critical to any company. And being able to find that information with the click of a button, versus having to request access to share drives and Confluence and SharePoint drives and Alation, and anywhere that metadata could be, is a notable difference. Having to ask people, "Do you have this information?" versus being able to go and find it yourself saves incredible amounts of time. And it enables everyone, whether it's a business person or a designer, or a data architect, a data modeler, or a developer. Everyone is able to use the tool and that is extremely important, because you need a tool that is user-friendly, intuitive, and easily understood, no matter your technical capabilities.

    Also, the things around production that the solution can do have been very helpful to us. This includes creating release reports, so that we know what production looked like prior to an implementation versus what it looks like afterward. It helps with understanding any new data movement that was implemented versus what it was previously. Those are the production implementations that are key for us right now.

    Another aspect is that the solution’s data cataloging, data literacy, and automation have been extremely important in helping people understand what the data is so that they use it correctly. That happens at all levels.

    The responsiveness of the tool has been fantastic. The amount of time that it takes to do work has been so significantly decreased. If you were creating a mapping document, especially if you were doing it in an Excel spreadsheet, you would have to manually type in every single piece of information: the name of the system, the name of the table, the name of the column, the data type, the length of the column. Any information that you needed to put into a source-to-target mapping document would have to be manually entered.

    Especially within the Mapping Manager, the ability to automatically create the mapping document through drag-and-drop functionality of the metadata that is in the system catalog, within the Metadata Manager, results in savings on the order of weeks or days. When you drag and drop the information from the metadata catalog into the mapping document, the majority of the mapping document is filled out, and the only thing that you have to do manually is put in the information about the type of data movement or transformation that you're going to do on the data. And even some of that is automated, or could be automated. You're talking about significant time savings.
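
    A minimal sketch of why the drag-and-drop saves so much typing: once the source and target come from the catalog, only the transformation rule is left to fill in by hand. The dataclasses and field names below are invented for illustration, mirroring the fields the reviewer lists (system, table, column, data type, length):

    ```python
    from dataclasses import dataclass

    @dataclass
    class CatalogColumn:
        system: str
        table: str
        column: str
        data_type: str
        length: int

    @dataclass
    class MappingRow:
        source: CatalogColumn
        target: CatalogColumn
        transformation: str = "TBD"  # the one field still entered manually

    # "Dragging" two catalog entries into a mapping pre-fills everything else.
    src = CatalogColumn("CRM", "customer", "cust_id", "NUMBER", 10)
    tgt = CatalogColumn("DW", "dim_customer", "customer_key", "BIGINT", 19)
    row = MappingRow(source=src, target=tgt)
    print(row.transformation)  # 'TBD' -- awaiting the analyst's rule
    ```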

    And because you have all of the information right there in the tool, you don't have to look at different places to find the understanding of the data that you're working with. All of the information is right there, which is another time savings. It's like one-stop shopping. You can either go to seven stores to get everything you want, or you can go to one store. Which option would you choose? Most people would prefer to go to one store. And that's what the Data Intelligence Suite gives you: one place.

    I can say, in general, that a large number of hours are saved, depending on the work that is being done, because of the automation capabilities and the ability to instantly understand what your data is and where it is. We are working on creating metrics. For example, we have one metric where it has taken someone hours of research to understand what the data is, where it is, and how to map it to a business term, versus less than two minutes to map 600 physical columns to a business term.

    What is most valuable?

    We are looking forward to using the AI match capability. We are using several Smart Data Connectors, as well as the Reporting Manager and the Workflow Manager.

    We are customizing our own installation of the erwin Data Intelligence Suite by adding fields as extended properties that do not already exist, that are of value to us, as well as changing the user-defined fields that are in the Data Intelligence Suite. We're renaming them so that we can put very specific information into those user-defined properties.

    The customization and the ability to add information is extremely valuable to us because there is no tool on the market that is going to be able to accommodate, out-of-the-box, everything that every customer will use. Being able to tailor the tool to meet our needs, and add additional metadata, is very valuable to us.

    Also, in terms of the solution's integrated data catalog and data literacy when it comes to mapping, profiling, and automated lineage analysis, it is incredibly important to have that business glossary and understand what the data is — the definitions of the data — so that you use it correctly. You can't move data when you don't understand what it is. You can't merge data with other data unless you know what it is or how to use it. Those business definitions help us with all of that: with the mapping and with being able to plan the movement of one data element into another data element. The data lineage, understanding where the data is and how it moves, is very critical.

    What needs improvement?

    The metadata ingestion is very nice because of the ability to automate it. It would be nice to be able to do this ingestion, or set it up, from one place, instead of having to set it up separately for every data asset that is ingested.

    erwin has been a fantastic partner with regard to our suggestions for enhancements, and that's why I'm having difficulty thinking of areas for improvement of the solution. They are delivering enhancements that we've requested in every release.


    For how long have I used the solution?

    We've been using erwin Data Intelligence for Data Governance for 19 months.

    What do I think about the stability of the solution?

    We're very impressed with the stability. 

    What do I think about the scalability of the solution?

    We find it to be very scalable. We currently have it connected to, and pulling metadata in from, four different database types. We have it connected to automatically ingest mapping information from Informatica, and we are importing different types of metadata that are captured in Excel spreadsheets. The tool is able not only to ingest all of this information, but to present it in a usable fashion.

    We have two different types of users. We have the people who are using the Data Intelligence Suite back-end, which is where the actual work is done. We have over 50 users there. And on the Business User Portal, which is the read-only access to the work that's being done in the back-end, we have over 100 users. Everyone who sees the tool wants to use it, so the desire for adoption is incredibly high.

    How are customer service and support?

    The technical and customer support are outstanding. erwin honestly goes above and beyond to ensure the success of its customers. Their people are available as needed to assist with the implementations and the upgrades.

    They are also very willing to listen to enhancement requests and understand what the business or technical impact of the request is. They have been incredibly responsive with the inclusion of enhancement requests. I couldn't ask for more. They're really an example of the highest level of customer service that anyone could provide.

    Which solution did I use previously and why did I switch?

    We have had multiple other tools and our metadata is currently fractured across multiple tools because we haven't had a good integration point for all of our information. erwin Data Intelligence Suite gives us that one, fantastic, single point of integration. That means we do not have to remain fractured across other tools, but also we don't need to reinvent the wheel and recreate a new system to contain all of our metadata. We have an opportunity to have it in a single place, working with it from a technical standpoint, governing it from a business standpoint, and integrating both the business and technical knowledge in a single location.

    The tools we replaced were homegrown tools that made information available in a very manual fashion. We have replaced Excel spreadsheets as our documentation of mapping. We are replacing many different types of data sharing sites by having all of our information, our metadata, in a single location.

    How was the initial setup?

    We found the initial setup to be very straightforward. The user manuals are very clear for the users who are doing the work. And whenever there was a need for assistance with the implementation of the back-end database or the software, erwin was just a phone call away and has always been available to answer any questions or assist as needed. They're just fantastic partners.

    It took us about a day when it was first set up, and it is just a matter of a couple hours, now, as we do upgrades to the software.

    In terms of our implementation strategy, we have segregation of duties within our company. We have one team that is responsible for delivery, a separate team that is responsible for production support, another team that is responsible for the creation of the database behind the tool, and another team that is responsible for the installation of the software. It's the coordination of the different people who are supporting the tool that takes the most effort.

    There are eight people maintaining the solution, because of the segregation of duties. We have a primary and a backup, within each of the four teams, who are doing the delivery or support.

    What was our ROI?

    We have absolutely seen a return on our investment with Data Intelligence so far. There has been an increase in delivery speed and a resulting decrease in project costs. The decrease in the time needed to find the information you need to do your job, versus the much larger amount of time needed to research without erwin, has been invaluable.

    What's my experience with pricing, setup cost, and licensing?

    The one thing that you want to make sure of is that you have enough licenses to cover the people who will be administering the tool, as well as the people who are using the tool. You have to know not only the people who will be using the tool but the teams that will be supporting it. That was something we did not know ahead of time: the number of support licenses that we would need.

    Which other solutions did I evaluate?

    There are other vendor tools that do not have all of the capabilities, or they're trying to have the capabilities that Data Intelligence Suite has, but they are more complex to use or do not have the fast performance that the Data Intelligence Suite has.

    There are many tools available for business term management, code set management, and data lineage, as well as metadata and mapping capabilities.

    Collibra was on the market prior to the Data Intelligence Suite, but since erwin's acquisition of the Data Intelligence Suite, erwin has brought their software along faster and incorporated more useful capabilities than some of the other vendor products. And some of the other products are limited because they have per-server costs, where erwin Data Intelligence Suite has not had that kind of cost. It can connect to the systems where the metadata resides and is able to ingest that metadata without additional costs.

    The user-friendliness of the erwin tool made it much easier for users to adopt, and to want to adopt, because it was easier to ramp up, utilize, and understand compared to other tools that we looked at. Another difference was the completeness of the erwin tool, versus having to work with tools that have some of the capabilities but not all of them. It was that "one-stop shopping" versus having to go to multiple tools.

    What other advice do I have?

    Erwin currently supports two implementations of this product: one on a SQL Server database and the other on an Oracle Database. It seems that the SQL Server database may have fewer complications than the Oracle Database. We chose to implement on an Oracle Database because we also had the erwin Data Modeler and Web Portal products in-house, which have been set up on Oracle Databases for many years. Sometimes the Oracle Database installation has caused some hiccups that wouldn't necessarily have been caused if we had used SQL Server.

    We are not currently using the forward engineering capabilities of the Data Intelligence Suite. We do use erwin Data Modeler for forward engineering the data definition language that is used to change the actual databases where the data resides. We are currently using the Informatica reverse Smart Data Connector so that we can understand what is in Informatica jobs that may not have been designed with, or may no longer have, a source-to-target mapping document. That's as opposed to having a developer create data movement without any documentation to support it. We look forward to potentially using the capability to create Informatica jobs, or other types of jobs, based on the mapping work, so that we can automate our work more and decrease our delivery time and cost to deliver, while increasing our accuracy of delivery.

    We've learned several lessons from using the erwin Data Intelligence Suite. One lesson is around adoption: there will be better adoption through ease of use. We do have another product in-house, and the largest complaint about that product is that it's extremely difficult to use. The ease of use of the Data Intelligence Suite has significantly improved our adoption rate.

    Also, having all of the information in one place has significantly improved our adoption and people's desire to use the tool, rather than looking here, there, and everywhere for their information. The automated data lineage and impact analysis being driven from the mapping documents are astounding in reducing the time to research impact analysis from six to 16 weeks down to minutes, because it's a couple of clicks with a mouse. Having all of the information in one place also improves our knowledge about where our data is and what it is so that we can use it in the best possible ways.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    PeerSpot user
    Senior Director at a retailer with 10,001+ employees
    Real User
    Top 20
    Gives a visual representation of how the data flows from tables to the metrics and has a big impact in terms of the transparency and understanding of data
    Pros and Cons
    • "Being able to capture different business metrics and organize them in different catalogs is most valuable. We can organize these metrics into sales-related metrics, customer-related metrics, supply chain-related metrics, etc."
    • "There may be some opportunities for improvement in terms of the user interface to make it a little bit more intuitive. They have made some good progress. Originally, when we started, we were on version 9 or 10. Over the last couple of releases, I've seen some improvements that they have made, but there might be a few other additional areas in UI where they can make some enhancements."

    What is our primary use case?

    Our primary use case is that we want to enable self-service for different business teams to be able to find different data. We are using the erwin Data Intelligence platform to enable data literacy and to enable different users to find the data by using the data catalog.

    It can be hosted on-premise or in the cloud. We chose to run it in the cloud because the rest of our analytics infrastructure is running in the cloud. It made natural sense to host it in the cloud.

    How has it helped my organization?

    I represent IT teams, and a lot of times different business teams want to do data analysis. Before using erwin Data Intelligence Suite, they used to constantly come to IT teams to understand things like how the data is organized and what type of queries or tables they should use. It used to take a lot of my team's time to answer those questions, and some of them were pretty repetitive.

    With erwin Data Intelligence Suite, they can now self-serve. There is a business user portal through which they can search different tables, and they can search in different ways. If they already know the table name, they can directly search for it and find the definition of each column, which helps them understand how to use that table. In other cases, they may not know the exact table name, but they may know, for example, a business metric. They can then search by the business metric and, because the tool links business metrics to the underlying tables from which those metrics get calculated, get to the table definitions through that route as well. This is helping all of our business analysts do self-service analytics while, at the same time, we can enforce some governance around it.

    Because we enabled self-service for different business analysts, it has improved our speed. It has easily reduced at least 20% of the time that my IT team had to spend answering questions from different business teams. The benefit is probably even more for business teams; I think they are faster by at least 30% in terms of being able to get the data they need and perform their analysis based on it. Overall, I would expect at least 25% savings in time.
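
    The two search paths described — direct table lookup, or metric-to-table resolution through glossary links — reduce to something like this toy lookup; all of the names and data here are invented:

    ```python
    catalog = {
        "dw.fct_sales": {"amount": "Sale amount in USD", "order_dt": "Order date"},
    }
    metric_links = {"net_sales": ["dw.fct_sales"]}  # glossary metric -> source tables

    def find_tables(term):
        """Resolve a search term to tables, directly or via a linked business metric."""
        if term in catalog:                  # path 1: the analyst knows the table name
            return [term]
        return metric_links.get(term, [])    # path 2: search by business metric

    print(find_tables("net_sales"))     # ['dw.fct_sales']
    print(find_tables("dw.fct_sales"))  # ['dw.fct_sales']
    ```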

    It has a big impact in terms of the transparency of the data. Everybody is able to find the data by using the catalog, and they can also see how the data is getting loaded through different tables. It has given a lot of transparency. Based on this transparency, we were able to have a good discussion about how the data should be organized so that we can collaborate better with the business in terms of data organization. We were also able to change some of our table structures, data models, etc.

    By using the data catalog, we have definitely improved in terms of maturity as a data-driven decision-maker organization. We are now getting to a level where everybody understands the data. Everyone understands how it is organized, and how they can use this data for different business decisions. The next level for us would be to go and use some of these advanced features such as AI Data Match.

    In terms of the effect of the data pipeline on our speed of analysis, understanding the data pipeline and the data flow is helpful in identifying a problem and resolving it quickly. A lot of times there is some level of ambiguity, and businesses don't understand how the data flows. Understanding the data pipeline helps them quickly identify problems. They can solve the identified problems and bottlenecks in the data flow. For example, they can identify the data set that is required for a specific analysis and then bring in the data from another system.

    In terms of the money and time that the real-time data pipeline has saved us, it is hard to quantify the amount in dollars. In terms of time, it has saved us 25% time on the analysis part.

    It has allowed us to automate a lot of stuff for data governance. By using Smart Data Connectors, we are automatically able to pull metric definitions from our reporting solution. We are then able to put an overall governance and approval process on top of that. Whenever a new business metric needs to be created, the data stewards who have Write access to the tool can go ahead and create those definitions. Other data stewards can peer-review their definitions. Our automated workflow then takes that metric to an approved state, and it can be used across the company. We have definitely built a lot of good automation with the help of this tool.
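
    The approval flow described — a steward drafts a definition, a peer reviews it, and it moves to an approved state for company-wide use — is essentially a small state machine. Here is a hypothetical sketch; the states and actions are assumptions, not erwin's actual workflow engine:

    ```python
    from enum import Enum

    class State(Enum):
        DRAFT = "draft"
        IN_REVIEW = "in_review"
        APPROVED = "approved"
        REJECTED = "rejected"

    TRANSITIONS = {
        (State.DRAFT, "submit"): State.IN_REVIEW,
        (State.IN_REVIEW, "approve"): State.APPROVED,
        (State.IN_REVIEW, "reject"): State.REJECTED,
        (State.REJECTED, "revise"): State.DRAFT,
    }

    def advance(state, action):
        """Apply one workflow action, rejecting transitions the workflow doesn't allow."""
        try:
            return TRANSITIONS[(state, action)]
        except KeyError:
            raise ValueError(f"{action!r} is not allowed from state {state.value!r}")

    s = advance(State.DRAFT, "submit")   # a data steward submits a metric definition
    s = advance(s, "approve")            # a second steward peer-reviews and approves
    assert s is State.APPROVED
    ```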

    It definitely affects the quality of data. A lot of times, different teams may have different levels of understanding, and they might have different definitions of a particular metric. A good example is the customer lifetime value. This metric is used by multiple departments, but each department can have its own metric definition. In such a case, they will get different values for this metric, and they won't make consistent decisions across. If they have a common definition, which is governed by this tool, then everybody can reference that. When they do the analysis, they will get the same result, which leads to a better quality of decision-making across the company.

    It affects data delivery in terms of making correct decisions. Ultimately, we are using all of this data to get some insights and then make decisions based on that. It is not so much of the cost but more of the risk that it affects.

    What is most valuable?

    Being able to capture different business metrics and organize them in different catalogs is most valuable. We can organize these metrics into sales-related metrics, customer-related metrics, supply chain-related metrics, etc. 

    Data catalog and data literacy are really the great capabilities of this solution that we leverage. A data catalog is coupled with the business glossary, which enables data literacy. In the business glossary, we can maintain definitions of different business terms and metrics, and then the data catalog can be searched with them. 

    Data lineage is very important to us. It is related to the origin of the data. For example, if a particular metric gets calculated from certain data, how did this data originate? From which source system or table did this data originate? Once the data lineage is populated, some advanced users, such as data scientists, can use it to get to the details. Sometimes, they are interested in more raw data, so that they can get details of the raw table from which these metrics are getting calculated. They can use that raw data for their machine learning or AI use cases.

    I used a couple of different Smart Data Connectors, and they were great. One of the Smart Data Connectors that we used was for our Microstrategy BI solution, so it was a Microstrategy Smart Data Connector. Microstrategy is our enterprise reporting tool, and a lot of the metrics were already built in different reports in Microstrategy. So, it made sense to use this connector to extract all the metrics that were already being used. By using this connector, we could connect to Microstrategy, pull all the metrics and reports from that, and then populate our business glossary with those details. This was a big advantage of using the Microstrategy Smart Data Connector. Another Smart Data Connector that we used was the Python Connector. It enabled us to build the data lineage. We already have a lot of ETL kind of processes built by using Python and SQL, and this connector can reverse engineer that and graphically show how the data flows from the source. This work was done by our data engineers or IT teams, but the business teams didn't understand how it is built. So, by giving them a visual representation of that, they became more data literate. They understood how the data flows from different tables and ultimately lands in the final tables.
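
    To illustrate the reverse-engineering idea in miniature — a real connector parses far more than this — table-level lineage can be recovered from INSERT ... SELECT statements by pairing each target with its FROM/JOIN sources. The SQL below is invented:

    ```python
    import re

    ETL_SQL = """
    INSERT INTO warehouse.dim_customer
    SELECT c.cust_id, c.full_name FROM staging.customer c;

    INSERT INTO mart.sales_report
    SELECT d.customer_key, f.amount
    FROM warehouse.dim_customer d
    JOIN warehouse.fct_sales f ON d.customer_key = f.customer_key;
    """

    edges = []
    for stmt in ETL_SQL.split(";"):
        target = re.search(r"INSERT\s+INTO\s+([\w.]+)", stmt, re.I)
        if not target:
            continue
        sources = re.findall(r"(?:FROM|JOIN)\s+([\w.]+)", stmt, re.I)
        edges += [(src, target.group(1)) for src in sources]

    for src, tgt in edges:
        print(f"{src} -> {tgt}")
    # staging.customer -> warehouse.dim_customer
    # warehouse.dim_customer -> mart.sales_report
    # warehouse.fct_sales -> mart.sales_report
    ```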

    It can be stood up quickly with minimal professional services. Its installation and configuration are not that complicated, and we can easily and quickly stand it up. We wanted to get faster time to value, so we did a small professional services engagement to come up to speed on how to use the product. The installation and configuration were pretty quick; afterward, we wanted to make sure that we had the right processes established within the tool.

    What needs improvement?

    There may be some opportunities for improvement in terms of the user interface to make it a little bit more intuitive. They have made some good progress. Originally, when we started, we were on version 9 or 10. Over the last couple of releases, I've seen some improvements that they have made, but there might be a few other additional areas in UI where they can make some enhancements.

    For how long have I used the solution?

    We have been using erwin Data Modeler for quite a while. We have been using erwin Data Intelligence Suite for about one and a half years.

    What do I think about the scalability of the solution?

    It seems to be easily scalable. I haven't seen any problems so far from the scalability aspect. We have strong support, so whenever I have some issues or there is something for which I need technical support, their support is always there to answer the questions. Their support has been great.

    We have a few key users, such as data domain experts, and we have different business areas, such as marketing, sales, finance, supply chain, etc. Each of them has a domain expert who also has an account in erwin to maintain definitions. The rest of the organization kind of gets a read-only view into that. 

    We have about 30 people who can maintain those definitions, and the rest of the organization can find the data or the definitions of that data. These 30 people include data stewards or data domain specialists, and they maintain the definitions of different business terms, glossary terms, and business metrics. There are about five different IT users who actually configure data lineages and data catalog definitions. These are the core teams that basically make sure that the catalog and Data Intelligence Suite are populated with the data. There are more than 200 corporate business users who then find this data after it is populated in the catalog. 

    I would expect its usage to grow from 200 people to 2,000 people within the next year. When we become more mature at using this data and analytics, we will use the advanced features within the tool.

    How are customer service and support?

    In terms of the support that I'm getting, I'm able to get all my requests fulfilled. The only thing that happened was Erwin got sold and then Quest acquired them, but so far, I haven't seen any issues because of this acquisition.

    When we upgraded the version, we had some issues related to Smart Data Connector not working properly, so we had to log a ticket with them, and they were responsive. They set up meetings with us to go through the problem and helped us in resolving the problem. Their support has been pretty responsive. When we submitted tickets, we got immediate attention and resolution.

    How was the initial setup?

    The initial setup was straightforward. We only had to work with the Erwin team to get some of the Smart Data Connectors configured properly.

    Its deployment was within three months. Installing, configuring, and getting it up to speed wasn't that much of a pain. Getting business users to use the tool and making sure that they are leveraging it for their day-to-day tasks is what takes more time. It is more of a change management exercise.

    In terms of the implementation strategy, we worked with Erwin's team. In fact, I hired their professional services as well because I wanted to make sure we get up to speed pretty quickly. The installation, configuration, and some of the cleaning were done with Erwin's professional services team. After my team was up to speed, we retained some of the key data stewards. We included them as part of the planning, and they are now kind of driving the adoption and use of the tool across the company.

    What about the implementation team?

    We didn't use any company other than Erwin's team. 

    You don't need many resources for its deployment and maintenance. You just need one person, and this person also doesn't have to be full-time. Only in the initial stages do you have to spend time adjusting or populating the definitions.

    What was our ROI?

    We are in the early stage of ROI. The ROI is more in terms of the time that we saved from the analysis. If the analysis is much faster, let's say by 30%, we can get some of the insights faster. We can then accordingly make business decisions, which will give the business value. So, right now, the ROI is in terms of being able to be faster to market with some of the businesses.

    What's my experience with pricing, setup cost, and licensing?

    Smart Data Connectors have some costs, and then there are user-based licenses. We spend roughly $150,000 per year on the solution.

    It is a yearly subscription license that basically includes the cost for Smart Data Connectors and user-based licenses. We have around 30 data stewards who maintain definitions, and then we have five IT users who basically maintain the overall solution. It is not a SaaS kind of operation, and there is an infrastructure cost to host this solution, which is our regular AWS hosting cost.

    Which other solutions did I evaluate?

    When we were looking for a data catalog solution, we evaluated two or three other solutions. We evaluated data catalogs from both Alation and Collibra. We chose Erwin because we liked the overall solution that Erwin offered as compared to the other solutions.

    One of the great features that Erwin provided was the mind map feature, which I did not see in any of the other tools that we used. A mind map gives a visual representation of how the data flows from tables to the metrics. Another great feature was being able to pull the metric definitions automatically from our reporting system. These were the two great positives for us, which I did not see in the other solutions when we did the proof of concept.

    What other advice do I have?

    We are not using erwin's AI Match feature to automatically discover and suggest relationships and associations between business terms and physical metadata. We are still trying to get all of our data completely mapped in there. After that, we will get to the next level of maturity, which would be leveraging some of the additional features such as AI Match.

    Similarly, we have not used the feature for generating the production code through automated code engineering. Currently, we are primarily doing the automation by using Smart Data Connectors to build some data lineages, which is helping with the overall understanding of the data flow. Over the next few months, as it gets more and more updated, we might see some benefits in this area. I would expect at least 25% savings in time.

    It provides a real-time understandable data pipeline to some level. Advanced users can completely understand its real-time data pipeline. Average users may not be able to understand it.

    Any organization that is looking into implementing this type of solution should look at its maturity in terms of data literacy. This is where I really see the big challenge. It is almost like a change management exercise to make sure people understand how to use the data and to build some of the processes around data governance. The maturity of the organization is really critical, and you should make your plans accordingly to implement it.

    The biggest lesson that I have learned from using this solution is probably around how to maintain the data dictionary, which is really critical for enabling data literacy. A lot of times, companies don't have these data dictionaries built. Building the data dictionary and getting it populated into the data catalog is where we spend some of the time. A development process needs to be established to create this data dictionary and maintain it going forward. You have to just make sure that it is not a one-time exercise. It is a continuous process that should be included as part of the development process.

    I would rate erwin Data Intelligence for Data Governance an eight out of 10. If they can improve its user interface, it will be a great product.

    Which deployment model are you using for this solution?

    Public Cloud

    If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

    Amazon Web Services (AWS)
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    PeerSpot user
    Architect at an insurance company with 10,001+ employees
    Real User
    Top 20
    Manages all our data governance activities around setting up metadata, data lineage, business clusters, and business metadata definitions
    Pros and Cons
    • "It is a central place for everybody to start any ETL data pipeline builds. This tool is being heavily used, plus it's heavily integrated with all the ETL data pipeline design and build processes. Nobody can bypass these processes and do something without going through this tool."
    • "We still need another layer of data quality assessments on the source to see if it is sending us the wrong data or if there are some issues with the source data. For those things, we need a rule-based data quality assessment or scoring where we can assess tools or other technology stacks. We need to be able to leverage where the business comes in, defining some business rules and have the ability to execute those rules, then score the data quality of all those attributes. Data quality is definitely not what we are leveraging from this tool, as of today."

    What is our primary use case?

    The Data Intelligence suite helps us manage all our data governance activities around setting up metadata, data lineage, business clusters, and business metadata definitions. These are all managed in this tool. There is definitely extended use of this data set, which includes using the metadata that we have built. Using that metadata, we also integrate with other ETL tools to pull the metadata and use it in our data transformations. It is very tightly coupled with our data processing in general. We also use erwin Data Modeler, which helps us build our metadata, business definitions, and the physical data model of the data structures that we have to manage. These two tools work hand-in-hand to manage our data governance metadata capabilities and support many business processes.

    I manage the data architecture and the whole data governance team that designed the data pipelines. We designed the overall data infrastructure plus the actual governance processes. The stewards, who work with the data in the business, set up the metadata and manage this tool every day, end-to-end.

    How has it helped my organization?

    The benefit of the solution was the adoption by a lot of business partners using and leveraging our data through our governance processes. We have metrics on how many users have been capturing and using it. We have data consultants and other data governance teams who are set up to review these processes and ensure that nobody is bypassing them. We use this tool in the middle of our work processes, for utilization of data on the tail end, letting the business do self-service and build their own things without heavy IT involvement.

    When we manage our data processes, we know that there are upstream sources and downstream systems, and we know that they could be impacted by changes coming in from the source, thanks to the lineage and impact analysis that this tool brings to the table. We have been able to identify system changes that could impact all downstream systems. That is a big plus because IT and production support teams are now able to use this tool to identify the impact of any issues with the data or any data quality gaps. They can notify all the recipients upfront, with proactive business communications, of any impacts.

    For any company mature enough to have implemented any of these data governance rules or principles, these are the building blocks of the actual process. The criticality is such because we want the business to self-service. We can build data lakes or data warehouses using our data pipelines, but if nobody can actually use the data to be able to see what information they have available without going through IT sources, that defeats the whole purpose of doing this additional work. It is a data platform that allows any business process to come in and be self-service, building their own processes without a lot of IT dependencies.

    There is a data science function where a lot of critical operational reporting can be done. Users leverage this tool to be able to discover what information is available, and it's very heavily used.

    If we start capturing additional data about some metadata, then we can define our own user-defined attributes, which we can then start capturing. It does provide all the information that we want to manage. For our own processes, we have some special tags that we have been able to configure quickly through this tool to start capturing that information.

    We have our own homegrown solutions built around the data that we are capturing in the tool. We build our own pipelines and have our own homegrown ETL tools built using Spark and cloud-based ecosystems. We capture all the metadata in this tool and all the transformation business rules are captured there too. We have API-level interfaces built into the tool to pull the data at the runtime. We then use that information to build our pipelines.
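
    The pattern described — pulling mapping metadata over an API at runtime and feeding it to a homegrown pipeline builder — might look roughly like the sketch below. The endpoint, payload shape, and auth scheme are entirely invented for illustration; erwin DI's actual REST API will differ:

    ```python
    import json
    from urllib.request import Request, urlopen

    def fetch_mappings(base_url, project, token):
        """Pull source-to-target mapping rows from a (hypothetical) metadata API."""
        req = Request(
            f"{base_url}/mappings?project={project}",  # illustrative endpoint
            headers={"Authorization": f"Bearer {token}"},
        )
        with urlopen(req) as resp:
            return json.load(resp)

    def to_pipeline_steps(mappings):
        """Translate each mapping row into a step for the homegrown ETL engine."""
        return [
            f"copy {m['source_column']} -> {m['target_column']} applying {m['rule']}"
            for m in mappings
        ]

    # At runtime, the pipeline is driven by whatever the stewards last captured:
    # steps = to_pipeline_steps(fetch_mappings("https://di.example.com/api", "sales", token))
    ```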

    This tool allows us to bring in any data stewards in the business area to use this tool and set up the metadata, so we don't have to spend a lot of time in IT understanding all the data transformation rules. The business can set up the business metadata, and once it is set up, IT can then use the metadata directly, which feeds into our ETL tool.

    Impact analysis is a huge benefit because it gives us access to our pipeline and data mapping. It captures the source systems from which the data came. For each source system, there is good lineage, so we can identify where it came from. Then, it is loaded into our clean zone and data warehouse, where I have reports, data extracts, API calls, and the web application layer. This provides access to all the interfaces and how information has been consumed. Impact analysis, at the IT and field levels, lets me determine:

    • What kind of business rules are applied. 
    • How data has been transformed from each stage. 
    • How the data is consumed and moved to different data marts or reporting layers. 

    Our visibility is now huge, creating a good IT and business process. With confidence, they can assess where the information is, who is using it, and what applications are impacted if that information is not available, inaccurate, or if there are any issues at the source. That impact analysis part is a very strong use case of this tool.

    What is most valuable?

    The most critical features are the metadata management and data mapping, which includes the reference data management and code set management. Its capabilities allow us to capture metadata, plus use it to define how the data lineage should be built, i.e., the data mapping aspects of it. The data mapping component is a little unique to this tool, as it allows the entire data lineage and impact analysis to be easily done. It has very good visuals that display the data lineage for all the metadata that we are capturing.

    Our physical data mapping is done using this tool. The combination of capturing the metadata and integrating the code set management and reference data management aspects with the data pipeline is unique to this tool. Those are definitely the key differentiators we were looking for when picking this tool.

    erwin DI provides visibility into our organization's data for our IT, data governance, and business users. There is a business-facing view of the data. There is an IT version of the tool that allows our IT users or data stewards, who work with the data, to set up the metadata. Then, the same tool has a very good business portal that takes the same information, in a read-only way, and presents it back in a very business-user-friendly way. We call it a business portal. This suite of applications provides us end-to-end data governance from both the IT and business users' perspectives.

    It is a central place for everybody to start any ETL data pipeline builds. This tool is being heavily used, plus it's heavily integrated with all the ETL data pipeline design and build processes. Nobody can bypass these processes and do something without going through this tool.

    The business portal allows us to search the metadata and do data discovery. Business users come in and present data catalog-type information. This means all the metadata that we capture, such as AI masking, dictionaries, and the data dictionary, is set up as well. That aspect is very heavily used.

    There are a lot of Data Connectors that gather metadata from all the different source systems and data stores. We configure those Data Connectors, then install them. The Data Connector that helps us load all the metadata from the erwin Data Modeler tool is XML-based.
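
    A toy version of what an XML-based connector does — walk a model export and emit catalog records — is sketched below. The XML layout is invented; the real erwin Data Modeler export format is far richer:

    ```python
    import xml.etree.ElementTree as ET

    MODEL_XML = """
    <model name="CustomerModel">
      <entity name="CUSTOMER">
        <attribute name="CUST_ID" type="INTEGER"/>
        <attribute name="FULL_NAME" type="VARCHAR(100)"/>
      </entity>
    </model>
    """

    root = ET.fromstring(MODEL_XML)
    records = [
        (root.get("name"), entity.get("name"), attr.get("name"), attr.get("type"))
        for entity in root.iter("entity")
        for attr in entity.iter("attribute")
    ]
    for model, entity, attr, dtype in records:
        print(f"{model}.{entity}.{attr}: {dtype}")
    # CustomerModel.CUSTOMER.CUST_ID: INTEGER
    # CustomerModel.CUSTOMER.FULL_NAME: VARCHAR(100)
    ```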

    The solution delivers up-to-date and detailed data lineage. It shows you all the business rules that data fields go through, using visualization. It provides very good visualization, allowing us to quickly assess the impact in an understandable way.

    All the metadata and business glossaries are captured right there in the tool. All of these data points are discoverable, so we can search through them. Once you know the business attribute you are looking for, then you are able to find where in the data warehouse this information lives. It provides you technical lineage right from the business glossary. It provides a data discovery feature, so you are able to do a complete discovery on your own.

    What needs improvement?

    The data quality has so many facets, but we are definitely not using the core data quality features of this tool. The data quality has definitely improved because the core data stewards, data engineers, and business sponsors know what data they are looking for and how the data should move, and they are setting up those rules. We still need another layer of data quality assessments on the source, to see if it is sending us the wrong data or if there are some issues with the source data. For those things, we need a rule-based data quality assessment or scoring capability, whether in this tool or another technology stack. We need to be able to have the business come in and define business rules, have the ability to execute those rules, and then score the data quality of all those attributes. Data quality is definitely not what we are leveraging from this tool, as of today.
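
    In miniature, the rule-based scoring being asked for could look like this — one business rule per attribute, executed against incoming rows, producing a pass-rate score per attribute. The rules and data are invented:

    ```python
    # One business rule per attribute; each returns True when the value passes.
    rules = {
        "cust_id": lambda v: v is not None,
        "email":   lambda v: isinstance(v, str) and "@" in v,
    }

    rows = [
        {"cust_id": 1,    "email": "a@example.com"},
        {"cust_id": None, "email": "not-an-email"},
    ]

    # Score = fraction of rows where the attribute satisfies its rule.
    scores = {
        col: sum(rule(row.get(col)) for row in rows) / len(rows)
        for col, rule in rules.items()
    }
    print(scores)  # {'cust_id': 0.5, 'email': 0.5}
    ```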

    For how long have I used the solution?

    I have been using it for four or five years.

    What do I think about the stability of the solution?

    We had a couple of issues here and there, but nothing drastic. There has been a lot of adoption of the tool, increasing data usage. There have been a few issues with this, but not blackout-type issues, and we were able to recover.

    There were some stability issues in the very beginning. Things are getting better with its community piece.

    What do I think about the scalability of the solution?

    Scalability has room for improvement. It tends to slow down when we have large volumes of data, and it takes more time. They could scale better, as we have seen some degradation in performance when we work with large data sets.

    How are customer service and support?

    We have had some open tickets with them from time to time. They have responded promptly and provided solutions. There have been no issues.

    Support has changed hands many times, though we always land on a good support model. I would rate the technical support as seven out of 10.

    They cannot just custom-build solutions for us; these are things that they deliver and add to releases.

    How would you rate customer service and support?

    Neutral

    Which solution did I use previously and why did I switch?

    We were previously using Collibra and Talend data management. We switched to this tool to help us build our data mappings, and not just field-level mappings. There are also aspects of code set management, where we translate different codes and standardize them to enterprise codes. With the reference data management aspects of the tool, we can build our own data sets, and those data sets are also integrated with our data pipeline.

    We were definitely not sticking with the Talend tool because it increased our delivery time for data. When we were looking at other platforms, we needed a tool that captured data mapping in a way that a systematic program could actually read and understand, and then generate dynamic code for an ETL process or pipeline.

    How was the initial setup?

    It was through AWS. The package was very easy to install. 

    What was our ROI?

    If I used a traditional ETL tool and built everything through IT, it would take five days to take even a very simple data mapping to the deployment phase. Using this solution, that IT effort is cut down to less than a day. Since the business requirements are now captured directly in the tool, I don't need IT support to execute them. The only part being executed and deployed from the metadata is my ETL code, which is the information that the business captures. So, we can build data pipelines at a very rapid rate with a lot of accuracy.

    During maintenance windows, when things are changing and updating, business users normally would not have access to their ETL tool, the code, or the rules executed in the code. However, using this tool with its data governance and data mapping, what is captured is what actually gets executed: the rules are defined first, and then they are fed into the ETL process. This happens weekly because we dynamically generate the ETL from our business users' mappings. That is definitely a big advantage. Our data will never drift from the rules that the business has set up.
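
    The general pattern described here, business-maintained mappings stored as data from which ETL code is regenerated on a schedule, can be pictured with a small sketch. The mapping format and generated SQL below are invented; in the product, generation happens through Smart Data Connectors rather than code like this.

        # Python: regenerate ETL SQL from business-owned mapping rows
        # (hypothetical mapping format, for illustration only).
        mappings = [
            {"src": "stg.orders.ord_amt", "tgt": "dw.fact_order.amount",
             "rule": "CAST({src} AS DECIMAL(12,2))"},
            {"src": "stg.orders.ord_dt", "tgt": "dw.fact_order.order_date",
             "rule": "{src}"},
        ]

        def generate_etl_sql(mappings, target_table="dw.fact_order",
                             source_table="stg.orders"):
            """Compile mapping rows into a single INSERT..SELECT statement."""
            selects, columns = [], []
            for m in mappings:
                src_col = m["src"].split(".")[-1]
                tgt_col = m["tgt"].split(".")[-1]
                selects.append(m["rule"].format(src=src_col) + f" AS {tgt_col}")
                columns.append(tgt_col)
            return (f"INSERT INTO {target_table} ({', '.join(columns)})\n"
                    f"SELECT {', '.join(selects)}\n"
                    f"FROM {source_table};")

        print(generate_etl_sql(mappings))

    Because the mappings live as governed metadata, regenerating the code weekly keeps the executed ETL in lockstep with whatever rules the business has most recently defined.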

    If people cannot do discovery on their own, then you have to add a lot of manpower to support the business usage of the data. A lot of money is saved because we can run a very lean shop and don't have to onboard a lot of resources.

    What's my experience with pricing, setup cost, and licensing?

    The licensing cost was very affordable at the time of purchase. It has since been taken over by erwin, then Quest. The tool has gotten a bit more costly, but they are adding more features very quickly. 

    Which other solutions did I evaluate?

    We did a couple of demos with data catalog-type tools, but they didn't have the complete package that we were looking for.

    What other advice do I have?

    Our only systematic process for refreshing metadata is from the erwin Data Modeler tool. Whenever those updates are done, we then have a systematic way to update the metadata in our reference tool.

    I would rate the product as eight out of 10. It is a good tool with a lot of good features. We have a whole laundry list of things that we are still looking for, which we have shared with them, e.g., improving stability and the product's overall health. The cost is going up, but it provides us all the information that we need. The basic building blocks of our governance are tightly coupled with this tool.

    Which deployment model are you using for this solution?

    Public Cloud

    If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

    Amazon Web Services (AWS)
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    Manoj Narayanan - PeerSpot reviewer
    Practice Director - Digital & Analytics Practice at HCL Technologies
    Real User
    Top 5Leaderboard
    Metadata harvesters, data catalogs, and business glossaries help standardize data and create transparency
    Pros and Cons
    • "erwin has tremendous capabilities to map right from the business technologies to the endpoint, such as physical entities and physical attributes, from a lineage standpoint."
    • "Another area where it can improve is by having BB-Graph-type databases where relationship discovery and relationship identification are much easier."

    What is our primary use case?

    Our clients use it to understand where data resides, for data cataloging purposes. It is also used for metadata harvesting, for reverse engineering, and for scripting to build logic and to model data jobs. It's used in multiple ways and to solve different types of problems.

    How has it helped my organization?

    Companies will say that data is their most valuable asset. If you, personally, have an expensive car or a villa, those are valued assets and you make sure that the car is taken for service on a regular basis and that the house is painted on a regular basis. When it comes to data, although people agree that it is one of the most valued assets, the way it is managed in many organizations is that people still use Excel sheets and manual methods. In this era, where data is growing humongously on a day-to-day basis—especially data that is outside the enterprise, through social media—you need a mechanism and process to handle it. That mechanism and process should be amply supported with the proper technology platform. And that's the type of technology platform provided by erwin, one that stitches data catalogs together with business glossaries and provides intelligent connectors and metadata harvesters. Gone are the days where you can use Excel sheets to manage your organization. erwin steps up and changes the game to manage your most valued asset in the best way possible.

    The solution allows you to automate critical areas of your data governance and data management infrastructure. Manual methods for managing data are no longer practical. Rather than that, automation is really important. Using this solution, you can very easily search for something and very easily collaborate with others, whether it's asking questions, creating a change request, or creating a workflow process. All of these aspects are really important. With this kind of solution, all the actions that you've taken, and the responses, are in one place. It's no longer manual work. It reduces the complexity a lot, improves efficiency a lot, and time management is much easier. Everything is in a single place and everybody has an idea of what is happening, rather than one-on-one emails or somebody having an Excel sheet on their desktop.

    The solution also affects the transparency and accuracy of data movement and data integration. If people are using Excel sheets, there is my version of truth versus your version of truth. There's no source of truth. There's no way an enterprise can benefit from that kind of situation. Bringing in standardization across the organization happens only through tools like metadata harvesters, data catalogs, business glossaries, and stewardship tools. This is what helps bring transparency.

    The AIMatch feature, to automatically discover and suggest relationships and associations between business terms and physical metadata, is another very important aspect because automation is at the heart of today's technology. Everything is planned at scale. Enterprises have many data users, and the number of data users has increased tremendously in the last four or five years, along with the amount of data. Applications, data assets, databases, and integration technologies have all evolved a lot in the last few years. Going at scale is really important and automation is the only way to do so. You can't do it working manually.

    erwin DI’s data cataloging, data literacy, and automation have reduced a lot of complexities by bringing all the assets together and making sense out of them. It has improved the collaboration between stakeholders a lot. Previously, IT and business were separate things. This has brought everybody together. IT and business understand the need for maintaining data and having ownership for that data. Becoming a data-literate organization, with proper mechanisms and processes and tools to manage the most valued assets, has definitely increased business in terms of revenues, customer service, and customer satisfaction. All these areas have improved a lot because there are owners and stewards from business as well as IT. There are processes and tools to support them. The solution has helped our clients a lot in terms of overall data management and driving value from data.

    What is most valuable?

    • Metadata harvesting
    • Business glossaries and data catalogs

    In an enterprise, there has already been a lot of investment in technology over the last one or two decades. It's not practical for an organization to scrap what it has built over that time and embrace new technology. It's important for us to ensure that whatever investments have been made can still be used. erwin's metadata managers, metadata hooks, and reverse engineering capabilities ensure that existing implementations and technology investments are not scrapped but leveraged to the maximum. These are unique features that the competition lacks, though many of them are catching up. erwin is one of the top providers in those areas. Customers are interested because it's not scrap-and-rebuild; rather, it's building on what they already have.

    I would rate the solution’s integrated data catalog and data literacy, when it comes to mapping, profiling, and automated lineage analysis at eight out of 10. erwin has tremendous capabilities to map right from the business technologies to the endpoint, such as physical entities and physical attributes, from a lineage standpoint. Metadata harvesting is also an important aspect for automating the whole thing. And cataloging and business glossaries cannot work on their own. They need to go hand-in-glove when it comes to actual data analysis. You need to be able to search and find out what data resides where. It is a very well-stitched, integrated solution.

    In terms of the Smart Data Connectors, automating metadata for reverse engineering or forward engineering is a great capability that erwin provides. Keeping technology investments intact is something which is very comforting for our clients and these capabilities help a client build on, rather than rebuild. That is one of the top reasons I go for erwin, compared to the competition.

    What needs improvement?

    I would like to see a lot more AI infusion into all the various areas of the solution. 

    Another area where it can improve is by having graph-type databases, where relationship discovery and relationship identification are much easier.

    Overall, automation for associating business terms to data items, and having automatic relationship discovery, can be improved in the upcoming releases. But I'm sure that erwin is innovating a lot.

    For how long have I used the solution?

    We have been implementing erwin Data Intelligence for Data Governance since the 2017-2018 time frame. We don't use it in our own company, but we build capabilities in the tool, as well as learn how best to implement it, service it, and so on. We understand the full potential of the tool. We recommend it to our customers during RFPs, and then we help them use the product.

    HCL Technologies is one of the top three IT services organizations in India, with around 150,000 employees. We have a practice specifically for data and analytics, and within that we cover data governance, data modeling, and data integration. I lead the data management practice, including glossary, business lineage, and metadata integration. I have used all of that.

    We are Alliance partners with erwin and have partnered with them for three or four years.

    We serve many clients and we have a fortnightly catch up with erwin Alliance people. We have implemented it in different ways for our customers.

    What do I think about the stability of the solution?

    It is stable. 

    What do I think about the scalability of the solution?

    It can scale to large numbers of people and processes. It can connect to multiple sources of data within an organization to harvest metadata. It can connect to multiple data assets to bring the metadata into the solution. From a performance standpoint, a scaling standpoint, we've not seen an issue.

    How are customer service and technical support?

    We are Alliance partners, so whenever we go to clients and there are specific instances where we lack thorough knowledge of the erwin tools, we touch base with erwin's product team. We have worked together to tweak the product or to give our clients a seamless experience. 

    We have also had their Alliance team give our developer community sessions on erwin DI, its usage, and PoCs. We've collaborated multiple times with erwin's product presales community.

    How was the initial setup?

    It's really straightforward. The tools are user-friendly, so a business user can get going very quickly. It's easy to create terminologies and give definitions. Even for an IT person, you don't need to be an architect to understand how data catalogs work or how mappings can be created between data elements. Everything is UI-driven, so it's very easy to deploy or to create an overall data ecosystem.

    The time it takes to deploy depends. Product deployment may not take a lot of time, between a couple of days and a week. I have not done it for an enterprise, but I'm assuming that it wouldn't be too much of a task to deploy erwin in an organization.

    The important aspect is to bring in the data literacy and increase use throughout the organization to start seeing the benefit. People may not move from their comfort zone so easily. That is the part that can take time. And that is where a partner like us, one that can bring change management into the organization and hand-hold the organization as it starts using the tool, can help them understand the benefits. It is not enough that the CEO or CTO of the organization understands the benefits and decides to go for it; all the people—senior management, mid-management, and below—should buy into the idea. They only buy into the idea if they see the benefit from it, and for that, they need to start using the product. That is what takes time.

    Our deployment plan is similar across organizations, but building the catalog and building the glossaries would depend on the organization. Some organizations have a very strong top-down push and the strategy can be applied in a top-down approach. But in some cases, we may still need to get the buy-in. In those cases we would have to start small, with a bottom-up approach, and slowly encourage people to use it and scale it to the enterprise. From a tool-implementation standpoint, it might be all the same, but scaling the tool across the organization may need different strategies.

    In our organization, there are 400 to 500 people, specifically on the data management side, who work for multiple clients of ours. They are developers, leads, and architects, at different levels. The developers and the leads look at the deployment and actual business glossary and data catalog creation using the tool for metadata harvesting, forward engineering, and reverse engineering. The architects generally connect with the business and IT stakeholders to help them understand how to go about things. They create business glossaries and business processes on paper and those are used as the design for the data leads who then use the tool to create them.

    What was our ROI?

    We struggle when it comes to ROI because data governance and data management are parts of an enterprise strategy, as opposed to a specific, pinpointed problem. An organization might be able to use the overall data management strategy for multiple things, whether it's customer satisfaction, customer churn, targeted marketing, or improving the bottom line. When we clean the data and bring some method to the madness, it creates a base and, from there, an organization can really start reaping the benefits.

    They can apply analytics to the clean data and have the right ownership of the data. The overall process is important, as it is the base from which an organization can start asking: "Now that I have the right data and it is quality compliant, what can I deduce from it?" There may not be a dollar value to that straight away, but if you really want to get dollar value from your data, you need to have the base set properly. Otherwise, it is garbage in, garbage out. Organizations understand that, even if there is no specific increase in sales or bottom-line improvement. Even if the dollar value is not apparent to the customer, they understand that this process is important for getting to that stage. That is where the return on investment comes in.

    What's my experience with pricing, setup cost, and licensing?

    The solution is aggressively priced; it can compete with most of the alternatives.

    It is up to erwin and its pricing strategy, but if the Smart Connectors—at least a few of them which are really important—can be embedded into the product, that would be great. 

    But overall, I feel the pricing is correct right now.

    Which other solutions did I evaluate?

    There are a number of competitors, including Informatica, IBM, Collibra, and Alation; multiple organizations offer similar features. But erwin has an edge in metadata harvesting.

    What other advice do I have?

    It is a different experience. Collaboration and communication are very important when you want to harvest the value from the humongous amount of data that you have in your organization. All these aspects are soft aspects, but are very important when it comes to getting value from data.

    Data pipelines are really important because of the kinds of data that are spread across different formats and differing granularities. You need a pipeline that removes all the complexities and connects many types of sources, to bring data into any type of target. Irrespective of the kind of technology you use, your data platform should be adaptive enough to bring data in from any type of source, at any interval, in real time. It should handle any volume of data, structured and unstructured. That kind of pipeline is very important for any analysis, because you need to bring in data from all types of sources. Only then can you do a proper analysis of the data. A data pipeline is the heart of the analysis.

    Overall, erwin DI is not so costly and it brings a lot of unique features, like metadata hooks and metadata harvesters, along with the business glossaries, business to business mapping, and technology mapping. The product has so many nice features. For an organization that wants to realize value from the potential of its data, it is best to go with erwin and start the journey.

    Disclosure: My company has a business relationship with this vendor other than being a customer: Alliance Partner
    Maximilian Te - PeerSpot reviewer
    Business Intelligence BA at a insurance company with 10,001+ employees
    Real User
    Top 20
    Good traceability and lineage features, impact analysis is helpful in the decision-making process, and the support is good
    Pros and Cons
    • "Overall, DI's data cataloging, data literacy, and automation have helped our decision-makers because when a source wants to change something, we immediately know what the impact is going to be downstream."
    • "There is room for improvement with the data cataloging capability. Right now, there is a list of a lot of sources that they can catalog, or they can create metadata upon, but if they can add more then that would be a good plus for this tool."

    What is our primary use case?

    Our work involves data warehousing and we originally implemented this product because we needed a tool to document our mapping documents.

    As a company, we are not heavily invested in the cloud. Our on-premises deployment may change in the future but it depends on infrastructure decisions.

    How has it helped my organization?

    The automated data lineage is very useful. We used to work in Excel, and there is no way to trace the lineage of the data. Since we started working with DI, we have been able to quickly trace the lineage, as well as do an impact analysis.

    We do not use the ETL functionality. I do know, however, that there is a feature that allows you to export your mapping into Informatica.

    Using this product has improved our process in several ways. When we were using Excel, we did not know for sure that what was entered in the database was what had been entered into Excel. One of the reasons for this is that the Excel documents contained a lot of typos. Often, we didn't know the data type or the data length, and these are some of the reasons that lineage and traceability are important. Prior to this tool, our traceability was zero. Now, because we're able to create metadata from our databases, it's easier for us to create mappings. As a result, the typos have virtually disappeared, because we just drag and drop each field instead of typing it.

    Another important thing is that with Excel, it is too cumbersome, or next to impossible, to document the source paths for XSD files. With DI, since we're able to model the XSD in the tool, we can drag and drop and we don't have to type the source path. It's automatic.
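
    As an illustration of why deriving XSD source paths automatically matters, the sketch below walks a simplified schema and emits a slash-delimited path for every leaf element, the kind of path that would otherwise be typed by hand. The schema and path format are invented for illustration; real schemas are far larger.

        # Python: derive source paths from a simplified, hypothetical XSD.
        import xml.etree.ElementTree as ET

        XS = "{http://www.w3.org/2001/XMLSchema}"

        SAMPLE_XSD = """
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:element name="Policy">
            <xs:complexType><xs:sequence>
              <xs:element name="Holder">
                <xs:complexType><xs:sequence>
                  <xs:element name="Name" type="xs:string"/>
                </xs:sequence></xs:complexType>
              </xs:element>
              <xs:element name="Premium" type="xs:decimal"/>
            </xs:sequence></xs:complexType>
          </xs:element>
        </xs:schema>
        """

        def leaf_paths(element, prefix=""):
            """Recursively collect slash-delimited paths to leaf elements."""
            path = prefix + "/" + element.get("name")
            children = element.findall(f"./{XS}complexType/{XS}sequence/{XS}element")
            if not children:
                return [path]
            paths = []
            for child in children:
                paths.extend(leaf_paths(child, path))
            return paths

        root = ET.fromstring(SAMPLE_XSD)
        for top in root.findall(XS + "element"):
            print("\n".join(leaf_paths(top)))  # /Policy/Holder/Name, /Policy/Premium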

    This tool has taken us from having nothing to being very efficient. It's really hard to compare because we have never had these features before.

    The data pipeline definitely improved the speed of analysis in our use case. We have not timed it, but having the lineage and being able to just click makes everything easier and faster. We believe we are the envy of other departments that are not using DI. For them, conducting an impact analysis takes perhaps a few minutes or even a few hours, whereas for us it takes less than one minute.

    We have automated parts of our data management infrastructure, and it has had a positive effect on our quality and speed of delivery. We have a template that the system uses to create SQL code for us. The code handles the moving of data, and for direct-move fields it means we don't need a person to code the operation. Instead, we just run the template.
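
    A direct-move template of the kind described can be pictured as a parameterized SQL skeleton expanded per mapping. The template text, table names, and column pairs below are invented illustrations, not the product's actual template mechanism.

        # Python: expand a reusable "direct move" SQL template per mapping
        # (template and mappings are hypothetical examples).
        from string import Template

        DIRECT_MOVE = Template(
            "INSERT INTO $tgt_table ($tgt_cols)\n"
            "SELECT $src_cols\n"
            "FROM $src_table;"
        )

        move = {
            "src_table": "stg.customer",
            "tgt_table": "dw.dim_customer",
            "pairs": [("cust_id", "customer_key"), ("cust_name", "customer_name")],
        }

        def render(move):
            """Fill the template from source/target column pairs."""
            src_cols = ", ".join(s for s, _ in move["pairs"])
            tgt_cols = ", ".join(t for _, t in move["pairs"])
            return DIRECT_MOVE.substitute(
                src_table=move["src_table"], tgt_table=move["tgt_table"],
                src_cols=src_cols, tgt_cols=tgt_cols)

        print(render(move))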

    The automation that we use is isolated and not for everything, but it affects our cost and risk in a positive way because it works efficiently to produce code.

    It is reasonable to say that DI's generation of production code through automated code engineering reduces the cost from initial concept to implementation. However, it is only a small percentage of our usage.

    With respect to the transparency and accuracy of data movement and data integration, this solution has had a positive impact on our process. If we bring a new source system into the data warehouse and the interconnection between that system and us is through XML, then it's easier for us to start the mapping in DI. It is both efficient and effective. Downstream, things are more efficient as well. It used to take days for the BAs to do the mapping; now, it probably takes less than one hour.

    We have tried the AIMatch feature a couple of times, and it was okay. It is intended to help automatically discover relationships and associations in data and I found that it was positive, albeit more relevant to the data governance team, of which I am not part. I think that it is a feature in its infancy and there is a lot of room for improvement.

    Overall, DI's data cataloging, data literacy, and automation have helped our decision-makers because when a source wants to change something, we immediately know what the impact is going to be downstream. For example, if a source were to say "Okay, we're no longer going to send this field to you," then immediately we will know what the impact downstream will be. In response, either we can inform upstream to hold off on making changes, or we can inform the departments that will be impacted. That in itself has a lot of value.

    What is most valuable?

    The most valuable features are lineage and impact analysis. In our use case, we deal with data transformations from multiple sources into our data warehouse. As part of this process, we need traceability of the fields, either from the source or from the presentation layer. If something is changing then it will help us to determine the full impact of the modifications. Similarly, if we need to know where a specific field in the presentation layer is coming from, we can trace it back to its location in the source.
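
    Under the hood, impact analysis of this kind amounts to downstream reachability over a lineage graph. The sketch below shows the idea with invented field names and edges; the product builds its lineage graph from the captured mappings.

        # Python: impact analysis as downstream reachability (BFS) over a
        # lineage graph with invented example edges.
        from collections import defaultdict, deque

        edges = [
            ("src.policy.premium", "stg.policy.premium"),
            ("stg.policy.premium", "dw.fact_policy.premium_amt"),
            ("dw.fact_policy.premium_amt", "mart.report.total_premium"),
        ]

        downstream = defaultdict(list)
        for parent, child in edges:
            downstream[parent].append(child)

        def impact(field):
            """Return every field affected if `field` changes."""
            seen, queue = set(), deque([field])
            while queue:
                for nxt in downstream[queue.popleft()]:
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            return sorted(seen)

        print(impact("src.policy.premium"))

    Tracing a presentation-layer field back to its source is the same traversal run against the reversed edges.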

    The feature used to fill in metadata is very useful for us because we can replicate the data into our analytics environment as metadata.

    What needs improvement?

    Improvement is required for the AIMatch feature, which is supposed to help automatically discover relationships in data. It is a feature that is in its infancy and I have not used it more than a few times.

    There is room for improvement with the data cataloging capability. Right now, there is a list of a lot of sources that they can catalog, or create metadata upon, but if they could add more, that would be a good plus for this tool. The reason we need this functionality is that we don't use the modeling tool that erwin has. Instead, we use a tool called Power Viewer. Both erwin and Power Viewer can create XSD files, but you cannot import a file created by Power Viewer into erwin. If they were more compatible with Power Viewer and other data modeling solutions, it would be a plus. As it is now, if we have a data model exported into XSD format from Power Viewer, it's really hard, or next to impossible, to import it into DI.

    We have a lot of projects and a large number of users, and one feature that is missing is being able to assign users to groups. For example, it would be nice to have IDs such that all of the users from finance have the same one. This would make it much easier to manage the accounts.

    For how long have I used the solution?

    We have been using erwin Data Intelligence (DI) for Data Governance since 2013.

    What do I think about the stability of the solution?

    The stability of DI has come a long way. Now, it's very stable. If I were rating it six years ago, my assessment would definitely have been different. At this time, however, I have no complaints.

    What do I think about the scalability of the solution?

    We have the enterprise version and we can add as many projects as we need to. It would be helpful if we had a feature to keep better track of the users, such as a group membership field.

    We are the only department in the organization that uses this product. This is because, in our department, we handle data warehousing, and mapping documentation is very important. It is like a bible to us and without it, we cannot function properly. We use it very extensively and other departments are now considering it.

    In terms of roles, we have BAs with read-write access. We also have power users, who are the ones that work with the data catalog, create the projects, and make sure that the metadata is all up-to-date. Maintenance of this type also ensures that metadata is removed when it is no longer in use. We have QA/Dev roles that are read-only. These people read the mapping and translate it into code, or do QA on it. Finally, we have an audit role, where the users have read-only access to everything.

    One tip I have for users: if a mapping document is very large, for example more than a few hundred rows, it's easier to download it, edit it in Excel, and upload it again.

    All roles considered, we have between 30 and 40 users.

    How are customer service and technical support?

    The technical support is good.

    When erwin took over this product from the previous company, the support improved. The previous company was not as large; erwin is more structured and has processes in place. For example, if we report issues, erwin has its own portal, and we have a specific channel to go through, whereas previously we contacted support through our account manager.

    Which solution did I use previously and why did I switch?

    Other than what we were doing with Excel, we were not using another solution prior to this one.

    How was the initial setup?

    We have set up this product multiple times. The first setup was very challenging, but that was before erwin inherited or bought this product from the original developer. When erwin took over, there were lots of improvements made. As it is now, the initial setup is not complex and is no longer an issue. However, when we first started in 2013, it was a different story.

    When we first deployed, close to 10 years ago, we were new to the product and we had a lot of challenges. It is now fairly easy to do and moreover, erwin has good support if we run into any trouble. I don't recall exactly how long it took to initially deploy, but I would estimate a full day. Nowadays, given our experience and what we know, it would take less than half a day. Perhaps one or two hours would be sufficient.

    Deploying the tool by itself delivers no value, because it's not a transactional system where, for example, I can do things like point of sale. In the case of this product, the value starts when the BAs create the mappings. That said, once it's deployed, the BAs can begin working to create mappings immediately. We can perform data cataloging and, given the correct connections, for example to Oracle, we can begin to use the tool right away. In that sense, there is good time-to-value, and it requires minimal support to get everything running.

    We have an enterprise version, so if a new department wants to use it then we don't need to install it again. It is deployed on a single system and we give access to other departments, as required. As far as installing the software on a new machine, we have a rough plan that we follow but it is not a formal one that is written down or optimized for efficiency.

    What about the implementation team?

    We had support from our reseller during the initial setup but they were not on-site.

    Maintenance is done in-house and we have at least three people who are responsible. Because of our company structure, there is one who handles the application or web server. A second person is responsible for AWS, and finally, there is somebody like me on the administrative side.

    What was our ROI?

    We used to calculate ROI several years ago but are no longer concerned with it. This product is very effective and it has made our jobs easier, which is a good return.

    What's my experience with pricing, setup cost, and licensing?

    We operate on a yearly subscription and because it is an enterprise license we only have one. It is not dependent on the number of users. This product is not expensive compared to the other ones on the market.

    We did not buy the full DI, so the Business Glossary costs us extra. As such, we receive two bills from erwin every year.

    Which other solutions did I evaluate?

    We evaluated Informatica but after we completed a cost-benefit analysis, we opted to not move forward with it.

    What other advice do I have?

    My advice for anybody who is considering this product is that it's a useful tool. It is good for lineage and good for documenting mappings. Overall, it is very useful for data warehousing, and it is not expensive compared to similar solutions on the market.

    I would rate this solution a nine out of ten.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    Jim Marr - PeerSpot reviewer
    Analytics Delivery Manager at DXC
    Real User
    Top 5Leaderboard
    Value is in the accuracy, quality, and completeness of the migration source to target mapping and acceleration of development through code automation.
    Pros and Cons
    • "We use the codeset mapping quite a bit to match value pairs to use within the conversion as well. Those value pair mappings come in quite handy and are utilized quite extensively. They then feed into the automation of the source data extraction, like the source data mapping of the source data extraction, the code development, forward engineering using the ODI connector for the forward automation."
    • "One big improvement we would like to see would be the workflow integration of codeset mapping with the erwin source to target mapping. That's a bit clunky for us. The two often seem to be in conflict with one another. Codeset mappings that are used within the source to target mappings are difficult to manage because they get locked."

    What is our primary use case?

    We use DI for Data Governance as part of a large system migration supporting an application refresh and multi-site consolidation. Metadata Manager is utilized to harvest metadata, which is augmented with custom metadata properties identifying rule criteria that drive automated source-to-target mapping. A custom-built code generation connector then automates forward-engineering code generation in Groovy. We've developed a small number of connectors supporting this 1:1 data migration. It's a really good product that we've been able to make very good use of.

    How has it helped my organization?

    This use case is a one-time system conversion solution that does not have a life after the migration. The value is in the acceleration, accuracy, quality, and completeness of the migration's source-to-target mapping and the generated data management code.

    The action in this use case is the extraction and staging of the source application data, targeting ~700 large objects from the overall application set of ~2,400 relational tables. Each table extract has light join and selection criteria, which are injected into the source metadata. The application itself is moving to a next-generation application that performs the same business function. Our client is in health and human services welfare administration in the United States. This use case doesn't include ongoing data governance for our client, at least at this point.
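
    The idea of injecting rule criteria into harvested metadata can be sketched as follows: custom properties attached to a table's metadata drive the generated extract SQL. The property names, table, and criteria below are hypothetical illustrations, not the project's actual rules or erwin's property model.

        # Python: generate an extract query from metadata augmented with
        # custom properties (names and criteria are invented).
        table_meta = {
            "name": "CASE_NOTE",
            "columns": ["CASE_ID", "NOTE_TXT", "CREATED_DT"],
            "custom": {
                "extract_filter": "CREATED_DT >= DATE '2015-01-01'",
                "extract_join": "JOIN CASE_HDR h ON h.CASE_ID = CASE_NOTE.CASE_ID",
            },
        }

        def extract_sql(meta, staging_schema="STG"):
            """Build a staging INSERT..SELECT from the injected criteria."""
            cols = ", ".join(f"{meta['name']}.{c}" for c in meta["columns"])
            sql = [f"INSERT INTO {staging_schema}.{meta['name']}",
                   f"SELECT {cols}",
                   f"FROM {meta['name']}"]
            if meta["custom"].get("extract_join"):
                sql.append(meta["custom"]["extract_join"])
            if meta["custom"].get("extract_filter"):
                sql.append("WHERE " + meta["custom"]["extract_filter"])
            return "\n".join(sql) + ";"

        print(extract_sql(table_meta))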

    erwin DIS has enabled us to automate critical areas of our data management infrastructure. That's where we see the benefit: acceleration of speed, acceleration of quality, and reduction of costs.

    erwin DIS's generation of data management code through automated code engineering has reduced the time it takes to go from initial concept to implementation for what we're working on right now. There has been no production delivery as of yet; that's still another year and a half out. This is a multi-year project in which this use case is applied.

    erwin has improved the transparency and accuracy of data movement and data integration quite a bit through the various reporting facilities. We can make self-service reporting available through the business users' portal. erwin DIS has provided the framework and the capability to be transparent and to have stakeholder involvement throughout the exercise.

    Through the business users' portal and workflows, we're able to provide effective stakeholder reviews, as well as stakeholder access to all of the information and knowledge that is collected. The facility itself provides quite a few capabilities via user-defined parameters to capture data knowledge and organizational change information, which project stakeholders can use and apply throughout the program. The client and stakeholders utilize the business users' portal for extended visibility, which is a big benefit.

    We're interested in the AIMatch feature. It's something that we had worked with AnalytiX DS early on to actually develop some of the ideas for. We were somewhat instrumental in bringing some of that technology in, but in this particular case, we're not using it. 

    What is most valuable?

    The most valuable features include: 

    • The mapping facilities
    • All of the mapping controls workflow
    • The metadata injection and custom metadata properties for quality of mappings
    • The various mapping tools and reports that are available
    • Gap analysis
    • Model gap analysis
    • Codesets and codeset value mapping 

    We use the codeset mapping quite a bit to match value pairs for use within the conversion as well. Those value-pair mappings come in quite handy and are utilized extensively. They then feed into the automation of source data extraction and mapping, code development, and forward engineering using the ODI connector for the forward automation.
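
    Conceptually, a codeset value-pair mapping is just a lookup applied during conversion, with unmapped values surfaced for follow-up. The codes and records below are invented for illustration.

        # Python: code set (value pair) translation during conversion
        # (legacy and enterprise codes are invented examples).
        GENDER_CODESET = {  # legacy value -> enterprise standard value
            "M": "MALE",
            "F": "FEMALE",
            "U": "UNKNOWN",
        }

        def translate(value, codeset, default="UNMAPPED"):
            """Map a legacy code to the enterprise standard, flagging gaps."""
            return codeset.get(value, default)

        rows = [{"id": 1, "gender": "M"}, {"id": 2, "gender": "X"}]
        for row in rows:
            row["gender_std"] = translate(row["gender"], GENDER_CODESET)
        print(rows)  # id 2 surfaces as UNMAPPED for data-quality follow-up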

    The Smart Data Connectors, used to reverse engineer and forward engineer code for BI, ETL, or data management platforms, are where we're gaining the most value. The capability is limited only by one's imagination and ability to come up with innovative ideas; we have been able to apply some form of automation to every idea we've come up with. That's been quite good.

      What needs improvement?

      The UI just got a big uplift, but behind the UI there are quite a few different integrations that go on.

      One big improvement we would like to see would be the workflow integration of codeset mapping with the erwin source-to-target mapping. That's a bit clunky for us. The two often seem to be in conflict with one another. Codeset mappings that are used within the source-to-target mappings are difficult to manage because they get locked.

      Some areas take time to process, such as metadata scans and some of the management functions at large scale. That's an observation we have worked through with erwin support to a degree, but it seems to be an inherent part of the scale of our particular project.

      For how long have I used the solution?

      We're in our second year of using DI for Data Governance.

      What do I think about the scalability of the solution?

      erwin's latest general release has addressed the performance of metadata sources having greater than 2,000 objects. Our use case has three metadata sources, each having ~2,400 relational objects. DIS provides good capability to organize projects and subject areas with multiple sublayers. All mappings have been set to synchronize with scanned metadata. Our solution has built close to 2,000 mappings and over 20,000 mapped code value pairs. So far so good; scanning and synchronizing metadata and reporting on enterprise gaps take some time to process, but not an unreasonable amount considering the work performed.

      How are customer service and technical support?

      erwin support is pretty good. We've had our struggles there, and I've gone through a lot of tickets. I'd rate them an eight out of ten.

      There have been a couple of product enhancement requests, one of which I've not been able to get traction on, regarding code set management and workflows. There's some follow-up that I have to do there; it doesn't seem to be a priority. It usually takes a couple of different discussions, or a deep dive, to reach a shared understanding of the problem before resolution. Sometimes that takes a little longer than I would like, but all in all, it's pretty good.

      What about the implementation team?

      We had erwin involved in the implementation. 

      I don't think it can be stood up quickly with minimal professional services. There's quite a bit of involvement. Integrating the solution into an environment's ecosystem has challenges that take some effort, especially if you're building new connectors. There's a good bit of effort in designing, preparing, planning, and building. It's pretty heavy as far as integration effort goes.

      What was our ROI?

      The client is thrilled with the higher-quality, lower-cost products and services.

      What's my experience with pricing, setup cost, and licensing?

      The financial model will be different. There is the cost of this software, but there are offsetting accelerations through the automation, as well as cost and efficiency gains. Don't be afraid of automation, and don't get hung up on losing revenue due to automation. What I've seen is that some financial managers resist automation that results in a reduction of labor revenue. These reductions are ideally overcome through additional engagements, improved customer satisfaction, quality, add-on support, and so on. Whatever the case, automation is a good thing.

      The fact that this solution can be hosted in the cloud does not affect the total cost of ownership. The licensing cost is the same whether we use the cloud or on-prem. It may be due to our partner agreements, but we do get some discounts, and there's some negotiated pricing already in place with our companies. I didn't see a difference between the cloud license and on-premises.

      What other advice do I have?

      We haven't integrated Data Catalog and Data Literacy yet. Our client is a little bit behind on being able to utilize these aspects that we've presented for additional value. 

      My advice would be to partner with an integrator. erwin has quite a few of them. If you're going to jump into this in earnest, you're going to need to have that experience and support.

      The biggest lesson I have learned is that the only limitation is the imagination. Anything is possible. This product has quite strong capabilities; I've seen what you can come up with in terms of innovative flows, processes, automation, and so on.

      The next lesson concerns how automation fits within a company's framework: embrace automation. There are some good quality points to continue with, certainly within data cataloging, data governance, and so forth. There's quite a bit of good capability there.

      I rate erwin Data Intelligence for Data Governance a nine out of ten. 

      Which deployment model are you using for this solution?

      Private Cloud

      If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

      Amazon Web Services (AWS)
      Disclosure: I am a real user, and this review is based on my own experience and opinions.
      Tracy Hautenen Kriel - PeerSpot reviewer
      Architecture Sr. Manager, Data Design & Metadata Mgmt at a insurance company with 10,001+ employees
      Real User
      Top 5Leaderboard

      Thanks for the great review! How do you find the interaction between the cloud instance of DIS obtaining metadata from on-prem DBMS solutions?
