erwin Data Intelligence by Quest Overview

erwin Data Intelligence by Quest is the #5 ranked solution in Data Governance tools. PeerSpot users give erwin Data Intelligence by Quest an average rating of 8.6 out of 10. erwin Data Intelligence by Quest is most commonly compared to Microsoft Purview. It is popular among the large enterprise segment, which accounts for 74% of users researching this solution on PeerSpot. The top industry researching this solution is computer software, accounting for 25% of all views.

What is erwin Data Intelligence by Quest?

The erwin Data Intelligence Suite (erwin DI) combines data catalog and data literacy capabilities for greater awareness of and access to available data assets, guidance on their use, and guardrails to ensure data policies and best practices are followed. Automatically harvest, transform and feed metadata from a wide array of data sources, operational processes, business applications and data models into a central data catalog. Then make it accessible and understandable within context via role-based views. This complete metadata-driven approach to data governance facilitates greater IT and business collaboration.

erwin Data Intelligence by Quest was previously known as erwin DG, erwin Data Governance.

erwin Data Intelligence by Quest Customers

Oracle, Infosys, GSK, Toyota Motor Sales, HSBC

erwin Data Intelligence by Quest Pricing Advice

What users are saying about erwin Data Intelligence by Quest pricing:
  • "Smart Data Connectors have some costs, and then there are user-based licenses. We spend roughly $150,000 per year on the solution. It is a yearly subscription license that basically includes the cost for Smart Data Connectors and user-based licenses. We have around 30 data stewards who maintain definitions, and then we have five IT users who basically maintain the overall solution. It is not a SaaS kind of operation, and there is an infrastructure cost to host this solution, which is our regular AWS hosting cost."
  • "The licensing cost was very affordable at the time of purchase. It has since been taken over by erwin, then Quest. The tool has gotten a bit more costly, but they are adding more features very quickly."
  • "We operate on a yearly subscription and because it is an enterprise license we only have one. It is not dependent on the number of users."

erwin Data Intelligence by Quest Reviews

    Senior Director at a retailer with 10,001+ employees
    Real User
    Top 10
    Gives a visual representation of how the data flows from tables to the metrics and has a big impact in terms of the transparency and understanding of data
    Pros and Cons
    • "Being able to capture different business metrics and organize them in different catalogs is most valuable. We can organize these metrics into sales-related metrics, customer-related metrics, supply chain-related metrics, etc."
    • "There may be some opportunities for improvement in terms of the user interface to make it a little bit more intuitive. They have made some good progress. Originally, when we started, we were on version 9 or 10. Over the last couple of releases, I've seen some improvements that they have made, but there might be a few other additional areas in UI where they can make some enhancements."

    What is our primary use case?

    Our primary use case is that we want to enable self-service for different business teams to be able to find data. We are using the erwin Data Intelligence platform to enable data literacy and to allow different users to find the data they need by using the data catalog.

    It can be hosted on-premises or in the cloud. We chose to run it in the cloud because the rest of our analytics infrastructure is running in the cloud, so it made natural sense to host it there.

    How has it helped my organization?

    I represent IT teams, and a lot of times, different business teams want to do data analysis. Before using erwin Data Intelligence Suite, they used to constantly come to IT teams to understand things like how the data is organized and what type of queries or tables they should use. It used to take a lot of my team's time to answer those questions, and some of those questions were pretty repetitive.

    With erwin Data Intelligence Suite, they can now do self-service. There is a business user portal with which they can search different tables, and they can do the search in different ways. If they already know the table name, they can just directly search for that table name, find the definition of each column, and that helps them understand how to use that table. In some cases, they may not know the exact table name, but they may know, for example, a business metric. In such a case, they can search by using the business metric because, inside the tool, business metrics are linked to the underlying tables from which they get calculated, so they can get to the table definitions through that route as well.

    This is helping all of our business analysts do self-service analytics while we enforce some governance around it. Because we enabled self-service for different business analysts, it has improved our speed. It has easily reduced at least 20% of the time that my IT team had to spend answering questions from different business teams. The benefit is probably even greater for the business teams; I think they are faster by at least 30% in terms of being able to get the data that they need and perform their analysis based on it. Overall, I would expect at least 25% savings in time.

    It has a big impact in terms of the transparency of the data. Everybody is able to find the data by using the catalog, and they can also see how the data is getting loaded through different tables. It has given a lot of transparency. Based on this transparency, we were able to have a good discussion about how the data should be organized so that we can collaborate better with the business in terms of data organization. We were also able to change some of our table structures, data models, etc.

    By using the data catalog, we have definitely improved in terms of maturity as a data-driven decision-maker organization. We are now getting to a level where everybody understands the data. Everyone understands how it is organized, and how they can use this data for different business decisions. The next level for us would be to go and use some of these advanced features such as AI Data Match.

    In terms of the effect of the data pipeline on our speed of analysis, understanding the data pipeline and the data flow is helpful in identifying a problem and resolving it quickly. A lot of times there is some level of ambiguity, and businesses don't understand how the data flows. Understanding the data pipeline helps them quickly identify problems. They can then solve the identified problems and bottlenecks in the data flow. For example, they can identify the data set that is required for a specific analysis and then bring in the data from another system.

    In terms of the money and time that the real-time data pipeline has saved us, it is hard to quantify the amount in dollars. In terms of time, it has saved us 25% time on the analysis part.

    It has allowed us to automate a lot of stuff for data governance. By using Smart Data Connectors, we are automatically able to pull metric definitions from our reporting solution. We are then able to put an overall governance and approval process on top of that. Whenever a new business metric needs to be created, the data stewards who have Write access to the tool can go ahead and create those definitions. Other data stewards can peer-review their definitions. Our automated workflow then takes that metric to an approved state, and it can be used across the company. We have definitely built a lot of good automation with the help of this tool.
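
    To make the shape of that workflow concrete, here is a minimal sketch of a metric-definition approval flow. The states and transition rules are assumptions for illustration, not the tool's actual workflow engine.

        from enum import Enum

        # Hypothetical approval workflow for business metric definitions
        # (states and transitions are illustrative, not erwin's actual model).
        class MetricState(Enum):
            DRAFT = "draft"
            IN_REVIEW = "in_review"
            APPROVED = "approved"

        ALLOWED = {
            (MetricState.DRAFT, MetricState.IN_REVIEW),     # steward submits
            (MetricState.IN_REVIEW, MetricState.APPROVED),  # peer review passes
            (MetricState.IN_REVIEW, MetricState.DRAFT),     # sent back for rework
        }

        def transition(current: MetricState, new: MetricState) -> MetricState:
            """Allow only the governed state changes; reject anything else."""
            if (current, new) not in ALLOWED:
                raise ValueError(f"Illegal transition: {current} -> {new}")
            return new

        state = transition(MetricState.DRAFT, MetricState.IN_REVIEW)
        state = transition(state, MetricState.APPROVED)  # now usable company-wide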

    It definitely affects the quality of data. A lot of times, different teams may have different levels of understanding, and they might have different definitions of a particular metric. A good example is the customer lifetime value. This metric is used by multiple departments, but each department can have its own metric definition. In such a case, they will get different values for this metric, and they won't make consistent decisions across. If they have a common definition, which is governed by this tool, then everybody can reference that. When they do the analysis, they will get the same result, which leads to a better quality of decision-making across the company.

    It affects data delivery in terms of making correct decisions. Ultimately, we are using all of this data to get some insights and then make decisions based on that. It is not so much of the cost but more of the risk that it affects.

    What is most valuable?

    Being able to capture different business metrics and organize them in different catalogs is most valuable. We can organize these metrics into sales-related metrics, customer-related metrics, supply chain-related metrics, etc. 

    Data catalog and data literacy are really the great capabilities of this solution that we leverage. A data catalog is coupled with the business glossary, which enables data literacy. In the business glossary, we can maintain definitions of different business terms and metrics, and then the data catalog can be searched with them. 

    Data lineage is very important to us. It is related to the origin of the data. For example, if a particular metric gets calculated from certain data, how did this data originate? From which source system or table did this data originate? After the data lineage is populated, some advanced users, such as data scientists, can use it to get to the details. Sometimes, they are interested in more raw data so that they can get details of the raw table from which these metrics are getting calculated. They can use that raw data for their machine learning or AI use cases.

    I used a couple of different Smart Data Connectors, and they were great. One of the Smart Data Connectors that we used was for our MicroStrategy BI solution, so it was a MicroStrategy Smart Data Connector. MicroStrategy is our enterprise reporting tool, and a lot of the metrics were already built in different reports in MicroStrategy, so it made sense to use this connector to extract all the metrics that were already being used. By using this connector, we could connect to MicroStrategy, pull all the metrics and reports from it, and then populate our business glossary with those details. This was a big advantage of using the MicroStrategy Smart Data Connector.

    Another Smart Data Connector that we used was the Python Connector. It enabled us to build the data lineage. We already have a lot of ETL-type processes built using Python and SQL, and this connector can reverse engineer them and graphically show how the data flows from the source. This work was done by our data engineers and IT teams, but the business teams didn't understand how it was built. By giving them a visual representation of it, they became more data literate. They understood how the data flows through different tables and ultimately lands in the final tables.
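
    The review doesn't describe how that reverse engineering works internally, but the general technique is to parse ETL SQL for target and source tables. A minimal, invented sketch of the idea (not the actual Smart Data Connector logic; the regexes and table names are illustrative):

        import re

        # Scan INSERT ... SELECT statements and record which source tables
        # feed which target table (a toy approximation of lineage harvesting).
        INSERT_RE = re.compile(r"INSERT\s+INTO\s+([\w.]+)", re.IGNORECASE)
        SOURCE_RE = re.compile(r"(?:FROM|JOIN)\s+([\w.]+)", re.IGNORECASE)

        def extract_lineage(sql_script: str) -> dict[str, set[str]]:
            """Map each target table to the set of source tables that feed it."""
            lineage: dict[str, set[str]] = {}
            for statement in sql_script.split(";"):
                target = INSERT_RE.search(statement)
                if not target:
                    continue
                sources = set(SOURCE_RE.findall(statement))
                lineage.setdefault(target.group(1), set()).update(sources)
            return lineage

        script = """
        INSERT INTO dw.sales_fact
        SELECT o.order_id, c.customer_id
        FROM staging.orders o
        JOIN staging.customers c ON o.customer_id = c.customer_id;
        """
        print(extract_lineage(script))
        # {'dw.sales_fact': {'staging.orders', 'staging.customers'}}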

    It can be stood up quickly with minimal professional services. Its installation and configuration are not that complicated, and we can easily and quickly stand it up. We wanted to get faster time to value, so we did a small professional services engagement to come up to speed on how to use the product. The installation and configuration themselves were pretty quick; most of the effort afterward went into making sure that we had the right processes established within the tool.

    What needs improvement?

    There may be some opportunities for improvement in terms of the user interface to make it a little bit more intuitive. They have made some good progress. Originally, when we started, we were on version 9 or 10. Over the last couple of releases, I've seen some improvements that they have made, but there might be a few other additional areas in UI where they can make some enhancements.


    For how long have I used the solution?

    We have been using erwin Data Modeler for quite a while. We have been using erwin Data Intelligence Suite for about one and a half years.

    What do I think about the scalability of the solution?

    It seems to be easily scalable. I haven't seen any problems so far from the scalability aspect. We have strong support, so whenever I have some issues or there is something for which I need technical support, their support is always there to answer the questions. Their support has been great.

    We have a few key users, such as data domain experts, and we have different business areas, such as marketing, sales, finance, supply chain, etc. Each of them has a domain expert who also has an account in erwin to maintain definitions. The rest of the organization kind of gets a read-only view into that. 

    We have about 30 people who can maintain those definitions, and the rest of the organization can find the data or the definitions of that data. These 30 people include data stewards or data domain specialists, and they maintain the definitions of different business terms, glossary terms, and business metrics. There are about five different IT users who actually configure data lineages and data catalog definitions. These are the core teams that basically make sure that the catalog and Data Intelligence Suite are populated with the data. There are more than 200 corporate business users who then find this data after it is populated in the catalog. 

    I would expect its usage to grow from 200 people to 2,000 people within the next year. When we become more mature at using this data and analytics, we will use the advanced features within the tool.

    How are customer service and support?

    In terms of the support that I'm getting, I'm able to get all my requests fulfilled. The only thing that happened was that erwin got sold and Quest acquired them, but so far, I haven't seen any issues because of this acquisition.

    When we upgraded the version, we had some issues related to Smart Data Connector not working properly, so we had to log a ticket with them, and they were responsive. They set up meetings with us to go through the problem and helped us in resolving the problem. Their support has been pretty responsive. When we submitted tickets, we got immediate attention and resolution.

    How was the initial setup?

    The initial setup was straightforward. We only had to work with the erwin team to get some of the Smart Data Connectors configured properly.

    Its deployment was within three months. Installing, configuring, and getting it up to speed wasn't that much of a pain. Getting business users to use the tool and making sure that they are leveraging it for their day-to-day tasks is what takes more time. It is more of a change management exercise.

    In terms of the implementation strategy, we worked with erwin's team. In fact, I hired their professional services as well because I wanted to make sure we got up to speed pretty quickly. The installation, configuration, and some of the cleaning were done with erwin's professional services team. After my team was up to speed, we retained some of the key data stewards. We included them as part of the planning, and they are now driving the adoption and use of the tool across the company.

    What about the implementation team?

    We didn't use any company other than erwin's team.

    You don't need many resources for its deployment and maintenance. You just need one person, and this person doesn't have to be full-time. Only in the initial stages do you have to spend time adjusting or populating the definitions.

    What was our ROI?

    We are in the early stage of ROI. The ROI is more in terms of the time that we saved on analysis. If the analysis is much faster, let's say by 30%, we can get some of the insights faster. We can then accordingly make business decisions, which will give the business value. So, right now, the ROI is in terms of being faster to market with some of the business decisions.

    What's my experience with pricing, setup cost, and licensing?

    Smart Data Connectors have some costs, and then there are user-based licenses. We spend roughly $150,000 per year on the solution.

    It is a yearly subscription license that basically includes the cost for Smart Data Connectors and user-based licenses. We have around 30 data stewards who maintain definitions, and then we have five IT users who basically maintain the overall solution. It is not a SaaS kind of operation, and there is an infrastructure cost to host this solution, which is our regular AWS hosting cost.

    Which other solutions did I evaluate?

    When we were looking for a data catalog solution, we evaluated two or three other solutions, including the data catalogs from both Alation and Collibra. We chose erwin because we liked the overall solution that erwin offered as compared to the other solutions.

    One of the great features that erwin provided was the mind map feature, which I did not see in any of the other tools that we evaluated. A mind map gives a visual representation of how the data flows from tables to the metrics. Another great feature was being able to pull the metric definitions automatically from our reporting system. These were the two great positives for us, which I did not see in the other solutions when we did the proof of concept.

    What other advice do I have?

    We are not using erwin's AI Match feature to automatically discover and suggest relationships and associations between business terms and physical metadata. We are still trying to get all of our data completely mapped in there. After that, we will get to the next level of maturity, which would be leveraging some of the additional features such as AI Match.

    Similarly, we have not used the feature for generating the production code through automated code engineering. Currently, we are primarily doing the automation by using Smart Data Connectors to build some data lineages, which is helping with the overall understanding of the data flow. Over the next few months, as it gets more and more updated, we might see some benefits in this area. I would expect at least 25% savings in time.

    It provides a real-time understandable data pipeline to some level. Advanced users can completely understand its real-time data pipeline. Average users may not be able to understand it.

    Any organization that is looking into implementing this type of solution should look at its maturity in terms of data literacy. This is where I really see the big challenge. It is almost like a change management exercise to make sure people understand how to use the data and to build some of the processes around data governance. The maturity of the organization is really critical, and you should make your plans to implement it accordingly.

    The biggest lesson that I have learned from using this solution is probably around how to maintain the data dictionary, which is really critical for enabling data literacy. A lot of times, companies don't have these data dictionaries built. Building the data dictionary and getting it populated into the data catalog is where we spend some of the time. A development process needs to be established to create this data dictionary and maintain it going forward. You have to just make sure that it is not a one-time exercise. It is a continuous process that should be included as part of the development process.

    I would rate erwin Data Intelligence for Data Governance an eight out of 10. If they can improve its user interface, it will be a great product.

    Which deployment model are you using for this solution?

    Public Cloud

    If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

    Amazon Web Services (AWS)
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    Architect at an insurance company with 10,001+ employees
    Real User
    Top 20
    Manages all our data governance activities around setting up metadata, data lineage, business clusters, and business metadata definitions
    Pros and Cons
    • "It is a central place for everybody to start any ETL data pipeline builds. This tool is being heavily used, plus it's heavily integrated with all the ETL data pipeline design and build processes. Nobody can bypass these processes and do something without going through this tool."
    • "We still need another layer of data quality assessments on the source to see if it is sending us the wrong data or if there are some issues with the source data. For those things, we need a rule-based data quality assessment or scoring where we can assess tools or other technology stacks. We need to be able to leverage where the business comes in, defining some business rules and have the ability to execute those rules, then score the data quality of all those attributes. Data quality is definitely not what we are leveraging from this tool, as of today."

    What is our primary use case?

    The Data Intelligence suite helps us manage all our data governance activities around setting up metadata, data lineage, business clusters, and business metadata definitions. These are all managed in this tool. There is definitely extended use of this data set, which includes using the metadata that we have built. Using that metadata, we also integrate with other ETL tools to pull the metadata and use it in our data transformations. It is very tightly coupled with our data processing in general. We also use erwin Data Modeler, which helps us build our metadata, business definitions, and the physical data model of the data structures that we have to manage. These two tools work hand-in-hand to manage our data governance metadata capabilities and support many business processes.

    I manage the data architecture plus the whole data governance team that designs the data pipelines. We designed the overall data infrastructure plus the actual governance processes. The stewards, who work with the data in the business, set up the metadata and manage this tool every day, end-to-end.

    How has it helped my organization?

    The benefit of the solution was the adoption by a lot of business partners who now use and leverage our data through our governance processes. We have metrics on how many users have been capturing and using it. We have data consultants and other data governance teams who are set up to review these processes and ensure that nobody is bypassing them. We use this tool in the middle of our work processes, for utilization of data on the tail-end, letting the business do self-service, and building our own IT things.

    When we manage our data processes, we know that there are upstream sources and downstream systems. We know that they could be impacted by changes coming in from the source, based on the lineage and impact analysis that this tool brings to the table. We have been able to identify system changes that could impact all downstream systems. That is a big plus because IT and production support teams are now able to use this tool to identify the impact of any issues with the data or any data quality gaps. They can notify all the recipients upfront with proactive business communications of any impacts.

    For any company mature enough to have implemented any of these data governance rules or principles, these are the building blocks of the actual process. This is critical because we want the business to self-serve. We can build data lakes or data warehouses using our data pipelines, but if nobody can actually use the data or see what information is available without going through IT, that defeats the whole purpose of doing this additional work. It is a data platform that allows any business process to come in and be self-service, building their own processes without a lot of IT dependencies.

    There is a data science function where a lot of critical operational reporting can be done. Users leverage this tool to be able to discover what information is available, and it's very heavily used.

    If we start capturing additional data about some metadata, then we can define our own user-defined attributes, which we can then start capturing. It does provide all the information that we want to manage. For our own processes, we have some special tags that we have been able to configure quickly through this tool to start capturing that information.

    We have our own homegrown solutions built around the data that we are capturing in the tool. We build our own pipelines and have our own homegrown ETL tools built using Spark and cloud-based ecosystems. We capture all the metadata in this tool and all the transformation business rules are captured there too. We have API-level interfaces built into the tool to pull the data at the runtime. We then use that information to build our pipelines.
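
    The reviewer doesn't spell out that interface, so purely as an illustration, here is a sketch of what pulling mapping metadata at runtime and folding it into generated pipeline code can look like. The endpoint, payload shape, and field names are invented assumptions, not erwin's actual API.

        import requests

        # Hypothetical metadata service; the URL, route, and JSON shape are
        # invented for illustration.
        BASE_URL = "https://metadata.example.com/api/v1"

        def fetch_mappings(project: str, session: requests.Session) -> list[dict]:
            """Pull source-to-target mapping rows for a project at runtime."""
            resp = session.get(f"{BASE_URL}/projects/{project}/mappings", timeout=30)
            resp.raise_for_status()
            return resp.json()["mappings"]

        def build_select_list(mappings: list[dict]) -> str:
            """Turn mapping rows into the SELECT clause of a generated job."""
            exprs = [
                f"{m.get('transformation') or m['source_column']} AS {m['target_column']}"
                for m in mappings
            ]
            return ",\n  ".join(exprs)

        with requests.Session() as session:
            select_list = build_select_list(fetch_mappings("claims_dw", session))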

    This tool allows us to bring in any data stewards in the business area to use this tool and set up the metadata, so we don't have to spend a lot of time in IT understanding all the data transformation rules. The business can set up the business metadata, and once it is set up, IT can then use the metadata directly, which feeds into our ETL tool.

    Impact analysis is a huge benefit because it gives us access to our pipeline and data mapping. It captures the source systems from which the data came, and for each source system, there is good lineage, so we can identify where the data originated. Then, it is loaded into our clean zone and data warehouse, where I have reports, data extracts, API calls, and the web application layer. This provides access to all the interfaces and how information is consumed. Impact analysis, at the IT and field levels, lets me determine:

    • What kind of business rules are applied. 
    • How data has been transformed from each stage. 
    • How the data is consumed and moved to different data marts or reporting layers. 

    Our visibility is now huge, creating a good IT and business process. With confidence, they can assess where the information is, who is using it, and what applications are impacted if that information is not available, inaccurate, or if there are any issues at the source. That impact analysis part is a very strong use case of this tool.

    What is most valuable?

    The most critical features are the metadata management and data mapping, which includes the reference data management and code set management. Its capabilities allow us to capture metadata plus use it to define how the data lineage should be built, i.e., the data mapping aspects of it. The data mapping component is a little unique to this tool, as it allows the entire data lineage and impact analysis to be easily done. It has very good visuals, which it displays in a build to show the data lineage for all the metadata that we are capturing.

    Our physical data mapping is done using this tool. The components of capturing the metadata and integrating the code set management and reference data management aspects with the data pipeline are unique to this tool. They are definitely the key differentiators that we were looking for when picking this tool.

    erwin DI provides visibility into our organization's data for our IT, data governance, and business users. There is a business-facing view of the data. There is an IT version of the tool that allows our IT users and data stewards, who work with the data, to set up the metadata. Then, the same tool has a very good business portal that takes the same information and presents it back, read-only, in a very business-user-friendly way. We call it the business portal. This suite of applications provides us end-to-end data governance from both the IT and business users' perspectives.

    It is a central place for everybody to start any ETL data pipeline builds. This tool is being heavily used, plus it's heavily integrated with all the ETL data pipeline design and build processes. Nobody can bypass these processes and do something without going through this tool.

    The business portal allows us to search the metadata and do data discovery. Business users come in and present data catalog-type information. This means all the metadata that we capture, such as AI masking, dictionaries, and the data dictionary, is set up as well. That aspect is very heavily used.

    There are a lot of Data Connectors that gather the metadata from all the different source systems and data stores. We configure those Data Connectors, then install them. The Data Connector that helps us load all the metadata from the erwin Data Modeler tool is XML-based.

    The solution delivers up-to-date and detailed data lineage. It provides you all the business rules that data fields are going through by using visualization. It provides very good visualization, allowing us to quickly assess the impact in an understandable way.

    All the metadata and business glossaries are captured right there in the tool. All of these data points are discoverable, so we can search through them. Once you know the business attribute you are looking for, then you are able to find where in the data warehouse this information lives. It provides you technical lineage right from the business glossary. It provides a data discovery feature, so you are able to do a complete discovery on your own.

    What needs improvement?

    Data quality has many facets, but we are definitely not using the core data quality features of this tool. The data quality has definitely improved because the core data stewards, data engineers, and business sponsors know what data they are looking for and how the data should move, and they are setting up those rules. However, we still need another layer of data quality assessments on the source to see if it is sending us the wrong data or if there are some issues with the source data. For those things, we need rule-based data quality assessment or scoring, for which we can assess this tool or other technology stacks. We need to be able to have the business come in and define business rules, with the ability to execute those rules and then score the data quality of all those attributes. Data quality is definitely not what we are leveraging from this tool, as of today.

    For how long have I used the solution?

    I have been using it for four or five years.

    What do I think about the stability of the solution?

    We had a couple of issues here and there, but nothing drastic. There has been a lot of adoption of the tool increasing data usage. There have been a few issues with this, but not blackout-type issues, and we were able to recover. 

    There were some stability issues in the very beginning. Things are getting better with its community piece.

    What do I think about the scalability of the solution?

    Scalability has room for improvement. It tends to slow down when we have large volumes of data, and it takes more time. They could scale better, as we have seen some degradation in performance when we work with large data sets.

    How are customer service and support?

    We have some open tickets with them from time to time. They have definitely promptly responded and provided solutions. There have been no issues.

    Support has changed hands many times, though we always land on a good support model. I would rate the technical support as seven out of 10.

    They cannot just custom build solutions for us. These are things that they will deliver and add to releases. 

    How would you rate customer service and support?

    Neutral

    Which solution did I use previously and why did I switch?

    We were previously using Collibra and Talend data management. We switched to this tool to help us build our data mapping, and not just field-level mapping. There are also aspects of code set management, where we are translating different source codes into standardized enterprise codes. With the reference data management aspects, we can build our own data sets within the tool, and those data sets are also integrated with our data pipeline.
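
    As a concrete (invented) illustration of what code set management does, translating source-system codes to standardized enterprise codes is essentially a governed crosswalk lookup:

        # Illustrative code set crosswalk: source-system codes mapped to
        # standardized enterprise codes (systems and values are invented).
        GENDER_CROSSWALK = {
            ("claims_sys", "M"): "MALE",
            ("claims_sys", "F"): "FEMALE",
            ("policy_sys", "1"): "MALE",
            ("policy_sys", "2"): "FEMALE",
        }

        def to_enterprise_code(source_system: str, code: str) -> str:
            """Translate a source-system code to the enterprise standard."""
            return GENDER_CROSSWALK.get((source_system, code), "UNKNOWN")

        assert to_enterprise_code("policy_sys", "1") == "MALE"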

    We were definitely not sticking with the Talend tool because it increased our delivery time for data. When we were looking at other platforms, we needed a tool that captured data mapping in a way that a systematic program could actually read and understand, and then generate dynamic code for an ETL process or pipeline.

    How was the initial setup?

    It was through AWS. The package was very easy to install. 

    What was our ROI?

    If I use a traditional ETL tool and build through a traditional IT process, it would take five days to build a very simple data mapping and get it to the deployment phase. Using this solution, the IT cost is cut down to less than a day. Since the business requirements are now captured directly in the tool, I don't need IT support to execute it. The only part being executed and deployed from the metadata is my ETL code, which is the information that the business will capture. So, we can build data pipelines at a very rapid rate with a lot of accuracy.

    During maintenance times, when things are changing and updating, businesses normally would not have access to their ETL tool, the code, and the rules executed in the code. However, using this tool with its data governance and data mapping, what is captured is what will actually be executed. The rules are first defined, then they are fed into the ETL process. This is done weekly because we dynamically generate the ETL from our business users' mapping. That definitely is a big advantage. Our data will never deviate from the rules that the business has set up.

    If people cannot do discovery on their own, then you will be adding a lot of resource power, i.e., manpower, to support the business usage of the data. A lot of money is saved because we can run a very lean shop and don't have to onboard a lot of resources. This saves a lot on manpower costs as well.

    What's my experience with pricing, setup cost, and licensing?

    The licensing cost was very affordable at the time of purchase. It has since been taken over by erwin, then Quest. The tool has gotten a bit more costly, but they are adding more features very quickly. 

    Which other solutions did I evaluate?

    We did a couple of demos with data catalog-type tools, but they didn't have the complete package that we were looking for.

    What other advice do I have?

    Our only systematic process for refreshing metadata is from the erwin Data Modeler tool. Whenever those updates are done, we then have a systematic way to update the metadata in our reference tool.

    I would rate the product as eight out of 10. It is a good tool with a lot of good features. We have a whole laundry list of things that we are still looking for, which we have shared with them, e.g., improving stability and the product's overall health. The cost is going up, but it provides us all the information that we need. The basic building blocks of our governance are tightly coupled with this tool.

    Which deployment model are you using for this solution?

    Public Cloud

    If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

    Amazon Web Services (AWS)
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    Maximilian Te - PeerSpot reviewer
    Business Intelligence BA at an insurance company with 10,001+ employees
    Real User
    Top 10
    Good traceability and lineage features, impact analysis is helpful in the decision-making process, and the support is good
    Pros and Cons
    • "Overall, DI's data cataloging, data literacy, and automation have helped our decision-makers because when a source wants to change something, we immediately know what the impact is going to be downstream."
    • "There is room for improvement with the data cataloging capability. Right now, there is a list of a lot of sources that they can catalog, or they can create metadata upon, but if they can add more then that would be a good plus for this tool."

    What is our primary use case?

    Our work involves data warehousing and we originally implemented this product because we needed a tool to document our mapping documents.

    As a company, we are not heavily invested in the cloud. Our on-premises deployment may change in the future but it depends on infrastructure decisions.

    How has it helped my organization?

    The automated data lineage is very useful. We used to work in Excel, and there is no way to trace the lineage of the data. Since we started working with DI, we have been able to quickly trace the lineage, as well as do an impact analysis.

    We do not use the ETL functionality. I do know, however, that there is a feature that allows you to export your mapping into Informatica.

    Using this product has improved our process in several ways. When we were using Excel, we did not know for sure that what was entered in the database was what had been entered into Excel. One of the reasons for this is that Excel documents contain a lot of typos. Often, we don't know the data type or the data length, and these are some of the reasons that lineage and traceability are important. Prior to this, it was zero. Now, because we're able to create metadata from our databases, it's easier for us to create mappings. As a result, the typos virtually disappeared because we just drag-and-drop each field instead of typing it. 

    Another important thing is that with Excel, it is too cumbersome or next to impossible to document the source path for XSD files. With DI, since we're able to model it in the tool, we can drag and drop and we don't have to type the source path. It's automatic.

    This tool has taken us from having nothing to being very efficient. It's really hard to compare because we have never had these features before.

    The data pipeline definitely improved the speed of analysis in our use case. We have not timed it but having the lineage, and being able to just click, makes it easier and faster. We believe that we are the envy of other departments that are not using DI. For them to conduct an impact analysis takes perhaps a few minutes or even a few hours, whereas, for us, it takes less than one minute to complete.

    We have automated parts of our data management infrastructure and it has had a positive effect on our quality and speed of delivery. We have a template that the system uses to create SQL code for us. The code handles the moving of data and if they are direct move fields, it means that we don't need a person to code this operation. Instead, we just run the template.
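
    The reviewer doesn't share the template itself, so the following is a hedged sketch of what template-driven generation for direct-move fields can look like; the mapping format and table names are invented.

        # Sketch of template-driven SQL generation for direct-move fields
        # (mapping format and table names invented for illustration).
        DIRECT_MOVES = [
            {"source": "stg.claims.claim_id", "target": "dw.claim_fact.claim_id"},
            {"source": "stg.claims.amount", "target": "dw.claim_fact.claim_amount"},
        ]

        def generate_direct_move_sql(mappings: list[dict]) -> str:
            """Render INSERT ... SELECT for fields moved without transformation."""
            target_table = mappings[0]["target"].rsplit(".", 1)[0]
            source_table = mappings[0]["source"].rsplit(".", 1)[0]
            tgt_cols = ", ".join(m["target"].rsplit(".", 1)[1] for m in mappings)
            src_cols = ", ".join(m["source"].rsplit(".", 1)[1] for m in mappings)
            return (
                f"INSERT INTO {target_table} ({tgt_cols})\n"
                f"SELECT {src_cols}\n"
                f"FROM {source_table};"
            )

        print(generate_direct_move_sql(DIRECT_MOVES))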

    The automation that we use is isolated and not for everything, but it affects our cost and risk in a positive way because it works efficiently to produce code.

    It is reasonable to say that DI's generation of production code through automated code engineering reduces the cost from initial concept to implementation. However, it is only a small percentage of our usage.

    With respect to the transparency and accuracy of data movement and data integration, this solution has had a positive impact on our process. If we bring a new source system into the data warehouse and the interconnection between that system and us is through XML then it's easier for us to start the mapping in DI. It is both efficient and effective. Downstream, things are more efficient as well. It used to take days for the BAs to do the mapping and now, it probably takes less than one hour.

    We have tried the AIMatch feature a couple of times, and it was okay. It is intended to help automatically discover relationships and associations in data and I found that it was positive, albeit more relevant to the data governance team, of which I am not part. I think that it is a feature in its infancy and there is a lot of room for improvement.

    Overall, DI's data cataloging, data literacy, and automation have helped our decision-makers because when a source wants to change something, we immediately know what the impact is going to be downstream. For example, if a source were to say "Okay, we're no longer going to send this field to you," then immediately we will know what the impact downstream will be. In response, either we can inform upstream to hold off on making changes, or we can inform the departments that will be impacted. That in itself has a lot of value.

    What is most valuable?

    The most valuable features are lineage and impact analysis. In our use case, we deal with data transformations from multiple sources into our data warehouse. As part of this process, we need traceability of the fields, either from the source or from the presentation layer. If something is changing then it will help us to determine the full impact of the modifications. Similarly, if we need to know where a specific field in the presentation layer is coming from, we can trace it back to its location in the source.
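
    Under the hood, this kind of impact analysis amounts to walking a lineage graph: edges point from source to target, the downstream impact of a change is everything reachable forward, and tracing a presentation-layer field back is the same walk on reversed edges. A small self-contained sketch of the idea (tables invented):

        from collections import defaultdict

        # Toy lineage graph; edges point source -> target.
        EDGES = [
            ("src.orders", "stg.orders"),
            ("stg.orders", "dw.sales_fact"),
            ("dw.sales_fact", "rpt.revenue_dashboard"),
        ]

        downstream = defaultdict(set)
        for src, tgt in EDGES:
            downstream[src].add(tgt)

        def impacted(node: str) -> set[str]:
            """Everything reachable downstream of a changed node."""
            seen, stack = set(), [node]
            while stack:
                for nxt in downstream[stack.pop()]:
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
            return seen

        # Changing src.orders impacts staging, the warehouse fact, and the report.
        print(impacted("src.orders"))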

    The feature used to fill metadata is very useful for us because we can replicate the data into our analytics as metadata.

    What needs improvement?

    Improvement is required for the AIMatch feature, which is supposed to help automatically discover relationships in data. It is a feature that is in its infancy and I have not used it more than a few times.

    There is room for improvement with the data cataloging capability. Right now, there is a list of a lot of sources that they can catalog, or create metadata upon, but if they can add more, that would be a good plus for this tool. The reason we need this functionality is that we don't use erwin's modeling tool. Instead, we use a tool called Power Viewer. Both erwin and Power Viewer can create XSD files, but you cannot import a file created by Power Viewer into erwin. If they were more compatible with Power Viewer and other data modeling solutions, it would be a plus. As it is now, if we have a data model exported into XSD format from Power Viewer, it's really hard or next to impossible to import into DI.

    We have a lot of projects and a large number of users, and one feature that is missing is being able to assign users to groups. For example, it would be nice to have IDs such that all of the users from finance have the same one. This would make it much easier to manage the accounts.

    For how long have I used the solution?

    We have been using erwin Data Intelligence (DI) for Data Governance since 2013.

    What do I think about the stability of the solution?

    The stability of DI has come a long way. Now, it's very stable. If I were rating it six years ago, my assessment would definitely have been different. At this time, however, I have no complaints.

    What do I think about the scalability of the solution?

    We have the enterprise version and we can add as many projects as we need to. It would be helpful if we had a feature to keep better track of the users, such as a group membership field.

    We are the only department in the organization that uses this product. This is because, in our department, we handle data warehousing, and mapping documentation is very important. It is like a bible to us and without it, we cannot function properly. We use it very extensively and other departments are now considering it.

    In terms of roles, we have BAs with read-write access. We also have power users, who are the ones that work with the data catalog, create the projects, and make sure that the metadata is all up-to-date. Maintenance of this type also ensures that metadata is removed when it is no longer in use. We have QA/Dev roles that are read-only. These people read the mapping and translate it into code, or do QA on it. Finally, we have an audit role, where the users have read-only access to everything.

    One of the tips that I have for users is that if there are a lot of mapping documents, for example, more than a few hundred rows for a few hundred records, it's easier to download it, do it in Excel, and upload it again.

    All roles considered, we have between 30 and 40 users.

    How are customer service and technical support?

    The technical support is good.

    When erwin took over this product from the previous company, the support improved. The previous company was not as large and as such, erwin is more structured and has processes in place. For example, if we report issues, erwin has its own portal. We also have a specific channel to go through, whereas previously, we contacted support through our account manager.

    Which solution did I use previously and why did I switch?

    Other than what we were doing with Excel, we were not using another solution prior to this one.

    How was the initial setup?

    We have set up this product multiple times. The first setup was very challenging, but that was before erwin inherited or bought this product from the original developer. When erwin took over, there were lots of improvements made. As it is now, the initial setup is not complex and is no longer an issue. However, when we first started in 2013, it was a different story.

    When we first deployed, close to 10 years ago, we were new to the product and we had a lot of challenges. It is now fairly easy to do and moreover, erwin has good support if we run into any trouble. I don't recall exactly how long it took to initially deploy, but I would estimate a full day. Nowadays, given our experience and what we know, it would take less than half a day. Perhaps one or two hours would be sufficient.

    The actual deployment of the tool itself has no value because it's not a transactional system. With a transactional system, for example, I can do things like point of sale. In the case of this product, BAs create the mappings. That said, once it's deployed, the BAs can begin working to create mappings. Immediately, we can perform data cataloging, and given the correct connections, for example to Oracle, we can begin to use the tool right away. In that sense, there is a good time-to-value and it requires minimal support to get everything running.

    We have an enterprise version, so if a new department wants to use it then we don't need to install it again. It is deployed on a single system and we give access to other departments, as required. As far as installing the software on a new machine, we have a rough plan that we follow but it is not a formal one that is written down or optimized for efficiency.

    What about the implementation team?

    We had support from our reseller during the initial setup but they were not on-site.

    Maintenance is done in-house and we have at least three people who are responsible. Because of our company structure, there is one who handles the application or web server. A second person is responsible for AWS, and finally, there is somebody like me on the administrative side.

    What was our ROI?

    We used to calculate ROI several years ago but are no longer concerned with it. This product is very effective and it has made our jobs easier, which is a good return.

    What's my experience with pricing, setup cost, and licensing?

    We operate on a yearly subscription and because it is an enterprise license we only have one. It is not dependent on the number of users. This product is not expensive compared to the other ones on the market.

    We did not buy the full DI, so the Business Glossary costs us extra. As such, we receive two bills from erwin every year.

    Which other solutions did I evaluate?

    We evaluated Informatica but after we completed a cost-benefit analysis, we opted to not move forward with it.

    What other advice do I have?

    My advice for anybody who is considering this product is that it's a useful tool. It is good for lineage and good for documenting mappings. Overall, it is very useful for data warehousing, and it is not expensive compared to similar solutions on the market.

    I would rate this solution a nine out of ten.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    Roy Pollack - PeerSpot reviewer
    Advisor Application Architect at CPS Energy
    User
    Top 10
    The solution provides more profound insights into legacy data movements, lineages, and definitions in the short term.
    Pros and Cons
    • "Data Intelligence has provided more profound insights into legacy data movements, lineages, and definitions in the short term. We have linked three critical layers of data, providing us with an end-to-end lineage at the column level."
    • "The integration with various metadata sources, including erwin Data Modeler, isn't smooth in the current version. It took some experimentation to get things working. We hope this is improved in the newer version. The initial version we used felt awkward because Erwin implemented features from other companies into their offering."

    What is our primary use case?

    Data Intelligence enables us to provide deeper technical insight into our enterprise data warehouse while democratizing the solution and data. 

    For more than 10 years, we had built our data systems without detailed documentation. We finally determined that we needed to improve our data management, and we chose Data Intelligence Suite (DIS) based on our past experience using erwin Data Modeler. After researching DIS, we also discovered other desirable features, such as the Business Glossary and Mind Map features that link various assets.

    How has it helped my organization?

    Data Intelligence has provided more profound insights into legacy data movements, lineages, and definitions in the short term. We have linked three critical layers of data, providing us with an end-to-end lineage at the column level.

    Our long-term plans include adding other systems to complete the end-to-end picture of the data lineage. We also intend to better utilize the Business Glossary and Mind Map features. This will require commitment from a planned data governance program, which may still be a year or more into the future.

    What is most valuable?

    We appreciate the solution's ability to upload source-to-target mappings as well as other types of metadata. We were able to semi-programmatically build these worksheets. The time needed to map each manually would be prohibitive.

    Although it was not intuitive, there is a feature where DIS can generate the Excel worksheet as a template. Using this allowed us to discover many other types of metadata we can upload, which is the most efficient way to populate metadata.
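
    Since the worksheets can be built semi-programmatically, a script can populate the template rather than a BA typing each row. A hedged sketch of that idea (the column headers below are invented; in practice you would start from the template DIS itself generates and fill in its actual columns):

        from openpyxl import Workbook

        # Build a source-to-target mapping worksheet for bulk upload.
        # Headers and rows are invented for illustration.
        HEADERS = ["Source Table", "Source Column", "Target Table",
                   "Target Column", "Transformation Rule"]
        ROWS = [
            ("staging.orders", "order_id", "dw.sales_fact", "order_id", "direct move"),
            ("staging.orders", "amount", "dw.sales_fact", "net_amount", "amount * 0.9"),
        ]

        wb = Workbook()
        ws = wb.active
        ws.append(HEADERS)
        for row in ROWS:
            ws.append(row)
        wb.save("mapping_upload.xlsx")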

    What needs improvement?

    We have loaded over 300,000 attributes and more than 1,000 mappings. Performance is slow, depending on the lineage or search. This is supposed to be fixed in later versions, but we haven't upgraded yet.

    The integration with various metadata sources, including erwin Data Modeler, isn't smooth in the current version. It took some experimentation to get things working. We hope this is improved in the newer version. The initial version we used felt awkward because erwin implemented features from other companies into their offering.

    For how long have I used the solution?

    I have been using Data Intelligence for two years.

    What do I think about the stability of the solution?

    Because Data Intelligence is a Java-based solution, initial usage can require patience and reloads to function properly.  

    What do I think about the scalability of the solution?

    There are many options to scale the repository and webserver application for performance.

    How are customer service and support?

    Generally, erwin support was highly responsive. However, we did this installation while erwin was transitioning to Quest. Support was still surprisingly good given that situation.

    How would you rate customer service and support?

    Positive

    Which solution did I use previously and why did I switch?

    This was the first metadata repository tool at this company.

    How was the initial setup?

    Setting up Data Intelligence is complex. It required a few calls with support to figure out how to configure multiple components.

    What was our ROI?

    I can't quantify our return in a dollar amount. However, we can now answer how the system works down to the transformation level as needed. Previously, we would need to start a "project" to obtain such information.

    What's my experience with pricing, setup cost, and licensing?

    Tools like this generally have a low or no cost for "read only" usage. The licensing required to actively update metadata is much more expensive, but we only needed three licenses. Two licenses would likely suffice for most organizations.

    What other advice do I have?

    I rate erwin Data Intelligence nine out of 10. LDAP integration is provided, but the roles and role integration require some research and setup.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: I am a real user, and this review is based on my own experience and opinions.