Business Intelligence BA at an insurance company with 10,001+ employees
Real User
Good traceability and lineage features, impact analysis that is helpful in the decision-making process, and good support
Pros and Cons
  • "Overall, DI's data cataloging, data literacy, and automation have helped our decision-makers because when a source wants to change something, we immediately know what the impact is going to be downstream."
  • "There is room for improvement with the data cataloging capability. Right now, there is a list of a lot of sources that they can catalog, or they can create metadata upon, but if they can add more then that would be a good plus for this tool."

What is our primary use case?

Our work involves data warehousing and we originally implemented this product because we needed a tool to document our mapping documents.

As a company, we are not heavily invested in the cloud. Our on-premises deployment may change in the future but it depends on infrastructure decisions.

How has it helped my organization?

The automated data lineage is very useful. We used to work in Excel, and there is no way to trace the lineage of the data. Since we started working with DI, we have been able to quickly trace the lineage, as well as do an impact analysis.

We do not use the ETL functionality. I do know, however, that there is a feature that allows you to export your mapping into Informatica.

Using this product has improved our process in several ways. When we were using Excel, we could not be sure that what was entered in the database matched what had been entered into Excel; one reason is that Excel documents contain a lot of typos. Often, we didn't know the data type or the data length, and these are some of the reasons that lineage and traceability are important. Before this tool, we had essentially none of either. Now, because we're able to create metadata from our databases, it's easier for us to create mappings. As a result, the typos have virtually disappeared because we just drag and drop each field instead of typing it.

Another important thing is that with Excel, it is too cumbersome or next to impossible to document the source path for XSD files. With DI, since we're able to model it in the tool, we can drag and drop and we don't have to type the source path. It's automatic.

This tool has taken us from having nothing to being very efficient. It's really hard to compare because we have never had these features before.

The data pipeline definitely improved the speed of analysis in our use case. We have not timed it but having the lineage, and being able to just click, makes it easier and faster. We believe that we are the envy of other departments that are not using DI. For them to conduct an impact analysis takes perhaps a few minutes or even a few hours, whereas, for us, it takes less than one minute to complete.

We have automated parts of our data management infrastructure, and it has had a positive effect on our quality and speed of delivery. We have a template that the system uses to generate SQL code for us. The generated code handles the movement of data; for direct-move fields, no one has to code the operation by hand. Instead, we just run the template.
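
To illustrate the idea, here is a minimal sketch of what template-driven SQL generation for direct-move fields can look like. The mapping rows, table and column names, and generator are hypothetical; this is not erwin's actual template mechanism.

```python
# Minimal sketch of template-driven SQL generation for direct-move fields.
# The mapping rows and output format are invented examples, not erwin's.

direct_move_mapping = [
    # (source_table, source_column, target_table, target_column)
    ("src.CUSTOMER", "CUST_ID",   "dw.DIM_CUSTOMER", "CUSTOMER_ID"),
    ("src.CUSTOMER", "CUST_NAME", "dw.DIM_CUSTOMER", "CUSTOMER_NAME"),
]

def generate_direct_move_sql(mapping):
    """Group direct-move fields by table pair and emit INSERT ... SELECT."""
    by_tables = {}
    for src_tbl, src_col, tgt_tbl, tgt_col in mapping:
        by_tables.setdefault((src_tbl, tgt_tbl), []).append((src_col, tgt_col))
    statements = []
    for (src_tbl, tgt_tbl), cols in by_tables.items():
        tgt_cols = ", ".join(t for _, t in cols)
        src_cols = ", ".join(s for s, _ in cols)
        statements.append(
            f"INSERT INTO {tgt_tbl} ({tgt_cols})\nSELECT {src_cols}\nFROM {src_tbl};"
        )
    return statements

for stmt in generate_direct_move_sql(direct_move_mapping):
    print(stmt)
```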

The automation that we use is isolated and not for everything, but it affects our cost and risk in a positive way because it works efficiently to produce code.

It is reasonable to say that DI's generation of production code through automated code engineering reduces the cost from initial concept to implementation. However, it is only a small percentage of our usage.

With respect to the transparency and accuracy of data movement and data integration, this solution has had a positive impact on our process. If we bring a new source system into the data warehouse and the interconnection between that system and us is through XML then it's easier for us to start the mapping in DI. It is both efficient and effective. Downstream, things are more efficient as well. It used to take days for the BAs to do the mapping and now, it probably takes less than one hour.

We have tried the AIMatch feature a couple of times, and it was okay. It is intended to help automatically discover relationships and associations in data and I found that it was positive, albeit more relevant to the data governance team, of which I am not part. I think that it is a feature in its infancy and there is a lot of room for improvement.

Overall, DI's data cataloging, data literacy, and automation have helped our decision-makers because when a source wants to change something, we immediately know what the impact is going to be downstream. For example, if a source were to say "Okay, we're no longer going to send this field to you," then immediately we will know what the impact downstream will be. In response, either we can inform upstream to hold off on making changes, or we can inform the departments that will be impacted. That in itself has a lot of value.

What is most valuable?

The most valuable features are lineage and impact analysis. In our use case, we deal with data transformations from multiple sources into our data warehouse. As part of this process, we need traceability of the fields, either from the source or from the presentation layer. If something is changing then it will help us to determine the full impact of the modifications. Similarly, if we need to know where a specific field in the presentation layer is coming from, we can trace it back to its location in the source.
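
Conceptually, the impact analysis described above is a downstream walk of the lineage graph. The sketch below is purely illustrative: the field names and graph structure are invented, and erwin computes this internally rather than exposing it this way.

```python
# Illustrative sketch of impact analysis as a downstream walk of a lineage
# graph. Field names are invented for the example.
from collections import deque

# Edges point from a source field to the fields it feeds downstream.
lineage = {
    "source.policy.POLICY_NO": ["staging.POLICY.POLICY_NUM"],
    "staging.POLICY.POLICY_NUM": ["dw.FACT_POLICY.POLICY_KEY",
                                  "presentation.RPT_POLICY.POLICY_ID"],
}

def downstream_impact(field):
    """Return every field affected if `field` changes."""
    impacted, queue = set(), deque([field])
    while queue:
        for nxt in lineage.get(queue.popleft(), []):
            if nxt not in impacted:
                impacted.add(nxt)
                queue.append(nxt)
    return impacted

print(downstream_impact("source.policy.POLICY_NO"))
```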

The feature for populating metadata is very useful for us because we can replicate the data into our analytics as metadata.

What needs improvement?

Improvement is required for the AIMatch feature, which is supposed to help automatically discover relationships in data. It is a feature that is in its infancy and I have not used it more than a few times.

There is room for improvement with the data cataloging capability. Right now, there is a list of a lot of sources that they can catalog, or create metadata upon, but if they can add more, that would be a good plus for this tool. The reason we need this functionality is that we don't use the modeling tool that erwin has. Instead, we use a tool called Power Viewer. Both erwin and Power Viewer can create XSD files, but you cannot import a file created by Power Viewer into erwin. If they were more compatible with Power Viewer and other data modeling solutions, it would be a plus. As it is now, if we have a data model exported into XSD format from Power Viewer, it's really hard or next to impossible to import into DI.

We have a lot of projects and a large number of users, and one feature that is missing is being able to assign users to groups. For example, it would be nice to have IDs such that all of the users from finance have the same one. This would make it much easier to manage the accounts.


For how long have I used the solution?

We have been using erwin Data Intelligence (DI) for Data Governance since 2013.

What do I think about the stability of the solution?

The stability of DI has come a long way. Now, it's very stable. If I were rating it six years ago, my assessment would definitely have been different. At this time, however, I have no complaints.

What do I think about the scalability of the solution?

We have the enterprise version and we can add as many projects as we need to. It would be helpful if we had a feature to keep better track of the users, such as a group membership field.

We are the only department in the organization that uses this product. This is because, in our department, we handle data warehousing, and mapping documentation is very important. It is like a bible to us and without it, we cannot function properly. We use it very extensively and other departments are now considering it.

In terms of roles, we have BAs with read-write access. We also have power users, who are the ones that work with the data catalog, create the projects, and make sure that the metadata is all up-to-date. Maintenance of this type also ensures that metadata is removed when it is no longer in use. We have QA/Dev roles that are read-only. These people read the mapping and translate it into code, or do QA on it. Finally, we have an audit role, where the users have read-only access to everything.

One tip that I have for users: if a mapping document is large, with more than a few hundred rows or records, it's easier to download it, edit it in Excel, and upload it again.
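
For readers who want to script that round trip, here is a hedged sketch using pandas. The file names and column header are placeholders, not erwin's actual export schema.

```python
# Sketch of the export / bulk-edit / reimport tip using pandas.
# File names and column headers are placeholders, not erwin's export format.
import pandas as pd

df = pd.read_excel("mapping_export.xlsx")          # exported mapping document

# Example bulk edit: retarget a renamed source table across hundreds of rows.
df.loc[df["Source Table"] == "CUST_OLD", "Source Table"] = "CUST_NEW"

df.to_excel("mapping_import.xlsx", index=False)    # re-upload this file
```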

All roles considered, we have between 30 and 40 users.

How are customer service and support?

The technical support is good.

When erwin took over this product from the previous company, the support improved. The previous company was not as large; erwin is more structured and has processes in place. For example, if we report issues, erwin has its own portal, and we have a specific channel to go through, whereas previously we contacted support through our account manager.

Which solution did I use previously and why did I switch?

Other than what we were doing with Excel, we were not using another solution prior to this one.

How was the initial setup?

We have set up this product multiple times. The first setup was very challenging, but that was before erwin inherited or bought this product from the original developer. When erwin took over, there were lots of improvements made. As it is now, the initial setup is not complex and is no longer an issue. However, when we first started in 2013, it was a different story.

When we first deployed, close to 10 years ago, we were new to the product and we had a lot of challenges. It is now fairly easy to do and moreover, erwin has good support if we run into any trouble. I don't recall exactly how long it took to initially deploy, but I would estimate a full day. Nowadays, given our experience and what we know, it would take less than half a day. Perhaps one or two hours would be sufficient.

The deployment of the tool by itself delivers no value because it's not a transactional system. With a transactional system, for example, I can do things like point of sale. In the case of this product, BAs create the mappings, and that is where the value comes from. Once it's deployed, the BAs can begin working to create mappings. Immediately, we can perform data cataloging and, given the correct connections, for example to Oracle, we can begin to use the tool right away. In that sense, there is a good time-to-value and it requires minimal support to get everything running.

We have an enterprise version, so if a new department wants to use it then we don't need to install it again. It is deployed on a single system and we give access to other departments, as required. As far as installing the software on a new machine, we have a rough plan that we follow but it is not a formal one that is written down or optimized for efficiency.

What about the implementation team?

We had support from our reseller during the initial setup but they were not on-site.

Maintenance is done in-house and we have at least three people who are responsible. Because of our company structure, there is one who handles the application or web server. A second person is responsible for AWS, and finally, there is somebody like me on the administrative side.

What was our ROI?

We used to calculate ROI several years ago but are no longer concerned with it. This product is very effective and it has made our jobs easier, which is a good return.

What's my experience with pricing, setup cost, and licensing?

We operate on a yearly subscription and, because it is an enterprise license, we only have one. It is not dependent on the number of users. This product is not expensive compared to the other ones on the market.

We did not buy the full DI, so the Business Glossary costs us extra. As such, we receive two bills from erwin every year.

Which other solutions did I evaluate?

We evaluated Informatica but after we completed a cost-benefit analysis, we opted to not move forward with it.

What other advice do I have?

My advice for anybody who is considering this product is that it's a useful tool. It is good for lineage and good for documenting mappings. Overall, it is very useful for data warehousing, and it is not expensive compared to similar solutions on the market.

I would rate this solution a nine out of ten.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Ahmad AlRjoub
Data Management Consultant at CompTechCo
Real User
Top 10
Enhances our overall data governance process, improving the business as well as our ability to meet compliance and reporting requirements
Pros and Cons
  • "The interface is easy to use. I also like Erwin's automatic data classification and data quality checks."
  • "The solution's Arabic language processing is limited. The results are limited when you use the interface in Arabic."

What is our primary use case?

Data Intelligence is a data management solution that connects to various data sources. It also provides data profiling and data quality management.

How has it helped my organization?

erwin enhances our overall data governance process, improving the business as well as our ability to meet compliance and reporting requirements. 

The solution helps us save time and understand our data better. For example, you can search for keywords or discover assets by classification, feedback, and rating. You can sort assets by the highest or lowest rating. It has reduced the time spent on these tasks by about 20 percent. 

What is most valuable?

The interface is easy to use. I also like erwin's automatic data classification and data quality checks. It provides excellent visibility into stable data and data in motion. We have a clear view of ETL processes through data lineage, and a feature called the data mind map. 

erwin provides the necessary traceability. It's easy to assign tasks and keywords. Data owners can manage their data and assign access to various teams. 

The data catalog dashboard enables us to classify sensitive data and comply with regulations, which is crucial in Saudi Arabia. Every government organization must abide by the rules on personal data. I rate the data catalog dashboard nine out of 10. 

What needs improvement?

The solution's Arabic language processing is limited. The results are limited when you use the interface in Arabic.

For how long have I used the solution?

I have used erwin for six months.

How are customer service and support?

I rate erwin customer support nine out of 10. We purchased premium support, which is crucial because we have a lot of customers who require a quick response.

How would you rate customer service and support?

Positive

How was the initial setup?

The deployment is straightforward, and we deployed erwin with a three-person in-house team. It took us about six hours to complete the configuration because it takes time to connect the data sources, which must be configured separately. You need to define the data source, identify the server where each database is located, enter the server ID, input the connection string, etc. 
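
As a rough illustration of that per-source setup, the sketch below shows the kind of information involved. The keys, server names, and connection strings are invented placeholders, not erwin's actual configuration format.

```python
# Illustrative per-source configuration; every value here is a placeholder.
# Each data source must be defined separately, as described above.
data_sources = [
    {
        "name": "finance_dw",
        "server": "dbsrv01.example.local",   # server hosting the database
        "port": 1521,
        "connection_string": "jdbc:oracle:thin:@dbsrv01.example.local:1521/FINDW",
        "schema": "FIN",
    },
    {
        "name": "claims_ods",
        "server": "dbsrv02.example.local",
        "port": 1433,
        "connection_string": (
            "jdbc:sqlserver://dbsrv02.example.local:1433;databaseName=CLAIMS"
        ),
        "schema": "dbo",
    },
]

for src in data_sources:
    print(f"registering {src['name']} on {src['server']}")
```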

What was our ROI?

We have seen a return, but I can't say how much. 

What's my experience with pricing, setup cost, and licensing?

erwin's license is about average. You have to pay extra for smart connectors if you need to add data sources that won't work with the standard connector. 

Which other solutions did I evaluate?

We tried Talend and a few other solutions. The primary benefit erwin offered was automation. It helps us reduce manual work. 

What other advice do I have?

I rate erwin Data Intelligence nine out of 10. 

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor. The reviewer's company has a business relationship with this vendor other than being a customer.
Architecture Sr. Manager, Data Design & Metadata Mgmt at an insurance company with 10,001+ employees
Real User
We always know where our data and metadata are, versus having to spend weeks hunting down information
Pros and Cons
  • "The data management is, obviously, key in understanding where the data is and what the data is. And the governance can be done at multiple levels. You have the governance of the code sets versus the governance of the business terms and the definitions of those business terms. You have the governance of the business data models and how those business data models are driving the physical implementation of the actual databases. And, of course, you have the governance of the mapping to make sure that source-to-target mapping is done and is being shared across the company."
  • "We always know where our data is, and anybody can look that up, whether they're a business person who doesn't know anything about Informatica, or a developer who knows everything about creating data movement jobs in Informatica, but who does not understand the business terminology or the data that is being used in the tool."
  • "The metadata ingestion is very nice because of the ability to automate it. It would be nice to be able to do this ingestion, or set it up, from one place, instead of having to set it up separately for every data asset that is ingested."
  • "We chose to implement on an Oracle Database because we also had the erwin Data Modeler and Web Portal products in-house, which have been set up on Oracle Databases for many years. Sometimes the Oracle Database installation has caused some hiccups that wouldn't necessarily have been caused if we had used SQL Server."

What is our primary use case?

We have many use cases.

We have a use case to understand our metadata, understand where it is, and understand where our authoritative data systems are. We need to understand the data systems that we have. We also need to link the data models that we have to these data systems so that we know which data models are supporting which database applications. We're also linking our business data models to our physical implementation so that our data governance team is driving our data and our understanding of our data. That is one use case for the Metadata Manager. Another is the creation of automated reports that will show the changes that are made in production after a production release.

Our use cases for the Mapping Manager are around understanding where our data movement is happening and how our data is being transformed as it's moved. We want automated data lineage capabilities at the system, database, environment, table, and column levels, as well as automated impact analysis. If someone needs to make a change to a specific column in a specific database, what downstream applications or databases will be impacted? Who do we have to contact to tell that we're making changes?

When thinking about the Mapping Manager, we do have another use case where we want to understand not only the data design of the mapping, but the actual implementations of the mapping. We want to understand, from a data design standpoint, the data lineage that's in the data model, as well as the data lineage in a source-to-target mapping document. But we also want to understand the as-implemented data lineage, which comes in our Informatica workflows and jobs. So we want to automatically ingest our Informatica jobs and create mapping documents from those jobs so that we have the as-designed data lineage, as well as the as-implemented data lineage.

In addition, with regard to our data literacy, we want to understand our business terminology and the definitions of our business terms. That information drives not only our data modeling, but it drives our understanding of the data that is in our datastores, which are cataloged in the Metadata Manager. This further helps us to understand what we're mapping in our source-to-target mapping documents in the Mapping Manager. We want to associate our physical columns and our data model information with our business glossary. But taking that a step further, when you think about code sets, we also need to understand the data. So if we have a specific code set, we need to understand if we are going to see those specific codes in that database, or if we are going to see different codes that we have to map to the governed code set.

That's where the Codeset Manager comes into play for us because we need to understand what our governed code sets are. And we need to understand and automatically be able to map our code sets to our business terminology, which is automatically linked to our physical tables and columns. And that automatically links the code set values or the crosswalks that were created when we have a data asset that does not have all of the conforming values that are in the governed code set. 
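
A small sketch may make the crosswalk idea concrete: source systems that do not use the governed values are mapped onto the governed code set. All codes below are invented examples, not our actual code sets.

```python
# Sketch of a codeset crosswalk: non-conforming source codes are mapped to
# the governed code set. Every code here is an invented example.
governed_gender = {"M": "Male", "F": "Female", "U": "Unknown"}

# Crosswalk for a source system that uses its own values.
crosswalk = {"1": "M", "2": "F", "9": "U", "MALE": "M", "FEMALE": "F"}

def to_governed(source_code):
    """Translate a raw source code into the governed code set."""
    code = crosswalk.get(source_code.strip().upper())
    if code is None or code not in governed_gender:
        raise ValueError(f"unmapped source code: {source_code!r}")
    return code

print(to_governed("male"))   # -> "M"
```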

We also have reporting use cases. We create a lot of reports. We have reports to understand who the Data Intelligence Suite users are, when they last logged in, the work that they're doing, and for automatically assigning work from one person to another person. We also need automated reports that look at our mappings and help us understand where our gaps are, where we need a code set that we don't already have a governed code set for. And we're also creating data dictionary reports, because we want to understand very specific information about our data models, our datastores, and our business data models, as well as the delivery data models.

We are currently using the following modules:

  • Resource Manager
  • Metadata Manager
  • Mapping Manager
  • Codeset Manager
  • Reference Data Manager
  • Business Glossary Manager.

How has it helped my organization?

One of the ways this is helping to improve our delivery is through the increased understanding of what the data is, so that we're not mapping incorrect data from a source to a target. 

We also have additional understanding of where our best data is. For example, when you think of the HL7 FHIR work and the need to map customer data to a specific FHIR profile, we need to understand where our best data is, as well as the definition of the data so that we are mapping the correct data. Health interoperability requires us to provide the customer with the data they request when they request it. There are multiple levels of complexity in doing that work. The Data Intelligence Suite is helping us to manage and document all of those complexities to ensure that we are delivering the right data to the customer when they request it.

erwin DI also provides us with a real-time, understandable data pipeline. One of the use cases that we didn't talk about is that we set up batch jobs to automate the metadata ingestion, so that we always have up-to-date and accurate metadata. It saves us a great deal because we always know where our metadata is, and what our data is, versus having to spend weeks hunting down information. For example, if we needed to make a change to a datastore, and we needed to understand the other datastores that are dependent on that data, we know that at a moment's notice. It's not delayed by a month. It's not a case of someone either having to manually look through Excel spreadsheet mapping documents or needing to get a new degree in a software tool such as Informatica or DataStage or Ab Initio, or even reading Python. We always know where our data is, and anybody can look that up, whether they're a business person who doesn't know anything about Informatica, or a developer who knows everything about creating data movement jobs in Informatica, but who does not understand the business terminology or the data that is being used in the tool.
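
As a conceptual sketch only, a scheduled ingestion wrapper might look like the following. The harvest_metadata stub stands in for whatever the real batch job invokes; in practice this would be driven by a scheduler (cron, Task Scheduler, and so on) rather than run by hand.

```python
# Conceptual sketch of a nightly metadata-ingestion batch job.
# harvest_metadata() is a stub standing in for the real harvest call.
import datetime

SOURCES = ["oracle_dw", "sqlserver_ods", "informatica_repo"]  # invented names

def harvest_metadata(source: str) -> int:
    """Stub: pull the current system catalog for `source`; return object count."""
    return 0  # placeholder for the real harvest call

def nightly_run():
    for source in SOURCES:
        count = harvest_metadata(source)
        ts = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        print(f"{ts} harvested {count} objects from {source}")

if __name__ == "__main__":
    nightly_run()
```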

The solution also automates critical areas of our data governance and data management infrastructure. The data management is, obviously, key in understanding where the data is and what the data is. And the governance can be done at multiple levels. You have the governance of the code sets versus the governance of the business terms and the definitions of those business terms. You have the governance of the business data models and how those business data models are driving the physical implementation of the actual databases. And, of course, you have the governance of the mapping to make sure that source-to-target mapping is done and is being shared across the company.

In terms of how this affects the quality and speed of delivery of data, I did use-case studies before we brought the Data Intelligence Suite into our company. Some of those use cases included research into impact analysis taking between six and 16 weeks to figure out where data is, or where an impact would be. Having the mapping documents drive the data lineage and impact analysis in the Data Intelligence Suite means that data investigation into impact analysis takes minutes instead of weeks. The understanding of what the data is, is critical to any company. And being able to find that information with the click of a button, versus having to request access to share drives and Confluence and SharePoint drives and Alation, and anywhere that metadata could be, is a notable difference. Having to ask people, "Do you have this information?" versus being able to go and find it yourself saves incredible amounts of time. And it enables everyone, whether it's a business person or a designer, or a data architect, a data modeler, or a developer. Everyone is able to use the tool and that is extremely important, because you need a tool that is user-friendly, intuitive, and easily understood, no matter your technical capabilities.

Also, the production-related capabilities of the solution have been very helpful to us. This includes creating release reports, so that we know what production looked like prior to an implementation versus what it looks like afterward. It helps with understanding any new data movement that was implemented versus what existed previously. Those are the production implementations that are key for us right now.

Another aspect is that the solution’s data cataloging, data literacy, and automation have been extremely important in helping people understand what the data is so that they use it correctly. That happens at all levels.

The responsiveness of the tool has been fantastic. The amount of time that it takes to do work has been so significantly decreased. If you were creating a mapping document, especially if you were doing it in an Excel spreadsheet, you would have to manually type in every single piece of information: the name of the system, the name of the table, the name of the column, the data type, the length of the column. Any information that you needed to put into a source-to-target mapping document would have to be manually entered.

Especially within the Mapping Manager, the ability to automatically create the mapping document through drag-and-drop functionality of the metadata that is in the system catalog, within the Metadata Manager, results in savings on the order of days or weeks. When you drag and drop the information from the metadata catalog into the mapping document, the majority of the mapping document is filled out, and the only thing that you have to do manually is put in the information about the type of data movement or transformation that you're going to do on the data. And even some of that is automated, or could be automated. You're talking about significant time savings.
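
The sketch below illustrates why that drag-and-drop saves so much typing: the mapping row is pre-filled from the harvested metadata, leaving only the transformation rule to enter. The field names and row layout are invented for illustration.

```python
# Sketch of pre-filling a mapping row from a metadata catalog.
# Catalog entries and field names are invented examples.
catalog = {
    "STG.CLAIM.CLAIM_AMT": {"type": "NUMBER", "length": 12},
    "DW.FACT_CLAIM.CLAIM_AMOUNT": {"type": "NUMBER", "length": 12},
}

def new_mapping_row(source_field, target_field, rule="Direct move"):
    """Build a mapping row; everything but `rule` comes from the catalog."""
    src, tgt = catalog[source_field], catalog[target_field]
    return {
        "source": source_field, "source_type": src["type"], "source_len": src["length"],
        "target": target_field, "target_type": tgt["type"], "target_len": tgt["length"],
        "transformation": rule,   # the only part still entered by hand
    }

print(new_mapping_row("STG.CLAIM.CLAIM_AMT", "DW.FACT_CLAIM.CLAIM_AMOUNT"))
```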

And because you have all of the information right there in the tool, you don't have to look at different places to find the understanding of the data that you're working with. All of the information is right there, which is another time savings. It's like one-stop shopping. You can either go to seven stores to get everything you want, or you can go to one store. Which option would you choose? Most people would prefer to go to one store. And that's what the Data Intelligence Suite gives you: one place.

I can say, in general, that a large number of hours are saved, depending on the work that is being done, because of the automation capabilities and the ability to instantly understand what your data is and where it is. We are working on creating metrics. For example, we have one metric where it took someone hours of research to understand what the data is and where it is and map it to a business term, versus less than two minutes to map 600 physical columns to a business term.

What is most valuable?

We are looking forward to using the AIMatch capability. We are using several Smart Data Connectors, as well as the Reporting Manager and the Workflow Manager.

We are customizing our own installation of the erwin Data Intelligence Suite by adding fields as extended properties that do not already exist, that are of value to us, as well as changing the user-defined fields that are in the Data Intelligence Suite. We're renaming them so that we can put very specific information into those user-defined properties.

The customization and the ability to add information is extremely valuable to us because there is no tool on the market that is going to be able to accommodate, out-of-the-box, everything that every customer will use. Being able to tailor the tool to meet our needs, and add additional metadata, is very valuable to us.

Also, in terms of the solution's integrated data catalog and data literacy when it comes to mapping, profiling, and automated lineage analysis, it is incredibly important to have that business glossary and understand what the data is — the definitions of the data — so that you use it correctly. You can't move data when you don't understand what it is. You can't merge data with other data unless you know what it is or how to use it. Those business definitions help us with all of that: with the mapping and with being able to plan the movement of one data element into another data element. The data lineage, understanding where the data is and how it moves, is very critical.

What needs improvement?

The metadata ingestion is very nice because of the ability to automate it. It would be nice to be able to do this ingestion, or set it up, from one place, instead of having to set it up separately for every data asset that is ingested.

erwin has been a fantastic partner with regard to our suggestions for enhancements, and that's why I'm having difficulty thinking of areas for improvement of the solution. They are delivering enhancements that we've requested in every release.

For how long have I used the solution?

We've been using erwin Data Intelligence for Data Governance for 19 months.

What do I think about the stability of the solution?

We're very impressed with the stability. 

What do I think about the scalability of the solution?

We find it to be very scalable. We currently have it connected to and pulling the metadata in from four different database types. We currently have it connected to automatically ingest mapping information from Informatica and we are importing different types of metadata that is captured in Excel spreadsheets. The tool is able to not only ingest all of this information, but present it in a usable fashion.

We have two different types of users. We have the people who are using the Data Intelligence Suite back-end, which is where the actual work is done. We have over 50 users there. And on the Business User Portal, which is the read-only access to the work that's being done in the back-end, we have over 100 users. Everyone who sees the tool wants to use it, so the desire for adoption is incredibly high.

How are customer service and technical support?

The technical and customer support are outstanding. erwin honestly goes above and beyond to ensure the success of its customers. Their people are available as needed to assist with the implementations and the upgrades.

They are also very willing to listen to enhancement requests and understand what the business or technical impact of the request is. They have been incredibly responsive with the inclusion of enhancement requests. I couldn't ask for more. They're really an example of the highest level of customer service that anyone could provide.

Which solution did I use previously and why did I switch?

We have had multiple other tools and our metadata is currently fractured across multiple tools because we haven't had a good integration point for all of our information. erwin Data Intelligence Suite gives us that one, fantastic, single point of integration. That means we do not have to remain fractured across other tools, but also we don't need to reinvent the wheel and recreate a new system to contain all of our metadata. We have an opportunity to have it in a single place, working with it from a technical standpoint, governing it from a business standpoint, and integrating both the business and technical knowledge in a single location.

The tools we replaced were homegrown tools that made information available in a very manual fashion. We have replaced Excel spreadsheets as our documentation of mapping. We are replacing many different types of data sharing sites by having all of our information, our metadata, in a single location.

How was the initial setup?

We found the initial setup to be very straightforward. The user manuals are very clear for the users who are doing the work. And whenever there was a need for assistance with the implementation of the back-end database or the software, erwin was just a phone call away and has always been available to answer any questions or assist as needed. They're just fantastic partners.

It took us about a day when it was first set up, and it is just a matter of a couple hours, now, as we do upgrades to the software.

In terms of our implementation strategy, we have segregation of duties within our company. We have one team that is responsible for delivery, a separate team that is responsible for production support, another team that is responsible for the creation of the database behind the tool, and another team that is responsible for the installation of the software. It's the coordination of the different people who are supporting the tool that takes the most effort.

There are eight people maintaining the solution, because of the segregation of duties. We have a primary and a backup, within each of the four teams, who are doing the delivery or support.

What was our ROI?

We have absolutely seen a return on our investment with Data Intelligence so far. There has been an increase in delivery speed and a resulting decrease in project costs. The decrease in the time needed to find the information you need to do your job, versus the much larger amount of time needed to research without erwin, has been invaluable.

What's my experience with pricing, setup cost, and licensing?

The one thing that you want to make sure of is that you have enough licenses to cover the people who will be administering the tool, as well as the people who are using the tool. You have to know not only the people who will be using the tool but the teams that will be supporting it. That was something we did not know ahead of time: the number of support licenses that we would need.

Which other solutions did I evaluate?

There are other vendor tools that do not have all of the capabilities, or they're trying to have the capabilities that Data Intelligence Suite has, but they are more complex to use or do not have the fast performance that the Data Intelligence Suite has.

There are many tools available for business term management, codeset management, and data lineage, as well as metadata and mapping capabilities.

Collibra was on the market prior to the Data Intelligence Suite, but since erwin's acquisition of the Data Intelligence Suite, erwin has brought their software along faster and incorporated more useful capabilities than some of the other vendor products. And some of the other products are limited because they have per-server costs, where erwin Data Intelligence Suite has not had that kind of cost. It can connect to the systems where the metadata resides and is able to ingest that metadata without additional costs.

The user-friendliness of the erwin tool made it much easier for users to adopt, and to want to adopt, because it was easier to ramp up on, utilize, and understand compared to other tools that we looked at. Another difference was the completeness of the erwin tool versus having to work with tools that have some of the capabilities but not all of them. It was that "one-stop shopping" versus having to go to multiple tools.

What other advice do I have?

erwin currently supports two implementations of this product: one on a SQL Server database and the other on an Oracle Database. It seems that the SQL Server database may have fewer complications than the Oracle Database. We chose to implement on an Oracle Database because we also had the erwin Data Modeler and Web Portal products in-house, which have been set up on Oracle Databases for many years. Sometimes the Oracle Database installation has caused some hiccups that wouldn't necessarily have been caused if we had used SQL Server.

We are not currently using the forward engineering capabilities of the Data Intelligence Suite. We do use erwin Data Modeler for forward engineering the data definition language that is used to change the actual databases where the data resides. We are currently using the Informatica reverse smart connector so that we can understand what is in Informatica jobs that may not have been designed with, or may not have, a source-to-target mapping document. That's as opposed to having a developer create data movement without any documentation to support it. We look forward to potentially using the capability to create Informatica jobs, or other types of jobs, based on the mapping work, so that we can automate our work more and decrease our delivery time and cost to deliver while increasing our accuracy of delivery.

We've learned several lessons from using the erwin Data Intelligence Suite. One lesson is around adoption: there will be better adoption through ease of use. We do have another product in-house and the largest complaint about that product is that it's extremely difficult to use. The ease of use of the Data Intelligence Suite has significantly improved our adoption rate.

Also, having all of the information in one place has significantly improved our adoption and people's desire to use the tool, rather than looking here, there, and everywhere for their information. The automated data lineage and impact analysis being driven from the mapping documents are astounding in reducing the time to research impact analysis from six to 16 weeks down to minutes, because it's a couple of clicks with a mouse. Having all of the information in one place also improves our knowledge about where our data is and what it is so that we can use it in the best possible ways.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
reviewer2191656
Release Train Engineer (RTE) at a pharma/biotech company with 10,001+ employees
Real User
Saves us time and reduces the number of bugs by automatically generating software
Pros and Cons
  • "The solution saves time in data discovery and understanding our entire organization's data."
  • "The technical support could be improved."

What is our primary use case?

We use erwin Data Intelligence to map the data structures from the source systems to our logical data model. Based on this mapping, the tool automatically generates ETL procedures for us.
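
As a hedged sketch of what mapping-driven ETL generation involves, the example below derives a SQL SELECT from mapping rows that may carry a transformation expression. The mapping format, names, and rules are invented; this is not erwin's actual generator.

```python
# Hedged sketch of mapping-driven ETL generation: each mapping row may carry
# a transformation expression applied en route. All names and rules invented.
mapping = [
    {"src": "raw.ORDERS.ORDER_DT", "tgt": "ldm.ORDER.ORDER_DATE",
     "rule": "TO_DATE({src}, 'YYYYMMDD')"},   # transformed field
    {"src": "raw.ORDERS.AMT", "tgt": "ldm.ORDER.AMOUNT",
     "rule": "{src}"},                        # direct move
]

def generate_select(mapping, src_table="raw.ORDERS"):
    """Render one SELECT that applies each row's transformation rule."""
    exprs = [f"{m['rule'].format(src=m['src'])} AS {m['tgt'].split('.')[-1]}"
             for m in mapping]
    return "SELECT\n  " + ",\n  ".join(exprs) + f"\nFROM {src_table};"

print(generate_select(mapping))
```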

How has it helped my organization?

The solution saves us time and reduces the number of bugs by automatically generating software, rather than manually creating it.

The solution saves time in data discovery and understanding our entire organization's data.

What needs improvement?

The technical support could be improved. When we had an issue, we were given vague answers that did not resolve the issue.

For how long have I used the solution?

I have been using erwin Data Intelligence for three years.

What do I think about the stability of the solution?

The solution is stable.

What do I think about the scalability of the solution?

The solution is scalable.

How are customer service and support?

We have Premier Support, which provides us with quick access to the support team. However, it does not accelerate the resolution of our issues. It took almost a year for us to get the impression that they were listening to us. It took another half a year for them to understand the issue, and another half a year to resolve it.

How would you rate customer service and support?

Neutral

Which solution did I use previously and why did I switch?

We previously used Excel spreadsheets to do the mapping before switching to erwin Data Intelligence for the automation.

What's my experience with pricing, setup cost, and licensing?

The price is too high. We pay 41,000 Swiss francs for five users.

I give the pricing a three out of ten.

What other advice do I have?

I give erwin Data Intelligence an eight out of ten.

Premier Support has added minimal value to our overall investment.

I recommend doing a POC for erwin Data Intelligence before moving forward to ensure that it meets all requirements.

Which deployment model are you using for this solution?

Public Cloud
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Practice Director - Digital & Analytics Practice at HCL Technologies
Real User
Metadata harvesters, data catalogs, and business glossaries help standardize data and create transparency
Pros and Cons
  • "erwin has tremendous capabilities to map right from the business technologies to the endpoint, such as physical entities and physical attributes, from a lineage standpoint."
  • "Another area where it can improve is by having BB-Graph-type databases where relationship discovery and relationship identification are much easier."

What is our primary use case?

Our clients use it to understand where data resides, for data cataloging purposes. It is also used for metadata harvesting, for reverse engineering, and for scripting to build logic and to model data jobs. It's used in multiple ways and to solve different types of problems.

How has it helped my organization?

Companies will say that data is their most valuable asset. If you, personally, have an expensive car or a villa, those are valued assets and you make sure that the car is taken for service on a regular basis and that the house is painted on a regular basis. When it comes to data, although people agree that it is one of the most valued assets, the way it is managed in many organizations is that people still use Excel sheets and manual methods. In this era, where data is growing humongously on a day-to-day basis—especially data that is outside the enterprise, through social media—you need a mechanism and process to handle it. That mechanism and process should be amply supported with the proper technology platform. And that's the type of technology platform provided by erwin, one that stitches data catalogs together with business glossaries and provides intelligent connectors and metadata harvesters. Gone are the days where you can use Excel sheets to manage your organization. erwin steps up and changes the game to manage your most valued asset in the best way possible.

The solution allows you to automate critical areas of your data governance and data management infrastructure. Manual methods for managing data are no longer practical. Rather than that, automation is really important. Using this solution, you can very easily search for something and very easily collaborate with others, whether it's asking questions, creating a change request, or creating a workflow process. All of these aspects are really important. With this kind of solution, all the actions that you've taken, and the responses, are in one place. It's no longer manual work. It reduces the complexity a lot, improves efficiency a lot, and time management is much easier. Everything is in a single place and everybody has an idea of what is happening, rather than one-on-one emails or somebody having an Excel sheet on their desktop.

The solution also affects the transparency and accuracy of data movement and data integration. If people are using Excel sheets, there is my version of the truth versus your version of the truth, and no single source of truth. There's no way an enterprise can benefit from that kind of situation. Bringing in standardization across the organization happens only through tools like metadata harvesters, data catalogs, business glossaries, and stewardship tools. This is what helps bring transparency.

The AIMatch feature, to automatically discover and suggest relationships and associations between business terms and physical metadata, is another very important aspect because automation is at the heart of today's technology. Everything is planned at scale. Enterprises have many data users, and the number of data users has increased tremendously in the last four or five years, along with the amount of data. Applications, data assets, databases, and integration technologies have all evolved a lot in the last few years. Going at scale is really important and automation is the only way to do so. You can't do it working manually.

erwin DI’s data cataloging, data literacy, and automation have reduced a lot of complexities by bringing all the assets together and making sense out of them. It has improved the collaboration between stakeholders a lot. Previously, IT and business were separate things. This has brought everybody together. IT and business understand the need for maintaining data and having ownership for that data. Becoming a data-literate organization, with proper mechanisms and processes and tools to manage the most valued assets, has definitely increased business in terms of revenues, customer service, and customer satisfaction. All these areas have improved a lot because there are owners and stewards from business as well as IT. There are processes and tools to support them. The solution has helped our clients a lot in terms of overall data management and driving value from data.

What is most valuable?

  • Metadata harvesting
  • Business glossaries and data catalogs

In an enterprise there will already have been a lot of investment in technology over the last one or two decades. It's not practical for an organization to scrap what they have built over that time and embrace new technology. It's important for us to ensure that whatever investments have been made can be used. erwin's metadata managers, metadata hooks, and its reverse engineering capabilities, ensure that the existing implementation and technology investments are not scrapped, while maximizing the leveraging of these tools. These are unique features which the competition is lacking, though many of them are catching up. erwin is one of the top providers in those areas. Customers are interested because it's not a scrap-and-rebuild, rather it's a build on to what they already have.

I would rate the solution’s integrated data catalog and data literacy, when it comes to mapping, profiling, and automated lineage analysis at eight out of 10. erwin has tremendous capabilities to map right from the business technologies to the endpoint, such as physical entities and physical attributes, from a lineage standpoint. Metadata harvesting is also an important aspect for automating the whole thing. And cataloging and business glossaries cannot work on their own. They need to go hand-in-glove when it comes to actual data analysis. You need to be able to search and find out what data resides where. It is a very well-stitched, integrated solution.

In terms of the Smart Data Connectors, automating metadata for reverse engineering or forward engineering is a great capability that erwin provides. Keeping technology investments intact is something which is very comforting for our clients and these capabilities help a client build on, rather than rebuild. That is one of the top reasons I go for erwin, compared to the competition.

What needs improvement?

I would like to see a lot more AI infusion into all the various areas of the solution. 

Another area where it can improve is by having graph-type databases where relationship discovery and relationship identification are much easier.

Overall, automation for associating business terms to data items, and having automatic relationship discovery, can be improved in the upcoming releases. But I'm sure that erwin is innovating a lot.

For how long have I used the solution?

We have been implementing erwin Data Intelligence for Data Governance since 2017-2018. We don't use it in our company, but we have to build capabilities in the tool as well as learn how best to implement the tool, service the tool, etc. We understand the full potential of the tool. We recommend the tool to our customers during RFPs. Then we help them use the product.

HCL Technologies is one of the top three IT service organizations in India, with around 150,000 employees. We have a practice specifically for data and analytics and within that, we cover data governance, data modeling, and data integration. I lead the data management practice, including the glossary, business lineage, and metadata integration. I have used all of that.

We are Alliance partners with erwin and have partnered with them for three or four years.

We serve many clients and we have a fortnightly catch-up with erwin Alliance people. We have implemented it in different ways for our customers.

What do I think about the stability of the solution?

It is stable. 

What do I think about the scalability of the solution?

It can scale to large numbers of people and processes. It can connect to multiple sources of data within an organization to harvest metadata. It can connect to multiple data assets to bring the metadata into the solution. From a performance standpoint, a scaling standpoint, we've not seen an issue.

How are customer service and support?

We are Alliance partners, so whenever we go to clients and there are specific instances where we lack thorough knowledge of the erwin tools, we touch base with erwin's product team. We have worked together to tweak the product or to give our clients a seamless experience. 

We have also had their Alliance team give our developer community sessions on erwin DI, usages, and PoCs. We've collaborated multiple times with erwin's product presales community.

How was the initial setup?

It's really straightforward. The tools are user-friendly, so a business user can get up to speed very quickly. It's easy to create terminologies and give definitions. Even for an IT person, you don't need to be an architect to really understand how data catalogs work or how mapping can be created between data elements. They are all UI-driven, so it's very easy to deploy or to create an overall data ecosystem.

The time it takes to deploy depends. Product deployment may not take a lot of time, between a couple of days and a week. I have not done it for an enterprise, but I'm assuming that it wouldn't be too much of a task to deploy erwin in an organization.

The important aspect is to bring in the data literacy and increase use throughout the organization to start seeing the benefit. People may not move from their comfort zone so easily. That would be the part that can take time. And that is where a partner like us, one that can bring change management into the organization and hand-hold the organization to start using this, can help them understand the benefits. It is not that the CEO or CTO of the organization must understand the benefits and decide to go for it, but all the people—senior management, mid-management, and below—should buy into the idea. They only buy into the idea if they see the benefit from it, and for that, they need to start using the product. That is what takes time.

Our deployment plan is similar across organizations, but building the catalog and building the glossaries would depend on the organization. Some organizations have a very strong top-down push and the strategy can be applied in a top-down approach. But in some cases, we may still need to get the buy-in. In those cases we would have to start small, with a bottom-up approach, and slowly encourage people to use it and scale it to the enterprise. From a tool-implementation standpoint, it might be all the same, but scaling the tool across the organization may need different strategies.

In our organization, there are 400 to 500 people, specifically on the data management side, who work for multiple clients of ours. They are developers, leads, and architects, at different levels. The developers and the leads look at the deployment and actual business glossary and data catalog creation using the tool for metadata harvesting, forward engineering, and reverse engineering. The architects generally connect with the business and IT stakeholders to help them understand how to go about things. They create business glossaries and business processes on paper and those are used as the design for the data leads who then use the tool to create them.

What was our ROI?

We struggle when it comes to ROI because data governance and data management are parts of an enterprise strategy, as opposed to a specific, pinpointed problem. An organization might be able to use the overall data management strategy for multiple things, whether it's customer satisfaction, customer churn, targeted marketing, or improving the bottom line. When we clean the data and bring some method to the madness, it creates a base and, from there, an organization can really start reaping the benefits.

They can apply analytics to the clean data and have the right ownership of the data. The overall process is important as it is the base for an organization to start asking: "Now that I have the right data and it is quality compliant, what can I deduce from the data?" There may not be a dollar value to that straight away, but if you really want to bring in dollar value from your data, you need to have the base set properly. Otherwise it is garbage in, garbage out. Organizations understand that, even though there is no specific increase in sales or bottom-line improvement. Even if that dollar value is not apparent to the customer, they understand that this process is important for them to get to that stage. That is where the return on investment comes in.

What's my experience with pricing, setup cost, and licensing?

The solution is aggressively priced; with it, we can compete with most of the competition.

It is up to erwin and its pricing strategy, but if the Smart Connectors—at least a few of them which are really important—can be embedded into the product, that would be great. 

But overall, I feel the pricing is correct right now.

Which other solutions did I evaluate?

There are a number of competitors, including Informatica, IBM, Collibra, and Alation; multiple organizations offer similar features. But erwin has an edge on metadata harvesting.

What other advice do I have?

It is a different experience. Collaboration and communication are very important when you want to harvest the value from the humongous amount of data that you have in your organization. All these aspects are soft aspects, but are very important when it comes to getting value from data.

Data pipelines are really important because of the kinds of data that are spread across different formats and at differing granularity. You need to have a pipeline that removes all the complexities and connects many types of sources, to bring data into any type of target. Irrespective of the kind of technology you use, your data platform should be adaptive enough to bring data in from any type of source, at any interval, in real time. It should handle any volume of data, structured and unstructured. That kind of pipeline is very important for any analysis, because you need to bring in data from all types of sources. Only then can you do a proper analysis of the data. A data pipeline is the heart of the analysis.

Overall, erwin DI is not so costly and it brings a lot of unique features, like metadata hooks and metadata harvesters, along with the business glossaries, business to business mapping, and technology mapping. The product has so many nice features. For an organization that wants to realize value from the potential of its data, it is best to go with erwin and start the journey.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Other
Disclosure: My company has a business relationship with this vendor other than being a customer: Alliance Partner
Analytics Delivery Manager at DXC
Real User
Value is in the accuracy, quality, and completeness of the migration source-to-target mapping, and acceleration of development through code automation.
Pros and Cons
  • "We use the codeset mapping quite a bit to match value pairs to use within the conversion as well. Those value pair mappings come in quite handy and are utilized quite extensively. They then feed into the automation of the source data extraction, like the source data mapping of the source data extraction, the code development, forward engineering using the ODI connector for the forward automation."
  • "One big improvement we would like to see would be the workflow integration of codeset mapping with the erwin source to target mapping. That's a bit clunky for us. The two often seem to be in conflict with one another. Codeset mappings that are used within the source to target mappings are difficult to manage because they get locked."

What is our primary use case?

We use DI for Data Governance as part of a large system migration supporting an application refresh and multi-site consolidation. Metadata Manager is utilized to harvest metadata, which is augmented with custom metadata properties identifying rules criteria that drive automated source-to-target mapping. A custom-built code generation connector then automates the forward engineering of code generation in Groovy. We've developed a small number of connectors supporting this 1:1 data migration. It's a really good product that we've been able to make very good use of.

How has it helped my organization?

This use case is a one-time system conversion solution with no life after the migration. The value is in the acceleration, accuracy, quality, and completeness of the migration source-to-target mapping and the generated data management code.

The use case action is the extraction and staging of the source application data, targeting ~700 large objects from the overall application set of ~2,400 relational tables. Each table extract has light join and selection criteria which are injected into the source metadata. The application itself is moving to a next-generation application that performs the same business function. Our client is in health and human services welfare administration in the United States. This use case doesn't have ongoing data governance for our client, at least at this point.
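
For illustration, here is a minimal sketch, in Python, of the kind of automation described above: harvested table metadata, augmented with injected selection criteria, drives generated extract code. The table names, properties, and output template are invented for illustration; they are not erwin's actual metadata model or connector output.

    # Illustrative only: generate staging extracts from harvested metadata
    # that has been augmented with custom properties (selection criteria).
    tables = [
        {"name": "CLAIM_HEADER",
         "columns": ["CLAIM_ID", "MEMBER_ID", "SERVICE_DATE"],
         "criteria": "SERVICE_DATE >= DATE '2015-01-01'"},
        {"name": "CLAIM_LINE",
         "columns": ["CLAIM_ID", "LINE_NO", "PROC_CODE"],
         "criteria": None},
    ]

    def extract_sql(table: dict) -> str:
        """Build a simple staging extract statement for one table."""
        cols = ", ".join(table["columns"])
        sql = f"SELECT {cols} FROM {table['name']}"
        if table["criteria"]:
            sql += f" WHERE {table['criteria']}"
        return sql

    for t in tables:
        print(extract_sql(t))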

erwin DIS has enabled us to automate critical areas of data management infrastructure. That's where we see the benefit: in the acceleration of speed, as well as improved quality and reduced costs.

erwin DIS's generation of data management code through automated code engineering has reduced the time it takes to go from initial concept to implementation for the work we have in progress right now. There has not been a production delivery as of yet; that's still another year and a half out. This is a multi-year project within which this use case is applied.

erwin has affected the transparency and accuracy of data movement and data integration quite a bit through the various report facilities. We can make self-service reporting available through the business user's portal. erwin DIS has provided the framework and the capability to be transparent, to have stakeholder involvement with the exercise the whole way along.

Through the business user's portal and workflows, we're able to provide effective stakeholder reviews, as well as stakeholder access to all of the information and knowledge that's collected. The facility itself provides quite a few capabilities, including user-defined parameters to capture data knowledge and organizational change information, which project stakeholders can use and apply throughout the program. The client and stakeholders utilize the business user's portal for extended visibility, which is a big benefit.

We're interested in the AIMatch feature. It's something that we had worked with AnalytiX DS early on to actually develop some of the ideas for. We were somewhat instrumental in bringing some of that technology in, but in this particular case, we're not using it. 

What is most valuable?

The most valuable features include: 

  • The mapping facilities
  • All of the mapping controls workflow
  • The metadata injection and custom metadata properties for quality of mappings
  • The various mapping tools and reports that are available
  • Gap analysis
  • Model gap analysis
  • Codesets and codeset value mapping 

We use the codeset mapping quite a bit to match value pairs for use within the conversion as well. Those value-pair mappings come in quite handy and are utilized quite extensively. They then feed into the automation of the source data extraction and mapping, the code development, and forward engineering using the ODI connector for the forward automation.
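
As a rough, hypothetical illustration of what codeset value-pair mapping does during a conversion (the codes and values below are invented, not from the actual project):

    # Illustrative only: a codeset maps source code values to target values,
    # and the conversion applies it as a lookup with a default for gaps.
    GENDER_CODESET = {"M": "MALE", "F": "FEMALE", "U": "UNKNOWN"}

    def map_code(codeset: dict, source_value: str, default: str = "UNMAPPED") -> str:
        """Translate one source code value to its target value."""
        return codeset.get(source_value, default)

    assert map_code(GENDER_CODESET, "M") == "MALE"
    assert map_code(GENDER_CODESET, "X") == "UNMAPPED"  # flagged for review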

The Smart Data Connectors, used to reverse engineer and forward engineer code from a BI, ETL, or data management platform, are where we're gaining the most value. The capability is such that it's only limited by one's imagination and ability to come up with innovative ideas; we have been able to apply some form of automation to every idea we've come up with. That's been quite good.

    What needs improvement?

    The UI just got a real big uplift, but behind the UI there are quite a few different integrations that go on.

    One big improvement we would like to see would be the workflow integration of codeset mapping with the erwin source-to-target mapping. That's a bit clunky for us. The two often seem to be in conflict with one another. Codeset mappings that are used within the source-to-target mappings are difficult to manage because they get locked.

    Some areas, such as metadata scans and some of the management functions at large scale, take time to process. That's an observation we've worked through with erwin support to a degree, but it seems that's just an inherent part of the scale of our particular project.

    For how long have I used the solution?

    We're in our second year of using DI for Data Governance.

    What do I think about the scalability of the solution?

    erwin's latest general release has addressed the performance of metadata sources having greater than 2,000 objects. Our use case has three metadata sources, each with ~2,400 relational objects. DIS provides good capability to organize projects and subject areas with multiple sublayers. All mappings have been set to synchronize with scanned metadata. Our solution has built close to 2,000 mappings and over 20K mapped code value pairs. So far so good; scanning and synchronizing metadata and reporting on enterprise gaps take some time to process, but not unreasonably long considering the work performed.

    How are customer service and technical support?

    erwin support is pretty good. We've had our struggles there and I've gone through a lot of tickets. I'd rate them an eight out of ten.

    There have been a couple of product enhancement requests, one of which I've not been able to get traction on, regarding codeset management and workflows. There's some follow-up that I have to do there; it doesn't seem to be a priority. It usually seems we have to have a couple of different discussions, or a deep dive, to reach an understanding of the problem and a resolution. Sometimes that takes a little bit longer than I would like, but all in all, it's pretty good.

    What about the implementation team?

    We had erwin involved in the implementation. 

    I don't think that it can be stood up quickly with minimal professional services. There's quite a bit of involvement. The integration of the solution into an environment's ecosystem has challenges that take some effort, especially if you're building new connectors. There's a good bit of effort in designing, preparing, planning, and building. It's pretty heavy as far as integration effort goes.

    What was our ROI?

    The client is thrilled with the higher-quality, lower-cost product and the services.

    What's my experience with pricing, setup cost, and licensing?

    The financial model will be different. There is the cost of this software, but there are offsetting accelerations through the automation, as well as cost and efficiency gains. Don't be afraid of automation, and don't get hung up on losing revenue due to automation. What I've seen is that some financial managers resist automation that results in a reduction of labor revenue. Those reductions are ideally overcome through additional engagements, improved customer satisfaction, quality, and add-on support. Whatever the case, automation is a good thing.

    The fact that this solution can be hosted in the cloud does not affect the total cost of ownership. The licensing cost is the same whether I use the cloud or on-prem. It may be due to the partner agreements, but we do get some discounts, and there's some negotiated pricing already in place between our companies. I didn't see that there was a difference in a cloud license versus on-premises.

    What other advice do I have?

    We haven't integrated Data Catalog and Data Literacy yet. Our client is a little bit behind on being able to utilize these aspects that we've presented for additional value. 

    My advice would be to partner with an integrator. erwin has quite a few of them. If you're going to jump into this in earnest, you're going to need to have that experience and support.

    The biggest lesson I have learned is that the only limitation is the imagination. Anything is possible. There's quite a strong capability with this product. I've seen what you can come up with as far as innovative flows, processes, automation, etc. It's got quite strong capabilities. 

    The next lesson would be in regards to how automation fits within a company's framework and to embrace automation. There are some good quality points to continue with, certainly within the data cataloging, data governance, and so forth. There's quite a bit of good capability there. 

    I rate erwin Data Intelligence for Data Governance a nine out of ten. 

    Which deployment model are you using for this solution?

    Private Cloud

    If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

    Amazon Web Services (AWS)
    Disclosure: My company does not have a business relationship with this vendor other than being a customer.
    Tracy Hautenen Kriel, Architecture Sr. Manager, Data Design & Metadata Mgmt at an insurance company with 10,001+ employees
    Real User

    Thanks for the great review! How do you find the interaction between the cloud instance of DIS obtaining metadata from on-prem DBMS solutions?

    Data Architect at NAMM California
    Real User
    Enabled us to centralize a tremendous amount of data into a common standard, and uniform reporting has decreased report requests
    Pros and Cons
    • "The biggest benefit with erwin DI is that I have a single source of truth that I can send anybody to. If anybody doesn't know the answer we can go back to it. Just having a central location of business rules is good."
    • "The solution gives us data lineage which means we can see an impact if we make a change. The ability for us to have that in this company is brilliant because we used to have 49 data stewards from some 23 different groups within six major departments. Each one of those was a silo unto itself. The ability to have different glossaries — but all pointed to the same key terms, key concepts, or key attributes — has made life really simple."
    • "The fact that I sometimes have to go in and out of different applications, even though it's all part of the whole erwin suite, perhaps means it could be architected a little bit better. I think they do have some ideas for improvements there."
    • "There was a huge learning curve, and I'd been in software development for most of my career. The application itself, and how it runs menus and screens when you can modify and code, is complex. I have found that kind of cumbersome."

    What is our primary use case?

    We're a medical company and we have our own source systems that process claims from multiple organizations or health plans. In our world, there are about 17 different health plans. Within each of those health plans, the membership, or the patients, have multiple lines of business, and the way our company is organized, we're in three different markets with up to 17 different IPAs (Independent Physician Associations).

    While that is a mouthful, because of data governance, and having our own data governance tool, we understand those are key concepts, and that is our use case: so that everybody in our organization knows what we are talking about. Whether it is an institutional claim, a professional claim, Blue Cross or Blue Shield, health plan payer, group titles, names, etc., our case represents 18 different titles. For us, there was a massive number of concepts and we didn't have any centralized data dictionary for our data. Our company had grown over the course of 20 years. We went from one IPA and one health plan to where we are today: in five markets, doing three major lines of business, etc.

    The medical industry in general is about 20 years behind, technology-wise, in most cases; there are a lot of manual processes. Our test use case was to start fresh after 20 years of experience and evolution and just start over. I was given the opportunity to build a data strategy, a three-year plan where we build a repository of all source-of-truth data used in governance. We have our mapping, our design, our data lineage, principles, business rules, and data stewardship program. Three years later, here we are.

    How has it helped my organization?

    erwin DI needs the Data Modeler, obviously, to be able to harvest the data directly from an existing database, or even a brand new one as you're designing it. That is a huge step in the right direction, although erwin has been known for that for 30 years. But the ability to take that model and interface it directly to the data governance makes it an easy update. It makes it simple for me to move from a development/design stage, for each environment, and into production, and to update the documentation using the data harvester and the Metadata Management tool and data cataloging module. That really brings it all together.

    If I were to note any downside, it's that there are multiple modules and you can't have one without the other if you want to be world-class. But when you have them all, it makes life really easy for something like data profiling of an existing database to know if you want to keep it or not, given that there are so many legacy changes all the way through. The way we do it, when we make a change to a database or we add a database, the model is mapped, we import it, and then we have the data stewards populate any of our descriptions in their glossaries. The tool allows us to see all that instantly, unlike before.

    I mentioned we have a data steward program, which is not part of the tool. While the solution has ways of raising issues and requesting data access within it, we're still stumbling with that. Sometimes it's just easier to talk to people. But we find that getting requests, getting data, and updating it is actually a much easier process now.

    In addition, the fact that I can always refer back to a centralized location with executive approval has helped me. 

    For our business analysts and data analysts, especially for some of the wannabes and the data steward program, we have been able to centralize a tremendous amount of data into a common standard. One of our mandates was to have a Tableau-type business-intelligence component. We went live with our entire enterprise data warehouse, all the tools, in January of 2019, even though we started in 2016. We spent most of the year in massive amounts of discovery just around our organization's members. We didn't even get to claims or provider-contracting because they are so complex. The tool itself has expedited our getting to brand-new levels we've never seen with our members, because now things are becoming standardized.

    People can refer to an inventory of reports and they can see that we don't have the same report in 20 different places, with 20 people supporting them. Now, there is one report in Tableau with one dataset. That dataset has become a centralized dictionary/glossary/terminology inside the tool. Anybody who needs to get access to our data can access it.

    It enables efficiency. Just in our marketing department alone, the number of new ways they have to think about our membership and growth has completely changed. They have access to data to make decisions.

    Executives can now look at what we call a scorecard of our PCPs because we now have standardized sales. Everybody knows what they mean and how they are calculated.

    Very high-end statistics and calculations are now easily designed. Anybody can go look at them; they know where to go. And if they want something because it helps their business grow, it's almost a 24-hour turnaround, as opposed to a four-week SDLC process. It has expedited our process. The goal was to build a foundation and then, for the next couple of years, to really expand it. We hit that, and I don't think we could have done it without these tools.

    Recently, we had to bring on a brand-new entity, a brand-new medical group. One of the minimum requirements was that we had to take 10 years of historical data from whatever system they had and convert it, transform it, map it, and load it into our existing source of truth. We did this about four years ago for an entity, and it took us almost nine months just to get a dataset that somebody could use. This last time, it took us three weeks from start to finish because, outside of the governance tool, we have erwin's Mapping Manager and harvester. It also allows us to do source-to-target, so we have all our source-to-target mapping to our own repository, and then we have all our targets to the EDW already mapped. Our goal was to bring in a 100 percent source of truth. We had a complete audit, from when it came in from outside the building, to a location in the building. Then we would transform it into our EDW into whatever attributes, facts, or dimensions we wanted. The tool allowed us to do that almost in hours, compared to what used to take months.

    Another thing with their DI, not necessarily governance, but some of their other tools — which, of course, all feed back there — is that as soon as we do it, it's available to anybody. Not that a lot of people look at it, because a lot of times they just come and ask us, but the difference is that we're giving them the right answers within minutes. We don't have to tell them, "Well, let me go back and search it for six days."

    We have downstream departments, like our risk department, which manages our Medicare patients and makes sure that we are taking care of them, a very data-intensive process. Our ability to bring in historical data from an old system, a different type of computer system, and convert it to make it look just like ours, no matter what it looked like before, is all because we have a data governance program. People can look at the changes from before and after and determine if they need certain data.

    A year ago, if somebody in our company's "left hand" brought in new data, no one but that left hand would know about it. Today, if somebody brings in data, all my data stewards know about it and they can choose to subscribe to it or not, today or later. And that is a matter of a flip of a switch for them, once we have brought it in and published it to anybody in the company. That's really important, for example, from the point of view of a human being. If someone has been around for 20 years it would be nice if we had all their records. Because of our data governance and what we built, all those records are maintained and associated with that person, and that's huge from a medical point of view. Data governance is helping us become even a better company because we know our data and how to use it.

    The fact that erwin DI for Data Governance has affected our speed of analysis is a given. The DBAs are starting to use it more and even some of our executives are wanting to get to it for the data dictionary. It can happen that somebody from one of our departments sends them something and it doesn't make sense to them. Our goal was that if that happened we would try to find out and try to centralize it. We ended up creating our own dashboard reports on our Tableau server and published them to the same parties, so we could get rid of old habits and focus on new ones that have now been validated and verified, with the rules checked. 

    The data governance allows us a real-time inventory. Every time there's a new request or a new ask, we put it in there and we track it and we make sure that our attributes are the same. If they're not, we have an explanation with a description for the different contexts in which the data is being used.

    In addition, part of our ingest of an ask is that we take a first look at it and we provide as-is documentation so that the functional design can be tracked. That's a huge advantage. That has saved huge amounts of time in our development cycle, either for data exchange or interfacing, or even application development. The ability to just pull up the database, to be able to look at the fields and know what's important and what isn't important, note the definitions — we're able to support that kind of functionality. I'm one of the data architects here, and we work with everybody to make sure that our features and our epics are managed properly. For me to be able to quickly assess something, within a few minutes, to be able to say, "Here's the impact, here's what we have to do," and then hand it off to the full-blown design teams; that saves a month, easily. And that's especially true when there are 10 or 15 requests a week.

    As for how the solution’s data cataloging, data literacy, and automation have affected the data used by decision-makers in our organization, on a scale of one to 10, I would give it a seven. It depends on which stakeholder or executive we're talking about. But has it had an impact? Every one of them has brand new reports, reports that didn't exist a year ago. Every one of them now sees data in a standardized format. The data governance tool might not have a direct impact on that, but it has an indirect impact due to the fact that we now govern our data. We treat data as an asset because of the tool. It's not cheap, it's an expensive tool. But my project has a monthly executive steering committee and, for 36 months, they never had a question and never second-guessed anything we did, and they loved any and all tools. So being able to sit with them and say, "Hey, we had an issue," and immediately give them a visual diagram — show what happened with the databases and what somebody may have misinterpreted — is huge; just huge. For everyone from our chief operating officer to our network operations, physicians' contracting, our medical management group, and our quality improvement group, it definitely has impacted the company.

    We've only taken it out to about 50 percent of what it can do. There's so much it can do that we still don't do, because we ourselves are maturing into the program. It really has helped when it comes to harvesting or data profiling. For those processes, it's beautiful — hands-down the best so far. I love the data profiling.

    What is most valuable?

    For me, the biggest benefit with erwin DI is that I have a single source of truth that I can send anybody to. If anybody doesn't know the answer we can go back to it.

    Just having a central location of business rules is good. That has come up a lot.

    The solution gives us data lineage which means we can see an impact if we make a change. The ability for us to have that in this company is brilliant because we used to have 49 data stewards from some 23 different groups within six major departments. Each one of those was a silo unto itself. The ability to have different glossaries — but all pointed to the same key terms, key concepts, or key attributes — has made life really simple.

    It has given us better adaptability in our EDW and a more standardized way of looking at data, as opposed to all of the different formats.

    Our experience with the solution's Smart Data Connectors is still limited, but so far we're impressed. For us, it's mostly been reverse-engineering, but because we got to start this whole project from scratch, it was really about forward-engineering. One of the advantages is that we wanted to go back through the entire enterprise and start mapping everything, legacy or future. Ultimately, the future is to move to the cloud, rearrange all our data, reprioritize the most important attributes, and not have multiple replications of data across our many silos. So the Smart Data Connectors to engineer code have been spot-on. Harvesting and reverse-engineering allow us to test and verify some of the things that are odd.

    It's got a proprietary format that works really well within all the systems and I can export to any format, including my data profiling, my mapping integrator, and any and all of my governance stuff, whether it's within the business rules, the policies, the dictionaries, or the glossaries. I can export that into a CSV or Visio depending on what it is.

    What needs improvement?

    There is room for improvement in automation, no question. 

    Also, the fact that I sometimes have to go in and out of different applications, even though it's all part of the whole erwin suite, perhaps means it could be architected a little bit better. I think they do have some ideas for improvements there. 

    But regarding the data governance tool itself, for me there was a huge learning curve, and I'd been in software development for most of my career. The application itself, and how it runs menus and screens where you can modify and code, is complex. I have found that kind of cumbersome. I had one guy make an error and it cost us a few days, because it had an impact on a whole slew of options and objects; he didn't know what he was doing. That was not their fault; it was purely my fault for allowing that to happen. For me, that was a struggle.

    For how long have I used the solution?

    We started about three years ago when we started our data warehouse project.

    What do I think about the stability of the solution?

    We haven't had any problems with anything. I haven't had to worry about updates. If anything is done in terms of updates, it has no impact, to my knowledge. Every day, one of the three of us is on it, and we haven't had any problems.

    What do I think about the scalability of the solution?

    In terms of its scalability, I'm a really simple person: two plus two equals four. If I know that I can always expand, that it's basically a configuration change, and that I don't feel painted into a corner, then I feel I'm covered for scalability.

    I feel that way about everything with the Data Governance tool. If I need to grow it, it's just a few more licenses. I have to pay a little extra, but I can get another hundred if I need that many. So I have no worries about being able to add users. I'm also not worried about size and space on the system.

    So far, I have no limitations in terms of scalability. I'm good with it, but I can't say that I have pushed it to the limit yet. We have 500 people here and I have a hundred licenses. Of those, about 85 are currently active, between our users in the building and our IT department.

    It's used extensively by the people who need to use it. I don't think the end-users are using it as much as the data analysts, the business analysts, or the data stewards are. But that was the goal. The data stewards are the ones who should use it. The data stewards are the ones who are managing it. They're the ones who get the requests from the users within their department or their group.

    As far as using it more extensively, it's just a matter of if we grow as a company and we need more data stewards, but I think we're in a good place. Everything feeds accurately. As far as visibility and reading dictionaries, that's something that we would want to do more of. As users adopt it, as departments go from a mom-and-pop mentality to "corporate America," I would say our goal over this next year is to double our growth.

    Self-service is one of our goals: get people to know how to use it themselves. People here ask IT for a particular report and that report goes to a bunch of different people. The same report is used for different reasons in different groups but no one knows it. With erwin, the goal would be that if they want a report they would be able to go in there, see the BI reports which are inside the Data Governance, determine if it's what they want, and make a request; and that day, they would have it, filtered to their needs. If we had to create a brand-new one, they would know the elements and would put the request into the hopper and we could turn that around. Even today, in some cases, because we still have BusinessObjects and we have SSRS, with some of that stuff it can still take up to six weeks to turn a report around. If they use the new system and they use the data, I can put a Tableau report on their desktop within 48 hours.

    How are customer service and technical support?

    Their helpdesk ticketing itself is pretty great. The team of people, whether it's Tammy or Susan or a few others that my associate has worked with, is very good, no matter what the tool is. It's very comparable to our own helpdesk. You can get stuff done and, in some cases, there are almost too many emails. That's just how good they are.

    Which solution did I use previously and why did I switch?

    We had Excel and Word documents, but nothing else. 

    When we started our data warehouse project we were looking for a data governance tool. That led us to buy their data governance 1.0. They've upgraded to the 2.0 and we're working with that now. We then ended up buying the modeling software, so we got server licenses for some copies for our modeling, which is the foundation. Then we purchased their Mapping Manager harvester, their data integrator, their metadata package, and we've just recently even purchased their architectural software, so we're working with their entire stack.

    How was the initial setup?

    The initial setup of the software was a combination of straightforward and complex. For me, it was complex, as I'd never seen it and there were a lot of components to their instructions. But we ended up doing it pretty much ourselves, with a few interactions with some of their technical people on the installs. The cloud pieces were, obviously, easy, but making sure the Data Modeler was there and dealing with the architectural software within our organization wasn't as easy.

    But erwin DI for Data Governance, itself, was pretty straightforward. Adding users required a little learning curve but no one taught me. It was trial and error more than anything. I'm sure we could have had great training from them, but they have good videos and good tools on the web. For us, there was probably less than a 5 percent interface with erwin for our installs, configuration setup, and even configuring the databases to house all the data.

    Being that it's a cloud solution, in the beginning, the firewalls became an issue but we got those resolved right away. There really wasn't anything bad. We self-taught; we didn't have a whole lot of Professional Services on this. It's intuitive enough that we run our entire data strategy on it. With a group of three people, we support another 50 developers, systems analysts, and data analysts outside of my group.

    We didn't really have an implementation strategy. We didn't know what we were doing three years ago. We just knew that we were tasked to create a data warehouse, and I'd never done one. We tried to do it the right way. If it was something that I'd been doing for 30 years, there would have been more of a strategy for how I would do it.

    We had a consultant come in in 2016 and create what they called an IT map of applications and a data strategy plan. We called it the IMap. They laid out the groundwork and the framework within which to build this whole thing. One of the areas was data governance. It just so happened that, while I was doing research, erwin's Data Governance tool came up. I knew erwin's reputation, so I put in a request. Around Christmas of 2016, it got approved. We didn't expect it because the year was out. That became the first thing we started with. From there, it just kept building with all the other modules. It just kept growing with us.

    The advantage erwin had with us is that we were new at it. So even if there was something wrong, we wouldn't have known the difference. But I knew what was right in terms of what the data means to the company and how we run our business. For me, it just fit perfectly. It just kept falling into place. Each time we need to get more money for another license or a different module, I could integrate it pretty simply because it fit the narrative. Maybe that's the best way: the technology just simply fit, purely by accident. I'd love to tell you that I'm such a genius and that I planned this all out, but I didn't. It did help that erwin purchased some companies and ran right along with what we were trying to do.

    Because of the erwin Data Governance software, and trying to figure out and follow its MO as far as key concepts, key terms, and attributes were concerned, we took the 27 most important people in our organization, from the president on down, and sat them in a room for three hours so they could define the term "member." Who was a "patient," a "member," or a "consultant" meant a lot. It changed the direction of the company. They didn't even know what data governance was three years ago. Now everybody talks about it.

    What about the implementation team?

    We worked directly with erwin. If we had any questions, we'd go through their helpdesk. Sometimes we'd have conference calls. A lot of times, they seemed to go above and beyond to help us, especially when it came to the database configurations. We ran into a few things with the Mapping Manager and the harvester, but their team was great. They worked with our DBAs with no questions asked and no hesitation.

    What was our ROI?

    Our labor costs are half what they would have been. And then there was the previous lack of quality and lack of productivity. It's had a huge impact on all of those things. That's why the money is more than recouped.

    My three-year plan was to recoup the $3,000,000 we spent in the previous three years, and this year we have already recouped $2,400,000 of it. We have two more years to recoup to break even. I don't know if that's directly related to just the Data Governance. It might be that because of governance it has allowed us to do all the other things. The ultimate goal was to get data into the hands of the decision-makers and have it as accurate as it could be, so they could make better decisions.

    We have improved our time for report servicing and our capabilities in turning things around quickly.

    One thing we missed in our estimates of cost savings was the reduction in the number of requests, or the time it would take to do a request. We created a standardized report, and it worked so well that we stopped getting requests altogether. We never thought they would stop sending new requests, so we saved even more. But that's because we knew the fields, we got it right the first time, and we created standards around governance that allowed us to really simplify our business. I have 29 standardized reports around memberships, providers, and claims, and they are used throughout the organization now. Those are reports we didn't have before.

    In terms of time-to-value, I wouldn't say standing it up was quick, because a day would be quick. But it was under a month. It could be set up pretty easily, especially once you understand all the components you need and all their modules. The erwin governance solution is on the cloud while the modeler is on-premise and the suites are also on the cloud. And the fact that it's cloud-based made it simple and straightforward. It was just "boom," we got our logins and we were fine, for the Data Governance software.

    What's my experience with pricing, setup cost, and licensing?

    The whole suite, not just the DI but the modeling software, the harvester, Mapping Manager — everything we have — is about $100,000 a year for our renewals. That works out to each module being something like $8,000 to $10,000.

    Which other solutions did I evaluate?

    Everybody was evaluating other options. I got in trouble because I picked erwin after I had looked at 20 other things out there. Based on price and the size of our company, and the fact that we were brand-new to this whole endeavor, I didn't want to spend a fortune on something like Informatica, and have a master data management system.

    Another big difference between Informatica and erwin is interfacing. Informatica is the top-of-the-line, upper-quadrant for MDM solutions, whereas erwin was more about just managing data and not necessarily manipulating it, moving it, interfacing with it, etc. That was the big difference. But erwin allowed us to get our footprint into it and really learn it. It was just the right solution at the right time.

    What other advice do I have?

    Our first goals were data literacy and data as an asset. Those were our two big, ultimate goals three years ago. Data literacy turned out to be 10 times more important than a data warehouse. We could look at existing data sets and, just by educating people, it gave them an advantage almost immediately. The fact that the data governance was able to put a framework around data literacy helped us focus on the right answer, even if it wasn't the first one given. In other words, sometimes we'd have the same answer three or four times, and it would shift until we nailed it. But without governance, we would never have done that and we would have stayed the same.

    The secret to the success of this project was that we had a vision and we stuck to it. Governance was important to us, no matter how other people might have thought about it. In my very first data steward meeting I was introducing everybody to these brand-new terms they'd never seen, and someone in our analytical group totally derailed the meeting. So be aware that it's not going to be easy, but have a vision. 

    And make sure that governance is important, or don't bother. It's not something that a lot of people see value in at all. When you say, "Oh, I want governance so we can have a data dictionary and you can go look at it," they'll say, "I don't want to look at it, just give me a report." But the ability, for those who need it, is huge. Have a vision and stick to it, and be willing to take a step back sometimes in order to go two forward.

    The neat thing is that we've pretty much done all of this with two to three people, for our entire organization. We do have three data teams that are using the Modeler for development — ETL SSIS stuff — but we have a pretty serious "wash, rinse, repeat" standard. If anything is in doubt, we just go back to the business rules and see what our rules are. What are our principles, and are we meeting them?

    As far as automating the changes through the environments, it has helped, but not a lot. It's not like it was a silver bullet. We need help there, because there's so much. There's the model, but once you promote that in different environments, sometimes you miss it because you only get three or four days to get out of QA to get it into stage.

    Obviously, you mitigate risks with automation. It does have an impact. As a company, we just haven't been able to take full advantage of it now, but that's our hope. We're only into it for about a year-and-a-half, even though we have run with the suite for almost three years. We're still immature. I wish I had everything at the push of one button, the "Easy" button. Some of it's over our heads. We could use some new training and we could use some additional support. erwin has been great with us, but it's also a matter of the appetite and the resources. The biggest issue is that I don't have a team of people doing what a team of people need to do to accomplish what we would like to. It's done by a small number of people on a consistent basis, and not full-time.

    The solution's generation of production code through automated code engineering would reduce the time it takes to go from initial concept to implementation, but we're a Microsoft shop and most of all that is done inside TFS or Visual Studio. That's how we manage all our codebase, including release management. That's all done separately and is automated. We're trying to create some interfaces between the two. We just haven't gotten there.

    In my three years using erwin, besides actually getting approval for the money to purchase the software, I don't think I've had a struggle with it. They've been great. When we first got on and we had some questions, they got me to the development team in England and set it all up with us without question, no extras. They just tried to make sure it worked.

    I would rate erwin DI for Data Governance at eight out of 10. I never give a 10 because I have yet to see perfection. It has some gaps, but I definitely think it's in the top third. As far as ratings go, I don't have a lot to compare it with. It's easy now, but it took going through a learning curve; that's the case with any software. Does it need to mature a little? Possibly. But that would be it. With their roadmap, they're buying companies, changing things, and doing things. I've been pleased.

    Disclosure: My company does not have a business relationship with this vendor other than being a customer.
    Sr. Manager, Data Governance at an insurance company with 501-1,000 employees
    Real User
    Lets me have a full library of physical data or logical data sets to publish out through the portal that the business can use for self-service
    Pros and Cons
    • "They have just the most marvelous reports called mind maps, where whatever you are focused on sits in the middle. They have this wonderful graphic spiderweb that spreads out from there where you can see this thing mapped to other logical bits or physical bits and who's the steward of it. It's very cool and available to your business teams through a portal."
    • "There are a lot of little things like moving between read screens and edit screens. Those little human interface type of programming pieces will need to mature a bit to make it easier to get to where you want to go to put the stuff in."

    What is our primary use case?

    We don't have all of the EDGE products. We are using the Data Intelligence Suite (DI). So, we don't have the enterprise architecture piece, but you can pick them up in a modular form as part of the EDGE Suite.

    The Data Intelligence Suite of the EDGE tool is very focused on asset management. You have a metadata manager that you can schedule to harvest all of your servers, cataloging information. It brings back the databases, tables, columns, and all of the information about them into a repository. It also has the ability to build ETL specs. With Mapping Manager, you then take your list of assets and connect them together as a source-to-target mapping, with transformation rules that you can set up as reusable pieces in a library, as in the sketch below.
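
    A minimal sketch of that idea in Python, assuming a hypothetical spec format rather than the tool's internal one:

        # Illustrative only: reusable transformation rules kept in a library,
        # referenced by name from individual source-to-target mappings.
        RULES = {
            "TRIM_UPPER": lambda v: v.strip().upper(),
            "DEFAULT_NA": lambda v: v if v else "N/A",
        }

        mappings = [
            {"source": "CUST.FIRST_NM", "target": "DW.CUSTOMER.FIRST_NAME", "rule": "TRIM_UPPER"},
            {"source": "CUST.REGION",   "target": "DW.CUSTOMER.REGION",     "rule": "DEFAULT_NA"},
        ]

        def apply_mapping(mapping: dict, value: str) -> tuple:
            """Apply the named rule and return (target column, transformed value)."""
            return mapping["target"], RULES[mapping["rule"]](value)

        print(apply_mapping(mappings[0], "  jane "))  # ('DW.CUSTOMER.FIRST_NAME', 'JANE')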

    The DBAs can use it for all different types of value-add from their side of the house. They have the ability to see particular aspects, such as RPII, and there are some neat reports which show that. They are able to manage who can look at these different pieces of information. That's the physical side of the house, and there is also what they call data literacy, which is the data glossary side of the house. This is more business-facing. You can create directories that they call catalogs, and inside of those, you can build logical naming conventions to put definitions on.

    It all connects together. You can map the business understanding in your glossary back to your physical so you can see it both ways. 

    How has it helped my organization?

    We have only had it a couple months. I am working with the DBAs to get what I would call a foundational installation of the data in. My company doesn't have a department called Data Governance, so I'm having to do some of this work during the cracks of my work day, but I'm expecting it to be well-received.

    What is most valuable?

    They have just the most marvelous reports, called mind maps, where whatever you are focused on sits in the middle. They have this wonderful graphic spiderweb that spreads out from there, where you can see this thing mapped to other logical bits or physical bits and who's the steward of it. It's very cool and available to your business teams through a portal.

    Right now, we're focusing on building a library. erwin DM doesn't have the ability to publish out easily for business use. The business has to buy a license to get into erwin DM. With erwin DI, I can have a full library of physical data there or logical data sets, publish it out through the portal, and then the business can do self-service. 

    We are also looking at building live legends at the bottom of our reports based on data glossary sets, using an API callback from the EDGE governance area in the Data Intelligence Suite back to BusinessObjects, Alteryx, or Power BI reports, so you can go back and forth easily. Then you can share out a single managed definition on a report that is connected to your enterprise definitions, so people can easily see what a column means, what the formula was, and where it came from.
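
    A purely hypothetical sketch of such a callback; the endpoint, parameters, and response fields are invented for illustration and are not erwin's actual API:

        # Hypothetical only: fetch a managed definition from a governance API
        # so a report legend can show it next to the column it describes.
        import requests

        def fetch_definition(base_url: str, term: str, token: str) -> str:
            """Return the enterprise definition for a glossary term (assumed API shape)."""
            resp = requests.get(
                f"{base_url}/glossary/terms",  # invented endpoint
                params={"name": term},
                headers={"Authorization": f"Bearer {token}"},
                timeout=10,
            )
            resp.raise_for_status()
            return resp.json()["definition"]  # assumed response field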

    It already has the concept of multi-language support, which I find really important for global teams.

    What needs improvement?

    It does have some customization, but it is not quite as robust as erwin DM's. Not everything can have as many user-defined properties or customized pieces as I might like.

    There are a lot of little things like moving between read screens and edit screens. Those little human interface type of programming pieces will need to mature a bit to make it easier to get to where you want to go to put the stuff in.

    For how long have I used the solution?

    We have only had erwin DI for a couple months. We brought it in at the very end of last year.

    What do I think about the stability of the solution?

    So far, I haven't had any problems with it whatsoever, though I'm not working on it all day every day. It seems to be just as stable as erwin DM. I used this tool when it was still independent and called Mapping Manager, before it became part of the erwin suite. It's lovely to see it maturing to connect all the dots.

    Four people maintain the solution. The DBAs are going in to harvest the metadata out of the physical side of the house. Then, I'm working with the data architects to put in the business glossaries.

    What do I think about the scalability of the solution?

    It is a database. All of the data is kept outside of the client, so scalability comes down to how you set up your server.

    We have five development licenses and 100 seats for the portal. Other than those of us who are logging in to put data in, nobody much is using it. However, you have to start some place.

    Right now, the DBAs, data architects, and I are its users.

    I'm expecting the solution to expand because the other cool thing that this Data Intelligence Suite has is a lot of bulk uploads. I can create an Excel template, send it to the business to get definitions, and then bulk upload all their definitions. So, we don't need a lot of developer licenses. It becomes a very nice process flow between the two of us. They don't have to log in and do things one by one. They just do it in a set, and then I load things up for them. I have also loaded up industry-standard definitions and dictionaries, making it all easy to deal with.
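
    A minimal sketch of that round-trip using generic Python tooling; the column names are invented, and the real bulk-upload template would come from the tool itself:

        # Illustrative only: build an Excel template for the business to fill in,
        # then read the completed file back for a bulk upload.
        import pandas as pd

        terms = ["Member", "Claim", "Provider"]
        template = pd.DataFrame({"Term": terms, "Definition": "", "Steward": ""})
        template.to_excel("glossary_template.xlsx", index=False)

        # ...after the business returns the completed spreadsheet:
        completed = pd.read_excel("glossary_template.xlsx")
        ready = completed[completed["Definition"].notna()]  # rows ready to load
        print(ready)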

    How are customer service and technical support?

    I haven't interfaced with anybody who is just an EDGE team member. I will say the sales and the installation teams that we worked with were both fabulous.

    Which solution did I use previously and why did I switch?

    We did not previously use another solution. erwin didn't have a formal business glossary.

    How was the initial setup?

    The initial setup seemed to be very straightforward. I don't do the installations, but the DBAs seem to find it pretty easy. They got the installation instructions from the erwin team, followed them, and the next day, it was up and running.

    We're just following the same implementation strategy that we used with erwin DM. We didn't set up the lower tiers because I didn't see that we need them except for upgrades. We just set up a lower tier when we do an upgrade and push to production, and then we drop it. Other than having to train people on how to use it, implementation has been pretty easy.

    What was our ROI?

    ROI is a bit hard to come at. There is peace of mind in knowing that we now have visibility into the business, and that I'm instantly pushing all the data definitions out to the business, even though, culturally, I haven't yet changed everything so that they are looking at it on a daily basis. That is still hard to put a price tag on. I know I'm doing my piece of the job. Now, I have to help them understand that it's there and build a more robust data set for them.

    What's my experience with pricing, setup cost, and licensing?

    You buy a seat license for your portal. We have 100 seats for the portal, then you buy just the development licenses for the people who are going to put the data in.

    Which other solutions did I evaluate?

    We did evaluate other options. Even though erwin DI got a few extra points in the evaluation for coordinating with the erwin DM tool, we looked at other tools: Alteryx Connect, Collibra, DATUM, and Alation.

    We did a whole pile of comparisons:

    • Some of them were a bit more technical. 
    • Some of them were integration points.
    • Customization.
    • The ability to schedule data harvests, because the less you have to do manually, the better.
    • The ability to build your data lineages, then the simplicity of being able to look at those sorts of things to do searches. 

    There were different things along those lines that showed up in the comparison.

    erwin DI checked all the boxes for us. There are some things that they will grow into over time, but it had all of the basics for us.

    Collibra scored a little higher on being able to integrate with SAP Financials. In fact, other products scored a bit higher with the SAP integration altogether, because with erwin DI, you need to buy a connection to do some of that.

    For the connection with some of our scheduler tools, Alation was able to integrate with our UC4 scheduler. Right now, the EDGE tools don't.

    For the most part, the functionalities were exactly the same. On being able to do bulk uploads with high performance, for example, Alteryx, Collibra, and erwin Data Intelligence Suite tied on a lot of things. However, erwin's pricing was cheaper than its competitors'.

    What other advice do I have?

    If you have the ability to pull a steering committee together to talk about how your data asset metadata needs to be used in different processes, or how you can connect it into mission-critical business processes so that you slowly change the culture and erwin DI becomes just part of the processes, that would probably be a smoother transition than what I am trying to do. I'm sitting in an office by myself trying to push it out. If I had a steering committee to help market it or move it into different processes, this would be easier.

    Along the same lines as setting up an erwin Workgroup environment, you need to be thoughtful about how you are going to name things. You can set up catalogs and collection points for all your physical data, for instance. We had to think about the fact that if we did it by server, then every time we moved a server, we'd have to change everything. You have to be a little careful and thoughtful about how you want to do the collections, because you don't want the collection names to change every time you're changing something physically.

    What I did is set up a more logical collection, crossing all the servers, with the following going into different catalogs:

    • The analytics reporting data sets
    • The business-purchased applications
    • External data sets
    • The custom applications

    I'm collecting the physical metadata, and they can change and update it. However, the structure of how I am keeping the data available for people searching for it is more logically focused.

    You can update it. However, once people get used to looking in a library using the Dewey Decimal System, they won't understand it if all of a sudden you reorganize by author name. So, you have to think a bit down the road as to what is going to be stable into the future. The more people become accustomed to it being organized a certain way, the less they're going to understand if all of a sudden you pull the rug out from under them.

    I'm going to give the solution an eight (out of 10) because I'm really happy with what I've been able to do so far. 

    The more that the community uses this tool, the more feedback they will get, and the better it will become.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    James M. Dey, Works at Mintel
    Real User

    Actually getting metadata out of erwin DM is pretty easy. DM comes with a SQL Query Tool - https://erwin.com/bookshelf/pu... - which allows you to query any object in the erwin metadata model. It also has an ODBC data source, so pretty much any coding language can connect via ODBC, issue a metadata SQL query, and get the metadata back as a result set. From there you can obviously do anything, e.g., create a data dictionary in Excel.
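
    For instance, a minimal sketch of that ODBC approach in Python; the DSN name and the metadata query are placeholders, and the real object names come from the erwin metadata model referenced above:

        # Sketch only: query erwin DM metadata over ODBC and write a simple
        # data dictionary to CSV. The DSN and query text are placeholders.
        import csv
        import pyodbc

        conn = pyodbc.connect("DSN=ERWIN_DM")  # assumed ODBC data source name
        cursor = conn.cursor()
        cursor.execute("SELECT * FROM M1")  # placeholder metadata query

        with open("data_dictionary.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([col[0] for col in cursor.description])
            writer.writerows(cursor.fetchall())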
