Data Management & Automation Manager at a consultancy with 11-50 employees
Reseller
Oct 22, 2020
Different members can work on the same model, regardless of where they are located
Pros and Cons
  • "The ability to collaborate between different members across the organization is the most valuable feature. It gives us the ability to work on the same model, regardless of where we are physically."
  • "We had some data integration projects, where we needed to integrate data from about 100 databases. Doing that manually is crazy; we can't do that. With erwin, it was much easier to identify which tables and columns could be used for the integration. That means a lot in terms of time and effort as well as my image to the customer, because they can see that we are providing value in a very short time."
  • "I am not so happy with its speed. Sometimes, it can have problems with connections."

What is our primary use case?

We use it to create models, to do reverse engineering in the case of existing databases, and to compare models, e.g., the design versus what actually exists.

How has it helped my organization?

It provides us with a visual representation of the database, which helps me manage the complexity of the models. We can know if someone has made changes to anything, which is very important from a development perspective. It helps us maintain control of the work.

We had some data integration projects, where we needed to integrate data from about 100 databases. Doing that manually is crazy; we can't do that. With erwin, it was much easier to identify which tables and columns could be used for the integration. That means a lot in terms of time and effort as well as my image to the customer, because they can see that we are providing value in a very short time.

The solution's code generation ensures accurate engineering of data sources. This accuracy significantly reduces our development time. It is very easy to go into the graphical model, change something, and generate scripts. It now takes minutes (less than an hour).
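To illustrate the kind of work this saves, here is a hedged sketch of model-driven DDL generation. This is not erwin's actual forward-engineering engine; the model, logical types, and per-dialect type mappings below are all invented for illustration.

```python
# Hypothetical model: table name -> list of (column, logical_type) pairs.
MODEL = {
    "customer": [("customer_id", "integer"), ("full_name", "string")],
}

# Assumed per-dialect renderings of the logical types (simplified).
TYPE_MAP = {
    "oracle":   {"integer": "NUMBER(10)", "string": "VARCHAR2(255)"},
    "teradata": {"integer": "INTEGER",    "string": "VARCHAR(255)"},
}

def generate_ddl(model, dialect):
    """Render CREATE TABLE statements for the chosen dialect."""
    types = TYPE_MAP[dialect]
    statements = []
    for table, columns in model.items():
        cols = ",\n  ".join(f"{name} {types[ltype]}" for name, ltype in columns)
        statements.append(f"CREATE TABLE {table} (\n  {cols}\n);")
    return "\n\n".join(statements)

print(generate_ddl(MODEL, "oracle"))
```

The point of the design-once-generate-many approach is that the same model definition feeds both branches of `TYPE_MAP`, so switching target platforms is a one-argument change rather than a retyping exercise.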

What is most valuable?

The ability to collaborate between different members across the organization is the most valuable feature. It gives us the ability to work on the same model, regardless of where we are physically.

I like the accuracy. It is very precise.

What needs improvement?

I am not so happy with its speed. Sometimes, it can have problems with connections.

erwin's automation of reusable design rules and standards is good, but it could be better.

Buyer's Guide
erwin Data Modeler
January 2026
Learn what your peers think about erwin Data Modeler. Get advice and tips from experienced pros sharing their opinions. Updated: January 2026.
880,844 professionals have used our research since 2012.

For how long have I used the solution?

About 30 years.

What do I think about the stability of the solution?

It is pretty good. I haven't had any problems with crashes, etc.

We have a consultant who is responsible for the maintenance.

What do I think about the scalability of the solution?

The solution's scalability is good. However, there isn't a clear explanation of how to go from 10 to 20 users, which is something that customers ask us.

In my company, there are currently five data managers who use erwin.

How are customer service and support?

I like their technical support. They try very hard to solve the problem.

They are not supporting old versions of some databases anymore, so I don't always have the tools that I need. I would like them to keep the support for the older versions.

How was the initial setup?

The standard edition is quite straightforward to set up. It is just clicking, "Next, Next, Next." This takes less than an hour to set up.

It gets complicated when we set up the group edition, because we need to stand up a database. Sometimes, erwin support is needed for the setup. The setup for the group edition can take from two days to a week, depending on the database.

What about the implementation team?

We also sell erwin to some of our customers. Usually, we create some sort of implementation steps to ensure that it will work.

What was our ROI?

We have seen ROI in terms of time, e.g., consulting time and the ability to answer customers faster. This has improved the image of the company.

The solution’s ability to generate database code from a model for a wide array of data sources cuts development time from two weeks to one day or even hours. This is one of the features that I like.

What's my experience with pricing, setup cost, and licensing?

The price should be lower in order to be on the same level as its competitors.

Which other solutions did I evaluate?

I have worked with Toad, Sparx, and the free version of Oracle Data Modeler. erwin DM's competitors are cheaper, but the look and feel of erwin is more user-friendly, professional, mature, and enterprise level.

What other advice do I have?

I recommend using erwin Data Modeler. You should have a good business case to convince the finance team, as the price is high for Latin America.

I would rate this solution as nine out of 10.

Which deployment model are you using for this solution?

On-premises
Disclosure: My company has a business relationship with this vendor other than being a customer. Partner
PeerSpot user
Data Architect at a tech services company with 51-200 employees
Real User
Oct 22, 2020
Its ability to standardize data types and some common attributes is pretty powerful
Pros and Cons
  • "We use the macros with naming standards patterns, domains, datatypes, and some common attributes. As far as other automations, a feature of the Bulk Editor is mass updates. When it sees something is nonstandard or inaccurate, it will export the data out. Then, I can easily see which entities and attributes are not in line with the standard, make changes, and upload them back through the Bulk Editor. When taking on a new project, it can save you about half a day on a big project across an entire team."
  • "The Bulk Editor needs improvement. If you had a model that was local to your machine, you could connect to the API and it would write directly into the repository. However, when the model was on the centralized server, that functionality did not work. Then, you had to export out to a CSV and upload it to the repository. It would have been nice to be able to use the direct API without that whole download-and-upload cycle. Maybe I didn't figure it out, but I'm pretty sure that didn't work when the model sat in a centralized repository."

What is our primary use case?

My previous employer's use case was around data warehousing. We used it to house our models and data dictionaries. We didn't do anything with BPM, etc. The company that I left prior to coming to my current company had just bought erwin EDGE. Therefore, I was helping to see how we could leverage the integration between erwin Mapping Manager and erwin Data Modeler, so we could forward engineer our models and source-to-target mappings, then map our data dictionary into our business definitions.

We didn't use it to capture our sources; it was more target-specific. We would just model and forward engineer our targets, then we managed source-to-target mappings in Excel. Only when the company first got erwin EDGE did we start to look at leveraging erwin Mapping Manager to manage source-to-target mappings, but that was still a POC.

As far as using Data Modeler for anything source-specific, we didn't do that. It was always target-specific.

How has it helped my organization?

It improved the way we were able to manage our models. I come from a corporate background, working for some big banks. We had a team of about 10 architects who were spread out, but we were able to collaborate very well with the tool.

It was a good way to socialize the data warehouse model within our own team and to our end users. 

It helped manage some of the data dictionary stuff, which we could extract out to end users. It provided a repository of the data warehouse models, centralizing them. It also was able to manage the metadata and have the dictionary all within one place, socializing that out from our repository as well.

Typically, an engineer designs and produces the DDL out of erwin, we execute it into the database, and then they have a target that they can start coding toward.

What is most valuable?

  • Being able to manage the domains.
  • Ability to standardize our data types and some common attributes, which was pretty powerful. 
  • The Bulk Editor: I could extract the metadata into Excel (or something) and be able to make some mass changes, then upload it back.

We use the macros with naming standards patterns, domains, datatypes, and some common attributes. As far as other automations, a feature of the Bulk Editor is mass updates. When it sees something is nonstandard or inaccurate, it will export the data out. Then, I can easily see which entities and attributes are not in line with the standard, make changes, and upload them back through the Bulk Editor. When taking on a new project, it can save you about half a day on a big project across an entire team.
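The Bulk Editor-style pass described above can be sketched in a few lines. This is a hypothetical example, not erwin's export format: the exported rows and the lower_snake_case rule are assumptions for illustration.

```python
import re

# Assumed export: (entity, attribute) rows pulled from the model.
EXPORTED = [
    ("Customer", "customer_id"),
    ("Customer", "CustName"),      # nonstandard: mixed case
    ("Order",    "order date"),    # nonstandard: embedded space
]

# Assumed naming standard: lower_snake_case identifiers.
STANDARD = re.compile(r"^[a-z][a-z0-9_]*$")

def find_violations(rows):
    """Return rows whose attribute name breaks the naming standard."""
    return [(e, a) for e, a in rows if not STANDARD.match(a)]

def propose_fix(name):
    """Suggest a standard-conforming replacement name."""
    name = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "_", name)  # split camelCase
    return re.sub(r"\W+", "_", name).lower()             # normalize separators

for entity, attr in find_violations(EXPORTED):
    print(f"{entity}.{attr} -> {propose_fix(attr)}")
```

Flagging and fixing in bulk like this, rather than hunting through hundreds of entities one dialog at a time, is where the reported half-day saving comes from.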

What needs improvement?

The Bulk Editor needs improvement. If you had a model that was local to your machine, you could connect to the API and it would write directly into the repository. However, when the model was on the centralized server, that functionality did not work. Then, you had to export out to a CSV and upload it to the repository. It would have been nice to be able to use the direct API without that whole download-and-upload cycle. Maybe I didn't figure it out, but I'm pretty sure that didn't work when the model sat in a centralized repository.

For how long have I used the solution?

I have been using erwin since about 2010. I used it last about a year ago at my previous employer. My current employer does not have it.

What do I think about the stability of the solution?

We only had one guy who would keep up with it. Outside of the server, as far as adding and removing users and doing an upgrade which I would help with sometimes, there were typically only two people on our side maintaining it.

What do I think about the scalability of the solution?

There are about 10 users in our organization.

How was the initial setup?

There were a couple of little things that you had to remember to do. We ran into a couple of issues more than once when we did an upgrade or install. It wasn't anything major, but it was something where you really had to remember how to do it.

It takes probably a few hours. If you do everything correctly, then everything is ready to go.

What about the implementation team?

There were two people from our side who deployed it, a DBA and myself. 

We didn't go directly through erwin to purchase the solution. We used Sandhill Consulting, who provided someone for the setup. We had used them since purchasing erwin. They used to put on workshops, tips and tricks, etc. They're pretty good.

What was our ROI?

Once you start to get into using all the features, it is definitely worth the cost.

Which other solutions did I evaluate?

erwin Mapping Manager, which I have PoC'd a few times, is something I would always want to use to produce ETL code. I have also used WhereScape for several years, and that type of functionality is very useful when producing ETL from your model. It provides a lot of savings. When you're not dealing with something extremely complex, just a lot of repeatable work, you get a pretty standard, robust model. Being able to generate the ETL code from it is a huge saving.

What other advice do I have?

The ability to compare and synchronize data sources with data models, in terms of accuracy and speed for keeping them in sync, is pretty powerful. However, I have never actually used the models as something that associates sources. It is something I would be interested in learning how to use and getting involved with. It would be nice to have everything tied in from start to finish.

I am now working with cloud and Snowflake. Therefore, I definitely see some very good use cases and benefits for modeling the cloud with erwin. For example, there is so much more erwin can offer for doing something automated with SqlDBM. 

I would rate this solution as an eight out of 10.

Which deployment model are you using for this solution?

On-premises
Disclosure: My company has a business relationship with this vendor other than being a customer. Partner
PeerSpot user
reviewer1376661
Sr. Data Engineer at a healthcare company with 10,001+ employees
Real User
Aug 4, 2020
Provides the ability to document primary/foreign key relationships and standardize them
Pros and Cons
  • "What has been useful is that I have been able to reverse engineer our existing data models to explicitly document the referential integrity relationships and primary/foreign keys in the model, and to create subject-area-based ERDs that our clients can use when working with our databases. The reality is that our databases are not explicitly documented in the DDL with primary/foreign key relationships. You can't look at the DDL and understand the primary/foreign key relationships that exist between our tables, so the referential integrity is not easily understood. erwin has allowed me to explicitly document that and create ERDs. This has made it easier for our clients to consume our databases for their own purposes."
  • "erwin generally fails to successfully reverse engineer our Oracle Databases into erwin data models. The way that they are engineered on our side, the syntax is correct from an Oracle perspective, but it seems to be very difficult for erwin to interpret. What I end up doing is using Oracle Data Modeler to reverse engineer into the Oracle data model, then forward engineer the DDL into Oracle syntax, and import that DDL into erwin in order to successfully bring in most of the information from our physical data models. That is a bit of a challenge."

What is our primary use case?

I am responsible for both a combination of documenting our existing data models and using erwin Data Modeler as a primary visual design tool to design and document data models that we implement for our production services.

My primary role is to document our databases using erwin to work with people and ensure that there is logically referential integrity from the perspective of the data models. I also generate the data definition language (DDL) changes necessary to maintain our data models and databases up to our client requirements in terms of their data, analytics, and whatever data manipulation that they want to do. I use erwin a lot.

It is either installed locally or accessed through a server, depending on where I have been. I have had either a single application license or pooled license that I would acquire when I open up erwin from a server.

How has it helped my organization?

We get data from many different sources where I work. We have many clients. The data is all conceptually related. There are primary subject area domains common across most of our clients. However, the physical sources of the data, and how the data is defined and organized, often vary significantly from client to client. Therefore, data modeling tools like erwin provide us with the ability to create a visual construct of the data from a subject area perspective. We then use that as a source to normalize the data conceptually and standardize concepts that are documented or defined differently across our sources. Once we get the data, we can treat data that has been managed somewhat disparately from a common conceptual framework, which is quite important.

At the moment, for what I'm doing, the interface to the physical database is really critical. erwin generally is good for databases. It is comfortable in generating a variety of versions of data models into DDL formats. That works fine.

What has been useful is that I have been able to reverse engineer our existing data models to explicitly document the referential integrity relationships and primary/foreign keys in the model, and to create subject-area-based ERDs that our clients can use when working with our databases. The reality is that our databases are not explicitly documented in the DDL with primary/foreign key relationships. You can't look at the DDL and understand the primary/foreign key relationships that exist between our tables, so the referential integrity is not easily understood. erwin has allowed me to explicitly document that and create ERDs. This has made it easier for our clients to consume our databases for their own purposes.
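When primary/foreign-key relationships live only in the model rather than in the DDL, the documented integrity can still be spot-checked against the data. A minimal sketch, with invented tables and keys, of finding child rows that violate a modeled relationship:

```python
# Invented sample data: the relationship orders.customer_id -> customers.customer_id
# is documented in the model but not enforced by the database.
customers = [{"customer_id": 1}, {"customer_id": 2}]
orders = [
    {"order_id": 10, "customer_id": 1},
    {"order_id": 11, "customer_id": 3},   # orphan: no such customer
]

def orphans(child_rows, fk, parent_rows, pk):
    """Return child rows whose FK value has no matching parent PK."""
    parent_keys = {row[pk] for row in parent_rows}
    return [row for row in child_rows if row[fk] not in parent_keys]

print(orphans(orders, "customer_id", customers, "customer_id"))
```

A check like this turns the model's documented relationships into something testable, which is exactly what clients consuming an undocumented schema are otherwise missing.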

What is most valuable?

Its visualization is the most valuable feature, along with the ability to make global changes throughout the data model. Data models are reasonably large: hundreds, and in some cases thousands, of tables and attributes. With any data model, there are many attributes that are common from a naming perspective and a data type perspective. It is possible with erwin to make global changes across all of the tables, columns, or attributes, whether you are working logically or physically. We also use it to set naming standards, then enforce those standards and the naming changes between the logical and physical versions of the data models, which is very advantageous. It also provides the ability to document primary/foreign key relationships and standardize them, along with being able to conceptually review the data model names and data types and visualize that across fairly large data models.
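The logical-to-physical naming translation works along these lines. This is a simplified sketch: the glossary entries and the 30-character limit are assumptions for illustration, not erwin's shipped standards.

```python
# Assumed naming-standards glossary: logical word -> physical abbreviation.
GLOSSARY = {"customer": "CUST", "number": "NBR", "account": "ACCT"}

def to_physical(logical_name, glossary=GLOSSARY, max_len=30):
    """Map a logical name like 'Customer Account Number' to a physical
    column name, abbreviating known words and upper-casing the rest."""
    words = logical_name.lower().split()
    abbreviated = [glossary.get(w, w.upper()) for w in words]
    return "_".join(abbreviated)[:max_len]

print(to_physical("Customer Account Number"))  # CUST_ACCT_NBR
```

Because every model entity runs through the same glossary, renaming a business term once propagates consistently to all physical names, which is what makes the global-change capability valuable at the scale of thousands of attributes.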

The solution’s visual data models are very important for overcoming data source complexity and enabling understanding and collaboration around maintenance and usage, because you can define and document subject areas within enterprise data models. You can create smaller subsets, document those visually, and review the integrity of the data models with the primary clients or users of the data. It can also be used to establish communications that are logically and conceptually correct from a business expert perspective, while maintaining the physical and logical integrity of the data from a data management perspective.

What needs improvement?

We are not using erwin's ability to compare and synchronize data sources with data models in terms of accuracy and speed for keeping them in sync to the fullest extent. Part of it is related to the sources of the data and databases that we are now working with and the ability of erwin to interface with those database platforms. There are some issues right now. Historically, erwin worked relatively well with major relational databases, like Oracle, SQL Server, Informix, and Sybase. Now, we are migrating our platforms to the big data platforms: Hadoop, Hive, and HBase. It is only the more recent versions of erwin that have the ability to interface successfully with the big data platforms. One of the issues that we have right now is that we haven't been able to upgrade the version that we currently have of erwin, which doesn't do a very good job of interfacing with our Hive and Hadoop environments. I believe the 2020 version is more successful, but I haven't been able to test that. 

Much of what I do is documenting what we have. I am trying to document our primary data sources and databases in erwin so we have a common platform where we can visually discuss and make changes to the database. In the past couple of years, erwin has kind of supported importing or reverse engineering data models from Hive into erwin, but not necessarily exporting data models or forward generating the erwin-documented data models into Hive or Hadoop (based on my experience). I think the newest versions are better adapted to do that. It is an area of concern and a bit of frustration on my part at this time. I wish I had the latest version of erwin, either the 2020 R1 or R2 version, to see if I could be more successful in importing and exporting data models between erwin and Hive.

erwin generally fails to successfully reverse engineer our Oracle Databases into erwin data models. The way that they are engineered on our side, the syntax is correct from an Oracle perspective, but it seems to be very difficult for erwin to interpret. What I end up doing is using Oracle Data Modeler to reverse engineer into the Oracle data model, then forward engineer the DDL into Oracle syntax, and import that DDL into erwin in order to successfully bring in most of the information from our physical data models. That is a bit of a challenge.

There are other characteristics of erwin, as far as interfacing directly with the databases, that we don't do. Historically, while erwin has existed, the problem is the people that I work with and who have done most of the data management and database creation are engineers. Very few of them have any understanding of data modeling tools and don't work conceptually from that perspective. They know how to write DDL syntax for whether it's SQL Server, Oracle, or Sybase, but they don't have much experience using a data modeling tool like erwin. They don't trust erwin nor would they trust any of its competitors. I trust erwin a lot more than our engineers do. The most that they trust the solution to do is to document and be able to see characteristics of the database, which are useful in terms of discussing the database from a conceptual perspective and with clients, rather than directly engineering the database via erwin. 

erwin is more of a tool to document what exists, what potentially will exist, and create code that engineers can then harvest and manage/manipulate to their satisfaction. They can then use it to make changes directly to our databases. Currently, when the primary focus is on Hive databases or Hadoop environment, where there is no direct engineering at this point between erwin and those databases, any direct or indirect engineering at the moment is still with our Oracle Database.

For how long have I used the solution?

I have been using the solution on and off for 20 to 30 years.

What do I think about the stability of the solution?

It is pretty stable. Personally, I haven't run into any real glitches or problems with the output, the ability to import data when it does work correctly, the export/creation of DDL, or generation of reports.

We are trying to upgrade; this has been going on for several months now. We're trying to upgrade to the 2020 version. Originally, it was 2020 R1, but I think at this point people are talking about the 2020 R2 version. I'm not part of our direct communications with erwin regarding Data Modeler, but there are some issues that erwin is currently working on that affect my company. These have prevented us from upgrading immediately to the 2020 version.

What do I think about the scalability of the solution?

This gets down to how you do your data modeling. If you do your data modeling in a conceptually correct manner, scaling isn't an issue. If you don't do your data modeling very well, then you are creating unnecessary complexities. Things can get a bit awkward. This isn't an erwin issue, but more a consequence of who is using the product.

In the area that I'm working right now, I'm the only user. Within the company, there are other people and areas using the solution probably far more intimately in regards to their databases. I really don't know the number of licenses out there.

How are customer service and technical support?

The problem is that our issues are related to interfacing erwin Data Modeler with the Hadoop Hive environments, and what I was trying to do was not fully supported by our version of erwin Data Modeler. People have certainly tried to help, but there's only so much that they could tell me. So, it's been difficult. I am hoping that I can get back to people with some better answers once the newest version of erwin is available to us.

Which solution did I use previously and why did I switch?

The people who were previously responsible for the database development were very good engineers who knew how to write SQL. They could program anything themselves that they wanted to program. However, I don't think that they really understood data modeling as such; they just wrote the code. Our code and models are still developing and do not necessarily conform to good data modeling practices.

How was the initial setup?

In the past, I was involved in the initial setup. In traditional environments, it sets up pretty easily. In my current environment, where I'm trying to get it as intimately integrated with our big data platforms as possible, I'm finding it quite frustrating. However, I'm using an older version and think that is probably a significant part of the problem.

What was our ROI?

In other environments where I've worked, the solution’s ability to generate database code from a model for a wide array of data sources cuts development time. In this environment, erwin is not very tightly integrated into the development cycle. It is used more for documentation purposes at this point and for creating a nascent code which down the road gets potentially implemented. While it's not used that way at my current company, I think it would be better if it were, but there is a culture here that probably will prevent that from ever occurring.

What's my experience with pricing, setup cost, and licensing?

An issue right now would be that erwin doesn't have a freely available browser (that I am aware of) for people who are not data modelers or data engineers that a consumer could use to look at the data models and play with it. This would not be to make any changes, but just to visually look at what exists. There are other products out there which do have end user browsers available and allow them to access data models via the data modeling tool.

Which other solutions did I evaluate?

There is another tool now that people are using. It is not really a data modeling tool. It is more of a data model visualization tool, and that's SchemaSpy. We don't do data modeling with that. You get a visualization of the existing physical database. But that's where the engineers live, and that's what they think is great. This is a cultural, conceptual, understanding issue due to a lack of understanding and appreciation of what good data modeling tools do that I can't see changing based on the current corporate organization. 

What other advice do I have?

It is the only meaningful way to do any data modeling; it is impossible otherwise to conceptualize and document complex data environments and the integration between different data subject areas. You can write all the code or DDL you want, but it's absolutely impossible to maintain any sort of conceptual or logical integrity across a large, complex enterprise environment without using a tool like erwin.

You want to look at what you are trying to accomplish with erwin before implementing it.

  • Does the product have the ability to support or accomplish that?
  • Based on the technologies that you have decided you want to use to manage your data, how intimately does it integrate with those technologies? 

From my perspective of using the traditional relational databases, I think erwin probably works pretty well. 

For the newer database technologies, such as the Hadoop environment databases, it's not clear to me how successful erwin is. However, I'm not talking from the perspective of somebody who has been aggressively using the latest version. I don't have access to it, so I'm afraid my concerns or issues may not be valid at this point. I will find out when we finally implement the latest erwin version.

I would give the solution a seven or eight (out of 10).

Which deployment model are you using for this solution?

On-premises
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
Architecture Manager at an insurance company with 10,001+ employees
Real User
Jul 27, 2020
The ability to generate database code from a model for a wide array of data sources cuts development time
Pros and Cons
  • "We find that its ability to generate database code from a model for a wide array of data sources cuts development time. The ability to create one model in your design phase and then have it generate DDL code for Oracle or Teradata, or whichever environment you need, is really nice. It's not only nice but it also saves man-hours of work; otherwise, you would have to take your design and type it in manually. It takes days off of the work."
  • "I love the product. I love the ability to get into the code, make it automated, and make it do what I want. I would like to see them put some kind of governance over the ability to make changes to the mart tables with the API, so that instead of just using the modeler's rights to a table -- it has a separate set of rights for API access. That would give us the ability to put governance around API applications. Right now a person with erwin and Excel/VBA has the ability to make changes to models with the API if they also have rights to make changes to the model from erwin. It's a risk."

What is our primary use case?

We have a couple of really important use cases for erwin. One of them is that we automate the pull of metadata from the repository itself, so that we have all the model metadata in a centralized hub that we can access with other applications. Another reason we pull all the metadata out of the model is to run it through our model validation application, which tells us whether the model is healthy and whether it meets our standards.

The other use case that's really important is managing the abbreviations file that erwin uses to convert logical terms into physical terms. The way you manage it today within erwin is very manual: you go to a spreadsheet, make changes, upload, et cetera. Instead, we keep the master standards file in a database and make the changes there, and we have an API application that goes out to the Mart, deletes the glossary, and replaces it with the table from the database. It's all automated at the push of a button. Changes that used to take us days, with updates across eight different standard files, now happen in moments.
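The glossary-replacement step can be sketched as a diff between the master copy and the Mart copy. This is illustrative only: erwin's actual Mart API is not shown, and the glossary entries below are invented.

```python
# Assumed master standards table (kept in a database) and the current
# copy of the glossary sitting in the Mart.
master = {"customer": "CUST", "number": "NBR"}
mart_glossary = {"customer": "CSTMR", "account": "ACCT"}

def sync_plan(master, current):
    """Return (deletes, upserts) that make `current` match `master`."""
    deletes = [k for k in current if k not in master]
    upserts = {k: v for k, v in master.items() if current.get(k) != v}
    return deletes, upserts

deletes, upserts = sync_plan(master, mart_glossary)
print(deletes, upserts)  # ['account'] {'customer': 'CUST', 'number': 'NBR'}
```

Computing a plan first, then applying it through the API in one pass, is what turns a days-long manual edit across multiple standard files into a single-button operation.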

How has it helped my organization?

Data warehousing is the best example of how this product can make a huge difference because it's an integration of a lot of different source systems. You have to be able to visualize how you are going to make the information from sources A, B, and C merge together. It makes it very important.

The ability to automatically generate DDL, to do it in different flavors (Teradata DDL or Oracle, et cetera), and to fine-tune the forward engineering file so that the DDL comes out the way your shop likes to see it is critical. It's soup to nuts, from design all the way to implementation.

We find that its ability to generate database code from a model for a wide array of data sources cuts development time. The ability to create one model in your design phase and then have it generate DDL code for Oracle or Teradata, or whichever environment you need, is really nice. It's not only nice but it also saves man-hours of work; otherwise, you would have to take your design and type it in manually. It takes days off of the work.

The code generation ensures accurate engineering of data sources especially because you can tweak it.

Development time is another critical issue. If you had to tweak every single piece of code that comes off the line because there's only a one-size-fits-all solution, then the problem would not be worth anywhere near as much as it is. It has the ability to create a customized forward engineering code that you can use to generate your code for your shop so that it always comes out the way you want it.

What is most valuable?

The product itself is fantastic and it's about the only way to get an enterprise view of the data that you're designing. It's a design tool, obviously. Once you add the API, you can automate things and make bulk changes. You can integrate your erwin data into another in-house application that otherwise wouldn't have access to it, because the erwin data is encrypted. Because we're very heavy into automation, the ability to create these ad hoc programs, get at the data, and make changes on the fly has been quite a boon to us. It's been a wonderful tool.

A data modeling case tool is a key element if you are a data-centric team. There is no way around it. It's a communication tool. It's a way of looking at data and seeing visually how things fit together, what is not going to fit together. You have a way of talking about the design that gets you off of that piece of paper, where people are sitting down and they're saying, "Well, I need this field and I need that field and we need the other field." It just brings it up and makes it visible, which is critical.

What needs improvement?

I love the product. I love the ability to get into the code, make it automated, and make it do what I want. I would like to see them put some kind of governance over the ability to make changes to the mart tables with the API, so that instead of just using the modeler's rights to a table, there would be a separate set of rights for API access. That would give us the ability to put governance around API applications. Right now, a person with erwin and Excel/VBA can make changes to models with the API if they also have rights to make changes to the model from erwin. It's a risk.

We have a really good relationship with erwin and whenever we come across something and we contact the product developers and the contacts that we have, they immediately put fixes in, or they roll it into the next product. They're very responsive. I really don't have any complaints.

It's a wonderful product and a great company.

For how long have I used the solution?

I've been using erwin since version 3.0 in the '80s.

What do I think about the stability of the solution?

It's very stable. It's very mature.

What do I think about the scalability of the solution?

We have about 70 licenses and about 70 people using the product full time. I've worked in shops where there were anywhere from two or three to a dozen. Besides these 70, shops in other parts of the world also have it. It scales right up. I have not worked in a shop where it was either too small or too large.

We have full-time data modelers. We have architects. We don't make a distinction between the data architect and the data modeler. The data architect is designing the enterprise-level view of data and how we use it as a business and then modelers work on specific projects. They'll take this enterprise view and they'll create a project model for whatever it is that we're rolling out.

We've got architecture people, modelers, and also some developers who do smaller database modeling when they have to get out something that's just used in-house, not used downstream by the end user. We also have a web portal product. Everybody at the company has access to it; they can go in, see what data has been designed, and do impact analysis.

The business analysts will look at it in the web portal to see what the downstream impact would be if they changed a particular name that the company uses for something. They check what the downstream and upstream implications are. The developers use our DI tools for creating the mapping from the source system to the target system. Our data stewards use the tool for the business glossary and for how we define things. Every part of the company that deals with data uses erwin.
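The impact analysis the analysts do in the portal amounts to walking a lineage graph downstream (or upstream) from the object being changed. A minimal sketch, over a toy lineage graph invented for the example:

```python
from collections import deque

# Toy lineage: each object maps to its downstream consumers.
EDGES = {
    "SRC.CUSTOMER.CUST_NM": ["STG.CUSTOMER.CUST_NM"],
    "STG.CUSTOMER.CUST_NM": ["DW.DIM_CUSTOMER.CUST_NAME"],
    "DW.DIM_CUSTOMER.CUST_NAME": ["RPT.SALES_BY_CUSTOMER"],
}

def downstream(node):
    """Breadth-first walk: everything affected if `node` changes."""
    seen, queue = set(), deque([node])
    while queue:
        for nxt in EDGES.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(downstream("SRC.CUSTOMER.CUST_NM"))
```

Reversing the edge direction gives the upstream view, which is the other half of the question the analysts ask before renaming anything.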

How are customer service and technical support?

Customer service is fantastic. I know a lot of the guys by first name that work in tech support.

When we do have a problem, typically something is really broken, because we have people on staff who answer most of the questions and solve most of the problems. If we have a problem, it's a big problem. They put us straight through and handle us right away.

Which solution did I use previously and why did I switch?

I've used four different data modeling tools. Every modeling tool has its strong point but there's none of them out there that are as robust to me as erwin. If I have to choose one tool, it's going to be erwin, especially since I've gotten into the API and I know how to use it. Some of the things the other tools add in terms of being able to manipulate the underlying metadata, erwin has with that API. I won't say they now have it. They've had it since day one, but I've just picked it up in the last year or so.

How was the initial setup?

The setup gets more complex every time. Especially with 2019, they completely changed the interface and that was another learning curve. But for the most part, if you know data modeling, you can find the logical task that you want to do within the physical form and menus of the product. I didn't find the learning curve so bad because I was already a data modeler.

I started the upgrade process today, as a matter of fact. We just got the software installed on a Mart, and I'm going through the new features. I'll play with it for a week. Then we'll get other testers to do some formal testing for a week. And then we'll put in our change, because we're a large shop. It's around a month-long cycle to get an upgrade in place. That's if there are no problems. If we come across something that tells us we can't use the product until it's changed or fixed, then it's a stop. For the most part, the happy path takes around a month in a large shop like ours.

As far as the upgrade itself on dev, it took maybe an hour to upgrade the Mart. And it took me maybe an hour to upgrade the desktops that we use for testing.

We've been doing upgrades for years. I've been involved in them at multiple companies and it's what I do here. We have a cycle, a strategy, and a checklist that we go through for every upgrade.

The first thing we do is we have a development system. We have virtual machines that we set up, so it's not on anybody's particular desktop. We upgrade the product and then one person will go through and I'll look at the new features and I'll see, number one, if we need the new features. Number two, if there is anything in these features that will break what we're doing today. Those are the first things I look at. If we pass those first two tests, then I start looking at the features and check what we are going to have and what it is going to involve in terms of training the user. We check how it is going to impact the modeler that's actually down in the trenches.

I've got to do the training materials and then the next thing is we have a warranty period. We have a group that pushes the software to the desktop. We have a special day that we roll it out. And then we have a warranty period where we set a virtual call that anybody could sit in if they have a problem. We have a virtual call so that if anybody, when they come in on Monday morning, can't get into the product, or if they're having any problems with that at all, we're right there to answer their questions. We allow for that for the first week. After that, we turn everybody loose. Of course, it doesn't account for the physical part of backing up the database, doing the install, validating over the weekend, and all that stuff. It's just the standard software upgrade stuff.

What about the implementation team?

We implement in-house, but always have access to a world-class vendor.

What was our ROI?

I wouldn't know how to measure ROI. I can only say that the alternative is spreadsheets, typing, visually inspecting things, never being able to integrate, never being able to communicate. I can't give an ROI, but I can say that I wouldn't want to work in a shop that didn't have a data modeling data tool.

erwin's my first love. I've been using it long enough that I'm under the covers and know it backwards and forwards. It's the one I prefer.

What's my experience with pricing, setup cost, and licensing?

I don't deal with pricing or licensing here. I know that you can get a per-seat license. You can get concurrent licenses. To me, if you're a full-time modeler, you need a per-seat license. If you're a developer or a data steward, you use it a couple of times a day, maybe a couple of times a week, you can have concurrent licenses so that a group of five people will share one license. If someone's using it you can't, but if it's free then you can go ahead and use it, or you can lock it, or whatever. There are different ways of licensing it.

What other advice do I have?

The one thing that having a CASE tool does is it takes the drudge away from modeling. You get to actually think of what you're doing. You think about the solution and not how you are going to keep track of what you're doing. It frees you from a lot of mechanical things that are part of keeping track of data modeling, and it allows you to do the thinking part.

There's not a lot of documentation on the API. You're pretty much going to have to teach yourself. If you have a specific problem where you've gotten to a certain point, you can always touch base with the guys at erwin and they will help you with little snippets of code. But if you're doing things like we have, which is writing a full-blown application to extract the data or make changes to the model, you're pretty much going to have to learn it on your own. That's the one drawback of the API, but if you're a programmer and you want to do data modeling like me, it's a lot of fun.

It's a challenge but it's very rewarding to be able to automate stuff that people are doing manually and to be able to hand them a solution.

On a scale of one to ten, I'd give erwin a 9.99. Everything has flaws. Everything's got little quirks, like the one I mentioned about the ability to make changes that you shouldn't make. But as far as the product itself goes, I love it. It's right up there with a 10.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
reviewer1376640 - PeerSpot reviewer
Technical Consultant at a insurance company with 1,001-5,000 employees
Real User
Jul 23, 2020
The UI is very clunky and much more difficult to use than it needs to be
Pros and Cons
  • "Any tool will do diagramming, but the ability to put the stuff up in a graphical fashion, think about it, and keep things consistent is what's valuable about it. It's too easy, when you're using other methods, to not have consistent naming standards, consistent column definitions, et cetera."
  • "I find the UI very clunky and very difficult to use. The whole workflow of adding columns to a table could be so much easier. I get frustrated using it. I've tried other tools. I've tried to get off of erwin a few times. I always come back to it because every tool has its own set of problems, and it seems like if I have to pick my poison, I stay with erwin. But there are so many things that are clunky about it."

What is our primary use case?

I'm an application developer with a fair amount of database background, so I mostly use the tool to do physical modeling to support our application development. I'm a firm believer in not just adding columns to a table, but actually thinking about it, putting together an erwin model, and looking at the relationships. I used to like to generate the model and generate changes all through the tool, but being honest, one of my biggest frustrations with erwin is that it's very difficult to forward engineer and keep things in sync. It used to be so easy, and now it's very difficult. It's very frustrating to use this tool for that.

We use it for data modeling, but the architects also do a lot of logical modeling, and we use the naming standards capability to enforce corporate standards across the models.

How has it helped my organization?

It has improved my organization because using a data modeling tool is forcing us to come up with better models.

Its code generation ensures accurate engineering of data sources. It should generate correct code, so I can't say it cuts development time just because it's doing what it's supposed to be doing correctly.

What is most valuable?

I think the ability to depict the model in a graphical fashion, think about it, and keep things consistent is what's valuable about it. It's too easy, when you're using other methods, to not have consistent naming standards, consistent column definitions, et cetera.

This isn't specific to Erwin, it's specific to any data modeling tool but we also like:

  • The ability to graphically depict how the relationships occur and the relationship lines.
  • The fact that it migrates your foreign keys for you.
  • The general principles of what a data modeling tool does. 

Erwin does a lot of things well. It's just very frustrating in some areas that really should not be frustrating.

The people who don't use a data modeling tool but rather use spreadsheets or wing it typically have pretty poor data models. If you use a data modeling tool, the graphical nature of the data modeling tool forces you to think about relationships. It forces you to ask questions that you wouldn't ask if you were just creating tables and doing it off the top of your head. That's number one, in my opinion, from my own experience. The number one benefit of using a tool like Erwin, is that visual representation forces you to come up with a better model.

    Its ability to generate database code from a model for a wide array of data sources is useful, but we're 99% SQL Server, so the fact that it generates DDL for 60 other databases doesn't really help me much. It doesn't support Postgres or Redshift, which are the two other systems that we're using.

    What needs improvement?

    I find the UI very clunky and very difficult to use. The whole workflow of adding columns to a table could be so much easier. I get frustrated using it. Resizing dialog boxes, changing fonts, printing, and scrolling around in the UI are all very clunky.

    I've tried other tools. I've tried to get off of erwin a few times. I always come back to it because every other tool has its own set of problems, and it seems like if I have to pick my poison, I stay with erwin. But there are so many things that are clunky about it.

    My biggest frustrations with the product are forward engineering and keeping things in sync. A lot of times I need to change a column definition and all I want to do is forward engineer it over to the database. It used to be so easy to do that, way back in the early days of erwin before CA bought it, and now it's almost impossible. It's very frustrating. I've spoken to erwin about this in the past, and I can understand why they're doing some of the things they're doing, but I'm more of a casual user than a power user, and for me, it's clunky. It's so much easier to forward engineer changes to a database using Embarcadero than it is using erwin.

    This product has been on the market for years and I'm amazed at some of the quirky things I still have to deal with. I wish that, rather than adding new features, erwin would fix some of these usability issues.

    For how long have I used the solution?

    I have been using erwin Data Modeler for around ten years, starting before it was owned by CA.

    What do I think about the stability of the solution?

    Other than the bugs, it doesn't crash on me, so I guess the stability is good. 

    What do I think about the scalability of the solution?

    We have somewhere around 20 users. I use it as a developer, and the data architects use it as well.

    We use the Mart model and break the models out into areas, with many models in each area, so we have around a couple hundred models.

    How are customer service and technical support?

    I haven't used their support in quite a while, so I'll say neutral.

    Which solution did I use previously and why did I switch?

    We previously used Toad and Embarcadero.

    I've been using Erwin since it first came out, a long time ago. Back then it was a lot simpler to use and it was just so much easier. I think they tried to make it do everything for everybody and now it's very difficult to do some of the simplest tasks. It's very frustrating, and there are a lot of issues.

    The forward engineering frustration I experience with Erwin is a thousand times easier in Embarcadero. If I want to just make a quick change to a column and forward engineer it to my database, it's a lot easier in other tools.

    Some of the other tools were a lot better in ease of use and stability of the UI, but they also had their share of problems that are deal-breakers. For example, models won't print on one page. I keep coming back to erwin. It's the lesser of two evils. No product is perfect, but I think erwin tries to be everything to everybody, and sometimes when you do that, it's no good to anybody.

    I don't use all the features, it's nice that they're there, but I wish the stuff that I did use was better usability-tested. 

    How was the initial setup?

    I was not involved in the installation of this particular version. When we first started using erwin, we used to install it on our local machines, but now we're using the Mart model and it's installed on servers, so we have a group that maintains it. For years and years, it used to be that we all just installed it on our local machines and ran it that way. 

    It's a licensing thing. We have a concurrent license so by having it on a server, it's in one place, which is nice. That way, everyone's running the same version. Then, because we have concurrent licensing, if you have 30 people that need to use it, but people like me only use it once in a while, you don't have to buy me an expensive, dedicated license, so it's a lot cheaper to have a concurrent license for our company.

    What was our ROI?

    It's not necessarily erwin-specific, but by using a data modeling tool, it forces a better product, better application development, and better applications at our company. Using a tool like that is a must-have. 

    What's my experience with pricing, setup cost, and licensing?

    I like the concurrent licensing. That's phenomenal. I think that was a big win for us.

    What other advice do I have?

    Sometimes you have an initial idea for a data model and when you try to design it in Erwin you realize that you were wrong in how you approached it. Erwin enforces consistency and accuracy. Quite often I learn something by looking at the generated code. It's not like I create table statements all day long. I don't do that generally. So when I use the tool, it generates the correct code in scripts for me which we will then hand off to the DBAs who run them. 

    I would rate it a six out of ten. It's frustrating. It could be so much better. 

    The problem is mostly usability. It has little quirks about the way the screen refreshes, things move around, and the workflow when you're creating columns and tables could be so much better. 

    I have a love-hate relationship. I've used this product for years. I've actually gone to training on it at Erwin, so I know what I'm doing with it. I wish they would make it easier to use. I would think if Microsoft bought it, this would be a totally different product.

    Interestingly enough, Microsoft has tried to come out with data modeling tools a few times, and they are all bad. They're basically toys. You can't use them for anything real, which is surprising to me. You would have thought that they would have had a tool that could compete.

    There are only a couple of big players out there that Erwin competes with. I looked at just about all of them, and I keep coming back to Erwin, but I hate it nonetheless. There's nothing better. There are certain tools that are better in certain areas but far worse in others, and so you pick your poison.

    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    PeerSpot user
    Technology Manager at a pharma/biotech company with 10,001+ employees
    Real User
    Mar 23, 2020
    Gives us an enterprise-view of data and helps enforce data standards we've adopted
    Pros and Cons
    • "The most valuable features are being able to visualize the data in the diagrams and transform those diagrams into physical database deployments. These features help, specifically, to integrate the data. When the source data is accumulated and modeled, the target model is in erwin and it helps resolve the data integration patterns that are required to map the data to accommodate a model."
    • "The modeling product itself is far and above anything else that I've seen on the market. There are certain inconsistencies when it comes to keeping up with other platforms' databases in the reverse-engineering process. It should also support more database platforms."

    What is our primary use case?

    The use cases are for our enterprise data warehouse where we have an enterprise model being maintained and we have about 11 business-capability models being maintained. Examples of business capabilities would be finance, human resources, supply-chain, sales and marketing, and procurement. We maintain business domain models in addition to the enterprise model.

    We're on-premises, in a virtualized data center. We're running this as client-server; the client is PC-driven, and the back end for the erwin Mart is virtualized Windows Servers.

    How has it helped my organization?

    Collaboration is very important because it's important to have an enterprise-view of data, as opposed to a project-specific view of data. Using the business capability models, we're able to augment those models based on a project-by-project implementation. And each of those implementations goes through a review process before those business capability models are finalized. That adds a lot of value in data consistency and data replication when it comes to the models. We can discover where there is duplication and inconsistency. It also helps with the data descriptions, the metadata, about the purpose of using certain designs and certain descriptions for tables and patterns, for the data elements. It helps enforce the data standards that we've adopted.

    Each data modeler has their own way of designing the models, but no modeler is starting from a blank sheet of paper. By reverse-engineering models, and by creating models that are based off of popular packages — for example SAP or JD Edwards or Workday — you're able to construct your own data model and leverage the metadata that comes along with the application models. You are able to integrate the data based on these models.

    These modeling tasks deal with applications, and some of the applications are mission-critical and some are not. Most of the applications are not; it's more an analytical/reporting nature that these models represent. The models are key for data discovery of where things are, which makes it more transparent to the user.

    The solution's code generation pretty much ensures accurate engineering of data sources. If you're reverse-engineering a data source, it's good to have the script for examination, but it's valuable in that it describes data elements. So you get accurate data types from those. It cuts down on the integration development time. The mapping process of source-to-target is a lot easier once you know what the source model is and what your target mapping is.
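Once source and target models are in hand, a first-cut source-to-target mapping can be drafted by matching column names and leaving the mismatches for a human to resolve. A minimal sketch under that assumption (the column names are invented for the example):

```python
def draft_mapping(source_cols, target_cols):
    """Case-insensitive name match: returns (mapping, unmapped sources)."""
    index = {c.lower(): c for c in target_cols}
    mapping, unmapped = {}, []
    for col in source_cols:
        if col.lower() in index:
            mapping[col] = index[col.lower()]
        else:
            unmapped.append(col)
    return mapping, unmapped

mapping, unmapped = draft_mapping(
    ["cust_id", "cust_nm", "load_dt"],   # reverse-engineered source
    ["CUST_ID", "CUST_NM", "CUST_TYPE"]  # target model
)
print(mapping)   # {'cust_id': 'CUST_ID', 'cust_nm': 'CUST_NM'}
print(unmapped)  # ['load_dt']
```

The accurate data types that come out of reverse engineering then let you sanity-check each matched pair before the mapping is handed to development.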

    What is most valuable?

    The most valuable features are being able to visualize the data in the diagrams and transform those diagrams into physical database deployments. These features help, specifically, to integrate the data. When the source data is accumulated and modeled, the target model is in erwin and it helps resolve the data integration patterns that are required to map the data to accommodate a model.

    Also, collaboration around maintenance and usage is associated with data model development and expertise coming from a review process, before the data is actually deployed on a platform. So the data models are reviewed and the data sources are discovered and profiled, allowing them to be mapped to the business capability models.

    What needs improvement?

    The modeling product itself is far and above anything else that I've seen on the market. There are certain inconsistencies when it comes to keeping up with other platforms' databases in the reverse-engineering process. It should also support more database platforms.

    There should also be improvements in how third-party products, for example data catalogs, capture erwin models. The vendors have to be more aware of the different releases of the product and what they support during that type of interaction. Instead of being three or four releases behind from one product to another, the products should become more aligned with each other. If you're using an erwin model in a data catalog, you should be able to scan that model based on its release level: if the model is at a certain release, the capture of it should be at the same release.

    For how long have I used the solution?

    I've been using erwin Data Modeler since 2014.

    What do I think about the stability of the solution?

    There haven't been too many problems with stability so we're pretty pleased with the stability of it. Once in a while things may go awry but then we open up a request.

    What do I think about the scalability of the solution?

    We haven't had any issues with scalability. Licensing is very supportive of the scalability because of the type of license we use, which is concurrent. We don't anticipate any issues with scalability: not in terms of the number of users and not in terms of the scalability of some of the models. 

    Some of the models are quite large and therefore our data modeling framework helps us because we're able to have multiple models that are loosely coupled and make up our enterprise model. So we're not maintaining one model for all the changes. We're maintaining several models, which makes it a lot easier to distribute the scalability of those models and the number of objects in those models.

    How are customer service and technical support?

    Technical support has been pretty good. We've had licensing issues. There have also been some bugs that have been repaired and there have been some issues with installation. But all in all, it's been pretty good.

    Which solution did I use previously and why did I switch?

    We did not have a previous solution.

    How was the initial setup?

    The initial setup was pretty straightforward. 

    The only thing that we would like to see improved would be having the product support a silent install. If we were able to deploy the product from a predefined script, as opposed to a native installation, such as on a Windows platform, that would help. We are such a large company that we would prefer to package the erwin installation in one of our custom scripts so we could put it in our application store. It's much along the lines of thinking of an iPhone or an Android application in an application store where you're able to have it scripted for deployment, as opposed to installing it natively.

    Our deployment took just a few months. We constantly go through deployments as new people come onboard, especially consultants. Usually, with a consultant engagement using a data modeler, you have to be able to deploy the software to them. Anything that helps them out in that process is good.

    Our deployment plan was to test the product in a development environment, and have people trained through either self-service video instruction or through on-the-job-training. We were then able to be productive in a production environment.

    What was our ROI?

    ROI is hard to measure. If we did measure it, it would be more of a productivity jump of around 10 percent and would also be seen in data standardization. All of these numbers are intangible. There is more of an intangible benefit than a tangible benefit. It's hard to really put a dollar on some of the data governance processes that erwin supports.

    Standardization is very difficult to put a price tag on or to estimate its return on investment. But we do have data standards; we are using standard names and abbreviations and we do have some standards domains and data types. Those things, in themselves, have contributed to consistency, but I don't know how you measure the consistency. When it comes to enterprise-data warehousing, it's a lot easier for end-users to understand the context of data by having these standards in place. That way, the people who use the data know what they're looking at and where it is. If they need to look at how it's designed, then they can get into the product a little deeper and are able to visualize the designs of some of this data.

    The accuracy and speed of the solution in transforming complex designs into well-aligned data sources absolutely make the cost of the tool worth it. erwin supports the Agile methodology, which tends to stabilize your data before you start your sprints and before application development runs its course.

    What's my experience with pricing, setup cost, and licensing?

    We pay on a one-year subscription basis.

    What other advice do I have?

    The biggest lesson that I've learned in using this solution is to have a data governance process in place that allows you to use erwin more easily, as opposed to it being optional. There are times when people like to do design without erwin, but that design is not architected. It pays to have some sort of model governance or data governance process in place, so models can be inspected and approved and deployed on database platforms.

    We use it primarily for first drafts of database scripts, both in a relational database environment and other types of environments. The models represent those physical implementations. The database scripting part is heavily modified after the first draft to include additional features of those database platforms. So we find erwin DM less valuable through that and we find it more valuable creating initial drafts and reverse-engineering databases. It cuts development time for us to some degree, maybe 10 percent, but all in all, there are still a lot of extensions to the scripting language that are not included with the erwin product.

    In our company, there are about 130 users, globally. From time to time the number varies. Most of those users are either the data modelers or data architects. There are fewer enterprise data architects. The other users would just be erwin Web Portal users who want to have a little bit of an understanding about what's in a data model and be able to search for things in the data model. For deployment and maintenance of this solution we have about two infrastructure people, in an 8 x 5 support model.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    PeerSpot user
    Enterprise Data Architect at an energy/utilities company with 1,001-5,000 employees
    Real User
    Feb 16, 2020
    Makes logical and conceptual models easy to look at, helping us to engage and collaborate with the business side
    Pros and Cons
    • "It's important to create standard templates (erwin is good at that) and you can customize them. You can create a standard template so that your models have the same look and feel. Then anyone using the tool is using the same font and the same general layout. erwin is very good at helping enforce that."
    • "Another feature of erwin is that it can help you enforce your naming standards. It has little modules that you can set up and, as you're building the data model, it's ensuring that they conform to the naming standards that you've developed."
    • "I would like to see improved reporting and, potentially, dashboards built on top of that. Right now, it's a little manual. More automated reporting and dashboard views would help because currently you have to push things out to a spreadsheet, or to HTML, and there aren't many other options that I know of. I would like to be able to produce graphs and additional things right in the tool, instead of having to export the data somewhere else."

    What is our primary use case?

    We use it for our conceptual business-data model, for logical data modeling, and to generate physical database schemas. We also create dimensional modeling models.

    How has it helped my organization?

    One of the ways Data Modeler has benefited our company is that it gives us the ability to engage the business alongside IT, because the tool is approachable. It has friendly views that we can use when we meet with them, which they can follow and understand. That increases the quality and accuracy of our IT solutions.

    The solution's ability to generate database code from a model for a wide array of data sources helps cut development time. We generate all the DDL for our hub through a modeling exercise and generate the alter statements and maintenance through the erwin modeling tool. I would estimate that reduces development time by 30 to 40 percent because it's so accurate. We don't have to go back in. It takes care of the naming standards and the data types. And because we use OData, we generate our service calls off of those schemas too. So that's also more accurate because it uses what we've created from the model all the way through to a service call with OData.
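    The forward-engineering step described above (model in, DDL out) can be pictured with a toy sketch. The dictionary format and `generate_ddl` helper here are invented purely for illustration and have nothing to do with erwin's internal representation or API:

```python
# Hypothetical sketch of forward engineering: turning a simple
# logical-model description into CREATE TABLE DDL. erwin does this
# internally; this only illustrates the idea.

def generate_ddl(table: dict) -> str:
    cols = []
    for col in table["columns"]:
        line = f'    {col["name"]} {col["type"]}'
        if not col.get("nullable", True):
            line += " NOT NULL"
        cols.append(line)
    pk = table.get("primary_key")
    if pk:
        cols.append(f'    PRIMARY KEY ({", ".join(pk)})')
    body = ",\n".join(cols)
    return f'CREATE TABLE {table["name"]} (\n{body}\n);'

# Invented example model, not taken from the review.
customer = {
    "name": "CUSTOMER",
    "columns": [
        {"name": "CUSTOMER_ID", "type": "INTEGER", "nullable": False},
        {"name": "CUSTOMER_NAME", "type": "VARCHAR(100)"},
    ],
    "primary_key": ["CUSTOMER_ID"],
}
print(generate_ddl(customer))
```

    The point of generating rather than hand-writing the script is exactly what the reviewer describes: naming standards and data types come straight from the model, so the output stays consistent.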

    What is most valuable?

    I find the logical data modeling very useful because we're building out a lot of our integration architecture. The logical is specific to my role, since I do conceptual/logical, but I partner with a team that does the physical. And we absolutely see value in the physical, because we deploy databases for some of those solutions.

    I would rate erwin's visual data models very highly for helping to overcome data source complexity. We have divided our data into subject areas for the company, and we do a logical data model for every one of those subject areas. We work directly with business data stewards. Because the logical and the conceptual are so easy to look at, the business side can be very engaged and collaborate on those. That adds a lot of value because they're then governing the solutions that we implement in our architecture.

    We definitely use the solution's ability to compare and synchronize data sources with data models. We have a data hub that we've built to integrate our data. We're able to look at the data model from the source system, the abstracted model we do for the hub, and we can use erwin to reverse-engineer a model and compare them. We also use these abilities for the lifecycle of the hub. If we make a change, we can run a comparison report and file it with the release notes.
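    A rough idea of what such a compare run produces, sketched with hypothetical schema dictionaries (erwin's actual complete/compare is far richer, covering indexes, constraints, relationships, and more):

```python
# Illustrative sketch (not erwin's engine) of a compare step:
# diff two column sets and emit ALTER statements for release notes.

def diff_tables(table: str, current: dict, target: dict) -> list[str]:
    """current/target map column name -> type string."""
    stmts = []
    for col, typ in target.items():
        if col not in current:
            stmts.append(f"ALTER TABLE {table} ADD {col} {typ};")
        elif current[col] != typ:
            stmts.append(f"ALTER TABLE {table} ALTER COLUMN {col} {typ};")
    for col in current:
        if col not in target:
            stmts.append(f"ALTER TABLE {table} DROP COLUMN {col};")
    return stmts

# Invented before/after schemas for illustration.
current = {"ID": "INTEGER", "NAME": "VARCHAR(50)"}
target = {"ID": "INTEGER", "NAME": "VARCHAR(100)", "EMAIL": "VARCHAR(255)"}
for stmt in diff_tables("CUSTOMER", current, target):
    print(stmt)
```

    Filing output like this with the release notes, as the reviewer does, gives an auditable record of exactly what changed between model versions.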

    What needs improvement?

    I would like to see improved reporting and, potentially, dashboards built on top of that. Right now, it's a little manual. More automated reporting and dashboard views would help because currently you have to push things out to a spreadsheet, or to HTML, and there aren't many other options that I know of. I would like to be able to produce graphs and additional things right in the tool, instead of having to export the data somewhere else. And that should work in an intuitive way which doesn't require so much of my time or my exporting things to a spreadsheet to make the reporting work.

    For how long have I used the solution?

    I've used the erwin Data Modeling tool since about 1990. I work more with the Standard Edition, 64-bit.

    What do I think about the stability of the solution?

    It's stable. This specific tool has been around a long time and it has matured. We don't encounter many defects and, when we do, a ticket is typically taken care of within a couple of days.

    What do I think about the scalability of the solution?

    We're using standalone versions, so we don't need to scale much. In the Workgroup Edition we've got it on a server and we have concurrent licensing, and we've had no issues with performance. It can definitely handle multiple users when we need it to.

    At any time we have six to 10 people using the Workgroup Edition. They are logical data modelers and DBAs.

    We've already increased the number of people using it and we've likely topped out for a while, but we did double it each year over the past three years. We added more licenses and more people during that time. It has probably evolved as far as it's going to for our company because we don't have more people in those roles. We've met our objectives in terms of how much we need.

    How are customer service and technical support?

    I would rate erwin's technical support at seven out of 10. One of the reasons is that it's inconsistent. Sometimes we get responses quickly, and sometimes it takes a couple of days. But it's mostly good. It's online, so that's helpful. But we've had to follow up on tickets that we just weren't hearing a status on from them.

    They publish good forums so you can see if somebody else is having a given problem and that's helpful. That way you know it's not just you.

    Which solution did I use previously and why did I switch?

    We did not have a previous solution.

    How was the initial setup?

    I've brought this tool into four different companies, when I came to each as a data architect. So I was always involved early on in establishing the tool and the usage guidelines. The setup process is pretty straightforward, and it has improved over the years.

    To install or make updates takes an hour, maybe.

    A lot of the implementation strategy for Data Modeler in my current company was the starting of a data governance and data architecture program. Three years ago, those concepts were brand-new to this company. We got the tool as part of the new program.

    For deployment and maintenance of the solution we need one to two people. Once it's installed, it's very low maintenance.

    What about the implementation team?

    We did it ourselves, because we have experience.

    What was our ROI?

    We're very happy with the return on investment. It has probably exceeded the expectations of some, just because the program is new and they hadn't seen tools before. So everyone is really happy with it.

    erwin's automation of reusable design rules and standards, especially compared to those of basic drawing tools, has been part of our high ROI. We're using a tool that we keep building upon, and we are also able to report on it and generate code from it. So it has drastically improved what was a manual process for doing those same things. That's one of the main reasons we got it.

    What's my experience with pricing, setup cost, and licensing?

    We pay maintenance on a yearly basis, and it's a low cost. There are no additional costs or transactional fees.

    The accuracy and speed of the solution in transforming complex designs into well-aligned data sources make the cost of the tool worth it.

    Which other solutions did I evaluate?

    We looked at a couple of solutions. Embarcadero was one of them.

    erwin can definitely handle more DBMSs and formats. It's not just SQL. It has a long list of interfaces with Oracle and SQL Server and XSD formats. That's a very rich set of interfaces. It also does both reverse- and forward-engineering well, through a physical and logical data model. And one of the other things is that it has dimensional modeling. We wanted to use it for our data warehouse and BI, and I don't believe Embarcadero had that capability at the time. Most tools don't have all of that, so erwin was more complete. erwin also has several choices for notation and we specifically wanted to use IDEF notation. erwin is very strong in that.

    The con for erwin is the reporting, compared to other tools. The interface and reporting could be improved.

    What other advice do I have?

    My advice would depend on how you're going to be using it. I would definitely advise that, at a minimum, you maintain logical and physical views of the data. That's one of the strengths of the tool. Also, while this might sound like a minor thing, it's important to create standard templates (erwin is good at that) and you can customize them. You can create a standard template so that your models have the same look and feel. Then anyone using the tool is using the same font and the same general layout. erwin is very good at helping enforce that. You should do that early on so that you don't have to redo anything later to make things look more cohesive.

    Another feature of erwin is that it can help you enforce your naming standards. It has little modules that you can set up and, as you're building the data model, it's ensuring that they conform to the naming standards that you've developed. I think that's something that some people don't realize is there and don't take advantage of.
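    The naming-standards modules described above can be imagined as an abbreviation glossary plus pattern rules. This toy checker is a hypothetical stand-in, not erwin's implementation; the glossary entries and the UPPER_SNAKE_CASE rule are assumptions for illustration:

```python
# Hypothetical sketch of a naming-standard check of the kind erwin's
# naming modules automate: a pattern rule plus an abbreviation glossary.
import re

# Assumed glossary: full words that the standard says must be abbreviated.
ABBREVIATIONS = {"NUMBER": "NBR", "AMOUNT": "AMT", "CUSTOMER": "CUST"}

def check_column_name(name: str) -> list[str]:
    problems = []
    if not re.fullmatch(r"[A-Z][A-Z0-9_]*", name):
        problems.append(f"{name}: not UPPER_SNAKE_CASE")
    for word, abbr in ABBREVIATIONS.items():
        if word in name.split("_"):
            problems.append(f"{name}: use {abbr} instead of {word}")
    return problems

print(check_column_name("CUSTOMER_NUMBER"))  # flags both unabbreviated words
print(check_column_name("CUST_NBR"))
```

    Running checks like this while the model is being built, rather than at review time, is what makes the feature worth switching on early.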

    The biggest lesson I have learned from using this solution faces in two directions. One is the ability to engage the business to participate in the modeling. The second is that the forward-engineering and automation of the technical solution make it more seamless all the way through. We can meet with the business, we can model, and then we can generate a solution in a database, or a service, and this tool is our primary way for interacting with those roles, and producing the actual output. It's made things more seamless.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    PeerSpot user
    EDW Architect/ Data Modeler at a financial services firm with 10,001+ employees
    Real User
    Feb 6, 2020
    We can input large files in one shot using the Bulk Editor feature
    Pros and Cons
    • "The solution's code generation ensures accurate engineering of data sources, with almost no development time. We have been using this solution for so long, and all the code it has generated has been accurate to the requirements. Once we generate the DDLs out of the erwin tool, the development team does a quick review of the script line by line. They will just be running the script on the database and looking into other requirements, such as indexes. So, there is less effort from the development side to create tables or build a database."
    • "Some source operational systems give us DDLs to work with that contain content which should not be part of the DDL before we reverse engineer it in erwin DM. Therefore, we manually edit those scripts before reverse-engineering within the tool, and editing these DDL scripts generated by the source operational systems takes some time. What I would suggest: it would be helpful if there were a place within the erwin tool to import the file and automatically eliminate all the unnecessary lines of code, leaving just the clean code to generate the table/data model."

    What is our primary use case?

    We work on different platforms like SQL Server, Oracle, DB2, Teradata, and NoSQL. Requirements come in through an Excel spreadsheet, a mapping document, which contains information about source and target and their mapping and transformation rules. We understand the requirements and start building the conceptual model and then the logical model. Once we have these data models built in the erwin Data Modeler tool, we generate PDF data model diagrams and take them to the team (DBAs, BSAs, QA, and others) to explain the model diagram. Once everything is reviewed, we go on to discuss the physical data model. This is one aspect of the requirements, from the data warehouse perspective.

    The other aspect of the requirements can come from the operational systems, where the application requirements might arrive as DDLs in SQL extension files. We reverse engineer those files and have the models generated within erwin Data Modeler. For some of them, we follow the same templates as they are. For others, once we reverse-engineer and have the model within erwin, we make changes to entity names and table names and capture metadata according to RBC standards. We have standards defined internally, and we follow and apply these standards to the data models.

    How has it helped my organization?

    There are different access-level permissions given to different users: data modelers, data architects, database administrators, etc. These permissions have read, write, and delete options. Some team members only have read-only access to the data models while others have more. This helps us with security and with maintaining the data models.

    The solution's ability to generate database code from a model for a wide array of data sources cuts development time only in some scenarios for us, where we have the data model built in the erwin tool. E.g., I can generate a DDL for the DBAs to create tables on the database. In other scenarios, it will be the DBAs, with read-only access to the erwin tool, who fetch the DDLs from the models that we created. Once the DDL is generated from the erwin tool, it is all about running the script on the database to create tables and relationships. There are some other scenarios where we might add an index or a default value based on the requirements. Ninety percent of the work is being done by the tool.

    The solution's code generation ensures accurate engineering of data sources, with almost no development time. We have been using this solution for so long, and all the code it has generated has been accurate to the requirements. Once we generate the DDLs out of the erwin tool, the development team does a quick review of the script line by line. They will just be running the script on the database and looking into other requirements, such as indexes. So, there is less effort from the development side to create tables or build a database.

    What is most valuable?

    We have a very large number of operational and data mart data models inside the erwin tool, with a huge volume of metadata captured. Therefore, when we are working on a very large requirement, there is an option called Bulk Editor where we can input large files into erwin in one shot to build the data model in much less time. All the built-in features are easy to use.
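    The bulk-edit workflow (large spreadsheet in, many model edits out) can be sketched as reading a metadata sheet programmatically. The column names and helper structure below are assumptions for illustration, not the Bulk Editor's actual file format:

```python
# Hypothetical sketch of preparing a bulk-edit input file: parse a
# spreadsheet export (CSV) of attribute metadata so hundreds of
# definitions can be applied in one pass rather than typed by hand.
import csv
import io

# Invented sample sheet; real exports would have the tool's own columns.
SHEET = """entity,attribute,datatype,definition
CUSTOMER,CUST_ID,INTEGER,Surrogate key for the customer
CUSTOMER,CUST_NM,VARCHAR(100),Customer legal name
"""

rows = list(csv.DictReader(io.StringIO(SHEET)))

# Group the attribute rows by entity for a single bulk pass per entity.
by_entity: dict[str, list[dict]] = {}
for row in rows:
    by_entity.setdefault(row["entity"], []).append(row)

print(len(by_entity["CUSTOMER"]))  # 2 attribute definitions loaded
```

    The gain is the one the reviewer names: one prepared file applies a large batch of metadata at once instead of attribute-by-attribute edits.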

    We make use of the solution's configurable workspace and modeling canvas. All the available features help us build our data model, show the entities and the relationships between them, define the data types, and add descriptions of the entities and attributes. With all of this, we can export a PDF version of the data model diagram and send it to any team for review.

    Not to forget the version-saving feature: every time we add, delete, or modify something in a data model and save, the tool automatically creates a new data model version, so we don't lose any work. We can go back to a previous version, reverse all the changes, and make it the current version if needed.



    What needs improvement?

    Some source operational systems give us DDLs to work with that contain content which should not be part of the DDL before we reverse engineer it in erwin DM. Therefore, we manually edit those scripts before reverse-engineering within the tool, and editing these DDL scripts generated by the source operational systems takes some time. What I would suggest: it would be helpful if there were a place within the erwin tool to import the file and automatically eliminate all the unnecessary lines of code, leaving just the clean code to generate the table/data model.
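    The cleanup step the reviewer wishes were automated can be scripted as a pre-processing pass before the file reaches the tool. The list of "noise" prefixes below is an assumption and would need tuning per source system; it is a rough heuristic, not a SQL parser:

```python
# Hypothetical pre-processing pass: strip lines a reverse-engineering
# import typically does not need (comments, session SET statements,
# GO batch separators) before feeding the DDL file onward.

# Assumed noise prefixes; adjust for each source system's output.
DROP_PREFIXES = ("--", "SET ", "GO", "USE ")

def clean_ddl(script: str) -> str:
    kept = []
    for line in script.splitlines():
        stripped = line.strip()
        if not stripped or stripped.upper().startswith(DROP_PREFIXES):
            continue
        kept.append(line)
    return "\n".join(kept)

# Invented sample of a source-system export.
raw = """-- generated by source system
SET ANSI_NULLS ON
GO
CREATE TABLE ACCOUNT (ACCT_ID INT NOT NULL);
GO
"""
print(clean_ddl(raw))
```

    A script like this turns the manual editing described above into a repeatable step, though anything beyond simple line filtering (e.g., vendor-specific storage clauses) still needs a real parser or hand review.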

    For how long have I used the solution?

    I have been using this tool for five years. I have used this tool at my previous companies as well as in my current company.

    What do I think about the stability of the solution?

    One recent scenario we came across in our day-to-day activities is that the data models are growing in very large numbers, and for some reason the performance was a bit low. It was suggested we upgrade to the newer version, erwin Data Modeler 2019 R1, so we are already in the process of moving to it. Once we migrate, we will do all the user testing to see how the performance has improved over the previous version. If there are still any performance issues or feature errors, we will get back to the support team.

    So far, whenever we have moved to a newer version, there has always been a positive result, and we keep that version until we see a newer one. Every six months or once a year, we get in touch with the erwin support team to ask whether any new features or enhancements have been added to the newest version, and whether it is the right time to move to it or to stick with our current version. They make suggestions based on our use cases and requirements.

    For deployment and maintenance of this solution, five to 10 people are needed. E.g., two people are involved from our team, two DBAs, and two people from the server team and other teams.

    What do I think about the scalability of the solution?

    What we have is a huge volume of data so far. We have a very large number of data models for operational systems and data marts, and there is still room for extension and expansion.

    Within my current company, this product has been accessed by Data Modelers, Database Administrators, Data Architects, and Data Scientists. 50 to 100 people have access to this solution.

    How are customer service and technical support?

    Once a year or every two years, we upgrade to the latest version. If we are looking for any new features or enhancements for new use cases or requirements, we get in touch with the erwin support team. They are very helpful in understanding our needs and providing the best possible suggestions and solutions, with very impressive SLAs. They really guide us and give us a solution when we have to upgrade versions.

    Which solution did I use previously and why did I switch?

    I have not used another solution with my current company. While I have used other solutions before, the majority of the time, I have been with erwin Data Modeler.

    How was the initial setup?

    Whenever there is a new release, we do the testing and installation from scratch. The initial setup is straightforward. You download the product onto your system, double-click it, and it gives you the basic instructions, like any other product. You just have to click "next", as everything is configured already.

    Some things are company-specific requirements; for these, you have to make sure you select the right options. Apart from that, everything is straightforward until you get to the last page, where you give it your server details and select the Windows credentials to log in, and that is company specific.

    Once we have it on the production environment, privileges are given only to Data Modelers who can read, write, and delete to design the Data Model.

    What about the implementation team?

    This is implemented in-house: the software is packaged by the Application Support team, who deploy it to the production environment through our internal Software Center application. Downloading and installing this solution takes about 40 to 50 minutes.

    What was our ROI?

    We haven't moved away from this product for a very long time. I am sure the company has seen benefits and profits out of the solution, saving a lot of work effort and resources.

    The accuracy and speed of the solution in transforming complex designs into well-aligned data sources definitely make the cost of the tool worth it.

    What's my experience with pricing, setup cost, and licensing?

    The company bought the license for three years, and it's not an individual license. While you can buy a license for each individual, that would be very expensive. There are concurrent licenses, where you purchase licenses in bulk and 15 to 20 people can access the license and model. Concurrent licensing scales with the number of users, and the cost is proportional.

    Which other solutions did I evaluate?

    When I joined the company, the product was already here. Our internal team holds a meeting to discuss new releases of this product. When we talk to the erwin support team, we ask, "What are the newest features? Will these be beneficial for our company based on our requirements and use cases?" Once everyone has given their opinion, we move forward with upgrading to the newer version, considering performance, new features, and enhancements.

    What other advice do I have?

    For our use cases and requirements, we are very happy with the erwin product. If we come across any issues or have any doubts about the tool, we get really good support from the erwin support team.

    It definitely has a positive impact on overall solutioning because of how it designs and captures data. This is definitely something any company that deals with data should look into, specifically when there are many database platforms and huge volumes of data. It is definitely scalable as well: we are one of the biggest financial institutions and have very large data models inside this tool.

    The biggest lesson learned from using this solution is how we can capture metadata along with the data structure of the database models. Sometimes, when we show the business the designs of the conceptual/logical model, they want to understand what the table and each field are about. We have the option to go into each entity/attribute, add the respective information, and show them the metadata captured for those entities and attributes.

    I would rate the newest release as 9.5 out of 10. When our requirements or use cases change, we move to a newer version and everything works fine. We are happy with that. However, as time goes on, a year or two, we might come across situations where we look for better enhancements of features or newer features.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    PeerSpot user
    Buyer's Guide
    Download our free erwin Data Modeler Report and get advice and tips from experienced pros sharing their opinions.
    Updated: January 2026