reviewer1479621 - PeerSpot reviewer
Senior Data Warehouse Architect at a financial services firm with 1,001-5,000 employees
Real User
Jan 7, 2021
Support for Snowflake is very helpful from the data modeling perspective, and JDBC/native connectivity simplifies the push mechanism
Pros and Cons
  • "The logical model gives developers, as well as the data modelers, an understanding of exactly how each object interacts with the others, whether a one-to-many, many-to-many, many-to-one, etc."
  • "We are planning to move, in 2021, into their server version, where multiple data modelers can work at the same time and share their models. It has become a pain point to merge the models from individual desktops and get them into a single data model, when multiple data modelers are working on a particular project. It becomes a nightmare for the senior data modeler to bring them together, especially when it comes to recreating them when you want to merge them."

What is our primary use case?

We use erwin DM as a data modeling tool. All projects in the data warehouse area go through the erwin model first, where they are reviewed and approved. That's part of the project life cycle. We then export the scripts out of DM into Snowflake, which is our target database. Any changes that happen after that also go through erwin, and we maintain a master copy of the erwin model.

Our solution architecture for projects that involve erwin DM and Snowflake is an on-prem Data Modeler desktop version, backed by a SQL database where the models are stored. In terms of erwin Data Modeler, Snowflake is the only database we're using.

We are not utilizing a complete round trip between DM and Snowflake; we are only doing one side of it. We are not doing reverse-engineering. We only go from the data model to the physical layer.

How has it helped my organization?

We use erwin Data Modeler for all enterprise data warehouse-related projects. It is vital that the models be up and running and available to the end-users for their reporting purposes. They need to be able to go through them and understand what kinds of components and attributes are available. In addition, the kinds of relationships that are built in the data warehouse are visible through erwin DM. It is very important for keeping everybody on the same page. We distribute erwin models to all the business users and our business analysts, as well as the developers. It's the first step for us; before something gets approved, we generally don't do any data work. What erwin DM does is critical for us.

erwin DM's support for Snowflake is very helpful from the data modeling perspective and, obviously, the JDBC and native connectivity also help simplify the push mechanism we have in erwin DM.

What is most valuable?

Primarily, we use erwin for data modeling only: the functionality for building logical and physical models. Those are the areas we use the most. We start with a conceptual model, then the logical model, and then the physical model.

When we do the conceptual data model, we look at the source and how the objects in the source interact, and that gives us a very clear understanding of how the data is set up in the source environment. The logical model gives developers, as well as the data modelers, an understanding of exactly how each object interacts with the others, whether it's one-to-many, many-to-many, many-to-one, etc. The physical model, obviously, helps in executing the data model in Snowflake, on the physical layer.
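
As a minimal sketch of what that last step produces (the table and column names here are hypothetical), the physical model forward-engineers into ordinary Snowflake DDL, with a one-to-many relationship captured as a foreign-key declaration. Note that Snowflake accepts primary- and foreign-key constraints but does not enforce them:

    CREATE TABLE customer (
        customer_id NUMBER(38,0) NOT NULL,
        full_name   VARCHAR(200),
        CONSTRAINT pk_customer PRIMARY KEY (customer_id)
    );

    CREATE TABLE account (
        account_id  NUMBER(38,0) NOT NULL,
        customer_id NUMBER(38,0) NOT NULL, -- the "many" side of the one-to-many
        CONSTRAINT pk_account PRIMARY KEY (account_id),
        CONSTRAINT fk_account_customer FOREIGN KEY (customer_id)
            REFERENCES customer (customer_id)
    );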

Compatibility and support for cloud-based databases are very important in our environment because Snowflake is the only database to which we push our physical data structures. So any data modeling tool we use should be compatible with a cloud data warehouse like Snowflake. It is definitely a very important functionality and feature for us.

What needs improvement?

We are planning to move, in 2021, to their server version, where multiple data modelers can work at the same time and share their models. When multiple data modelers are working on a particular project, it has become a pain point to merge the models from their individual desktops into a single data model. It becomes a nightmare for the senior data modeler to bring them together, especially when models have to be recreated in order to merge them. That's difficult. So we are looking at the server-based version, where data modelers can check models out, share them, and merge their data models with the existing data model on the server.

The server version, which we're not using yet, would definitely help us with the pain point of merging models. With the desktop version, merging two models into one requires more time. But when we go over to the server, the data modelers can automatically pull and push models.

We will have to see what the scalability is like in that version.

Apart from that, the solution seems to be fine.


For how long have I used the solution?

I've been using erwin DM for years, since the early 2000s and onwards. It's a very robust tool for data modeling purposes.

What do I think about the scalability of the solution?

We have five to seven data modelers working on it at any moment in time. We have not seen any scalability issues or slowness, or any sign that it does not support that level of use, because it's all desktop-based.

When we go into the server model, where the web server is involved, we will have to see. And the dataset storage in the desktop model is also very limited, so I don't think going to the server model is going to impact scalability.

In our company, erwin DM is used only in the data warehouse area at this moment. I don't see any plans, from the management perspective, to extend it. It's mostly for ER diagrams and we will continue to use it in the same way. Depending on the usage, the number of concurrent users might go up a little bit.

How are customer service and support?

I have interacted with erwin's technical support lately regarding the server version and they have been very proactive in answering those questions as well as following up with me. They ask if they have resolved the issue or if anything still needs to be done. I'm very happy with erwin's support.

What other advice do I have?

The biggest lesson I have learned from using erwin DM, irrespective of whether it's for Snowflake or not, is that having the model upfront and getting it approved helps in reducing project go-live time. Everybody is on the same page: all the developers know how the objects interact and how they need to connect the various objects to generate their ETL processes. It also definitely helps business analysts and end-users understand how to write their Tableau reports. If they want to know where the objects are, how they connect to each other, and whether a relationship is one-to-one or one-to-many, etc., they can get that out of this solution. It's a very central piece of the development and delivery process.

We use Talend as our ETL and BI vendor for our workloads. We don't combine it with erwin DM. Right now, each is used for its own specific need and purpose: erwin DM is mostly for our data modeling purposes, and Talend is for integration purposes.

Overall, erwin DM's support for Snowflake is very good. It's very stable and user-friendly, and our data modelers live in it, day in and day out. No complaints. There is nothing that impacts their performance.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Data Modeler at a logistics company with 10,001+ employees
Real User
Apr 16, 2020
Makes our data modeling staff more productive and has helped standardize data modeling efforts
Pros and Cons
  • "We use the Forward and Reverse Engineering tools to help us speed things up and create things that would have to be done otherwise by hand. E.g., getting a database into a data model format or vice versa."
  • "Complete Compare is set up only to compare properties that are of interest to us, but some of the differences cannot be brought over from one version of the model to another. This is despite the fact that we are clicking to bring objects from one place to another. Therefore, it's hard to tell at times if Complete Compare is working as intended without having to manually go into the details and check everything. If it could be redesigned to a degree where it is easier to use when we bring things over from one site to another and be sure that it's been done correctly, that would be nice to have. We would probably use the tool more often if the Complete Compare were easier to use."

What is our primary use case?

We use erwin to design conceptual, logical, and physical data models for new projects. We use the Forward Engineering tool to forward engineer data models into new database structures. We use the Reverse Engineering tool to bring databases into data models in erwin. We also generate HTML reports of the models to share with our customers.

Whenever we do have a new project that requires a new approach, we do try using erwin for it. For example, if we have an XSD message file, then we would try to see if there is a way to get that into erwin for better visibility of the structures that we have to work with.

How has it helped my organization?

The product has helped us standardize our data modeling efforts across the enterprise in regards to visuals and naming. We also use the Mart Tool from erwin, which allows us to store our data models in a centralized repository, which gives everyone visibility on what is out there and how it is all related.

We discuss existing and new business requirements with business users, data architects, and application developers to figure out how to capture and visualize concepts and their relationships. One thing we do have as a standard in all of our models is that we use the information engineering notation. This is standard across our enterprise. We use a hierarchical diagram layout to help visualize things, especially when we reverse engineer a database, as we want a clear visual layout of things.

What is most valuable?

We find a few of erwin's tools most valuable:

  • The Bulk Editor lets us easily make a lot of similar changes within our data model.
  • We use the Forward and Reverse Engineering tools to help us speed things up and create things that would have to be done otherwise by hand. E.g., getting a database into a data model format or vice versa.
  • The Report Designer is extremely useful because we can create reports to share with our business users and have a business discussion with them on how things work.

We find the text manipulation through the Bulk Editor to be extremely helpful. There were times when we had a set of entities that were not following our standards. With the help of the Bulk Editor, we were able to transform those names with a few Excel formulas to follow our standards.
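
As an illustration of the kind of transformation involved (shown here as SQL for consistency, though in practice it was done with Excel formulas pasted through the Bulk Editor; the entity names and the model_entities table are hypothetical):

    -- Derive a standards-conforming name from a free-form entity name:
    -- trim stray whitespace, replace spaces with underscores, force upper case.
    SELECT entity_name,
           UPPER(REPLACE(TRIM(entity_name), ' ', '_')) AS standardized_name
    FROM   model_entities;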

The Reverse Engineering functionality is good and easy to follow. It works really well. For the most part, we have been able to get any database to work with our data model format.

We quite heavily use the templates that exist to apply our standards to the data models created by our data modelers. We are able to use the templates to apply things like Naming Standards, casing on names, and colors to all our data models without having to be on top of it.

What needs improvement?

Complete Compare is not user-friendly. For example, saving known changes as a snapshot does not work as expected, and we are unable to find the exported files on our workstations at times. Complete Compare is set up only to compare properties that are of interest to us, but some of the differences cannot be brought over from one version of the model to another. This is despite the fact that we are clicking to bring objects from one place to another. Therefore, it's hard to tell at times if Complete Compare is working as intended without having to manually go into the details and check everything. If it could be redesigned to a degree where it is easier to use when we bring things over from one site to another and be sure that it's been done correctly, that would be nice to have. We would probably use the tool more often if Complete Compare were easier to use.

The client performance could be improved. Currently, in some cases, deleting entities causes the program to crash. Similarly, for the Mart's performance, we need to reindex the database periodically. Otherwise, browsing through the Mart or trying to open or save a data model takes unusually long.
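
For reference, a minimal sketch of that kind of periodic maintenance on the SQL Server database behind the Mart (the database name is hypothetical, and sp_MSforeachtable is an undocumented but widely used SQL Server helper):

    USE erwin_mart;  -- hypothetical name of the Mart database
    GO
    -- Rebuild the indexes on every table to clear out fragmentation.
    EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD';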

There are several bugs we discovered; if those were fixed, that would be a nice improvement. We encounter model corruption over time, and it is one of those things that happens. There is a fix that we run to repair this corruption, by saving the model as an XML file or running it through the Complete Compare tool. If this process could somehow be automated, having erwin detect when a model is corrupted and run this process on its own, that would be helpful.

There are several Mart features that could be added, e.g., a way to automatically remove inactive sessions older than a specified date. This way we could focus on seeing which users have been utilizing our central repository recently, as opposed to seeing everything that has happened since five years ago. This would be less of a problem if the Mart administrator did not have trouble displaying all of the sessions.
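
The Mart's session catalog is internal to the product, so this is purely a hypothetical sketch of the requested cleanup, assuming an imagined mart_sessions table with a last_activity timestamp:

    -- Remove sessions that have been inactive for more than a year.
    DELETE FROM mart_sessions
    WHERE  last_activity < DATEADD(year, -1, GETDATE());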

On the client side, there are some features that would come in handy for us, e.g., Google Cloud Platform support or support for some of the other cloud databases.

If we had a better way to connect and reverse engineer the databases into data models, that would help us.

Alter scripts can be troublesome to work with at times. If they can be set up to work better, that would help. On the Forward Engineering side of things, by default, the alter syntax is not enabled when creating alter scripts. We strongly believe this is something that should be enabled by default.
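
To make the distinction concrete, here is a hypothetical before/after (the order_line table is invented for illustration). With alter syntax enabled, an added column forward-engineers as an in-place change that preserves existing data:

    ALTER TABLE order_line ADD discount_pct DECIMAL(5,2) NULL;

    -- With alter syntax disabled, the generated script may instead drop and
    -- recreate the table (create a temporary copy, reload the rows, drop the
    -- original, rename), which is riskier to run against a populated database.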

On the Naming Standards (NSM) side of things, there is a way in erwin to translate logical names into physical names based on our business dictionary that we created. However, it would be nice if we could have more than one NSM entry with the same logical element name based on importance or usage. Also, if erwin could bring in the definitions as part of the NSM and into a model, then we could use those definitions on entities and attributes. That would be beneficial.

For how long have I used the solution?

We have been using it for at least 15 years, a very long time.

What do I think about the stability of the solution?

Overall, the server is mostly stable. After we implemented the reindexing fix on our database, everything works pretty well. On the client side, it is mostly stable, but sometimes it's not. There are certain actions that cause the client to crash. This has been much less of the case since we switched to the 64-bit version of erwin, which has been a great improvement.

We have found that erwin's code generation ensures accurate engineering of data sources. We haven't seen any issues. We pass our code off to DBAs to implement. The DDL that we generate gets passed up to the DBAs, who will add some physical features and may add some performance indexes; then we reverse engineer that information back into our data models.

For our bug-related issues, we have been given the recommendation to upgrade to the latest version. We are in the process of doing that and will see how it works out. We have also submitted some other things through erwin's idea board. There are a few issues that we haven't reached out to erwin about yet.

Currently, we have a team of people who take turns helping out other users. They figure out how to do different things. If there is a server side issue, we do have several people as well who will look into that. In the past, we did manage a lot with one person. However, we realized it was quite an undertaking. You either need one fully dedicated person to look into this or several people to take turns.

We have a Windows Server and a SQL Server database. Therefore, we have SQL Server-dedicated staff to help us with any SQL Server issues and Windows support staff who help us with any Windows issues. We don't generally have any issues with erwin. From a technical support side, we do have support staff if we were to run into any issues. Our team of five data modelers is pretty well-experienced with the tool, the Mart, and any sort of communication issues that we might have to deal with; e.g., if the SQL Server went down, these folks would be the liaisons to the SQL Server team.

What do I think about the scalability of the solution?

Given our mostly constant user base and constant growth of new data, our impressions of the scalability are great. Currently, we have about 2000 models in the Mart repository. Reaching this capacity has slowed down interactions with the Mart as opposed to when we had a fresh Mart. When we first started using the Mart server, it took about two seconds to open things like the Catalog Manager or Mart Open dialogue. Now, it takes around 10 seconds to do that part. For the most part, it seems to be pretty scalable. We've been able to continue using the tool given our large volume of models.

There are 35 to 40 users plus some occasional DBAs who use it to tweak any of the DDLs that they might want to pull.

We are able to develop our data models for mission-critical tasks with the solution’s configurable workspace and modeling canvas. We have 20 enterprise data modelers. We are mostly working on the standard RDBMSs: SQL Server, Db2, and Oracle. We also use some cloud technologies, like GCP, Azure, and Couchbase. Then, there are approximately another 15 data modelers which work exclusively in Oracle Business Intelligence from a data modeling aspect. This is for dimensional repository and data warehouse stuff. Therefore, we have about 35 to 40 data modelers in our organization for pretty much every major project that passes some sort of funding gate. Anything that is mission-critical for our organization will come through one of our two managers, depending on whether it's relational modeling or dimensional modeling. All of the database designs come through these two groups. There are some smaller database designs which we may not be involved with, but all of the critical application work comes through these teams. In regards to focusing on mission-critical tasks, we really wouldn't be able to do it without a tool like erwin. Since we are all very well-trained in erwin, it is the tool that we leverage to do this.

erwin generates the DDL for all our projects. We rely on the tool for accuracy, as some of our projects have hundreds of entities and tables.

How are customer service and technical support?

When it is bug related, we get a bug fix or are told to upgrade to the latest version. This has worked out in the past. Where it is question related, we have been pretty happy with their Tier 1 support's responses. We will receive some sort of a solution or suggestion on how to proceed in a very timely manner.

We would like support for JSON reverse engineering. That is something which is completely missing, but is something we have been working with quite often recently. If erwin could support this, that would be incredible.

How was the initial setup?

On the client side, the setup was mostly straightforward. It was a matter of going through the installer, reading a little bit, then proceeding to the next step. In the end, the installation was successful.

On the server side, it has been a bit more complex. We did have some documentation provided by erwin, but it wasn't fully intuitive or step-by-step. Some things were missing. It was enough to get started, then figure things out along the way.

On the client side, it takes five to 15 minutes to do the installation or upgrade to a newer version. On the server side, from the moment we backed up everything on the server and disabled the old Mart application, the upgrade took about two hours. If you include all the planning, testing, and giving support users enough time to do everything, the upgrade took about three months. In general, these are the timeframes we have experienced in the past.

What about the implementation team?

We simply used the documentation provided by erwin. Among the few of us who worked on the upgrade at our company, we had enough of a technical background to figure things out on our own. There were five to 10 people who worked on this initially:

  • We had one person who helped with the database side of things.
  • We had another person do everything on the application server.
  • To test out the different features of erwin in the new version and ensure that the existing features worked as intended, we involved several additional people from our team.

We go through a pretty rigorous testing procedure when we bring in a new release of any software like this. Although it doesn't affect customers directly, it certainly affects 35 to 40 people. Therefore, we want to ensure that we do not mess them up by having something not work. Normally, we go through this with any product. We first install it in a test environment and have a bunch of folks jump on. This is to ensure everything is working the way we want, and to work out all the kinks in setting up the production server before we move it into production.

What was our ROI?

It is an invaluable tool for us. It has been part of our data governance process in regards to database design for at least 15 years.

The amount of time saved is proportional to the amount of change in the databases that we are implementing at any time. The more code we generate (because the model is bigger), the more time we save, because we don't have to write everything up manually and check to make sure that the code is correct. If we had to give a number, this saves us anywhere from minutes to hours of work. The time frame depends on the data modeler, as some data modelers generate more code than others. Therefore, it could be on a daily, weekly, or monthly basis, and it depends on the project; some projects are in maintenance mode and not going through a lot of changes. It is much easier with this solution because we have a data model to reference for something that was developed, say, two months ago, and somebody can just pick it up, versus having to generate changes to a database without a data modeling tool.

The tool certainly makes the data modeling staff more productive than if they did not have a similar tool. Without erwin, our jobs would be a lot more tedious and take a lot more time.

Which other solutions did I evaluate?

We evaluated IDERA two years ago and decided to stay with erwin, mainly because the staff is familiar and comfortable with the tool. We think that was the overriding factor. The other factor was that converting from erwin to IDERA would be a major undertaking that we just weren't prepared to do.

The fact that it can generate DDL is a major advantage over something like Visio, where you can also do a database diagram. We don't have a Visio version that would generate DDL, so I'm assuming it doesn't, and any tool that can generate code for database definition will certainly have an advantage over a product that doesn't.

What other advice do I have?

I would certainly recommend this product to anyone else interested in trying it out. The support from the vendor is great. The tool overall performs well and is a good product to use.

Having a collaborative environment such as the one that erwin provides through the Mart is extremely beneficial. Even if multiple people aren't working on a single model, it's nice to have a centralized place for all the models. It gives us visibility and keeps everything in one place. Also, it supports versioning, which allows us to go back to the model as it was at different points in time, which is really helpful.

We do not use erwin to make changes directly to the database.

We have no current plans to increase our usage of erwin other than adding more models.

We would rate the solution overall as an eight (out of 10).

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Sr. Manager, Data Governance at an insurance company with 501-1,000 employees
Real User
Jan 30, 2020
Allows us to bring in data from dozens of platforms and search holistically across all of them
Pros and Cons
  • "When you're getting down to the database level, where you're building a design and you're creating DDL out of it, or you're going in the other direction where you're reaching into system catalogs and bringing things back, that starts to really require specialization. Visio isn't going to reverse-engineer that for you. Those features in erwin are valuable."
  • "erwin has versioning so you can keep versions, over time, of those models and you can compare any version to any version. If you're looking at a specific database and you want to see what changed over time, that's really useful. You can go back to a different version or connect that to your change-control processes so you can see what was released when."
  • "One of the things I've been talking to the erwin team about through the years is that every data model should have the ability to be multi-language... When I was working at Honda, it became very difficult to work with the Japanese teams using just one model. You can have two models, one in English and one in Japanese, but that means you have to keep the updates back and forth, and that always increases the risk of something not being updated."

What is our primary use case?

erwin Data Modeler does conceptual, logical, and physical database or data structure capture and design, and creates a library of such things.

We use erwin Data Modeler to do all of the levels of analysis that a data architect does. We do conceptual data modeling, which is very high-level and doesn't have columns and tables; it's more about concepts that the business describes to us in words. We can then use the graphic interface to create boxes that contain descriptions of things and connect things together. It helps us to do a scope statement at the beginning of a project, to corral the area of data the project is going to use.

Then we do logical data models, which are completely platform-independent. They're only about the datasets, the owned attributes, and different key analyses to determine what primary keys we want.

And then we do database designs, which relate to the physical data models.

We also do reverse-engineering, where we capture the catalogs of existing systems, purchased software, or even external vendor datasets. Vendors send us datasets and we can reverse-engineer what they send, especially the backup snapshots, where a vendor in the cloud will send data as a backup restore. To help with documentation for the reporting team, we do reverse-engineering so that they know what the table and column structures look like, along with sizing, nullability, and keys and constraints.

erwin is on-prem. We have the Workgroup Edition, which means that we don't just have client-side software; the client-side software stores the data models back into a database on an on-prem server.

How has it helped my organization?

When I got to my current company two years ago, it didn't have any collection of its data assets for reporting services. If someone wanted to know where a social security number was in all the databases, they had to download all of the structures and do all of the research. I came in and built a full, reverse-engineered library of the production environment. Once I did that using the erwin front end, I could help the CCPA team find the PII data by simply doing Workgroup Edition Data Mart reports that crossed all of the environments.

The cool thing about that is that the erwin models will bring in data from a dozen or two dozen different platforms. But once those models are in your Mart structures, you can do your search, looking for something like names of columns, across all of them. So you could be doing a search across Oracle and PostgreSQL and, because it's in your library, you can look at your assets holistically. For us, we went from zero to 500,000 columns of information. You can do that in Excel or in other ways, but this is a very simple way to do it. And you don't need to be highly trained and skilled. You could actually bring in a college intern and set them loose creating those libraries for you. Not needing highly skilled people is one of the great things about erwin. It's very intuitive and it's not hard to use.
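
The Mart's internal schema is proprietary, so as a single-database analog of that kind of report, here is the same sort of column-name search expressed against a standard catalog view (the patterns are illustrative):

    -- Find candidate PII columns by name; the Mart report runs the same
    -- kind of match, but across every model stored in the repository.
    SELECT table_schema, table_name, column_name
    FROM   information_schema.columns
    WHERE  LOWER(column_name) LIKE '%ssn%'
       OR  LOWER(column_name) LIKE '%social%';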

At my current company, we're not using it for much custom work but, in my past, the solution's ability to generate database code from a model for a wide array of data sources absolutely helped to cut development time. If you do your design on paper or in an erwin model before the developers start coding, and you review it to make sure that you've got everything in there, you do much less break-and-fix. If you can have an overview model, even for your Agile developers, and say, "This is where we're going," even if you don't deploy at all, it makes it much simpler. You don't have to drop your structures and recreate and reload your test data, because you're pretty confident you've gotten your database design right before people start coding.

erwin improves your standards because your naming standards and your design standards can all be reviewed much easier. You can make sure that misspellings, for instance, don't get all the way to production, or to the point where you have to live with them because people are already coding against them. You can do so much more QA analysis on your structure before it's deployed, if you're using a model.

What is most valuable?

You could probably use something like Visio to draw boxes and lines, especially for conceptual, very high-level things. But when you're getting down to the database level, where you're building a design and you're creating DDL out of it, or you're going in the other direction where you're reaching into system catalogs and bringing things back, that starts to really require specialization. Visio isn't going to reverse-engineer that for you. Those features in erwin are valuable.

In addition, erwin has versioning so you can keep versions, over time, of those models and you can compare any version to any version. If you're looking at a specific database and you want to see what changed over time, that's really useful. You can go back to a different version or connect that to your change-control processes so you can see what was released when.

With versioning, you can also compare between development environments and production environments. You can see what may not have actually changed or what changes are in the works. It also enables you to do the kind of troubleshooting where you're looking at: Why on this server does this copy of something seem to behave differently than on that server? erwin highlights that really quickly for you. You don't have to closely eyeball your comparison. erwin creates a report that comes back and says what is different. And you can focus on almost anything, from the privileges in the catalog to a data type or a name anomaly. Even for servers that are case-sensitive in their structure, it will tell you the difference between something in all-caps and something that's mixed-case. If you're getting to that level of detail when you're troubleshooting, erwin is great at doing that sort of thing.

In terms of the solution's visual data models for helping to overcome data source complexity, erwin shows you "what is," if you're talking about the physical layer. When it comes to being able to make things clearer and more understandable, it depends on what your structure is. If you've just reverse-engineered SAP, it's abbreviated German. You may need other tools to help you understand it. If you're doing forward work — if you're going from conceptual to logical to physical — erwin is fabulous at letting you change what you see in the graphic. You can change your data model from just looking at primary keys to looking at primary keys and foreign keys, to looking at just the definition of the table in boxes. It allows you to change that visualization depending on your audience. If you're working with the DBAs, you can add metadata and it expands the box showing the visual of the table structure, so you can concentrate on just data types, or you can do data types and nullability and foreign keys, and all different sorts of things. You can do the indexes on top of it as well. You could end up with a table graphic that's the width of your screen if you've added all the details in.

And if it's too hard to look at that way — if you're trying, for instance, to make sure that EmpID Is always a varchar 250 — it also has the ability to take that graphic and move it into what's called the Bulk Editor. That looks much more like an Excel spreadsheet, within a view in your erwin model. You can sort your Excel spreadsheet by column name and see all of the details next to it. That way, everywhere EmpID shows up in that model, it is now in more of a column-row view, and you can easily look at that to make sure that all the EmpIDs say varchar 250. If you see one that's wrong, you can actually change it in the Bulk Editor and it changes it in the graphic automatically, because an erwin model really isn't a graphic, it's much more like a little Access database. So when you change it on one view, it fixes it in the other.

In addition, anybody using erwin to do forward engineering will find the solution's ability to compare and synchronize data sources with data models, in terms of the speed of keeping them in sync, to be almost instantaneous. You can connect an erwin data model to a database and deploy your changes, or you can deploy just delta changes. Or you can deploy one little piece because you've identified one little piece of your model. But most of comparing and synchronizing data sources with data models comes down to people and process. The tool will absolutely help you get there, but it's not going to take on all of the requirements of putting standards and processes in place. If you haven't tied your erwin Data Modeler to your change-control, it can't help you. So it's not a dynamic connection to your servers, it's just a tool that you can use with your environments.

Also, while I'm not configuring erwin, I do have templates that erwin lets me set up to configure models: different templates do colors and domains and prebuilt macros for definitions, based on different things. You don't have to configure erwin. You just have to tell it what sort of a platform you're either going to or coming from. You can also set up some draw templates and customize the colorization of different things. If you want all your primary keys to be red, you can configure that, and set that up as a template.

Finally, the solution's code generation ensures accurate engineering of data sources. With reverse-engineering, I have found it to be completely accurate. I've never found a time when it didn't get the source information correctly into the model. If you're doing a data warehousing project, where you're going from source to target, erwin can produce an extremely comfortable and dependable and trusted graphic of where you're coming from, while you design where you're going to. You know what the data types are, what the nullability is — the structure of the data. You don't know all the characterizations of data values because erwin is not profiling data values. It's just picking up the catalog structure of the tables. But it is completely trustworthy, once you've reverse-engineered it. It has never let me down along those lines.

What needs improvement?

One of the things I've been talking to the erwin team about through the years is that every data model should have the ability to be multi-language. So along with the fact that I can change, for example, the graphic of the model to look at just the definitions in boxes, or just the key structures in the boxes, I'd love to be able to change the language. When I was working at Honda, it became very difficult to work with the Japanese teams using just one model. You can have two models, one in English and one in Japanese, but that means you have to keep the updates back and forth, and that always increases the risk of something not being updated.

The world is getting to be a very small place, and being able to have one file that has all of that metadata, in whatever form you need to read it, is the best way to manage that data. That would be a big change for them and it would be a big change to the Mart structure. It would be a one-to-many on the logical side for the business names, but it would also be a one-to-many on the definition side for the tables and the columns and everything else where you can have notes. I know that it's a big change I'm asking for, and they've had to put it off a little bit, but their business glossary tool now kind of looks at it that way. I'm hoping that the erwin model itself will be able to allow for that in the future.

For how long have I used the solution?

I've been using erwin Data Modeler from way before erwin owned it; since the '90s when it was Logic Works. That was before it went to Platinum and before it went to CA. And now they're spun off as erwin.

What do I think about the stability of the solution?

It's a very stable tool. It doesn't have problems with crashing or anything like that.

What do I think about the scalability of the solution?

I've never had a problem with its scalability, especially using the Workgroup Edition, because you keep all of your models in the database. It's not a problem to collect hundreds of different data models. Even scalability on your desktop or in your laptop would be more about the laptop itself, not the tool. It's kind of like Word. It saves the data outside of itself, so it doesn't have that problem.

There was no data modeling tool when I got here two years ago, so it is new to the culture, and this is a 40-year-old company. It is mostly being used with our master data management and our data warehousing, which is still doing a lot of development work. It's being expanded into supporting the data governance initiatives, to do data asset management. And I'm expecting that over time it will be used more for data asset change-control. We use a lot of vendor-purchased products, and being able to see the difference between their table structures before an upgrade and after an upgrade isn't being documented in a model right now, but it probably will be.

Also, the new California Consumer Privacy Act is forcing us to do much more of that data governance and data asset management, as well as data classification, so that we can identify PII data. That's definitely picking up steam.

How are customer service and technical support?

I use their technical support all the time. Sometimes it's just to ask them — because it's such a rich tool, they move menu items in the upgrades sometimes — "Okay, where did you put it this time?" But they've always been very helpful. They do have live chat on their website. About 75 percent of the time the chat agents can answer my question. If not, they hand me off to somebody. Given the amount of time I've worked with erwin, I almost know all their first names. They've always been very good and have taken care of me.

A lot of the technical staff moved with the tool, so they've stayed intact as it went through buyouts. I've always enjoyed working with the erwin team. They're very supportive, very helpful, and are very responsive to my requests and thoughts.

Which solution did I use previously and why did I switch?

My current company did not have a previous solution, other than Excel spreadsheets and Visio — nothing that I would call an industry-standard modeling tool.

How was the initial setup?

I was involved with the purchase and installation in my current company. I work with the DBAs so I don't touch the buttons for the installation. But the erwin support team is always a great help. I have never heard from any of the DBAs, during any of my "lifecycles," that installation is anything more than straightforward.

There's all sorts of bureaucracy that happens at a company, and that's true in our company as well. The deployment happened over the course of a couple of days: the installation, the tests, the verification, and making sure that the client-side could connect to the databases. I don't think any of that took too much time, other than getting everybody together to do it.

Our implementation strategy was to work with a very temporary dev environment and then roll it to a prod environment and then drop the dev environment. We don't keep a dev environment full-time because it is just a COTS tool. They do backups and restores just like any other mission-critical data. And we're using a combination of named licenses and concurrent licenses in our strategy so that we can leverage who uses it the most.

As for the number of people involved in an upgrade: I take on the SME role. We have the main DBA, who schedules the upgrade into the environment. Then we generally have a DBA who is assigned to do the upgrade. And our service desk helps with the deployment of the client side out to the users. So there are four people involved.

What was our ROI?

We saw return on our investment in erwin once we got our model library in place across all of our different data environments. Of course, you can always search using your DBA tools to find different things on a server. But once you've got your models in place, you can cross all the servers in your search, because you've pulled all that metadata into one place. It doesn't matter if it's an Oracle backend, an Access backend, a mission-critical Excel spreadsheet. Whatever it is that you have a model of, you can go search for something like a social security number. Just being able to do that, it almost pays for itself. When you think of how much time people spend to try to find things, it's completely amazing.

It depends on how many servers you have, how complex your environment is, and how many of your teams are going to look at stuff. If you have a really obfuscated structure, then you're actually profiling the data to figure things out.

Being able to type in, "Go find column names with SSN in them," it comes back almost immediately. That probably gets you 80 percent of the way to finding that particular aspect. How much time did we spend in the Y2K crisis just to find dates? Just identifying the columns that were going to be impacted was a feat. I keep telling my cohorts that social security number data is going to be the next Y2K. As soon as we run out of numbers, they're going to have to add a digit, but everything is hard-coded to the current span of digits. As soon as the federal government decides that it's going to do that, we are all going to have to go fix it.

The nice thing about having your assets in a database is that the more value-add you've done on your models, the less you have to look at physical names on columns. If you've put your logical or your business names on columns, that's even better.

I could imagine that in very serious research, you're going to cut 80 percent off the time it would take, depending on how complex your environment is. You can get there so much faster. Obviously, it won't give you everything because human beings just don't have it all written down. Or it could be that some nitwit is putting social security numbers into note fields and you don't know about it. But it's going to get you a long way there.

The erwin model is much more like an Access database. The return on investment is that it is a very three-dimensional type of metadata collection about your model. In something like Visio, you can add notes on a little graphic piece, but you can't add multiples. You could approximate multiples with carriage returns in the block, but you can't categorize your metadata. You also can't add more value about that metadata. One little box on an erwin model can be opened logically and there will be 10 tabs' worth of value-add you can put in. You can open the model so that you're looking at the physical side of the house, and still have another 10 tabs that have nothing to do with the logical side, other than that they share the primary key of the little graphic piece that you're looking at.

erwin is so much more flexible. And, with respect to return on investment, it's customizable. erwin has the concept of user-defined properties where if you need to do something special within your models that says something like, "Is this used by this line of business?" you can create flags, or dates, or text, or drop-down lists, and attach it to anything in the model itself. In that way you've created some value-add that is customized to your company's needs. To me that adds tremendous power to the return on investment. You can't do that with just plain drawing tools.

What's my experience with pricing, setup cost, and licensing?

We came up with a two-part concept with our licensing. Our data architects have named licenses that only they can use. We have four named licenses today. But we also bought three concurrent licenses, two that are just for developers and the DBAs, and one that's a "read-only" that anybody can use. It's a little bit difficult for me to tell you how many people use those, but probably no less than 10 and possibly upwards of 25.

We pay for maintenance on a yearly basis. There are no additional costs for the Workgroup Edition, which has the server component. That is the edition where you can save your models back to a database, which we installed on SQL Server, but I think you can install it on any of several different platforms.

Which other solutions did I evaluate?

Our company looked at two others. Because I have worked with erwin for so long, I wanted to make sure, when I came in, that my current company got the opportunity to make its choice based on what everybody's needs were here. We did a full vendor tool assessment back then. Although I don't have it in front of me, I know we looked at Embarcadero and it may be that we also did the highest level of Visio, so that between them we looked at a very high-grade tool and something that would just get us by. 

When I got here, the DBAs had already put acquiring an erwin license into their next year's budget. They had already made that choice. But I took us all the way back to doing a tool compare because I wanted to make sure that everybody got the opportunity to weigh in on the choice that was made.

A lot of the difference between erwin and other products was the licensing and pricing structure for maintenance. Some of it was the inter-connectability with other tools. erwin does a really good job of building bridges between many different tools. Part of it was also its ability to be very sustainable because it had the Workgroup database backend, which Embarcadero has as well, but Visio does not. That was part of the decision point: whether we wanted to go with something really small and move up to a more industry-standard tool, or just take the opportunity to bring in a couple of licenses. We brought in a smaller footprint last year, and we added a few more licenses in 2019.

The primary reasons that erwin was selected were that it was much more affordable for us and it was easily maintainable.

What other advice do I have?

Take the time, especially if you're going to use Workgroup, but even if you're using desktops, to figure out how you're going to manage the models. They need to have a naming convention. They need to have a directory organization that makes sense to you. They need to have change-control, just like code. You need to figure out how you're going to use it because once it gets past 50 models, finding something and knowing how to change it and where to change it and where to publish it back out is going to be your biggest headache. You need to think long-term. It's easy when you just have a few models. As soon as you have 1,000 of them, unless you've thought ahead, you're going to have a huge cleanup problem.

The biggest lesson I take away from using erwin Data Modeler is that we should all be doing much better library sciences with our data assets than we do. erwin is a great tool to capture your library sciences. It can tell you what you need to know about a piece of data, or a row of data as a dataset in a table, or a collection of tables. You can add information not just about single things but collections of things. 

We should have many more people whose job it is to add that value. Right now, companies still mostly use erwin for custom development and it needs to be much more built into documentation of any type of data. I use erwin to do data models of reports and of API calls, for example. Any data set, to me, qualifies as needing a model so that you can tell what data elements are in it and what that dataset is used for.

Through all the years, erwin has done a great job of making things better and better. There are always things that we're talking about in terms of improving it, but the fact that it's now starting to integrate better with data governance-type tools so that all of your definitions can move to more of a glossary form, rather than just being in the models, is tremendous. The more that that's integrated back and forth, the better it's going to be.

Out of all of the modeling tools, erwin is a 10 out of 10. It hits all the high points for me. There are some pieces of functionality that competitors come up with, maybe a little bit earlier, but it's a leapfrog-type of thing. Every time the vendors find that something is needed in the world of modelers, they all start to bring it in. I find erwin to be very responsive to those needs. So now, erwin has NoSQL modeling aspects in the tool and they're connecting with their own suite of data governance tools. That means you can push definitions to your data governance tool or bring them back from your data governance tool. It's starting to become much more of an integrated solution, rather than just a standalone.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Richard Halter - PeerSpot reviewer
President at a tech services company with 51-200 employees
Real User
Top 5
Jan 27, 2022
Beautiful model for the new microservices world that is easy to use
Pros and Cons
  • "It reduces monthly savings by hundreds of thousands of dollars. Think about a company like Costco and all of the points of sale systems in Costco, all of the systems, the applications, but if all the applications in Costco all had their own data model, trying to integrate those, upgrade them and manage their different versions of the same model throughout the store, is an absolute nightmare. It's phenomenally expensive. This helps reduce that cost significantly. I'm talking on the orders of hundreds of thousands of dollars."
  • "The navigation is a little bit of a challenge. It's painful. For example, if you've got a view open and you want to try to move from side to side, the standard today is being able to drag and drop left and right. You can't really do that in the model. Moving around the model is painful because it doesn't follow the Windows model today."

What is our primary use case?

I was part of a standards organization, and we built a data model that is a standard data model for use in retail. That data model has now been released in version 7.3 and is implemented all over the world. We don't implement the model; we built the logical model, and then companies build their own physical models from there.

Ours is a retail data model, which means that it handles the operational side of retail, and there are somewhere around 8,000 attributes in it. It has around 10 groupings of things. We have a grouping on transactions, and there are all kinds of transactions that can occur in retail. The whole customer life cycle is covered, as are inventory, items, and all of that. The use case is retail operations. It's massive; there are hundreds of use cases in this.

How has it helped my organization?

We don't implement, we simply tell other people how to do it. It's a beautiful model for the new microservices world, so we can help people understand how to fit this into their world. In terms of us actually doing something and implementing it and all that, that's really not in scope for what we do.

erwin is easy. In the microservices world, having a unified retail model like this one, which is a standard, allows two companies to interoperate easily. In fact, the whole reason the model was created, back in 1993, was that about half a dozen major retail CIOs got together and said, "We've got to have a standard model, because every time we buy a new point of sale system, we need to re-architect our entire enterprise." They started building this model in 1993, and the beauty of it is that it does precisely what they intended. A retailer can now integrate two vendors' systems easily, as long as they both follow the same model. It reduces their cost of integration dramatically, and it is quite a powerful model in and of itself.

It produces monthly savings of hundreds of thousands of dollars. Think about a company like Costco and all of the point-of-sale systems, applications, and other systems it runs. If all the applications in Costco had their own data models, trying to integrate them, upgrade them, and manage the different versions of the same model throughout the store would be an absolute nightmare. It's phenomenally expensive. This helps reduce that cost significantly. I'm talking on the order of hundreds of thousands of dollars.

What is most valuable?

erwin is pretty easy. I've been using it for so long it's like second nature. 

The visual data models are pretty easy for helping to overcome data source complexity and enabling understanding and collaboration around maintenance and usage. It's easy to add, change, and update things. We get feedback from retailers. For example, if somebody wants to update something in the item area because they want to use a new item identifier, it's just a matter of going in and adding it to the enumerations for that. Or somebody might come in and say, "We're using a little bit of a different pricing model, so we need to add this information into the pricing area." Or people will say, "We need to add Bitcoin," so we can go in and add Bitcoin and the attributes you need to support it, very easily. At this point, we're not adding new capabilities; we're simply expanding existing ones.

What needs improvement?

The navigation is a little bit of a challenge. It's painful. For example, if you've got a view open and you want to try to move from side to side, the standard today is being able to drag and drop left and right. You can't really do that in the model. Moving around the model is painful because it doesn't follow the Windows model today.

Otherwise, it's got everything I need and it's not hard to use for me.

What do I think about the stability of the solution?

The stability is great. We don't have any problems. 

How are customer service and support?

I actually did use their support. I had some issues getting it installed, which stemmed from the fact that they had given me a copy of Data Modeler to support the standard data model, and getting that approved and authorized was a bit of a challenge. I went through the help desk and they got it done pretty easily for me. It was a unique problem.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

I had used dBase, a long time ago, to build a model for the oil industry. That was 1980s vintage, so there is no comparison.

How was the initial setup?

The initial setup is straightforward. You can install it without a lot of hassle.

What's my experience with pricing, setup cost, and licensing?

They gave us a copy because we support a standards data model, so pricing is not really something I can compare. I think it's a bit expensive, but it does what we want.

Which other solutions did I evaluate?

At one point we had a data modeler who came in and wanted to switch to Embarcadero, and it turned out to be a huge mess, so we dropped it. It didn't last very long. I think she had an agreement with them and got a bonus for trying to get the model converted, but it was such a huge mess that we didn't do it.

The model is huge; it's got 8,000 attributes in it. Being able to go through and validate that every one of those 8,000 attributes properly converted over to the correct place in Embarcadero was such a massive job that we didn't mess with it. And it's not just the attributes; it's the relationships and table names as well. I suspect that if we had gone to Embarcadero it would have been just fine, but it was simply too big a job.

What other advice do I have?

erwin DM is good. It does the job and it's been around a long time, so I think it would be a good one to use. I don't have any problems with it.

I would rate erwin DM a nine out of ten. Nothing is perfect. I don't have any real issues with it. It does everything we need it to do. It's really good.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Data Management & Automation Manager at a consultancy with 11-50 employees
Reseller
Dec 5, 2021
Saves a lot of development time
Pros and Cons
  • "The most valuable features are the ability to reverse engineer and do model comparison. With the reverse engineering, I can understand the databases from third-party products. With the model comparison, I can track the differences between two versions of the same database."
  • "I would like to have more data sources from other, different vendors. In recent years, the vendor has reduced the number of data sources, and I would like to have more data sources for every brand. For example, with Oracle, I would like to have compatibility for many versions, including old ones, not just the most recent."

What is our primary use case?

We usually use it to design new databases, as well as to reverse engineer databases from third-party products, e.g., ERPs or financial software.

What is most valuable?

The most valuable features are the ability to reverse engineer and do model comparison. With the reverse engineering, I can understand the databases from third-party products. With the model comparison, I can track the differences between two versions of the same database.

Being able to see the modeled database graphically is very helpful for my job, as it helps me understand the database. It is very different from SQL and DML scripts, which are very hard to understand as plain text. When we have a graphic, it is very helpful; we save time understanding the database.

I like the synchronization ability a lot because it lets me apply a level of governance to my models. I can be sure that the model in my documentation or development environment matches the database running in our production environment; it is accurate. It is not always fast when we have dozens of tables, but it works. I wait about an hour to have a big database synchronized.
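
To make that concrete, a compare-and-synchronize run typically ends in an alter script along these lines. This is only an illustrative sketch with hypothetical table and column names, not erwin's actual output.

    -- The model has a column the production database lacks: add it.
    ALTER TABLE customer ADD middle_name VARCHAR(50);

    -- The model widened an existing column: alter it to match (syntax varies by DBMS).
    ALTER TABLE customer ALTER COLUMN email VARCHAR(320);

    -- The model defines an index missing from production: create it.
    CREATE INDEX ix_customer_last_name ON customer (last_name);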

The solution’s code generation ensures accurate engineering of data sources. It avoids rework.

What needs improvement?

I would like to have more data sources from other, different vendors. In recent years, the vendor has reduced the number of data sources, and I would like to have more data sources for every brand. For example, with Oracle, I would like to have compatibility for many versions, including old ones, not just the most recent. 

The technical support could be better. They could give faster solutions.

For how long have I used the solution?

I have been using this solution since 1995.

What do I think about the stability of the solution?

It is stable.

What do I think about the scalability of the solution?

Only when the database is very big could we have some trouble. At maybe 12,000 tables, it starts to have some problems.

With erwin, we just need to add memory to the computer in order to work with bigger databases. However, it would be good to have erwin for other platforms, e.g., Linux and Macintosh, not just Windows. 

How are customer service and support?

The technical support is good. They are highly skilled. 

Which solution did I use previously and why did I switch?

Before erwin, I was working manually, using a notebook for my databases; I was designing databases and analyzing them by hand all the time.

We chose erwin because it was the only solution which could help us design a database on the computer.

What was our ROI?

It saves a lot of development time. I think we are saving from two weeks to one month annually. It depends on the size and complexity of the database.

The solution’s automation of reusable design rules and standards is good compared to basic drawing tools. It saves time and keeps us from errors, which are very costly in the database. Therefore, we can get back our money very quickly.

The accuracy and speed of the solution in transforming complex designs into well-aligned data sources makes the cost of the tool worth it.

What's my experience with pricing, setup cost, and licensing?

erwin is expensive compared to other solutions. We are paying almost $6,000 per seat a month.

Which other solutions did I evaluate?

I have used different solutions along the way, but then I moved back to erwin. Besides erwin, I have tried IDERA Embarcadero, but I think erwin is more usable and has helped me to do my job better.

What other advice do I have?

I rate this solution as nine out of 10.

Which deployment model are you using for this solution?

On-premises
Disclosure: My company has a business relationship with this vendor other than being a customer. Reseller
PeerSpot user
it_user1425207 - PeerSpot reviewer
Senior Project Manager at a tech services company with 51-200 employees
Real User
Oct 18, 2021
Stable, scales well, satisfactory support, and saves time during project reengineering
Pros and Cons
  • "There is absolutely no problem with the stability."
  • "The erwin ETL functionality has room for improvement when it comes to mapping databases with a classic entity-relationship model to a data warehouse model."

What is our primary use case?

For the first 30 years of my career, I worked on many small projects. Since erwin was released, I used it to help develop projects up until about two years ago. At that time, I moved to a new company and I still use erwin in my current role.

When I moved to the new company, I recommended erwin and explained it to my colleagues and my clients. When the most recent version was released, I looked at the licensing and became familiar with its new features and benefits.

I have developed a couple of projects myself in the past two years: one in Serbia that had to do with mail, which was an interesting project, and another that had to do with handling automotive equipment maintenance. One of the projects I started from the beginning, whereas the other was reengineered, with changes made and new features added.

I have also worked with erwin from a higher-level role. Rather than developing smaller projects, I have taken responsibility for a much larger project worth several million Euros.

How has it helped my organization?

In general, if you start using erwin from the beginning of a project then it provides a lot of benefits. You have to start with the process modeling, and then find data and create an entity, and the process continues. Essentially, you have to have something before you create the data model. However, if you're talking about reengineering a project that has existing data models or existing processes, then the benefits of using erwin are really big. You can save 50% of the time if you're working on reengineering existing processes or existing data models.

The visual data models are okay for helping to overcome data source complexity. If the project is started with erwin from the beginning, then I can create the database, stored procedures, and everything that I need. However, when it comes to reengineering an existing product, if the database changes, then some of the stored procedures, as well as other things, also need to change. For example, in one project, the original database was Informix and the new one is Microsoft SQL Server.

What needs improvement?

The erwin ETL functionality has room for improvement when it comes to mapping databases with a classic entity-relationship model to a data warehouse model. If you have a legacy database like Informix, Oracle, SQL Server, or something similar, then you need to create a data warehouse database. These use completely different logic and you need to create some procedures to map the tables.
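
As an illustration of the kind of mapping procedure this involves, the sketch below loads a denormalized warehouse dimension from two normalized legacy tables. All table and column names are hypothetical, and this shows only the general pattern, not anything erwin generates.

    -- Flatten normalized customer and address tables into a warehouse dimension.
    INSERT INTO dim_customer (customer_id, full_name, city, country, load_date)
    SELECT c.customer_id,
           c.first_name || ' ' || c.last_name,
           a.city,
           a.country,
           CURRENT_DATE
    FROM customer c
    JOIN address a ON a.customer_id = c.customer_id;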

The number of supported databases should be extended.

To have more documentation or available knowledge on how to connect is very important. This is probably the most important issue that I have experienced. Specifically, I would like more information on how to connect, how to transfer, and how to do the mapping from a legacy database.

If you try to open a file from an older version of erwin, you can only open files from one version back. This is all that they support, so they need to add the option of opening all older versions. As it is now, they push people to buy a new version every year.

For how long have I used the solution?

We have been using erwin since the beginning when it was first released by Logic Works in 1993.

What do I think about the stability of the solution?

There is absolutely no problem with the stability.

What do I think about the scalability of the solution?

In terms of scalability, there is not enough long-term support for each version of erwin. In the past, the file extension for erwin models was ER1; after that, the extension was ERW, and now it is ERWIN, which created some confusion.

In my current company, I am the only person using erwin because we are not specialists in development. In my previous company, five or six people were using it.

How are customer service and support?

The support is okay and I am satisfied with it. However, it's a little slower getting support for the role that I'm in now, as compared to when I was at my previous company.

In the past, the support was always okay. Within a few hours, I either had an answer or was at least speaking with them. We sent emails to discuss how to solve the problem.

Overall, I'm really satisfied with the support.

Which solution did I use previously and why did I switch?

I have used several other modeling tools in the past, including SAP PowerDesigner and Bizagi. My experience with them has depended on what I needed to do. For example, Bizagi has a completely different way of developing a model. I am not satisfied with it because they don't follow the rules for relational modeling.

On the other hand, PowerDesigner is quite a good tool that works well. It's a complex tool that can be used for data modeling and process modeling. It uses BPMN methodology and, in terms of functionality, it has enough. From a cost perspective, it is cheaper than erwin.

How was the initial setup?

The initial setup is straightforward, it was no problem.

The installation can be done in five minutes. The new version may take a little longer, but it is very fast.

What about the implementation team?

Once we have completed the analysis of the process, we bring erwin in.

We start with the global entities, looking at the system at a higher level without talking about the relationship model. Then I look for the relationships and foreign keys, and after that we search for the stored procedures and functions.

We look first at creating the keys, the primary and alternate keys in the tables and entities, and at the end we develop the indexing. The indexing requires daily analysis once the database is put into operation: you look at the speed of everything, and you can change the indexing to make your database faster.

What was our ROI?

In my previous company, we had a really large return on investment from using erwin. In one of the systems that we re-engineered, there were more than 2,000 tables. If these had to be created from the beginning, it would have taken a really long time to collect all of the information. When it comes to reengineering, the database usually stays largely the same, with perhaps 20% to 30% of the model being modified.

In my current company, we are trying to educate our clients on using erwin. Many of them are not using it in their everyday business. The problem is that bigger organizations, like government departments, usually want to have somebody from outside their own organization develop the solution.

What's my experience with pricing, setup cost, and licensing?

The price of erwin Data Modeler is very expensive, in particular for this part of the world. I think that for the United States and Europe, the price is probably okay. However, in Serbia, the salary of an IT engineer is perhaps 50% of what it is in the United States. Because of this, erwin needs to have a different pricing model for different countries.

For example, you cannot sell products in places like Serbia, Croatia, Bosnia, Bulgaria, Romania, and other places in this part of Europe at the same price as countries like Germany, Norway, or the United States. This is something that needs to change from a licensing perspective.

What other advice do I have?

In terms of erwin's code generation and the accurate engineering of data sources, for some databases it is quite okay. For others, however, it does not exactly follow the rules of the database in the way I want the model to be generated.

There are two ways to generate a database from a model. The first is to create a schema, which is a text file that contains everything needed to create the complete database structure. The second is to have erwin connect to the database directly, in which case erwin creates the database itself.

In some cases, it is better to first create a DB schema, which is an SQL file where you can look for syntax errors or other problems in the code. Once complete, you can create the database, including the tables and everything else.
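
For example, a forward-engineered schema file is just readable DDL along the following lines, which can be checked for syntax errors before anything touches the database. The tables and columns here are illustrative, not output from an actual model.

    CREATE TABLE item (
        item_id    INTEGER       NOT NULL,
        item_name  VARCHAR(100)  NOT NULL,
        CONSTRAINT pk_item PRIMARY KEY (item_id)
    );

    CREATE TABLE item_price (
        item_id         INTEGER       NOT NULL,
        effective_date  DATE          NOT NULL,
        unit_price      DECIMAL(9,2)  NOT NULL,
        CONSTRAINT pk_item_price PRIMARY KEY (item_id, effective_date),
        CONSTRAINT fk_item_price_item FOREIGN KEY (item_id) REFERENCES item (item_id)
    );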

When I start to use erwin in a project, it is normally right after I analyze the process. The second thing I do is look at the global entities, so I can view the system from a high level without dealing with the relationship model. After that, I start looking for relationships, creating the primary and alternative keys in the table. I then start looking for foreign keys. At that stage, I begin to look for stored procedures and functions. After this, I work on the creation of indexes.

The indexing needs to be analyzed daily, once the database is put into operation. This helps with database performance. When you change the indexing, the database gets faster.
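
As a small sketch of that tuning loop, with hypothetical object names and DBMS-dependent syntax: if monitoring shows a frequent lookup running slowly, you add an index shaped to it, and you drop indexes that analysis shows are never used.

    -- Frequent query: orders for a customer within a date range.
    CREATE INDEX ix_sales_order_cust_date
        ON sales_order (customer_id, order_date);

    -- Remove an index the workload never uses (DROP syntax varies by DBMS).
    DROP INDEX ix_sales_order_status;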

My advice for anybody who is planning to use erwin is that sometimes, it should be used to develop models right from the beginning. It will depend on the project, as well as the organization and the experience that they have with erwin. It is also possible to have different people and different teams from the same company working on one model. For example, we have three development centers that are all working on the same model.

The biggest lesson that I have learned from using erwin DM is that it pushes you to use the notation and methodology exactly. You must follow the rules. Several years ago, they started adding tools and options that are used to verify a model, and this functionality helps to point out mistakes in the models. Once the model is correct, you can move on to working with the databases and the specifics of each one. You can move very easily between databases such as Informix, Oracle, and MySQL, without losing much time.

I would rate this solution a ten out of ten.

Which deployment model are you using for this solution?

On-premises
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
Architecture Sr. Manager, Data Design & Metadata Mgmt at a insurance company with 10,001+ employees
Real User
Dec 1, 2020
Seeing a picture that shows you how the data relates to each other helps you better understand what the data is and how to use it
Pros and Cons
  • "The visual data models for helping to overcome data source complexity and enabling understanding and collaboration around maintenance and usage are excellent. A picture speaks 1,000 words. Seeing a picture that shows you how the data relates to each other helps you better understand what the data is and how to use it. Pairing that information with a dictionary, which has the definitions of the tables and columns or the entities and attributes, ensures that the users understand what the data is so that they can use it best and most successfully."
  • "I would like to see the reporting capabilities be more dynamic and more inclusive of information. The API is very sparsely understood by people across the user community."

What is our primary use case?

We use the erwin Data Modeler tool to document conceptual, logical, and physical data design. Business data models capture the understanding of the data from a business perspective, which can then drive physical design to ensure data is represented and used correctly.

How has it helped my organization?

The automated generation of the DDL ensures that the data store looks exactly as the data design. It also ensures that the standards that are governed are followed and implemented successfully.

What is most valuable?

We use the diagrams and data dictionary capabilities to help users understand the data environments, as well as how the data relates to each other. We use the naming standard master file to govern and ensure that we have consistent naming and abbreviations across and within data stores. We use the forward engineering templates to standardize and govern the generation of the data definition language that is used to actually make the changes to the data stores. We also use the Compare capability to ensure that we have up-to-date production data models. And we are looking forward to the integration of the Data Modeler metadata with the Data Intelligence Suite in R2.
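
To illustrate what governed naming buys, here is a sketch of DDL in which every business term is abbreviated the same way. The abbreviation glossary assumed here (CUST for customer, ACCT for account, DT for date) is hypothetical, not the contents of an actual naming standard master file.

    -- Every occurrence of a business term uses the same governed abbreviation.
    CREATE TABLE cust_acct (
        cust_acct_id   INTEGER  NOT NULL,
        cust_id        INTEGER  NOT NULL,
        acct_open_dt   DATE     NOT NULL,
        acct_close_dt  DATE     NULL,
        CONSTRAINT pk_cust_acct PRIMARY KEY (cust_acct_id)
    );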

The visual data models for helping to overcome data source complexity and enabling understanding and collaboration around maintenance and usage are excellent. A picture speaks 1,000 words. Seeing a picture that shows you how the data relates to each other helps you better understand what the data is and how to use it. Pairing that information with a dictionary, which has the definitions of the tables, columns, the entities, and attributes, ensures that the users understand what the data is so that they can use it best and most successfully.

Its ability to compare and synchronize data sources with data models in terms of accuracy and speed for keeping them in sync is excellent. 

We don't typically use the configurable workspace and modeling canvas because while the platform allows for the flexibility to dynamically include multiple colors and multiple themes, feedback from business users is that the multiple colors and themes can become overwhelming. When you do that, you need to include a key so that people understand what the colors mean.

Its ability to generate database code from a model for a wide array of data sources cuts our development time. By how much depends on the number of changes that are required within the data store. It is certainly better to automate the forward engineering of the DDL creation, rather than having someone manually type it all out and then possibly make a human error with spelling irregularities.

Its code generation ensures accurate engineering of data sources. It decreases development time because it's automated.

What needs improvement?

I would like to see the reporting capabilities be more dynamic and more inclusive of information. The API is very sparsely understood by people across the user community.

I would also like to see a greater amount of integration with the erwin Data Intelligence Suite and the erwin Web Portal for the diagram delivery. That would be beneficial to all.

For how long have I used the solution?

I have been using erwin for twenty years. 

What do I think about the stability of the solution?

It's very stable, especially having been available for use for so many years.

What do I think about the scalability of the solution?

It is scaling well to include the new data structures, rather than being stagnant and only continuing to support the older DBMS types.

We have over 100 Data Modelers in my company and the users of the metadata go into the 1,000s.

We have an administrator who is responsible for the software upgrades, we have a governance community in the Center of Excellence, and we have the actual Data Modelers themselves who provide the delivery of the physical data models. We have data architects who create business, conceptual, and logical data models. And then, of course, we have our developers who use the data model information to understand the code that they are writing. We also have the business users who use the diagrams and the data dictionaries to understand the data so that they use it correctly.

Data Modeler is being used very extensively. We are considered power users within the community of users.

As new applications are developed, we may or may not need new licenses for erwin Data Modeler.

Which solution did I use previously and why did I switch?

I have used SILVERRUN, which is a very old tool that has actually been sunset. I have also used SAP Sybase PowerDesigner. The primary reason for choosing PowerDesigner over erwin Data Modeler in that decision was that we were able to program the PL/SQL right into Sybase PowerDesigner. At the time, it had the capability to order the run of the PL/SQL. So Sybase PowerDesigner would not only make the changes to the database via the DDL, but also generate the PL/SQL code that moved the data from source to target. That's a capability that erwin Data Modeler has never had. I don't know if it is on the roadmap for inclusion in the future, but I also do not see it as a requirement for erwin Data Modeler going forward, because there are many ETL tools readily available.

I've also used IDERA. The interesting feature about IDERA that differentiates it from erwin Data Modeler is that the model repository actually separates the logical data models from the physical data models. Whereas erwin is basically the flip of a switch. It's not a true logical model, it's a logical representation of the physical data model.

I think the other thing that sets erwin Data Modeler apart is the model Mart repository, which protects a company's intellectual property within the data models and makes them available across the company so that the information is shared with anyone who has an erwin Data Modeler license. That was not available in SILVERRUN. It was also not available when I used PowerDesigner at the time. It was about 15 years ago for PowerDesigner. It is available for IDERA.

How was the initial setup?

I find the setup straightforward. It is very easy to install. It took minutes.

What was our ROI?

We have seen ROI.

The reusability of some of the information within erwin Data Modeler, coupled with the capability to govern information such as the data domains, the naming standard master file, and the generation of the DDL, ensures that there is consistency across and within data stores. All of that automation and governance built into the tool also reduces the time to deliver the information.

Whether the accuracy and speed of the solution in transforming complex designs into well-aligned data sources makes the cost of the tool worth it is a judgment call. I do think it is worth it. But of course, in this day and age, when people are offshoring all of their work to try to save money, one has to consider the cost of any investment.

What's my experience with pricing, setup cost, and licensing?

I think that the pricing is reasonable. It has what is called concurrent licensing, where a number of people can share an erwin license. I think that pricing is a little bit high, but that is a personal opinion.

What other advice do I have?

The biggest lesson that I've learned is actually with a lack of data modeling. We have teams who have complained that data modeling takes too long. They would rather have developers manually code the DDL, which creates a lot of mistakes, increases the backlog, and increases not only the time to delivery but the cost to delivery. There is a lack of understanding of the agile methodology around data modeling and the incorporation of the emergent design happening in the scrum teams with the intentional design of the data architect creating a data model. Given an opportunity to follow the correct path and perform data modeling, we have seen a significant return on investment with decreases in delivery time and decreases in project cost.

I would rate erwin Data Modeler a ten out of ten. 

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Independent Consultant at a tech consulting company with 1-10 employees
Real User
Oct 26, 2020
Complete Compare is good for double checking your work and ensuring that your model reflects the database design
Pros and Cons
  • "The generation of DDL saved us having to write the steps by hand. You still had to go in and make some minor modifications to make it deployable to the database system. However, for the data lineage, it is very valuable for tracing our use of data, especially personal confidential data through different systems."
  • "The report generation has room for improvement. I think it was version 8 where you had to use Crystal Reports, and it was so painful that the company I was with just stayed on version 7 until version 9 came out and they restored the data browser. That's better than it was, but it's still a little cumbersome. For example, you run it in erwin, then export it out to Excel, and then you have to do a lot of cosmetic modification. If you discover that you missed a column, then you would have to rerun the whole thing. Sometimes what you would do is just go ahead and fix it in the report, then you have to remember to go back and fix it in the model. Therefore, I think the report generation still could use some work."

What is our primary use case?

The use case was normally to update data model designs for transaction processing systems and data warehouse systems. Part of our group also was doing data deployment, though I personally didn't do it. The work I did was mostly for the online transaction systems and for external file designs.

I didn't use it for data sources. I used the solution for generation of code for the target in the database. Therefore, I went from the model to the database by generating the DDL code out of erwin.

We had it on-premises. There was a local SQL database server, and we each had a client installed on our machines.

How has it helped my organization?

At one of my previous jobs, we had a lot of disparate databases that people had built on PCs under their desks. We were under a mandate to bring all of that into a controlled environment that our DBAs could monitor, tune, etc. This was a big improvement. I would put the data from whatever source into an Excel spreadsheet, turn it into a SQL file by putting in the commas, and then reverse engineer that SQL into a data model. That saved us a tremendous amount of time compared with building the data model from scratch.
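
A hedged reconstruction of that trick: paste the spreadsheet's column list into a CREATE TABLE script, add the commas and data types, and the tool can then reverse engineer the script into a model. The names and types below are made up for illustration.

    -- Hand-built from a spreadsheet describing an under-the-desk database.
    CREATE TABLE dept_budget (
        dept_code    CHAR(4)        NOT NULL,
        fiscal_year  SMALLINT       NOT NULL,
        budget_amt   DECIMAL(12,2)  NOT NULL,
        owner_email  VARCHAR(320)   NULL
    );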

I educated a number of my colleagues who were in data architecture and writing the DDL by hand. I showed them, "You do it this way from the model." That way, you never have to worry about introducing errors or having a disconnect between what is in the model and the database. I was able to get management support for that. We enhanced the accuracy of our data models.

What is most valuable?

I do like the whole idea of being able to identify your business rules. In my last position, I got acquainted with using it for data lineage, which is so important now with the current regulatory environment because there are so many laws or regulations that need to be adhered to. 

If you're able to show where the data came from, then you know the source. For example, I was able to use user-defined properties (UDPs) on one job where we were bringing in data from external XML files. I would record at the UDP level where the data came from. On another job, we upgraded a homegrown database that didn't meet our standards, so we changed the naming standards. I put in "formerly known as" UDPs so I could run reports, because our folks in MIS who ran the reports were more familiar with the old names than the new names. I could run a report so they could see, "This is where you find what you used to call X, and it is now called Y." That helped.

The generation of DDL saved us having to write the steps by hand. You still had to go in and make some minor modifications to make it deployable to the database system. However, for the data lineage, it is very valuable for tracing our use of data, especially personal confidential data through different systems.

Complete Compare is good for double checking your work, how your model compares with prior versions, and making sure that your model reflects the database design. At my job before my last one, every now and then the DBAs would go in and make updates to correct a production problem, and sometimes they would forget to let us know so we could update the model. Therefore, periodically, we would go in and compare the model to the database to ensure that there weren't any new indexes or changes to the sizes of certain data fields without our knowing it. However, at the last job I had, the DBAs wouldn't do anything to the database unless it came from the data architects so I didn't use that particular function as much.

If the source of the data is an OLTP system and you're bringing it into a data warehouse, erwin's ability to compare and synchronize data sources with data models, in terms of accuracy and speed, is excellent for keeping them in sync. We did a lot of our source-to-target work with Informatica, and we sometimes used erwin to generate the spreadsheets that we would give our developers. This is a wonderful feature that isn't very well-known or well-publicized by erwin.

Previously, we were manually building these Excel spreadsheets. By using erwin, we could click on the target environment, which is the table that we wanted to populate. Then, it would automatically generate the input to the Excel spreadsheet for the source. That worked out very well.

What needs improvement?

When you do a data model, you can report on the tables. However, sometimes I found it quicker to just do a screenshot of the tables in the data model, put it in a Word document, and send it to the software designers and business users to let them see how I organized the data. We could also share the information on team calls, so everybody could see it. That was quicker than trying to run reports out of erwin, because sometimes we got mixed results, which took more time than they were worth. If you're just going in and making changes to a handful of tables, I didn't find the reporting capabilities that flexible or easy to use.

The report generation has room for improvement. I think it was version 8 where you had to use Crystal Reports, and it was so painful that the company I was with just stayed on version 7 until version 9 came out and they restored the data browser. That's better than it was, but it's still a little cumbersome. For example, you run it in erwin, then export it out to Excel, and then you have to do a lot of cosmetic modification. If you discover that you missed a column, then you would have to rerun the whole thing. Sometimes what you would do is just go ahead and fix it in the report, then you have to remember to go back and fix it in the model. Therefore, I think the report generation still could use some work.

I don't see that it helped me that much in identifying data sources. Instead, I would have to look at something like an XML file, then organize and design it myself.

For how long have I used the solution?

I started working with Data Modeler when I was in the transportation industry. However, that was in the nineties, when it was version 1 and less than $1,000.

What do I think about the stability of the solution?

I found it pretty stable. I didn't have any problems with it. 

Sometimes when you were working with model Mart, the connection would drop once in a while. What I don't like is that if you didn't save consistently, you could lose a lot of changes. It should work more like Word: if your system goes down, there's an interruption, or you just get distracted by a phone call, you can come back to find that something happened and hours' worth of work are gone. That was always painful.

What do I think about the scalability of the solution?

I have worked on databases that had as many as a thousand tables. In terms of volume and versioning, it is fine. We've used the model Mart to house versions, which introduces another level of complexity in keeping the versioning consistent.

There is a big learning curve with using model Mart. Therefore, a lot of groups don't really fully utilize it the way they should. You need somebody to go in there every now and then to clean things up. We had some pretty serious standards around when you deployed it to production and how you moved it in model Mart. We would use Complete Compare there. It scaled well that way. 

In terms of the number of users, we had 20 to 30 different data architects using it. I don't know that everybody was on it full-time, all the time. I never saw a conflict where we were having trouble because too many people were using it. From that point, it was fine.

I think the team got as large as it was going to get. In fact, right now they're on a hiring freeze because of COVID-19.

How are customer service and technical support?

Over a period of five or 10 years, the few times I've had to go all the way through to erwin, I talked to the same young lady, who is very good. She understood the problem, worked it, and would give me the solution within two phone calls. This was very good.

Which solution did I use previously and why did I switch?

Prior to erwin, I had used Bachman and IEF. Bachman I liked better, but IEF was way too cumbersome. 

Bachman was acquired by another company and disappeared from the marketplace. The graphics were very pretty on Bachman. Its strongest feature was reverse engineering databases. I found erwin just as robust with its reverse engineering. 

IEF also disappeared from the marketplace, and I didn't use it very much. I didn't like it, as it was way too cumbersome. You needed a local administrator; it was really tough. It promised to generate code and databases, and was supposed to be an all-encompassing CASE tool, but I just don't think it really delivered on that promise.

It could very well be that the coding of those solutions didn't keep up with the latest languages. There was a real consolidation of data modeling tools in the last 15 to 18 years. Now, you've only got erwin and maybe Embarcadero. I don't think there's anything else. erwin absorbed a lot of the other solutions but didn't integrate them very well. We were suffering when it didn't work. However, with the latest versions, I think they've overcome a lot of those problems.

How was the initial setup?

Usually, the companies already had erwin in place. We had one company where the DBAs would sort of get us going.

The upgrades were complex. They required a lot of testing. About a year ago, we held off on an upgrade because, while we wanted to move to the latest version, we were in the midst of a very big system upgrade and nobody wanted to take the time. It took one of our architects working with other internal organizations, and then there were about three or four of us who tried to test the features. It was a big investment of time, and I thought that it should have been more straightforward. I think companies would be more willing to upgrade if it weren't so painful.

The upgrade took probably two months because nobody was working on it full-time. They would work on it while they could. One of the architects ended up working late, over the weekends, and everything trying to get it ready before we could roll it out to the entire team.

For the upgrades, there were at least half a dozen people across three different groups: three or four data architects in our group, plus two or three desktop support and infrastructure people for the server issues.

What about the implementation team?

I think they used Sandhill for the initial installation.

If it's the first time, I recommend engaging a third-party integrator like Sandhill, which I found very good and responsive.

What's my experience with pricing, setup cost, and licensing?

We always had a problem keeping track of all the licenses. All of a sudden you might get a message that your license had expired when you didn't know it was coming, and it happened at different times. At GM Finance, they engaged Sandhill to help us manage it, so I was less involved; Sandhill was very helpful when we had trouble with our license. I remember you had to put in a long string of characters and be very careful that you generated it rather than cutting and pasting it from an email. It was very sensitive and really difficult, up until the upgrades.

If there was a serious problem, it was usually some glitch around the licensing. Then we would call Sandhill, who would help us out with it. That was something where we had to involve a third party for technical difficulties.

I wish it weren't so expensive. I would love to personally buy a copy of my own and have it at home, because the next job that I'm looking at is probably project management and I might not have access to the tool. I would like to keep my ability to use the tool. Therefore, they should probably have pricing for people like me who want to use the solution as independent consultants trying to get started. $3,000 is a big hit.

I think you buy a block of users because I know the company always wanted to manage the number of licenses. 

Which other solutions did I evaluate?

I really haven't spent a lot of time on other data modeling tools. I have heard people complain about erwin quite a bit, "Oh, we wish we had Embarcadero," or something like that. I haven't worked with those tools, so I really can't say that they're better or worse than erwin, since erwin is the only data modeling tool that I've used in the last 15 years.

What other advice do I have?

There might be some effort to do some cloud work at my previous place of employment, but I wasn't on those projects. I don't think they've settled on how they're going to depict the data.

Some of the stuff in erwin Evolve, and the way in which it meshes with erwin Data Modeler, was very cool.

Sometimes, your model would get corrupted, but you could reverse engineer it and go back in, then regenerate the model by using the XML that was underlying the model. This would repair it. When I showed this to my boss, he was very impressed. He said, "Oh man, this is where we used to always have to call Sandhill." I replied, "You don't have to do that. You need to do this." That worked out pretty well.

Biggest lesson learned: the value of understanding your data in a graphical way. It has been very rich in communicating with developers and testers, because they recognize the relationships and the business rules. Capturing the metadata and business-English definitions, and then generating from them, made their lives so much easier; everybody on the team could understand what a data element or group of data elements represented. This is the biggest benefit I've taken from it in my development career.

I would rate this solution as an eight out of 10. 

Which deployment model are you using for this solution?

On-premises
Disclosure: My company does not have a business relationship with this vendor other than being a customer.