
erwin Data Modeler (DM) Overview

erwin Data Modeler (DM) is the #1 ranked solution in top Database Design tools, the #2 ranked solution in top Architecture Management tools, and the #5 ranked solution in Business Process Design tools. PeerSpot users give erwin Data Modeler (DM) an average rating of 8 out of 10. erwin Data Modeler (DM) is most commonly compared to SAP PowerDesigner: erwin Data Modeler (DM) vs SAP PowerDesigner. erwin Data Modeler (DM) is popular among the large enterprise segment, which accounts for 96% of users researching this solution on PeerSpot. The top industry researching this solution is computer software, whose professionals account for 24% of all views.
What is erwin Data Modeler (DM)?

erwin pioneered data modeling, and erwin Data Modeler (erwin DM) remains trusted, award-winning software for data modeling and database design, automating complex and time-consuming tasks. Use it to discover and document any data from anywhere for consistency, clarity and artifact reuse across large-scale data integration, master data management, metadata management, Big Data, business intelligence and analytics initiatives – all while supporting data governance and intelligence efforts.

erwin Data Modeler (DM) was previously known as erwin DM.


erwin Data Modeler (DM) Customers

 Premera, American Honda Motors, Aetna, Kaiser Permanente, Delta Dental of California, Cigna, Staples

erwin Data Modeler (DM) Pricing Advice

What users are saying about erwin Data Modeler (DM) pricing:
  • "The primary reasons that erwin was selected were that it was much more affordable for us [than Embarcadero] and it was easily maintainable."
  • "I don't specifically know what we're paying now. About three years ago, in another organization, I have this memory of 6,000 AUD a seat or something like that, but I am not sure. In the mid-2000s, it was something like 1,200 AUD a seat. I get the impression that there was a price jump when it was spun off from CA as a separate company, which is understandable, but it could sometimes be a barrier in some organizations picking it up. I haven't talked to erwin people yet, but I'm going to suggest to them that they could perhaps think of having an entry-level product that is priced a bit lower, and then, you can buy the extra suite."
  • "I wish it wasn't so expensive. I would love to personally buy a copy of my own and have it at home, because the next job that I'm looking at is probably project management and I might not have access to the tool. I would like to keep my ability to use the tool. Therefore, they should probably have a pricing for people like me who want to just use the solution as an independent consultant, trying to get started. $3,000 is a big hit."
  • "An issue right now would be that erwin doesn't have a freely available browser (that I am aware of) for people who are not data modelers or data engineers that a consumer could use to look at the data models and play with it. This would not be to make any changes, but just to visually look at what exists. There are other products out there which do have end user browsers available and allow them to access data models via the data modeling tool."
  • "This company had bought the license for three years, and it's not an individual license. While you can buy a license for each individual, that would be very expensive. There is something called concurrent licenses where you can purchase licenses in bulk and 15 to 20 people can access the license and model. Concurrent licenses are scalable to the number of users and are proportional to the cost."
erwin Data Modeler (DM) Reviews

    Sr. Manager, Data Governance at an insurance company with 501-1,000 employees
    Real User
    Top 5 Leaderboard
    Allows us to bring in data from dozens of platforms and search holistically across all of them
    Pros and Cons
    • "When you're getting down to the database level, where you're building a design and you're creating DDL out of it, or you're going in the other direction where you're reaching into system catalogs and bringing things back, that starts to really require specialization. Visio isn't going to reverse-engineer that for you. Those features in erwin are valuable."
    • "erwin has versioning so you can keep versions, over time, of those models and you can compare any version to any version. If you're looking at a specific database and you want to see what changed over time, that's really useful. You can go back to a different version or connect that to your change-control processes so you can see what was released when."
    • "One of the things I've been talking to the erwin team about through the years is that every data model should have the ability to be multi-language... When I was working at Honda, it became very difficult to work with the Japanese teams using just one model. You can have two models, one in English and one in Japanese, but that means you have to keep the updates back and forth, and that always increases the risk of something not being updated."

    What is our primary use case?

    erwin Data Modeler does conceptual, logical, and physical database or data structure capture and design, and creates a library of such things.

    We use erwin Data Modeler to do all of the levels of analysis that a data architect does. We do conceptual data modeling, which is very high-level and doesn't have columns and tables. It's more about concepts that the business described to us in words. We can then use the graphic interface to create boxes that contain descriptions of things and connect things together. It helps us produce a scope statement at the beginning of a project that corrals the area of data the project is going to use.

    Then we do logical data models, which are completely platform-independent. They're only about datasets, their owned attributes, and different key analyses to determine what primary keys we want.

    And then we do database designs, which relate to the physical data models.

    We also do reverse-engineering, where we capture the catalogs of existing systems, purchased software, or even external vendor datasets, especially the backup snapshots where a cloud vendor sends data as a backup restore. To help with documentation for the reporting team, we reverse-engineer what the vendor sends us so that the team knows what the table and column structure looks like, along with sizing, nullability, and keys and constraints.
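
    To make that concrete, here is the kind of structure such a reverse-engineered model captures. This is a hand-written sketch with invented names, not output from an actual vendor snapshot:

        -- A hypothetical vendor table as the model records it after
        -- reverse-engineering: names, data types, sizing, nullability,
        -- keys, and constraints all come straight from the catalog
        -- (the member table is assumed to be captured alongside it).
        CREATE TABLE claim (
            claim_id      INT          NOT NULL,
            member_id     INT          NOT NULL,
            claim_status  VARCHAR(20)  NULL,
            submitted_on  DATE         NOT NULL,
            CONSTRAINT pk_claim PRIMARY KEY (claim_id),
            CONSTRAINT fk_claim_member FOREIGN KEY (member_id)
                REFERENCES member (member_id)
        );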

    erwin is on-prem. We have the Workgroup Edition, which means that we don't just have client-side software. We have client-side software that stores the data models back into a database which is on an on-prem server.

    How has it helped my organization?

    When I got to my current company two years ago, it didn't have any collection of its data assets into reporting services. If someone wanted to know where a social security number was in all the databases, they had to download all of the structures and do all of the research. I came in and built a full reverse-engineered library of the production environment. Once I did that using the erwin front end, I could help the CCPA team find the PII data by simply running Workgroup Edition Data Mart reports that crossed all of the environments.

    The cool thing about that is that the erwin models will bring in data from a dozen or two dozen different platforms. But once those models are in your Mart structures, you can do your search, looking for something like names of columns, across all of them. So you could be doing a search across Oracle and PostgreSQL and, because it's all in your library, look at your assets holistically. For us, we went from zero to 500,000 columns of information. You can do that in Excel or in other ways, but this is a very simple way to do it. And you don't need to be highly trained and skilled. You could actually bring in a college intern and set them loose with creating those libraries for you. Not needing highly skilled people is one of the great things about erwin. It's very intuitive and it's not hard to use.
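
    The Mart's internal report schema is erwin's own, but the idea of the search is the same as querying a database catalog. As a generic single-database illustration (standard INFORMATION_SCHEMA, so this runs on SQL Server or PostgreSQL; the Mart reports do the equivalent across every platform at once):

        -- Find candidate PII columns by name across every table in one database.
        SELECT table_schema, table_name, column_name, data_type
        FROM   information_schema.columns
        WHERE  LOWER(column_name) LIKE '%ssn%'
           OR  LOWER(column_name) LIKE '%social%'
        ORDER BY table_schema, table_name;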

    At my current company, we're not using it for much custom work, but in my past, the solution's ability to generate database code from a model for a wide array of data sources absolutely helped to cut development time. If you do your design on paper or in an erwin model before the developers start coding, and you review them to make sure that you've got everything in there, you do much less break-and-fix. If you can have an overview model, even for your Agile developers, and say, "This is where we're going," even if you don't deploy at all, it makes it much simpler. You don't have to drop your structures and recreate and reload your test data because you're pretty confident you've gotten your database design right, before people start coding.

    erwin improves your standards because your naming standards and your design standards can all be reviewed much easier. You can make sure that misspellings, for instance, don't get all the way to production, or to the point where you have to live with them because people are already coding against them. You can do so much more QA analysis on your structure before it's deployed, if you're using a model.

    What is most valuable?

    You could probably use something like Visio to draw boxes and lines, especially for conceptual, very high-level things. But when you're getting down to the database level, where you're building a design and you're creating DDL out of it, or you're going in the other direction where you're reaching into system catalogs and bringing things back, that starts to really require specialization. Visio isn't going to reverse-engineer that for you. Those features in erwin are valuable.

    In addition, erwin has versioning so you can keep versions, over time, of those models and you can compare any version to any version. If you're looking at a specific database and you want to see what changed over time, that's really useful. You can go back to a different version or connect that to your change-control processes so you can see what was released when.

    With versioning, you can also compare between development environments and production environments. You can see what may not have actually changed or what changes are in the works. It also enables you to do the kind of troubleshooting where you're looking at: Why on this server does this copy of something seem to behave differently than on that server? erwin highlights that really quickly for you. You don't have to closely eyeball your comparison. erwin creates a report that comes back and says what is different. And you can focus on almost anything, from the privileges in the catalog to a data type or a name anomaly. Even for servers that are case-sensitive in their structure, it will tell you the difference between something in all-caps and something that's mixed-case. If you're getting to that level of detail when you're troubleshooting, erwin is great at doing that sort of thing.

    In terms of the solution's visual data models for helping to overcome data source complexity, erwin shows you "what is," if you're talking about the physical layer. When it comes to being able to make things clearer and more understandable, it depends on what your structure is. If you've just reverse-engineered SAP, it's abbreviated German. You may need other tools to help you understand it. If you're doing forward work — if you're going from conceptual to logical to physical — erwin is fabulous at letting you change what you see in the graphic. You can change your data model from just looking at primary keys to looking at primary keys and foreign keys, to looking at just the definition of the table in boxes. It allows you to change that visualization depending on your audience. If you're working with the DBAs, you can add metadata and it expands the box showing the visual of the table structure, so you can concentrate on just data types, or you can do data types and nullability and foreign keys, and all different sorts of things. You can do the indexes on top of it as well. You could end up with a table graphic that's the width of your screen if you've added all the details in.

    And if it's too hard to look at that way — if you're trying, for instance, to make sure that EmpID is always a varchar(250) — it also has the ability to take that graphic and move it into what's called the Bulk Editor. That looks much more like an Excel spreadsheet, within a view in your erwin model. You can sort your Excel spreadsheet by column name and see all of the details next to it. That way, everywhere EmpID shows up in that model, it is now in more of a column-row view, and you can easily look at that to make sure that all the EmpIDs say varchar(250). If you see one that's wrong, you can actually change it in the Bulk Editor and it changes it in the graphic automatically, because an erwin model really isn't a graphic, it's much more like a little Access database. So when you change it on one view, it fixes it in the other.

    In addition, anybody using erwin to do forward engineering will find that it compares and synchronizes data sources with data models almost instantaneously. You can connect an erwin data model to a database and deploy your changes, or you can deploy just delta changes. Or you can deploy one little piece because you've identified one little piece of your model. But most of comparing and synchronizing data sources with data models comes down to people and process. The tool will absolutely help you get there, but it's not going to take on all of the requirements of putting standards and processes in place. If you haven't tied your erwin Data Modeler to your change-control, it can't help you. It's not a dynamic connection to your servers; it's just a tool that you can use with your environments.
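
    As a rough sketch of what such a delta deployment looks like (hand-written here in SQL Server syntax with invented names, not actual erwin output), the compare produces a script that touches only what changed rather than dropping and recreating structures:

        -- The model added a nullable column and an index; the delta
        -- script applies just those two changes to the live schema.
        ALTER TABLE customer ADD middle_name VARCHAR(50) NULL;
        CREATE INDEX ix_customer_last_name ON customer (last_name);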

    Also, while I'm not configuring erwin, I do have templates that erwin lets me set up to configure models: different templates handle colors, domains, and prebuilt macros for definitions, based on different things. You don't have to configure erwin. You just have to tell it what sort of a platform you're either going to or coming from. You can also set up some draw templates and customize the colorization of different things. If you want all your primary keys to be red, you can configure that, and set that up as a template.

    Finally, the solution's code generation ensures accurate engineering of data sources. With reverse-engineering, I have found it to be completely accurate. I've never found a time when it didn't get the source information correctly into the model. If you're doing a data warehousing project, where you're going from source to target, erwin can produce a dependable, trusted graphic of where you're coming from, while you design where you're going to. You know what the data types are, what the nullability is — the structure of the data. You don't know all the characterizations of data values because erwin is not profiling data values. It's just picking up the catalog structure of the tables. But it is completely trustworthy, once you've reverse-engineered it. It has never let me down along those lines.

    What needs improvement?

    One of the things I've been talking to the erwin team about through the years is that every data model should have the ability to be multi-language. So along with the fact that I can change, for example, the graphic of the model to look at just the definitions in boxes, or just the key structures in the boxes, I'd love to be able to change the language. When I was working at Honda, it became very difficult to work with the Japanese teams using just one model. You can have two models, one in English and one in Japanese, but that means you have to keep the updates back and forth, and that always increases the risk of something not being updated.

    The world is getting to be a very small place, and being able to have one file that has all of that metadata in whatever form you need to read it, is the best way to manage that data. That would be a big change for them and it would be a big change to the Mart structure. It would be a one-to-many on the logical side of the business names, but it would also be a one-to-many on the definition side of the tables and the columns and everything else, where you can have notes. I know that it's a big change I'm asking for, and they've had to put it off a little bit, but their business glossary tool now kind of looks at it that way. I'm hoping that the erwin model itself will be able to allow for that in the future.

    For how long have I used the solution?

    I've been using erwin Data Modeler from way before erwin owned it; since the '90s when it was Logic Works. That was before it went to Platinum and before it went to CA. And now they're spun off as erwin.

    What do I think about the stability of the solution?

    It's a very stable tool. It doesn't have problems with crashing or anything like that.

    What do I think about the scalability of the solution?

    I've never had a problem with its scalability, especially using the Workgroup Edition, because you keep all of your models in the database. It's not a problem to collect hundreds of different data models. Even scalability on your desktop or in your laptop would be more about the laptop itself, not the tool. It's kind of like Word. It saves the data outside of itself, so it doesn't have that problem.

    There was no data modeling tool when I got here two years ago, so it is new to the culture, and this is a 40-year-old company. It is mostly being used with our master data management and our data warehousing, which is still doing a lot of development work. It's being expanded into supporting the data governance initiatives, to do data asset management. And I'm expecting that over time it will be used more for data asset change-control. We use a lot of vendor-purchased products, and being able to see the difference between their table structures before an upgrade and after an upgrade isn't being documented in a model right now, but it probably will be.

    Also, the new California Consumer Privacy Act is forcing us to do much more of that data governance and data asset management, as well as data classification, so that we can identify PII data. That's definitely picking up steam.

    How are customer service and technical support?

    I use their technical support all the time. Sometimes it's just to ask them — because it's such a rich tool, they move menu items in the upgrades sometimes — "Okay, where did you put it this time?" But they've always been very helpful. They do have live chat on their website. About 75 percent of the time the chat agents can answer my question. If not, they hand me off to somebody. Given the amount of time I've worked with erwin, I almost know all their first names. They've always been very good and have taken care of me.

    A lot of the technical staff moved with the tool, so they've stayed intact as it went through buyouts. I've always enjoyed working with the erwin team. They're very supportive, very helpful, and are very responsive to my requests and thoughts.

    Which solution did I use previously and why did I switch?

    My current company did not have a previous solution, other than Excel spreadsheets and Visio — nothing that I would call an industry-standard modeling tool.

    How was the initial setup?

    I was involved with the purchase and installation in my current company. I work with the DBAs so I don't touch the buttons for the installation. But the erwin support team is always a great help. I have never heard from any of the DBAs, during any of my "lifecycles," that installation is anything more than straightforward.

    There's all sorts of bureaucracy that happens at a company, and that's true in our company as well. The deployment happened over the course of a couple of days: the installation, the tests, the verification, and making sure that the client-side could connect to the databases. I don't think any of that took too much time, other than getting everybody together to do it.

    Our implementation strategy was to work with a very temporary dev environment and then roll it to a prod environment and then drop the dev environment. We don't keep a dev environment full-time because it is just a COTS tool. They do backups and restores just like any other mission-critical data. And we're using a combination of named licenses and concurrent licenses in our strategy so that we can leverage who uses it the most.

    As for the number of people involved in an upgrade, I take on the SME role. We have the main DBA who schedules the upgrade into the environment. Then we generally have a DBA who is assigned to do the upgrade. And our service desk helps with the deployment of the client-side out to the users. So there are four people involved.

    What was our ROI?

    We saw return on our investment in erwin once we got our model library in place across all of our different data environments. Of course, you can always search using your DBA tools to find different things on a server. But once you've got your models in place, you can cross all the servers in your search, because you've pulled all that metadata into one place. It doesn't matter if it's an Oracle backend, an Access backend, a mission-critical Excel spreadsheet. Whatever it is that you have a model of, you can go search for something like a social security number. Just being able to do that, it almost pays for itself. When you think of how much time people spend to try to find things, it's completely amazing.

    It depends on how many servers you have, how complex your environment is, and how many of your teams are going to look at stuff. If you have a really obfuscated structure, then you're actually profiling the data to figure things out.

    Being able to type in "go find column names with SSN in them" and get results back almost immediately probably gets you 80 percent of the way to finding that particular aspect. How much time did we spend in the Y2K crisis just to find dates? Just identifying the columns that were going to be impacted was a feat. I keep telling my cohorts that social security number data is going to be the next Y2K. As soon as we run out of numbers, they're going to have to add a digit, but everything is hard-coded to the current span of digits. As soon as the federal government decides that it's going to do that, we are all going to have to go fix it.

    The nice thing about having your assets in a database is that the more value-add you've done on your models, the less you have to look at physical names on columns. If you've put your logical or your business names on columns, that's even better.

    I could imagine that in very serious research, you're going to cut 80 percent off the time it would take, depending on how complex your environment is. You can get there so much faster. Obviously, it won't give you everything because human beings just don't have it all written down. Or it could be that some nitwit is putting social security numbers into note fields and you don't know about it. But it's going to get you a long way there.

    The erwin model is much more like an Access database. The return on investment is that it is a very three-dimensional type of metadata collection about your model. In Visio, you can add a note on a little graphic piece, but you can't add multiples. You could approximate multiples with carriage-returns in the block, but you can't categorize your metadata. You also can't add more value about that metadata. One little box on an erwin model can be opened logically and there will be 10 tabs' worth of value-add you can put in. You can open the model so that you're looking at the physical side of the house, and still have another 10 tabs that have nothing to do with the logical side, other than that they share the primary key of the little graphic piece that you're looking at.

    erwin is so much more flexible. And, with respect to return on investment, it's customizable. erwin has the concept of user-defined properties where if you need to do something special within your models that says something like, "Is this used by this line of business?" you can create flags, or dates, or text, or drop-down lists, and attach it to anything in the model itself. In that way you've created some value-add that is customized to your company's needs. To me that adds tremendous power to the return on investment. You can't do that with just plain drawing tools.

    What's my experience with pricing, setup cost, and licensing?

    We came up with a two-part concept with our licensing. Our data architects have named licenses that only they can use. We have four named licenses today. But we also bought three concurrent licenses, two that are just for developers and the DBAs, and one that's a "read-only" that anybody can use. It's a little bit difficult for me to tell you how many people use those, but probably no less than 10 and possibly upwards of 25.

    We pay for maintenance on a yearly basis. There are no additional costs for the Workgroup Edition, which has the server component. That is the edition where you can save your models back to a database, which we installed on SQL Server, but I think you can install it on any of several different platforms.

    Which other solutions did I evaluate?

    Our company looked at two others. Because I have worked with erwin for so long, I wanted to make sure, when I came in, that my current company got the opportunity to make its choice based on what everybody's needs were here. We did a full vendor tool assessment back then. Although I don't have it in front of me, I know we looked at Embarcadero, and we may have also looked at the highest level of Visio, so that between them we compared a very high-grade tool against something that would just get us by.

    When I got here, the DBAs had already put acquiring an erwin license into their next year's budget. They had already made that choice. But I took us all the way back to doing a tool compare because I wanted to make sure that everybody got the opportunity to weigh in on the choice that was made.

    A lot of the difference between erwin and other products was the licensing and pricing structure for maintenance. Some of it was the inter-connectability with other tools. erwin does a really good job of building bridges between many different tools. Part of it was also its ability to be very sustainable because it had the Workgroup database backend, which Embarcadero has as well, but Visio does not. That was part of the decision point: whether we wanted to go with something really small and move up to a more industry-standard tool, or just take the opportunity to bring in a couple of licenses. We brought in a smaller footprint last year, and we added a few more licenses in 2019.

    The primary reasons that erwin was selected were that it was much more affordable for us and it was easily maintainable.

    What other advice do I have?

    Take the time, especially if you're going to use Workgroup, but even if you're using desktops, to figure out how you're going to manage the models. They need to have a naming convention. They need to have a directory organization that makes sense to you. They need to have change-control, just like code. You need to figure out how you're going to use it because once it gets past 50 models, finding something and knowing how to change it and where to change it and where to publish it back out is going to be your biggest headache. You need to think long-term. It's easy when you just have a few models. As soon as you have 1,000 of them, unless you've thought ahead, you're going to have a huge cleanup problem.

    The biggest lesson I take away from using erwin Data Modeler is that we should all be doing much better library sciences with our data assets than we do. erwin is a great tool to capture your library sciences. It can tell you what you need to know about a piece of data, or a row of data as a dataset in a table, or a collection of tables. You can add information not just about single things but collections of things. 

    We should have many more people whose job it is to add that value. Right now, companies still mostly use erwin for custom development and it needs to be much more built into documentation of any type of data. I use erwin to do data models of reports and of API calls, for example. Any data set, to me, qualifies as needing a model so that you can tell what data elements are in it and what that dataset is used for.

    Through all the years, erwin has done a great job of making things better and better. There are always things that we're talking about in terms of improving it, but the fact that it's now starting to integrate better with data governance-type tools so that all of your definitions can move to more of a glossary form, rather than just being in the models, is tremendous. The more that that's integrated back and forth, the better it's going to be.

    Out of all of the modeling tools, erwin is a 10 out of 10. It hits all the high points for me. There are some pieces of functionality that competitors come up with, maybe a little bit earlier, but it's a leapfrog-type of thing. Every time the vendors find that something is needed in the world of modelers, they all start to bring it in. I find erwin to be very responsive to those needs. So now, erwin has NoSQL modeling aspects in the tool and they're connecting with their own suite of data governance tools. That means you can push definitions to your data governance tool or bring them back from your data governance tool. It's starting to become much more of an integrated solution, rather than just a standalone.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    Data Modeler at a logistics company with 10,001+ employees
    Real User
    Top 10
    Makes our data modeling staff more productive and has helped standardize data modeling efforts
    Pros and Cons
    • "We use the Forward and Reverse Engineering tools to help us speed things up and create things that would have to be done otherwise by hand. E.g., getting a database into a data model format or vice versa."
    • "Complete Compare is set up only to compare properties that are of interest to us, but some of the differences cannot be brought over from one version of the model to another. This is despite the fact that we are clicking to bring objects from one place to another. Therefore, it's hard to tell at times if Complete Compare is working as intended without having to manually go into the details and check everything. If it could be redesigned to a degree where it is easier to use when we bring things over from one site to another and be sure that it's been done correctly, that would be nice to have. We would probably use the tool more often if the Complete Compare were easier to use."

    What is our primary use case?

    We use erwin to design conceptual, logical, and physical data models for new projects. We use the Forward Engineering tool to forward engineer data models into new database structures. We use the Reverse Engineering tool to bring databases into data models in erwin. We also generate HTML reports of the models to share with our customers.

    Whenever we do have a new project that requires a new approach, we do try using erwin for it. For example, if we have an XSD message file, then we would try to see if there is a way to get that into erwin for better visibility of the structures that we have to work with.

    How has it helped my organization?

    The product has helped us standardize our data modeling efforts across the enterprise in regards to visuals and naming. We also use the Mart Tool from erwin, which allows us to store our data models in a centralized repository, which gives everyone visibility on what is out there and how it is all related.

    We discuss existing and new business requirements with business users, data architects, and application developers to figure out how to capture and visualize concepts and their relationships. One thing we do have standard in all of our models is that we use the information engineering notation. This is standard across our enterprise. We do use a hierarchical diagram layout to help visualize things, especially when we reverse engineer a database, as we want to have some sort of a clear visual layout of things.

    What is most valuable?

    We find a few of erwin's tools most valuable:

    • The Bulk Editor lets us easily make a lot of similar changes within our data model.
    • We use the Forward and Reverse Engineering tools to help us speed things up and create things that would have to be done otherwise by hand. E.g., getting a database into a data model format or vice versa.
    • The Report Designer is extremely useful because we can create reports to share with our business users and have a business discussion with them on how things work.

    We find the text manipulation through the Bulk Editor to be extremely helpful. There were times when we had a set of entities which were not following our standards. With the help of the Bulk Editor, we were able to conform those names to our standards with a few Excel formulas.

    The Reverse Engineering functionality is good and easy to follow. It works really well. For the most part, we have been able to get any database to work with our data model format.

    We quite heavily use the templates that exist to apply our standards to the data models created by our data modelers. We are able to use the templates to apply things like Naming Standards, casing on names, and colors to all our data models without having to be on top of it.

    What needs improvement?

    Complete Compare is not user-friendly. For example, the "save known changes as snapshot" option does not work as expected. We are unable to find the exported files on our workstations at times. Complete Compare is set up only to compare properties that are of interest to us, but some of the differences cannot be brought over from one version of the model to another. This is despite the fact that we are clicking to bring objects from one place to another. Therefore, it's hard to tell at times if Complete Compare is working as intended without having to manually go into the details and check everything. If it could be redesigned to a degree where it is easier to use when we bring things over from one site to another and be sure that it's been done correctly, that would be nice to have. We would probably use the tool more often if Complete Compare were easier to use.

    The client performance could be improved. Currently, in some cases, deleting entities causes the program to crash. Similarly, for the Mart's performance, we need to reindex the database indexes periodically. Otherwise, browsing through the Mart or trying to open or save a data model takes unusually long.
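
    For reference, that periodic reindexing is ordinary database maintenance rather than anything erwin-specific. A minimal sketch, assuming the Mart repository sits on SQL Server (the repository's internal table names are erwin's own, so the table name here is a placeholder):

        -- Rebuild all indexes on one repository table; in practice this is
        -- repeated per table (or driven from sys.tables) on a schedule.
        ALTER INDEX ALL ON dbo.MartPlaceholderTable REBUILD;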

    There are several bugs we discovered. If those were fixed, that would be a nice improvement. We encounter model corruption over time, and it is one of those things that happens. There is a fix that we run to repair this corruption, by saving the model as an XML file or running it through the Complete Compare tool. If this process could somehow be automated — having erwin detect when a model is corrupted and run this process on its own — that would be helpful.

    There are several Mart features that could be added. E.g., a way to automatically remove inactive sessions older than a specified date. This way we can focus on seeing which users have been utilizing our central repository recently, as opposed to seeing all of what happened since five years ago. This would be less of a problem if the mart administrator did not have trouble displaying all of the sessions.

    On the client side, there are some features that would come in handy for us, e.g., Google Cloud Platform support or support for some of the other cloud databases.

    If we had a better way to connect and reverse engineer the databases into data models, that would help us.

    Alter scripts can be troublesome to work with at times. If they can be set up to work better, that would help. On the Forward Engineering side of things, by default, the alter syntax is not enabled when creating alter scripts. We strongly believe this is something that should be enabled by default.
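
    To make the distinction concrete (a hand-written illustration in SQL Server syntax, not generated erwin output): without alter syntax, a simple column widening can come out as a full table rebuild, whereas with alter syntax enabled the same model change becomes a single in-place statement:

        -- With alter syntax off, the script may create a temporary table,
        -- copy the rows, drop the original, and rename. With it on:
        ALTER TABLE orders ALTER COLUMN order_note VARCHAR(500);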

    On the Naming Standards (NSM) side of things, there is a way in erwin to translate logical names into physical names based on our business dictionary that we created. However, it would be nice if we could have more than one NSM entry with the same logical element name based on importance or usage. Also, if erwin could bring in the definitions as part of the NSM and into a model, then we could use those definitions on entities and attributes. That would be beneficial.
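
    The shape of what we are describing, with invented glossary entries purely for illustration: the NSM maps each word of a logical name to a physical abbreviation, and erwin assembles physical names from those parts.

        -- Glossary: Employee -> EMP, Address -> ADDR, Identifier -> ID, Line -> LN
        -- Logical entity "Employee Address" with attributes "Employee Identifier"
        -- and "Address Line 1" forward-engineers to:
        CREATE TABLE EMP_ADDR (
            EMP_ID     INT          NOT NULL,
            ADDR_LN_1  VARCHAR(100) NOT NULL
        );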

    For how long have I used the solution?

    We have been using it for at least 15 years, a very long time.

    What do I think about the stability of the solution?

    Overall, the server is mostly stable. After we implemented the reindexing fix on our database, everything works pretty well. On the client side, it is mostly stable, but sometimes it's not. There are certain actions that cause the client to crash. This has been much less of the case since we switched to the 64-bit version of erwin, which has been a great improvement.

    We have found erwin’s code generation ensures accurate engineering of data sources. We haven't seen any issues. We pass our code off to DBAs to implement. Therefore, the DDL that we generate gets passed up to the DBAs who will add some physical features and may add some performance indexes, then we will reverse engineer that information and have that in our data models.

    For our bug-related issues, we have been given the recommendation to upgrade to the latest version. We are in the process of doing that and will see how that works out. We also submitted some other things through erwin's idea board. There are a few issues that we haven't reached out to erwin on yet.

    Currently, we have a team of people who take turns helping out other users. They figure out how to do different things. If there is a server side issue, we do have several people as well who will look into that. In the past, we did manage a lot with one person. However, we realized it was quite an undertaking. You either need one fully dedicated person to look into this or several people to take turns.

    We have a Windows Server and a SQL Server database. Therefore, we have SQL Server dedicated staff to help us with any SQL Server issues and Windows support staff who help us with any Windows issues. We don't generally have any issues with erwin. From a technical support side, we do have a support staff if we were to run into any issues. Our team of five data modelers are pretty well-experienced with both the tool, Mart, and any sort of communication issues that we might have to deal with, e.g., if the SQL server went down, then these folks would be the liaisons to the SQL Server team.

    What do I think about the scalability of the solution?

    Given our mostly constant user base and constant growth of new data, our impressions of the scalability are great. Currently, we have about 2000 models in the Mart repository. Reaching this capacity has slowed down interactions with the Mart as opposed to when we had a fresh Mart. When we first started using the Mart server, it took about two seconds to open things like the Catalog Manager or Mart Open dialogue. Now, it takes around 10 seconds to do that part. For the most part, it seems to be pretty scalable. We've been able to continue using the tool given our large volume of models.

    There are 35 to 40 users plus some occasional DBAs who use it to tweak any of the DDLs that they might want to pull.

    We are able to develop our data models for mission-critical tasks with the solution’s configurable workspace and modeling canvas. We have 20 enterprise data modelers. We are mostly working on the standard RDBMSs: SQL Server, Db2, and Oracle. We also use some cloud technologies, like GCP, Azure, and Couchbase. Then, there are approximately another 15 data modelers who work exclusively in Oracle Business Intelligence from a data modeling aspect. This is for dimensional repository and data warehouse work. Therefore, we have about 35 to 40 data modelers in our organization for pretty much every major project that passes some sort of funding gate.

    Anything that is mission-critical for our organization will come through one of our two managers, depending on whether it's relational modeling or dimensional modeling. All of the database designs come through these two groups. There are some smaller database designs which we may not be involved with, but all of the critical application work comes through these teams. In regards to focusing on mission-critical tasks, we really wouldn't be able to do it without a tool like erwin. Since we are all very well-trained in erwin, it is the tool that we leverage to do this.

    erwin generates the DDL for all our projects. We rely on the tool for accuracy, as some of our projects have hundreds of entities and tables.

    How are customer service and technical support?

    When it is bug related, we get a bug fix or are told to upgrade to the latest version. This has worked out in the past. Where it is question related, we have been pretty happy with their Tier 1 support's responses. We will receive some sort of a solution or suggestion on how to proceed in a very timely manner.

    We would like support for JSON reverse engineering. That is something which is completely missing, but is something we have been working with quite often recently. If erwin could support this, that would be incredible.

    How was the initial setup?

    On the client side, the setup was mostly straightforward. It was a matter of going through the installer, reading a little bit, then proceeding to the next step. In the end, the installation was successful.

    On the server side, it has been a bit more complex. We did have some documentation provided by erwin, but it wasn't fully intuitive nor step-by-step. Some things were missing. It was enough to get started, then figure things out along the way.

    On the client side, it takes five to 15 minutes to do the installation or upgrade to a newer version. On the server side, from the moment we backed up everything on the server and disabled the old Mart application, the upgrade took about two hours. If you include all the planning, testing, and giving support users enough time to do everything, the upgrade took about three months. In general, these are the timeframes we have experienced in the past.

    What about the implementation team?

    We simply used the documentation provided by erwin. Between the few of us who worked on the upgrade at our company, we had enough of a technical background to be able to figure things out on our own. There were five to 10 people who worked on this initially:

    • We had one person who helped with the database side of things.
    • We had another person do everything on the application server.
    • To test out the different features of erwin in the new version and to ensure that the existing features worked as intended, we involved several additional people from our team.

    We go through a pretty rigorous testing procedure when we bring in a new release of any software like this. Although it's not affecting customers directly, it certainly affects 35 to 40 people. Therefore, we want to ensure that we do not mess them up by not having something work. Normally, we go through this with any product. We first install it on a test environment and have a bunch of folks jump on. This is to ensure everything is working the way we want and work out all the kinks when setting up the production server before we move it into production.

    What was our ROI?

    It is an invaluable tool for us. It has been part of our data governance process in regards to database design for at least 15 years.

    The amount of time saved is proportional to the amount of changes in the databases that we are implementing at any time. The more code we generate (because the model is bigger), the more time it saves, because we don't have to write everything up manually and check to make sure that the code is correct. If we had to give a number, this saves us anywhere from minutes to hours of work. The time frame depends on the data modeler, as some data modelers generate more code than others. Therefore, it could be on a daily, weekly, or monthly basis, depending on the project. Some projects are in maintenance mode and not going through a lot of changes. It is way easier with this solution, because we have a data model to reference for something that was developed approximately two months ago and somebody can just pick it up, versus someone having to generate changes to a database without a data modeling tool.

    The tool certainly makes the data modeling staff more productive than if they did not have a similar tool. Without erwin, our jobs would be a lot more tedious and take a lot more time.

    Which other solutions did I evaluate?

    We evaluated IDERA two years ago and decided to stay with erwin, mostly because the staff is familiar and comfortable with the tool. We think that was the overriding factor. The other factor was that converting from erwin to IDERA would be a major undertaking that we just weren't prepared to take on.

    The fact that it can generate DDL is a major advantage over something like Visio, where you can also do a database diagram. We don't have a Visio version that would generate DDL, so I'm assuming it doesn't, and any tool that can generate code for database definition will certainly have an advantage over a product that doesn't.

    What other advice do I have?

    I would certainly recommend this product to anyone else interested in trying it out. The support from the vendor is great. The tool overall performs well and is a good product to use.

    Having a collaborative environment such as the one that erwin provides through the Mart is extremely beneficial. Even if multiple people aren't working on a single model, it's nice to have a centralized place for all the models. It gives us visibility and keeps everything in one place. Also, it supports versioning, which allows us to go back to the model as it existed at different points in time, which is really helpful.

    We do not use erwin to make changes directly to the database.

    We have no current plans to increase our usage of erwin other than adding more models.

    We would rate the solution overall as an eight (out of 10).

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    David Jaques-Watson
    Senior Consultant at a tech services company with 11-50 employees
    Real User
    Top 5 Leaderboard
    Improves accuracy for generating target databases, allows us to pull metadata from a database, and makes it easy to display information and models
    Pros and Cons
    • "Being able to point it to a database and then pull the metadata is a valuable feature. Another valuable feature is being able to rearrange the model so that we can display it to users. We are able to divide the information into subject areas, and we can divide the data landscape into smaller chunks, which makes it easier to understand. If you had 14 subject areas, 1,000 entities, and 6,000 columns, you can't quite understand it all at once. So, being able to have the same underlying model but only display portions of it at a time is extremely useful."
    • "I still use Visio for conceptual modeling, and that's mainly because it is easier to change things, and you can relax some of the rules. DM's eventual target is a database, which means you actually have to dot all the Is and cross all the Ts, but in a conceptual model, you don't often know what you're working with. So, that's probably a constraint with erwin. They have made it a lot easier, and they've done a lot, but there is probably still room for improvement in terms of the ease of presentation back to the business. I'm comparing it with something like Visio where you can change colors on a box, change the text color and that sort of stuff, and change the lines. Such things are a whole lot easier in Visio, but once you get a theme organized in erwin, you can apply that theme to all of the objects. So, it becomes easier, but you do have to set up that theme."

    What is our primary use case?

    In one of the companies, we used it as an information tool. We created a logical model so that the business would know what was in the offices down to the warehouse. The current use case is the same. Some places just want the information, so we can do a logical data model for them, but usually it goes toward building an actual database, which also involves reverse-engineering an existing one, because people don't know what's in there.

    It is currently on-prem, but we still have a separate server.

    How has it helped my organization?

    We want to bring different erwin components together and tell a business user story. So, having all of it on one platform to be able to tell one story makes it not as fragmented as components have been in the past. 

    In my previous company, when we had 1,000 tables, 6,000 columns, and 14 subject areas, trying to explain to people in the organization was difficult. Without the tool, it would have been impossible. With the tool, it was a lot easier because you could show a steward how this is his or her domain. For each steward, you could say, "Well, this is your domain over here." Once they had that, they could understand what you were talking about. So, it improved communication. We had a point where two stewards were looking at the models, and one of them said, "I think that one that you've got over there is actually mine." The other one said, "I think you're right." So, we actually moved an entity from one subject area to another because now they had the ability to see what was in their subject area. They could go and see what wasn't theirs and should be someone else's. If we didn't have the tool, we wouldn't have that visibility and wouldn't have been able to recognize that sort of situation. 

    Its ability to generate database code from a model for a wide array of data sources cuts development time. You don't have to re-key things. You put in the information at one spot, and it flows out from there. There are so many parameters you can put on the physical side. You can put in your indexes, and you can put in expected size changes. You can store all sorts of information within the model itself. It is a really good repository of all that sort of information, and then you just push a button, and it generates the other end. It works really well. In terms of time-saving, if you had to write it all out by hand, it would take weeks. It would probably take three or four times longer without the tool.
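
    A sketch of the sort of physical detail that rides along in the model and lands in the generated script (Oracle-flavored and with invented names, purely for illustration):

        -- Physical properties stored on the model, such as tablespaces
        -- and indexes, flow directly into the generated DDL.
        CREATE TABLE shipment (
            shipment_id  NUMBER(10)   NOT NULL,
            depot_code   VARCHAR2(8)  NOT NULL,
            shipped_on   DATE         NOT NULL,
            CONSTRAINT pk_shipment PRIMARY KEY (shipment_id)
        ) TABLESPACE data_ts;

        CREATE INDEX ix_shipment_depot ON shipment (depot_code) TABLESPACE index_ts;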

    It certainly improves accuracy for the generation of target databases because you're only putting information in one spot. You don't have to retype it. For example, I saw the words "conceptual model" misspelled today. If you have to re-key something, no matter how careful you are, you're going to misspell things, which would cause problems down the track, whereas if you make a mistake in DM, there is only one place you have to go and fix it, and then you would regenerate the downstream stuff. This means that you don't have to touch anything physical. You generate it, and then you can use it.

    What is most valuable?

    Being able to point it to a database and then pull the metadata is a valuable feature. Another valuable feature is being able to rearrange the model so that we can display it to users. We are able to divide the information into subject areas, and we can divide the data landscape into smaller chunks, which makes it easier to understand. If you had 14 subject areas, 1,000 entities, and 6,000 columns, you can't quite understand it all at once. So, being able to have the same underlying model but only display portions of it at a time is extremely useful.

    I am currently trying to compare and synchronize data sources with data models, and it is pretty good. It shows you all the differences between the two systems. After that, it is a matter of what you want to do with them. It is certainly helpful for bringing models in and being able to compare. At the moment, I'm comparing something that's in a database with something that was in the DDL statement. So, these are two different sets of sources, and I can bring different sources together and compare them in the one, which is really helpful.

    What needs improvement?

    I still use Visio for conceptual modeling, and that's mainly because it is easier to change things, and you can relax some of the rules. DM's eventual target is a database, which means you actually have to dot all the Is and cross all the Ts, but in a conceptual model, you don't often know what you're working with. So, that's probably a constraint with erwin.

    They have made it a lot easier, and they've done a lot, but there is probably still room for improvement in terms of the ease of presentation back to the business. I'm comparing it with something like Visio where you can change colors on a box, change the text color and that sort of stuff, and change the lines. Such things are a whole lot easier in Visio, but once you get a theme organized in erwin, you can apply that theme to all of the objects. So, it becomes easier, but you do have to set up that theme. I think they've got three to four initial themes. There is a default theme, and then there are two or three others that you can pick from. So, having more color themes would help. In Visio, you have a series of themes where someone who knows about color has actually matched the colors to each other. So, if you use the colors in the theme, they will complement each other. So, erwin should provide a couple more themes.

    They could perhaps think of having an entry-level product that is priced a bit lower. For extra features, the users can pay more.

    For how long have I used the solution?

    I have been using it at least since 2003. I have used it at multiple organizations.

    What do I think about the stability of the solution?

    It has always been really stable in the different organizations that I've used it in. It has always been a pretty good product.

    What do I think about the scalability of the solution?

    It works fine with the number of people who have been using the product. We're talking about 10 to 12 people, not thousands of people. I haven't ever been in an organization where thousands of people even needed to get to the product. Probably the biggest drawback in scalability is the cost per seat rather than the actual product. The product works fine.

    Our current organization has probably 5 to 10 people using it. We're a consultancy, so we use it in various roles, and a lot of that is to do with understanding. As consultants, we try to understand what a client has in their organization and what sort of data they have, to make sure there is actually data in the system that can answer their business questions. That's the sort of thing we use it for. We can give them designs: we can show them what their data is now, and then design what it should become. It is used by analysts and developers. They are not developing software; they are developing the database, and then other people develop the software against it.

    I've used it on all the projects I've been on so far. I've been with this company for a short time, and it has come into play for pretty much all of the projects that I've been on. We want to use it more extensively. We want to use the erwin suite. We've got the modeler, but we also want to use their BI tool. We would like to evolve and come up with a story that links all of them together.

    We have only just got the BI suite installed. We're starting to play around with it and see what we can do with it, and we're doing some training on it at the moment. At a previous company, somebody from erwin came to show it to us when it was reasonably new; that was last year. So, it is a fairly new product, and getting the modeler and the BI suite to talk to each other is also fairly new. erwin has only done it in the last couple of years.

    How are customer service and support?

    I haven't had dealings with technical support specifically, but the dealings I've had with erwin as a company have always been really good. So, I would rate them a nine or 10 out of 10.

    Which solution did I use previously and why did I switch?

    I use Visio on the conceptual side. We've also got Informatica, and I think it has a modeling component in there. We try to cover a range of products because we consult in various organizations, and they have various tools. Usually, it depends on what a client has already installed; sometimes, it also depends on their budget. Something like Informatica is usually in the top-right corner of the Gartner Magic Quadrant, but it can also be overkill for smaller organizations because the benefit may not be there. A lot of the time, it is horses for courses: you have to tailor any solution to meet a client's needs.

    How was the initial setup?

    I haven't ever really installed erwin. One of the other guys has done that. Most of the places had it installed already. Usually, the complexity depends on how the organization does its software deployment. So, you have to go and request the software and then somebody has to give you the package. Once you get the package, it is pretty straightforward. It is usually less of a problem on erwin's side and more of an issue with how an organization deploys any erwin software, but once you deploy it, it works fine.

    Some places that I've worked with were very strict about doing testing on COTS products to make sure that there are no viruses on it and also to make sure that it plays nicely with the rest of the system. So, those sorts of organizations may take longer in terms of testing. You put it on a test machine first and make sure it is not going to kill anything. They might have to repackage some stuff before they put it out to the network. To deploy a vanilla thing, I would think that it would only take a couple of hours.

    In terms of maintenance, at the moment, I think we've got one person. The main thing is deploying new versions. You've got a server stood up, and you have to put the software out there. I don't know if there is anything else beyond that.

    What was our ROI?

    We haven't done an ROI for the current version. When you look at the total cost of creating or understanding what you've currently got through reverse engineering, and you look at the total cost of creating new products and new databases and maintaining them over time, and then you put that into the return on investment model, it is well worth it.

    The accuracy and speed of the solution in transforming complex designs into well-aligned data sources make the cost of the tool worth it. If you didn't have the tool and a single developer or a single modeler was trying to do the same thing, the speed would be three or four times slower. If you multiply that by the cost of that person and then you also consider the cost of the other people who are waiting for that person to create a database design, it multiplies out. So, it is well worth it.
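
    As a rough, purely hypothetical worked example of that arithmetic: if a modeler costing 100 AUD an hour needs 40 hours to hand-build a design that the tool turns around in 10, that is 30 hours, or 3,000 AUD, saved per design, before you even count the cost of the developers waiting on the database design.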

    What's my experience with pricing, setup cost, and licensing?

    It has increased in price a fair amount over the years. It has always been expensive because it is a comprehensive product, and presumably, they have to do a tremendous amount of testing to make sure that everything works. It has always been dear because usually, a very specific target audience of data architects has the need for modelers, and not everyone in the organization would need to get a copy of it. Only people who are actually working in the database space need it. So, it has always been a very specialized piece of software, and it has been priced accordingly.

    I don't specifically know what we're paying now. About three years ago, in another organization, I have this memory of 6,000 AUD a seat or something like that, but I am not sure. In the mid-2000s, it was something like 1,200 AUD a seat. I get the impression that there was a price jump when it was spun off from CA as a separate company, which is understandable, but it could sometimes be a barrier in some organizations picking it up.

    I haven't talked to erwin people yet, but I'm going to suggest that they think about having an entry-level product that is priced a bit lower, and then you can buy the extra suite. That's what Microsoft does: they package a few things so that you have something, but if you want the extra enterprise features, such as the products talking to each other, you have to pay more. I don't think there are any additional costs. It is per product, and there are different license levels.

    What other advice do I have?

    Oracle Data Modeler, which is free, is one of the competitors that erwin has. You can't argue with the price point on that one, but erwin is much more comprehensive and easier to use. It is easier to display information and models to business people than something like Oracle Data Modeler, which does the job, but erwin does it a lot better. So, my advice would be that if you can afford it, get it.

    Its visual data models have certainly improved over time in terms of overcoming data source complexity and enabling understanding and collaboration around maintenance and usage. It was originally designed as a tool to build databases with, and it retains a lot of that; it still looks that way in a lot of cases, but it has also been made more business-friendly with a new front end. It used to be all or nothing: when you wanted to show somebody just the entity names or just the entity descriptions, you had to switch every entity on the diagram to that display. Now you can shrink some of them down and keep others expanded. So, it has become a more useful information-sharing tool over time. It is extremely helpful.

    In my previous company, it was the enterprise data model, and you could paper a room with it if you printed the information out. To present that information to people, we had to chunk it down into subject areas. We had to present smaller amounts of information. Because it was linked to the underlying system, we could reuse the information that we had in a model in other models. The biggest lesson was to chunk the information down and present it in a digestible form rather than trying to show the entire thing because otherwise, people would run away screaming.

    One of the places didn't have a modeling tool at all, and they were trying to do the documentation in Confluence. It was a nightmare trying to keep it maintained, with different developers using different tables, throwing something into one and adding something else to another. If they had one tool where they could put it all in one place, it would have been so much easier than the mess they had.

    I would rate erwin Data Modeler a nine out of 10.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: My company has a business relationship with this vendor other than being a customer: Partner
    Pam Rivera
    Independent Consultant at a tech consulting company with 1-10 employees
    Real User
    Top 5 Leaderboard
    Complete Compare is good for double checking your work and ensuring that your model reflects the database design
    Pros and Cons
    • "The generation of DDL saved us having to write the steps by hand. You still had to go in and make some minor modifications to make it deployable to the database system. However, for the data lineage, it is very valuable for tracing our use of data, especially personal confidential data through different systems."
    • "The report generation has room for improvement. I think it was version 8 where you had to use Crystal Reports, and it was so painful that the company I was with just stayed on version 7 until version 9 came out and they restored the data browser. That's better than it was, but it's still a little cumbersome. For example, you run it in erwin, then export it out to Excel, and then you have to do a lot of cosmetic modification. If you discover that you missed a column, then you would have to rerun the whole thing. Sometimes what you would do is just go ahead and fix it in the report, then you have to remember to go back and fix it in the model. Therefore, I think the report generation still could use some work."

    What is our primary use case?

    The use case was normally to update data model designs for transaction processing systems and data warehouse systems. Part of our group also was doing data deployment, though I personally didn't do it. The work I did was mostly for the online transaction systems and for external file designs.

    I didn't use it for data sources. I used the solution for generation of code for the target in the database. Therefore, I went from the model to the database by generating the DDL code out of erwin.

    We had it on-premises. There was a local SQL database server, and we each had a client that we installed on our machines.

    How has it helped my organization?

    At one of my previous jobs, we had a lot of disparate databases that people had built on the PCs under their desks. We were under a mandate to bring all of that into a controlled environment that our DBAs could monitor, tune, etc. Therefore, this was a big improvement. I would put the data from whatever source into an Excel spreadsheet, turn it into a SQL file by putting in the commas, and then reverse engineer that SQL into a data model. That saved us a tremendous amount of time compared to building the data model from scratch.
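
    As a minimal sketch of that workflow (the table and columns are invented for illustration), the spreadsheet rows end up as a plain DDL script, which the tool can then reverse engineer into a data model:

        -- Assembled from spreadsheet rows: one column name, type, and
        -- nullability per line.
        CREATE TABLE DEPT_BUDGET (
            DEPT_CODE    CHAR(4)       NOT NULL,
            FISCAL_YEAR  INTEGER       NOT NULL,
            BUDGET_AMT   DECIMAL(12,2),
            PRIMARY KEY (DEPT_CODE, FISCAL_YEAR)
        );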

    I educated a number of my colleagues who were in data architecture and writing the DDL by hand. I showed them, "You do it this way from the model." That way, you never have to worry about introducing errors or having a disconnect between what is in the model and the database. I was able to get management support for that. We enhanced the accuracy of our data models.

    What is most valuable?

    I do like the whole idea of being able to identify your business rules. In my last position, I got acquainted with using it for data lineage, which is so important now with the current regulatory environment because there are so many laws or regulations that need to be adhered to. 

    If you're able to show where the data came from, then you know the source. For example, I was able to use user-defined properties (UDPs) on one job where we were bringing in the data from external XML files. I would record at the UDP level where the data came from. On another job, we upgraded a homegrown database that didn't meet our standards, so we changed the naming standards. I put in the "formerly known as" names as UDPs so I could run reports, because our folks in MIS who were running the reports were more familiar with the old names than the new ones. Therefore, I could run a report so they could see, "This is where you find what you used to call X, and it is now called Y." That helped.

    The generation of DDL saved us having to write the steps by hand. You still had to go in and make some minor modifications to make it deployable to the database system. However, for the data lineage, it is very valuable for tracing our use of data, especially personal confidential data through different systems.

    Complete Compare is good for double checking your work, how your model compares with prior versions, and making sure that your model reflects the database design. At my job before my last one, every now and then the DBAs would go in and make updates to correct a production problem, and sometimes they would forget to let us know so we could update the model. Therefore, periodically, we would go in and compare the model to the database to ensure that there weren't any new indexes or changes to the sizes of certain data fields without our knowing it. However, at the last job I had, the DBAs wouldn't do anything to the database unless it came from the data architects so I didn't use that particular function as much.

    If the source of the data is an OLTP system and you're bringing it into a data warehouse, erwin's ability to compare and synchronize data sources with data models, in terms of accuracy and speed, is excellent for keeping them in sync. We did a lot of our source-to-target work with Informatica. We sometimes used erwin to generate the spreadsheets that we would give our developers. This is a wonderful feature that isn't very well-known nor well-publicized by erwin.

    Previously, we were manually building these Excel spreadsheets. By using erwin, we could click on the target environment, which is the table that we wanted to populate. Then, it would automatically generate the input to the Excel spreadsheet for the source. That worked out very well.

    What needs improvement?

    When you do a data model, you can depict the tables. However, sometimes I found it quicker to just do a screenshot of the tables in the data model, put it in a Word document, and send it to the software designers and business users to let them see how I organized the data. We could also share the information on team calls so everybody could see it. That was quicker than trying to run reports out of erwin, because sometimes we got mixed results that took more time than they were worth. If you're just going in and making changes to a handful of tables, I didn't find the reporting capabilities that flexible or easy to use.

    The report generation has room for improvement. I think it was version 8 where you had to use Crystal Reports, and it was so painful that the company I was with just stayed on version 7 until version 9 came out and they restored the data browser. That's better than it was, but it's still a little cumbersome. For example, you run it in erwin, then export it out to Excel, and then you have to do a lot of cosmetic modification. If you discover that you missed a column, then you would have to rerun the whole thing. Sometimes what you would do is just go ahead and fix it in the report, then you have to remember to go back and fix it in the model. Therefore, I think the report generation still could use some work.

    I don't see that it helped me that much in identifying data sources. Instead, I would have to look at something like an XML file, then organize and design it myself.

    For how long have I used the solution?

    I started working with Data Modeler when I was in the transportation industry. However, that was in the nineties, when it was version 1 and less than $1,000.

    What do I think about the stability of the solution?

    I found it pretty stable. I didn't have any problems with it. 

    Sometimes, when you're working with Model Mart, the connection drops once in a while. What I don't like is that if you don't consistently save, you can lose a lot of changes. That's something that should work more like Word: if your system goes down, there's an interruption, or you just get distracted by a phone call, you can come back to find you've lost hours' worth of work. That was always painful.

    What do I think about the scalability of the solution?

    I have worked on databases that had as many as a thousand tables. In terms of volume and versioning, it is fine. We used Model Mart to house versions, which introduces another level of complexity in keeping the versioning consistent.

    There is a big learning curve with using Model Mart, so a lot of groups don't fully utilize it the way they should. You need somebody to go in every now and then to clean things up. We had some pretty serious standards around when you deployed to production and how you moved models in Model Mart, and we would use Complete Compare there. It scaled well that way.

    In terms of the number of users, we had 20 to 30 different data architects using it. I don't know that everybody was on it full-time, all the time. I never saw a conflict where we were having trouble because too many people were using it. From that point, it was fine.

    I think the team got as large as it was going to get. In fact, right now they're on a hiring freeze because of COVID-19.

    How are customer service and technical support?

    Over a period of five or 10 years, the few times I've had to go all the way through to erwin, I talked to the same young lady, who is very good. She understood the problem, worked it, and would give me the solution within two phone calls. This was very good.

    Which solution did I use previously and why did I switch?

    Prior to erwin, I had used Bachman and IEF. Bachman I liked better, but IEF was way too cumbersome. 

    Bachman was acquired by another company and disappeared from the marketplace. The graphics were very pretty on Bachman. Its strongest feature was reverse engineering databases. I found erwin just as robust with its reverse engineering. 

    IEF also disappeared from the marketplace, and I didn't use it very much. I didn't like it, as it was way too cumbersome. You needed a local administrator. It was really tough. It promised to generate code and databases and was supposed to be an all-encompassing CASE tool. I just don't think it delivered on that promise.

    It could very well be that the coding of those solutions didn't keep up with the latest languages. There was a real consolidation of data modeling tools in the last 15 to 18 years. Now, you've only got erwin and maybe Embarcadero. I don't think there's anything else. erwin absorbed a lot of the other solutions but didn't integrate them very well. We were suffering when it didn't work. However, with the latest versions, I think they've overcome a lot of those problems.

    How was the initial setup?

    Usually, the companies already had erwin in place. We had one company where the DBAs would sort of get us going.

    The upgrades were complex. They required a lot of testing. About a year ago, we held off on upgrading to the latest version because we were in the midst of a very big system upgrade, and nobody wanted to take the time. It took one of our architects working with other internal organizations, and then there were about three or four of us trying to test the features. It was a big investment of time, and I thought it should have been more straightforward. Companies would be more willing to upgrade if it wasn't so painful.

    The upgrade took probably two months because nobody was working on it full-time. They would work on it while they could. One of the architects ended up working late, over the weekends, and everything trying to get it ready before we could roll it out to the entire team.

    For the upgrades, there were at least half a dozen people across three different groups: three or four data architects in our group, and two or three desktop support and infrastructure people for the server issues.

    What about the implementation team?

    I think they used Sandhill for the initial installation.

    If it's the first time, I recommend engaging a third-party integrator, like Sandhill, whom I found very good and responsive.

    What's my experience with pricing, setup cost, and licensing?

    We always had a problem keeping track of all the licenses. All of a sudden you might get a message that your license expired, and you didn't see it coming; it happens at different times. At GM Finance, they engaged Sandhill to help us manage it, so I was less involved, but Sandhill was very helpful when we had trouble with our license. I remember you had to put in a long string of characters and be very careful to generate it rather than cut and paste it from an email. It was very sensitive and really difficult until the upgrades.

    If there was a serious problem, it was usually some glitch around the licensing. Then we would call Sandhill, who would help us out with it. That's somewhere we had to involve a third party for technical difficulties.

    I wish it wasn't so expensive. I would love to personally buy a copy of my own and have it at home, because the next job that I'm looking at is probably project management and I might not have access to the tool. I would like to keep my ability to use the tool. Therefore, they should probably have a pricing for people like me who want to just use the solution as an independent consultant, trying to get started. $3,000 is a big hit.

    I think you buy a block of users because I know the company always wanted to manage the number of licenses. 

    Which other solutions did I evaluate?

    I really haven't spent a lot of time on other data modeling tools. I have heard people complain about erwin quite a bit, "Oh, we wish we had Embarcadero," or something like that. I haven't worked with those tools, so I really can't say that they're better or worse than erwin, since erwin is the only data modeling tool that I've used in the last 15 years.

    What other advice do I have?

    There might be some effort to do some cloud work at my previous place of employment, but I wasn't on those projects. I don't think they've settled on how they're going to depict the data.

    Some of the stuff in erwin Evolve, and the way in which it meshes with erwin Data Modeler, was very cool.

    Sometimes, your model would get corrupted, but you could reverse engineer it and go back in, then regenerate the model by using the XML that was underlying the model. This would repair it. When I showed this to my boss, he was very impressed. He said, "Oh man, this is where we used to always have to call Sandhill." I replied, "You don't have to do that. You need to do this." That worked out pretty well.

    Biggest lesson learnt: understanding your data in a graphical way is very rich in communicating with developers and testers, because they recognize the relationships and the business rules. Capturing the metadata and plain-English business definitions, and then generating from them, made their lives so much easier. Everybody on the team could understand what a data element or group of data elements represented. This is the biggest feature that I've used in my development and career.

    I would rate this solution as an eight out of 10. 

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: I am a real user, and this review is based on my own experience and opinions.
    Beverly King De Loach
    Architecture Manager at CIGNA Corporation
    Real User
    Top 5 Leaderboard
    The ability to generate database code from a model for a wide array of data sources cuts development time
    Pros and Cons
    • "We find that its ability to generate database code from a model for a wide array of data sources cuts development time. The ability to create one model in your design phase and then have it generate DDL code for Oracle or Teradata, or whichever environment you need is really nice. It's not only nice but it also saves man-hours of time. You would have to take your design and just type in manually. It has to take days off out of the work."
    • "I love the product. I love the ability to get into the code, make it automated, and make it do what I want. I would like to see them put some kind of governance over the ability to make changes to the mart tables with the API, so that instead of just using the modeler's rights to a table -- it has a separate set of rights for API access. That would give us the ability to put governance around API applications. Right now a person with erwin and Excel/VBA has the ability to make changes to models with the API if they also have rights to make changes to the model from erwin. It's a risk."

    What is our primary use case?

    We have a couple of really important use cases for erwin. One of them is that we automate the pull of metadata from the repository itself, so that we have all the model metadata in a centralized hub that we can access with other applications. Another reason we pull all the metadata out of the model is to run it through our model validation application, which tells us whether a model is healthy and whether it meets our standards.
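
    For flavor, here is a hypothetical check of the kind such a validation application might run against the hub (the hub table, columns, and rule are all invented; the extraction itself goes through erwin's API and isn't shown):

        -- Flag physical column names that break a 30-character naming rule.
        SELECT MODEL_NAME, TABLE_NAME, COLUMN_NAME
        FROM   MODEL_METADATA_HUB
        WHERE  LENGTH(COLUMN_NAME) > 30;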

    The other use case that's really important is managing the abbreviations file that erwin uses to convert logical terms into physical terms. The way you manage it within erwin today is very manual: you go from a spreadsheet, make changes, upload, et cetera. Instead, we've created an API application: we keep the master standards file in the database and make the changes there, and then the application goes out to the Mart, deletes the glossary, and replaces it with the table from the database. It's all automated at the push of a button. Changes that used to take us days, making updates across eight different standards files, now happen immediately.
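
    A minimal sketch of the database side of that setup, with invented names (the actual glossary structure in the Mart, and the API calls that replace it, are erwin-specific and not shown):

        -- Master naming-standards table held in our own database.
        CREATE TABLE NAMING_STANDARD (
            LOGICAL_TERM     VARCHAR(128) NOT NULL PRIMARY KEY,
            PHYSICAL_ABBREV  VARCHAR(32)  NOT NULL
        );

        INSERT INTO NAMING_STANDARD VALUES ('CUSTOMER', 'CUST');
        INSERT INTO NAMING_STANDARD VALUES ('NUMBER', 'NBR');

        -- The API application reads these rows, deletes the glossary in the
        -- Mart, and rebuilds it from this table at the push of a button.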

    How has it helped my organization?

    Data warehousing is the best example of how this product can make a huge difference, because it's an integration of a lot of different source systems. You have to be able to visualize how you are going to make the information from sources A, B, and C merge together, and that makes a tool like this very important.

    The ability to automatically generate DDL in different flavors (Teradata, Oracle, et cetera), and to fine-tune the forward engineering file so that the DDL comes out the way your shop likes to see it, is critical. It's soup to nuts, from design all the way to implementation.

    We find that its ability to generate database code from a model for a wide array of data sources cuts development time. The ability to create one model in your design phase and then have it generate DDL code for Oracle or Teradata, or whichever environment you need, is really nice. It's not only nice, but it also saves man-hours of work. Otherwise, you would have to take your design and just type it in manually. It takes days out of the work.

    The code generation ensures accurate engineering of data sources especially because you can tweak it.

    Development time is another critical issue. If you had to tweak every single piece of code that comes off the line because there was only a one-size-fits-all solution, the product would not be worth anywhere near as much as it is. It gives you the ability to create customized forward engineering code, so that the DDL you generate for your shop always comes out the way you want it.
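
    As a hedged illustration of those flavors (the table is invented, and a real forward engineering template would add shop-specific storage and naming details), the same logical design can be emitted as Oracle or Teradata DDL:

        -- Oracle target
        CREATE TABLE CLAIM (
            CLAIM_ID  NUMBER(10) NOT NULL,
            CLAIM_DT  DATE       NOT NULL,
            CONSTRAINT PK_CLAIM PRIMARY KEY (CLAIM_ID)
        );

        -- Teradata target: same model, different physical idiom.
        CREATE TABLE CLAIM (
            CLAIM_ID  INTEGER NOT NULL,
            CLAIM_DT  DATE    NOT NULL
        )
        UNIQUE PRIMARY INDEX (CLAIM_ID);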

    What is most valuable?

    The product itself is fantastic, and it's about the only way to get an enterprise view of the data that you're designing. It's a design tool, obviously. Once you add the API, you can automate things and make bulk changes. You can integrate your data from erwin into another in-house application that otherwise couldn't access it, because the erwin data is encrypted. It's been quite a boon to us, because we're very heavy into automation, to have the ability to create these ad hoc programs, get at the data, and make changes on the fly. It's been a wonderful tool.

    A data modeling case tool is a key element if you are a data-centric team. There is no way around it. It's a communication tool. It's a way of looking at data and seeing visually how things fit together, what is not going to fit together. You have a way of talking about the design that gets you off of that piece of paper, where people are sitting down and they're saying, "Well, I need this field and I need that field and we need the other field." It just brings it up and makes it visible, which is critical.

    What needs improvement?

    I love the product. I love the ability to get into the code, make it automated, and make it do what I want. I would like to see them put some kind of governance over the ability to make changes to the Mart tables with the API, so that instead of just using the modeler's rights to a table, there is a separate set of rights for API access. That would give us the ability to put governance around API applications. Right now, a person with erwin and Excel/VBA can make changes to models with the API if they also have rights to make changes to the model from erwin. It's a risk.

    We have a really good relationship with erwin and whenever we come across something and we contact the product developers and the contacts that we have, they immediately put fixes in, or they roll it into the next product. They're very responsive. I really don't have any complaints.

    It's a wonderful product and a great company.

    For how long have I used the solution?

    I've been using erwin since version 3.0 in the '80s.

    What do I think about the stability of the solution?

    It's very stable. It's very mature.

    What do I think about the scalability of the solution?

    We have about 70 licenses and about 70 people using the product full time. I've worked in shops with anywhere from two or three users to a dozen. Besides these 70, there are also shops in other parts of the world that have it. It scales right up. I have not worked in a shop where it was either too small or too large for the product.

    We have full-time data modelers. We have architects. We don't make a distinction between the data architect and the data modeler. The data architect is designing the enterprise-level view of data and how we use it as a business and then modelers work on specific projects. They'll take this enterprise view and they'll create a project model for whatever it is that we're rolling out.

    We've got an architecture person and a modeler person, and we also have some developers who do some smaller database modeling when they have to get out something that's just used in-house, not downstream by the end user. We also have a web portal product that everybody at the company has access to; they can go in, see what data has been designed, and do impact analysis.

    A business analyst will look at it in the web portal to see what the downstream impact would be of changing a particular name the company uses for something; they check what the downstream and upstream implications are. The developers use our DI tools for creating the mapping from the source system to the target system. Our data stewards use the tool for the business glossary and for how we define things. Every part of the company that deals with data uses erwin.

    How are customer service and technical support?

    Customer service is fantastic. I know a lot of the guys by first name that work in tech support.

    When we have a problem, typically something is really broken, because we have people here on staff who answer most of the questions and handle most of the problems. If we have to call, it's a big problem. They put us straight through and handle us right away.

    Which solution did I use previously and why did I switch?

    I've used four different data modeling tools. Every modeling tool has its strong point, but none of them are as robust to me as erwin. If I have to choose one tool, it's going to be erwin, especially since I've gotten into the API and know how to use it. Some of the things the other tools add in terms of being able to manipulate the underlying metadata, erwin has with that API. And it's not that erwin only just added it; the API has been there since day one, but I've only picked it up in the last year or so.

    How was the initial setup?

    The setup gets more complex every time. Especially with 2019, they completely changed the interface and that was another learning curve. But for the most part, if you know data modeling, you can find the logical task that you want to do within the physical form and menus of the product. I didn't find the learning curve so bad because I was already a data modeler.

    I started the upgrade process today, as a matter of fact. We just got the software installed on a Mart, and I'm going through the new features. I'll play with it for a week, then we'll get other testers to do some formal testing for a week, and then we'll put in our change, because we're a large shop. It's around a one-month cycle to get an upgrade in place, and that's if there are no problems. If we come across something that tells us we can't use the product until something is changed or fixed, then it's a stop. For the most part, the happy path takes around a month in a large shop like ours.

    As far as the upgrade itself on dev, it took maybe an hour to upgrade the Mart. And it took me maybe an hour to upgrade the desktops that we use for testing.

    We've been doing upgrades for years. I've been involved in them at multiple companies, and it's what I do here. We have a cycle, a strategy, and a checklist that we go through for every upgrade.

    The first thing we do is we have a development system. We have virtual machines that we set up, so it's not on anybody's particular desktop. We upgrade the product and then one person will go through and I'll look at the new features and I'll see, number one, if we need the new features. Number two, if there is anything in these features that will break what we're doing today. Those are the first things I look at. If we pass those first two tests, then I start looking at the features and check what we are going to have and what it is going to involve in terms of training the user. We check how it is going to impact the modeler that's actually down in the trenches.

    I've got to do the training materials, and then we have a warranty period. We have a group that pushes the software to the desktops, and a special day that we roll it out. During the warranty period, we set up a virtual call that anybody can sit in on if they have a problem, so that when people come in on Monday morning and can't get into the product, or are having any problems at all, we're right there to answer their questions. We allow for that for the first week; after that, we turn everybody loose. Of course, that doesn't account for the physical part of backing up the database, doing the install, validating over the weekend, and all that. It's just the standard software upgrade stuff.

    What about the implementation team?

    We implement in-house, but always have access to a world-class vendor.

    What was our ROI?

    I wouldn't know how to measure ROI. I can only say that the alternative is spreadsheets, typing, visually inspecting things, never being able to integrate, never being able to communicate. I can't give an ROI, but I can say that I wouldn't want to work in a shop that didn't have a data modeling data tool.

    erwin's my first love. I know that I have been using it long enough that I am under the covers and I know it backward and forwards. It's the one I prefer.

    What's my experience with pricing, setup cost, and licensing?

    I don't deal with pricing or licensing here. I know that you can get per-seat licenses and you can get concurrent licenses. To me, if you're a full-time modeler, you need a per-seat license. If you're a developer or a data steward who uses it a couple of times a day, or maybe a couple of times a week, you can have concurrent licenses, where a group of five people shares one license. If someone's using it, you can't; if it's free, you can go ahead and use it, or lock it, or whatever. There are different ways of licensing it.

    What other advice do I have?

    The one thing that having a CASE tool does is it takes the drudge away from modeling. You get to actually think of what you're doing. You think about the solution and not how you are going to keep track of what you're doing. It frees you from a lot of mechanical things that are part of keeping track of data modeling, and it allows you to do the thinking part.

    There's not a lot of documentation on the API, so you're pretty much going to have to teach yourself. If you have a specific problem where you've gotten to a certain point, you can always touch base with the guys at erwin, and they will help you with little snippets of code. But if you're doing things like we have, writing a full-blown application to extract the data or make changes to the model, you're going to have to learn it on your own. That's the one drawback of the API, but if you're a programmer who wants to do data modeling like me, it's a lot of fun.

    It's a challenge but it's very rewarding to be able to automate stuff that people are doing manually and to be able to hand them a solution.

    On a scale of one to ten, I'd give erwin a 9.99. Everything has flaws; everybody's got little quirks, like the one I mentioned about the ability to make changes that you shouldn't be able to make. But as far as the product itself goes, I love it. It's right up there with a 10.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    Sr. Data Engineer at a healthcare company with 10,001+ employees
    Real User
    Top 10
    Provides the ability to document primary/foreign key relationships and standardize them
    Pros and Cons
    • "What has been useful, I have been able to reverse engineer our existing data models to document explicitly referential integrity relationships, primary/foreign keys in the model, and create ERDs that are subject area-based which our clients can use when working with our databases. The reality is that our databases are not explicitly documented in the DDL with primary/foreign key relationships. You can't look at the DDL and explicitly understand the primary/foreign key relationships that exist between our tables, so the referential integrity is not easily understood. erwin has allowed me to explicitly document that and create ERDs. This has made it easier for our clients to consume our databases for their own purposes."
    • "erwin generally fails to successfully reverse engineer our Oracle Databases into erwin data models. The way that they are engineered on our side, the syntax is correct from an Oracle perspective, but it seems to be very difficult for erwin to interpret. What I end up doing is using Oracle Data Modeler to reverse engineer into the Oracle data model, then forward engineer the DDL into an Oracle syntax, and importing that DDL into erwin in order to successfully bring in most of the information from our physical data models. That is a bit of a challenge."

    What is our primary use case?

    I am responsible for a combination of documenting our existing data models and using erwin Data Modeler as the primary visual design tool to design and document the data models that we implement for our production services.

    My primary role is to document our databases using erwin to work with people and ensure that there is logically referential integrity from the perspective of the data models. I also generate the data definition language (DDL) changes necessary to maintain our data models and databases up to our client requirements in terms of their data, analytics, and whatever data manipulation that they want to do. I use erwin a lot.

    It is either installed locally or accessed through a server, depending on where I have been. I have had either a single application license or pooled license that I would acquire when I open up erwin from a server.

    How has it helped my organization?

    We get data from many different sources where I work, and we have many clients. The data is all conceptually related; there are primary subject area domains common across most of our clients. However, the physical sources of the data, and how the data is defined and organized, often vary significantly from client to client. Data modeling tools like erwin give us the ability to create a visual construct of the data from a subject area perspective. We then use that as a source to normalize the data conceptually and to standardize concepts that are documented or defined differently across our sources. Once we get the data, we can treat data that has been managed somewhat disparately within a common conceptual framework, which is quite important.

    At the moment, for what I'm doing, the interface to the physical database is really critical. erwin generally is good for databases. It is comfortable in generating a variety of versions of data models into DDL formats. That works fine.

    What has been useful, I have been able to reverse engineer our existing data models to document explicitly referential integrity relationships, primary/foreign keys in the model, and create ERDs that are subject area-based which our clients can use when working with our databases. The reality is that our databases are not explicitly documented in the DDL with primary/foreign key relationships. You can't look at the DDL and explicitly understand the primary/foreign key relationships that exist between our tables, so the referential integrity is not easily understood. erwin has allowed me to explicitly document that and create ERDs. This has made it easier for our clients to consume our databases for their own purposes.
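
    To illustrate the gap being filled (the tables here are hypothetical, and the sketch assumes a parent table CLAIM with primary key CLAIM_ID), the deployed DDL carries no declared keys, while the model makes the same relationship explicit:

        -- As deployed: no declared keys, so the relationship is invisible in the DDL.
        CREATE TABLE CLAIM_LINE (
            CLAIM_ID  INTEGER,
            LINE_NBR  INTEGER
        );

        -- As documented in the erwin model, the relationship would forward
        -- engineer to something like:
        ALTER TABLE CLAIM_LINE
            ADD CONSTRAINT FK_CLAIM_LINE_CLAIM
            FOREIGN KEY (CLAIM_ID) REFERENCES CLAIM (CLAIM_ID);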

    What is most valuable?

    Its visualization is the most valuable feature, along with the ability to make global changes throughout the data model. Our data models are reasonably large: hundreds, and in some cases thousands, of tables and attributes. In any data model, there are many attributes that are common from a naming and data type perspective, and with erwin it is possible to make global changes across all of the tables, columns, or attributes, whether you are working logically or physically. We also use it to set naming standards and then attempt to enforce them, including the name changes between the logical and physical versions of the data models, which is very advantageous. It also provides the ability to document primary/foreign key relationships and standardize them, along with being able to review data model names and data types conceptually and visualize that across fairly large data models.

    The solution’s visual data models are very important for overcoming data source complexity and enabling understanding and collaboration around maintenance and usage, because you can define and document subject areas within enterprise data models. You can create smaller subsets, document those visually, and assess and review the integrity of the data models with the primary clients or users of the data. It can also be used to establish communication that is logically and conceptually correct from a business expert perspective, while maintaining the physical and logical integrity of the data from a data management perspective.

    What needs improvement?

    We are not using erwin's ability to compare and synchronize data sources with data models in terms of accuracy and speed for keeping them in sync to the fullest extent. Part of it is related to the sources of the data and databases that we are now working with and the ability of erwin to interface with those database platforms. There are some issues right now. Historically, erwin worked relatively well with major relational databases, like Oracle, SQL Server, Informix, and Sybase. Now, we are migrating our platforms to the big data platforms: Hadoop, Hive, and HBase. It is only the more recent versions of erwin that have the ability to interface successfully with the big data platforms. One of the issues that we have right now is that we haven't been able to upgrade the version that we currently have of erwin, which doesn't do a very good job of interfacing with our Hive and Hadoop environments. I believe the 2020 version is more successful, but I haven't been able to test that. 

    Much of what I do is documenting what we have. I am trying to document our primary data sources and databases in erwin so we have a common platform where we can visually discuss and make changes to the database. In the past couple of years, erwin has kind of supported importing or reverse engineering data models from Hive into erwin, but not necessarily exporting data models or forward generating the erwin-documented data models into Hive or Hadoop (based on my experience). I think the newest versions are better adapted to do that. It is an area of concern and a bit of frustration on my part at this time. I wish I had the latest version of erwin, either the 2020 R1 or R2 version, to see if I could be more successful in importing and exporting data models between erwin and Hive.

    erwin generally fails to successfully reverse engineer our Oracle Databases into erwin data models. The way that they are engineered on our side, the syntax is correct from an Oracle perspective, but it seems to be very difficult for erwin to interpret. What I end up doing is using Oracle Data Modeler to reverse engineer into the Oracle data model, then forward engineer the DDL into an Oracle syntax, and importing that DDL into erwin in order to successfully bring in most of the information from our physical data models. That is a bit of a challenge. 

    There are other characteristics of erwin, as far as interfacing directly with the databases, that we don't do. Historically, while erwin has existed, the problem is the people that I work with and who have done most of the data management and database creation are engineers. Very few of them have any understanding of data modeling tools and don't work conceptually from that perspective. They know how to write DDL syntax for whether it's SQL Server, Oracle, or Sybase, but they don't have much experience using a data modeling tool like erwin. They don't trust erwin nor would they trust any of its competitors. I trust erwin a lot more than our engineers do. The most that they trust the solution to do is to document and be able to see characteristics of the database, which are useful in terms of discussing the database from a conceptual perspective and with clients, rather than directly engineering the database via erwin. 

    erwin is more of a tool to document what exists, what potentially will exist, and create code that engineers can then harvest and manage/manipulate to their satisfaction. They can then use it to make changes directly to our databases. Currently, when the primary focus is on Hive databases or Hadoop environment, where there is no direct engineering at this point between erwin and those databases, any direct or indirect engineering at the moment is still with our Oracle Database.

    For how long have I used the solution?

    I have been using the solution on and off for 20 to 30 years.

    What do I think about the stability of the solution?

    It is pretty stable. Personally, I haven't run into any real glitches or problems with the output, the ability to import data when it does work correctly, the export/creation of DDL, or generation of reports.

    We are trying to upgrade, and this has been going on for several months now. We're trying to upgrade to the 2020 version; originally it was 2020 R1, but I think at this point people are talking about 2020 R2. I'm not part of our direct communications with erwin in regards to Data Modeler, but there are some issues that erwin is currently working on that matter to my company, and this has prevented us from upgrading immediately to the 2020 version.

    What do I think about the scalability of the solution?

    This gets down to how you do your data modeling. If you do your data modeling in a conceptually correct manner, scaling isn't an issue. If you don't do your data modeling very well, then you are creating unnecessary complexities. Things can get a bit awkward. This isn't an erwin issue, but more a consequence of who is using the product.

    In the area that I'm working right now, I'm the only user. Within the company, there are other people and areas using the solution probably far more intimately in regards to their databases. I really don't know the number of licenses out there.

    How are customer service and technical support?

    The problem is that our issues relate to interfacing erwin Data Modeler with the Hadoop Hive environments, and what I was trying to do was not fully supported by our version of erwin Data Modeler. People have certainly tried to help, but there's only so much they could tell me, so it's been difficult. I am hoping I can get back to people with better answers once the newest version of erwin is available to us.

    Which solution did I use previously and why did I switch?

    The people who were previously responsible for the database development were very good engineers who knew how to write SQL. They could program anything they wanted themselves. However, I don't think they really understood data modeling as such; they just wrote the code. Our code and models are still developing and do not necessarily conform to good data modeling practices.

    How was the initial setup?

    In the past, I was involved in the initial setup. In traditional environments, it sets up pretty easily. In my current environment, where I'm trying to get it as intimately integrated with our big data platforms as possible, I'm finding it quite frustrating. However, I'm using an older version and think that is probably a significant part of the problem.

    What was our ROI?

    In other environments where I've worked, the solution's ability to generate database code from a model for a wide array of data sources cut development time. In this environment, erwin is not very tightly integrated into the development cycle; it is used more for documentation purposes and for creating nascent code that may get implemented down the road. While it's not used that way at my current company, I think it would be better if it were, but there is a culture here that will probably prevent that from ever occurring.

    What's my experience with pricing, setup cost, and licensing?

    An issue right now would be that erwin doesn't have a freely available browser (that I am aware of) for people who are not data modelers or data engineers that a consumer could use to look at the data models and play with it. This would not be to make any changes, but just to visually look at what exists. There are other products out there which do have end user browsers available and allow them to access data models via the data modeling tool.

    Which other solutions did I evaluate?

    There is another tool now that people are using, although it is not really a data modeling tool; it is more a data model visualization tool: SchemaSpy. We don't do data modeling with that; you get a visualization of the existing physical database. But that's where the engineers live, and that's what they think is great. This is a cultural, conceptual issue, stemming from a lack of appreciation of what good data modeling tools do, and I can't see it changing under the current corporate organization.

    What other advice do I have?

    It is the only meaningful way to do any data modeling. It is impossible to conceptualize and document complex data environments and the integration between different data subject areas without it. You can write all the code or DDL you want, but it's absolutely impossible to maintain any sort of conceptual or logical integrity across a large, complex enterprise environment without using a tool like erwin.

    You want to look at what you are trying to accomplish with erwin before implementing it.

    • Does the product have the ability to support or accomplish that?
    • Based on the technologies that you have decided you want to use to manage your data, how intimately does it integrate with those technologies? 

    From my perspective of using the traditional relational databases, I think erwin probably works pretty well. 

    For the newer database technologies, such as the Hadoop environment databases, it's not clear to me how successful erwin is. However, I'm not talking from the perspective of somebody who has been aggressively using the latest version. I don't have access to it, so I'm afraid my concerns or issues may not be valid at this point. I will find out when we finally implement the latest erwin version.

    I would give the solution a seven or eight (out of 10).

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: I am a real user, and this review is based on my own experience and opinions.
    Data Modeler at a government with 10,001+ employees
    Real User
    Top 10 Leaderboard
    The data comes to life to where customers understand exactly what they're asking for
    Pros and Cons
    • "It's a safeguard for me because I'm always concerned that somebody is free handing it and will forget a key coming from the parent. The migrating keys are a great feature. Identifying relationships, non-identifying relationships, and being visually right there to understand the differences are great features. erwin is key to being able to visually understand whatever the customer is requesting. They'll give you words on a paper, but once they can actually view it as a picture, it really comes to life. The data comes to life to where they understand exactly what they're asking for."
    • "I'd really like to see the PDF function become available. It would make my life much easier than what it is at the moment because whenever I need to collaborate with people that do not have erwin, I have to go through the wonkiness of going to Word and then save it from Word into PDF. There's a lot of differences between erwin 4.4 and 2020."

    What is our primary use case?

    When I work from home, my use case for erwin is for when I get a request for a database upgrade. Usually, the request comes in with a whole bunch of tables and names so I'll go into the DM and I'll start building out what they're asking for. Once we actually get them to be able to view it and understand it, then we'll go back and forth with the developers and the requesters to make sure that it's exactly what they're looking for. We'll spend a few days making sure everything looks correct. Once that's finished, I'll send it out. 

    Unfortunately, I can't do a PDF straight from erwin so I'll copy everything into Word and then save my Word as a PDF. With that PDF, I'll be able to send it off to all the stakeholders, not just the developers and the requesters, so that everybody can see it, even the ones that don't have erwin itself.

    My office use case is pretty much the same, except that at the office we add in Model Mart. We have our entire network, all the databases, and everything in Model Mart, and it's over 1,500 different tables, relationships, attributes, and things like that. It's a really large model. We break that model down into individual subject areas and work through those. For any new requests, we'll build them in Data Modeler and go back and forth with the requesters, making sure everything looks like what they're expecting. They'll usually just send us a spreadsheet of names and data types, and then we build from there.
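
    As a rough illustration of that workflow, here is a minimal sketch; the table and column names are hypothetical, not from an actual request. A few spreadsheet rows of names and data types end up as a modeled table that erwin can forward-engineer into DDL along these lines:

        -- Hypothetical spreadsheet rows from a requester:
        --   ORDER_REQUEST | REQUEST_ID   | NUMBER(10)   | PK
        --   ORDER_REQUEST | REQUEST_DATE | DATE         |
        --   ORDER_REQUEST | STATUS_CODE  | VARCHAR2(10) |
        -- Modeled and forward-engineered, they become:
        CREATE TABLE ORDER_REQUEST (
            REQUEST_ID    NUMBER(10)    NOT NULL,
            REQUEST_DATE  DATE,
            STATUS_CODE   VARCHAR2(10),
            CONSTRAINT PK_ORDER_REQUEST PRIMARY KEY (REQUEST_ID)
        );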

    How has it helped my organization?

    erwin brings data to life. We're currently working with a requester who provided us with a spreadsheet of their ideas for tables and attributes, with the metadata associated with each, and then a rudimentary diagram with tables and keys. I was able to put it into erwin along with the metadata they were asking for, and it really brought questions to life. They said, "We didn't realize the relationships were going to bring in these extra keys," and they hadn't realized there were a lot of other extra pieces coming in as well. Once we did that, we were able to show them exactly what they were asking for, and it prompted much more conversation between us.

    We don't use DM's modeling support for Snowflake cloud yet. I am interested in cloud technology and I just came across that support that erwin has. It made me even more interested in cloud technology. 

    Its ability to generate database code from a model for a wide array of data sources helps another office in my company that uses it quite a bit. 

    What is most valuable?

    The automatic build from the logical to the physical model is a really nice feature. I like the fact that it will bring the keys down from one table to the next, from a parent to a child table. Those two things make erwin a very easy-to-use product. 

    It's a safeguard for me because I'm always concerned that somebody is freehanding it and will forget a key coming from the parent. The migrating keys are a great feature. Identifying relationships, non-identifying relationships, and seeing the differences visually right there are great features.
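
    For anyone newer to those terms, here is a minimal sketch of what migrating keys look like in the resulting physical schema; the tables are hypothetical, purely for illustration:

        -- Identifying relationship: the migrated parent key becomes
        -- part of the child's primary key.
        CREATE TABLE ORDERS (
            ORDER_ID  NUMBER(10) NOT NULL,
            CONSTRAINT PK_ORDERS PRIMARY KEY (ORDER_ID)
        );

        CREATE TABLE ORDER_LINE (
            ORDER_ID  NUMBER(10) NOT NULL,   -- migrated from ORDERS
            LINE_NO   NUMBER(5)  NOT NULL,
            CONSTRAINT PK_ORDER_LINE PRIMARY KEY (ORDER_ID, LINE_NO),
            CONSTRAINT FK_LINE_ORDER FOREIGN KEY (ORDER_ID)
                REFERENCES ORDERS (ORDER_ID)
        );

        -- Non-identifying relationship: the migrated key arrives as a
        -- plain foreign-key attribute, outside the child's primary key.
        CREATE TABLE SHIPMENT (
            SHIPMENT_ID  NUMBER(10) NOT NULL,
            ORDER_ID     NUMBER(10),          -- migrated from ORDERS
            CONSTRAINT PK_SHIPMENT PRIMARY KEY (SHIPMENT_ID),
            CONSTRAINT FK_SHIP_ORDER FOREIGN KEY (ORDER_ID)
                REFERENCES ORDERS (ORDER_ID)
        );

    If someone freehands the child table and forgets the migrated ORDER_ID, the foreign-key constraint simply can't be declared, which is exactly the mistake the tool safeguards against.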

    erwin is key to being able to visually understand whatever the customer is requesting. They'll give you words on paper, but once they can actually view it as a picture, it really comes to life. The data comes to life to where they understand exactly what they're asking for.

    What needs improvement?

    I'd really like to see the PDF function become available. It would make my life much easier because whenever I need to collaborate with people who do not have erwin, I have to go through the wonkiness of exporting to Word and then saving from Word into PDF. There are a lot of differences between erwin 4.4 and 2020. It's a learning curve for me. It could be easier to use; it's close to a Windows/Microsoft type of application, but not quite. Once I've used it enough and learned it, I'll know where all the pieces are.

    For how long have I used the solution?

    I've been a data modeler in my office for six years, so I've been using erwin for six years. My office has been using erwin since the beginning of time. I'm not exactly sure when they started, but the office has been around for 20 years, so they've probably been using it since erwin started.

    It's on our secret network, and I believe they've been going back and forth quite a bit with erwin's tech teams to get it working, because our workstations are virtual workstations and there were some issues with the licensing and the license server. I've been watching that from the periphery without really getting into the weeds with them, so I'm not sure exactly what they're doing.

    What do I think about the stability of the solution?

    I've only had it crash on me once; I can't remember what I was doing or how it crashed. It was one of those inconvenient times, so I just started again. I don't think an auto-save was done. That happened three weeks ago.

    What do I think about the scalability of the solution?

    I use it at home every day and there are days where I've used it almost an entire eight hour day. I'm using it quite heavily right now.

    How are customer service and technical support?

    The only time I've had to use erwin technical support was when I requested an extension on my trial license. They were really quick and good about it.

    How was the initial setup?

    The initial setup was straightforward. I was able to install it at home without a problem whatsoever, and within a few seconds I was able to figure out how to start building a table. I think my colleagues who are going into work might have a little bit of a different answer because of issues with the service, license keys, and what have you.

    The deployment took five to ten minutes; there wasn't a lot of customization necessary. It's been a couple of months now since I started, and I can see from the tab I'm on that I just need to click on the table, click the area there, and start building tables. I've also had prior experience with it, which makes it easier as well. It's intuitive.

    At the office, there's quite a bit of strategy on how they needed to deploy it and how they needed to have it totally set up in the virtual world. They were upgrading from an older version.

    At our office, two or three different people were truly involved, but we had one main person going back and forth with erwin to get help setting it up. It took a couple of weeks, if not longer, to actually get it set up and working correctly.

    We bought a total of 10 licenses, although I'm not sure of the exact number. It's fewer than 25.

    What was our ROI?

    I would definitely say that it's a time-saver once you learn how to use the application. It takes a little while to teach people how to use it, just like any other application, but the time saved afterward is invaluable. Being able to show a person the end result, exactly what we're talking about, is really invaluable as well. I'm sure the deployment team would say the same thing about being able to build the database off of it.

    The accuracy and speed in transforming complex designs into well-aligned data sources make the cost of the tool worth it, although that part isn't something I do myself.

    It saves us a couple of hours whenever we actually build something. It's not something my office does every day, but when we do, I couldn't imagine building tables or a diagram with any of the other tools currently in the office. It's impossible to do in PowerPoint or Word. 

    What's my experience with pricing, setup cost, and licensing?

    I don't think that the pricing for my office is horrible. However, from my home, there's absolutely no way I could afford erwin on my own as far as doing my own work.

    There have been discussions between my office and the company I actually work for about who would pay the bill. I'm the person stuck in the middle saying that I can't do my work without it. Luckily, I've been able to get one or two extensions on my free trial license from erwin, but I'm afraid I won't be able to get my company to pay for it, and fairly soon the trial license will expire on me.

    I decided to build the physical model only, but later on that kind of bit me, so from now on I will build the logical model first and then the physical. It would be nice to be able to build out my own set of tables, and maybe a Model Mart type of situation, but I don't see myself being able to afford a copy at home. I won't be able to keep a trial copy forever, or even until COVID is over.

    Which other solutions did I evaluate?

    When COVID started, I began looking at home versions of other freeware because I had time to do some research. I found that most of the freeware wasn't really free, and it was still kind of clunky. One of the applications I tried didn't automatically bring the keys down, and for me that was a killer right there; I would not suggest that application to anyone. Based on the trial copies of the other applications I used, that's where erwin really comes out ahead of the others.

    What other advice do I have?

    The biggest lesson I have learned from erwin is the old cliche that a picture is worth a thousand words; that truly describes erwin. When a person asks for a set of tables and then actually sees the diagram visually, it really assists in any meeting you have. It is key to any meeting.

    I would rate Data Modeler an eight out of ten. The reason for this rating is that I created a couple of dumb attributes and it took me forever to figure out how to truly delete them. There was a parent-child relationship; I deleted the parent and did not answer the question in the dialog box that popped up correctly, so I had an attribute hanging out in a table, and it took me forever to find the dangling relationship. Because it took me such a long time to find that, I knocked the rating down.

    I'm quite happy with the modeling tool. It does just about everything that I need it to do. I can't really think of what it doesn't do that I would need other than the PDF. I'm really happy with it.

    Disclosure: I am a real user, and this review is based on my own experience and opinions.
    EDW Architect/ Data Modeler at Royal Bank of Canada
    Real User
    Top 10
    We can input large files in one shot using the Bulk Editor feature
    Pros and Cons
    • "The solution’s code generation ensures accurate engineering of data sources, as there is no development time. Code doesn't even have to be reviewed. We have been using this solution for so long and all the code which has been generated is accurate with the requirements. Once we generate the DDLs out of the erwin tools, the development team does a quick review of the script line by line. They will just be running the script on the database and looking into other requirements, such as the index. So, there is less effort from development side to create tables or build a database."
    • "Some Source official systems give us DDLs to work with and they have contents not required to be part of the DDL before we reverse engineer in the erwin DM. Therefore, we manually make changes to those scripts and edit them, then reverse-engineer within the tool. So, it does take some time to edit these DDL scripts generated by the source operational systems. What I would suggest: It would be helpful if there were a place within the erwin tool to import the file and automatically eliminate all the unnecessary lines of code, and just have the clean code built-in to generate the table/data model."

    What is our primary use case?

    We work on different platforms like SQL Server, Oracle, DB2, Teradata, and NoSQL. When we take in requirements, they come through an Excel spreadsheet, a mapping document, which contains information about the source and target, their mappings, and the transformation rules. We analyze the requirements and start building the conceptual model and then the logical model. Once we have these Data Models built in the erwin Data Modeler tool, we generate the PDF Data Model diagrams and take them to the team (DBAs, BSAs, QA, and others) to explain the model diagram. Once everything is reviewed, we go on to discuss the physical Data Model. This is one aspect of the requirements, from the data warehouse perspective. 

    The other aspect of the requirements comes from the operational systems, where application requirements might arrive as DDL files with a .sql extension; we reverse-engineer those files and have the models generated within erwin Data Modeler. Some of them we keep as they are, following the same templates. For others, once we reverse-engineer and have the Model within erwin, we change entity names and table names and capture metadata according to RBC standards. We have standards defined internally, and we apply these standards to the Data Models.

    How has it helped my organization?

    There are different access-level permissions given to different users: Data Modelers, Data Architects, Database Administrators, etc. These permissions have read, write, and delete options. Some team members only have read-only access to the Data Models, while others have more. This helps us with security and with maintaining the Data Models.

    The solution’s ability to generate database code from a model for a wide array of data sources cuts development time only in some scenarios for us, where we have the data model built in the erwin tool. E.g., I can generate a DDL for the DBAs to create tables on the database. In other scenarios, the DBAs access the erwin tool with read-only access and fetch the DDLs from the models we created. Once the DDL is generated from the erwin tool, it is all about running the script on the database to create the tables and relationships. There are some other scenarios where we might add an index or a default value based on the requirements. 90 percent of the work is being done by the tool.
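
    As a rough sketch of those last-mile additions, the supplements on top of the generated script tend to look like this; the table and column names here are hypothetical, written in Oracle-style syntax:

        -- Hypothetical additions made on top of the generated DDL:
        -- a default value for a status column and an index for a
        -- known query path.
        ALTER TABLE CUSTOMER_ACCOUNT
            MODIFY (STATUS_CODE DEFAULT 'ACTIVE');

        CREATE INDEX IX_CUST_ACCT_OPEN_DATE
            ON CUSTOMER_ACCOUNT (OPEN_DATE);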

    The solution’s code generation ensures accurate engineering of data sources, with almost no development time; the code barely has to be reviewed. We have been using this solution for so long, and all the code it has generated has been accurate to the requirements. Once we generate the DDLs out of erwin, the development team does a quick line-by-line review of the script, then just runs it on the database and looks into other requirements, such as indexes. So, there is less effort from the development side to create tables or build a database.

    What is most valuable?

    We have a very large number of operational and Data Mart Data Models inside the erwin tool, with a huge volume of metadata captured. Therefore, when we are working on a very large requirement, there is an option called Bulk Editor where we can input large files into erwin in one shot to build the Data Model in much less time. All the built-in features are easy to use.

    We make use of the solution’s configurable workspace and modeling canvas. All the available features help us build our Data Model: showing the entities and the relationships between them, defining the data types, and adding descriptions of the entities and attributes. With all of this, we can export a PDF version of the Data Model diagram and send it across for any team to review.

    Not to forget the version-saving feature: every time we make changes to the Data Models by adding, deleting, or modifying and then save, the tool automatically creates a new Data Model version, so we never lose any work. We can go back to a previous version, reverse all the changes, and make it the current version if needed.



    What needs improvement?

    Some source operational systems give us DDLs to work with that contain content which shouldn't be part of the DDL when we reverse-engineer it in erwin DM. Therefore, we manually edit those scripts before reverse-engineering them within the tool, so it does take some time to edit the DDL scripts generated by the source operational systems. What I would suggest: it would be helpful if there were a place within the erwin tool to import the file and automatically eliminate all the unnecessary lines of code, leaving just the clean code needed to generate the table/data model.
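
    To make that concrete, here is a hedged before-and-after sketch of the kind of physical-storage noise we strip out by hand; the table is hypothetical and the clauses are Oracle-style, purely for illustration:

        -- As received from a source operational system:
        CREATE TABLE TXN_EVENT (
            EVENT_ID  NUMBER(12)  NOT NULL,
            EVENT_TS  TIMESTAMP   NOT NULL
        )
        PCTFREE 10 PCTUSED 40 INITRANS 1
        STORAGE (INITIAL 64K NEXT 1M MAXEXTENTS UNLIMITED)
        TABLESPACE TXN_DATA
        LOGGING NOCOMPRESS;

        -- After the manual cleanup, ready to reverse-engineer:
        CREATE TABLE TXN_EVENT (
            EVENT_ID  NUMBER(12)  NOT NULL,
            EVENT_TS  TIMESTAMP   NOT NULL
        );

    Only the second form carries modeling content; everything stripped out is physical deployment detail that has no place in the data model.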

    For how long have I used the solution?

    I have been using this tool for five years. I have used this tool at my previous companies as well as in my current company.

    What do I think about the stability of the solution?

    One recent scenario we came across in our day-to-day activities, where the Data Models are growing in very large numbers, was that, for some reason, performance was a bit low. It was suggested that we upgrade to the newer version, erwin Data Modeler 2019 R1, so we are already in the process of moving to it. Once we migrate, we will do all the user testing to see how performance has improved over the previous version. If there are still any performance issues or other feature errors, we will get back to the support team.

    So far, whenever we have moved to a newer version, there has always been a positive result, and we keep that version until we see a newer one. Every six months or once a year, we get in touch with the erwin support team to ask whether any new features or enhancements have been added to the newest version, and whether it is the right time to move to it or to stick with our current version. They make suggestions based on our use cases and requirements.

    For deployment and maintenance of this solution, five to ten people are needed, e.g., two people from our team, two DBAs, and a couple of people from the server team and other teams.

    What do I think about the scalability of the solution?

    We have a huge volume of data so far: a very large number of Data Models across operational systems and Data Marts, and there is still room for extension and expansion. 

    Within my current company, this product has been accessed by Data Modelers, Database Administrators, Data Architects, and Data Scientists. 50 to 100 people have access to this solution.

    How are customer service and technical support?

    Once a year or every two years, we upgrade to the latest version. If we are looking for any new features or enhancements for new use cases or requirements, we get in touch with the erwin support team. They are very helpful in understanding our needs and providing the best possible suggestions and solutions, with very impressive SLAs. They really guide us and give us a solution when we have to upgrade versions.

    Which solution did I use previously and why did I switch?

    I have not used another solution with my current company. While I have used other solutions before, the majority of the time, I have been with erwin Data Modeler.

    How was the initial setup?

    Whenever there is a new release, we do the testing and installation from scratch. The initial setup is straightforward. The product downloads onto your system; once you double-click it, it gives you basic instructions, like any other product. You just have to click "Next", as everything is configured already. 

    Some things might be company-specific requirements; for these, you have to make sure you select the right options. Apart from that, everything is straightforward until you get to the last page, where you enter your server details and select the Windows credentials to log in, which is company-specific.

    Once it is in the production environment, read, write, and delete privileges for designing the Data Models are given only to Data Modelers.

    What about the implementation team?

    This is implemented in-house: the software is packaged by the Application Support team, who deploy it to the production environment through our internal Software Center application. Downloading and installing this solution takes about 40 to 50 minutes.

    What was our ROI?

    We haven't moved away from this product for a very long time. I am sure the company has seen benefits and profits out of the solution, saving a lot of work effort and resources.

    The accuracy and speed of the solution in transforming complex designs into well-aligned data sources makes the cost of the tool definitely worth it.

    What's my experience with pricing, setup cost, and licensing?

    This company had bought the license for three years, and it's not an individual license. While you can buy a license for each individual, that would be very expensive. There is something called concurrent licenses where you can purchase licenses in bulk and 15 to 20 people can access the license and model. Concurrent licenses are scalable to the number of users and are proportional to the cost. 

    Which other solutions did I evaluate?

    When I joined the company, the product was already here. Our internal team holds a meeting to discuss new releases of this product. When we talk to the erwin support team, we ask, "What are the newest features? Will these be beneficial for our company based on our requirements and use cases?" Once everyone has given their opinion, we move forward with upgrading to the newer version, considering performance, new features, and enhancements.

    What other advice do I have?

    For our use cases and requirements, we are very happy with the erwin product. If we come across any issues or have any doubts about the tool, we get really good support from the erwin support team.

    It definitely has a positive impact on overall solutioning because of how it helps us design and capture data. This is definitely something any company involved with data should look into, especially when there are many database platforms and huge volumes of data to deal with. It is definitely scalable as well; we are one of the biggest financial institutions and have very massive Data Models inside this tool.

    The biggest lesson learnt from using this solution is how we can capture metadata along with the data structure of the database models. Sometimes, when we go to the business to show the designs of the conceptual/logical model, they want to understand what each table and field is about. So, we have the option to go into each entity/attribute, add the respective information, and show them the metadata captured for those entities and attributes.

    I would rate the newest release as 9.5 out of 10. When our requirements and use cases change, we move the solution to a newer version and everything works fine. We are happy with that. However, as time goes on, over a year or two, we might come across situations where we look for better enhancements to features or newer features.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.