
erwin Data Modeler (DM) Overview

erwin Data Modeler (DM) is the #1 ranked solution in top Database Design tools, the #2 ranked solution in top Architecture Management tools, and the #6 ranked solution in Business Process Design tools. PeerSpot users give erwin Data Modeler (DM) an average rating of 8 out of 10. erwin Data Modeler (DM) is most commonly compared to SAP PowerDesigner. erwin Data Modeler (DM) is popular among the large enterprise segment, accounting for 74% of users researching this solution on PeerSpot. The top industry researching this solution is computer software; professionals from computer software companies account for 21% of all views.
erwin Data Modeler (DM) Buyer's Guide

Download the erwin Data Modeler (DM) Buyer's Guide including reviews and more. Updated: July 2022

What is erwin Data Modeler (DM)?

erwin pioneered data modeling, and erwin Data Modeler (erwin DM) remains trusted, award-winning software for data modeling and database design, automating complex and time-consuming tasks. Use it to discover and document any data from anywhere for consistency, clarity and artifact reuse across large-scale data integration, master data management, metadata management, Big Data, business intelligence and analytics initiatives – all while supporting data governance and intelligence efforts.

erwin Data Modeler (DM) was previously known as erwin DM.

erwin Data Modeler (DM) Customers

Premera, American Honda Motors, Aetna, Kaiser Permanente, Delta Dental of California, Cigna, Staples

erwin Data Modeler (DM) Pricing Advice

What users are saying about erwin Data Modeler (DM) pricing:
  • "I don't specifically know what we're paying now. About three years ago, in another organization, I have this memory of 6,000 AUD a seat or something like that, but I am not sure. In the mid-2000s, it was something like 1,200 AUD a seat. I get the impression that there was a price jump when it was spun off from CA as a separate company, which is understandable, but it could sometimes be a barrier in some organizations picking it up. I haven't talked to erwin people yet, but I'm going to suggest to them that they could perhaps think of having an entry-level product that is priced a bit lower, and then, you can buy the extra suite."
  • "I wish it wasn't so expensive. I would love to personally buy a copy of my own and have it at home, because the next job that I'm looking at is probably project management and I might not have access to the tool. I would like to keep my ability to use the tool. Therefore, they should probably have a pricing for people like me who want to just use the solution as an independent consultant, trying to get started. $3,000 is a big hit."
  • "An issue right now would be that erwin doesn't have a freely available browser (that I am aware of) for people who are not data modelers or data engineers that a consumer could use to look at the data models and play with it. This would not be to make any changes, but just to visually look at what exists. There are other products out there which do have end user browsers available and allow them to access data models via the data modeling tool."
  • "There are no costs in addition to the standard licensing fees."
  • "The price of erwin Data Modeler is very expensive, in particular for this part of the world."

erwin Data Modeler (DM) Reviews

    David Jaques-Watson - PeerSpot reviewer
    Senior Consultant at a tech services company with 11-50 employees
    Real User
    Top 10 Leaderboard
    Improves accuracy for generating target databases, allows us to pull metadata from a database, and makes it easy to display information and models
    Pros and Cons
    • "Being able to point it to a database and then pull the metadata is a valuable feature. Another valuable feature is being able to rearrange the model so that we can display it to users. We are able to divide the information into subject areas, and we can divide the data landscape into smaller chunks, which makes it easier to understand. If you had 14 subject areas, 1,000 entities, and 6,000 columns, you can't quite understand it all at once. So, being able to have the same underlying model but only display portions of it at a time is extremely useful."
    • "I still use Visio for conceptual modeling, and that's mainly because it is easier to change things, and you can relax some of the rules. DM's eventual target is a database, which means you actually have to dot all the Is and cross all the Ts, but in a conceptual model, you don't often know what you're working with. So, that's probably a constraint with erwin. They have made it a lot easier, and they've done a lot, but there is probably still room for improvement in terms of the ease of presentation back to the business. I'm comparing it with something like Visio where you can change colors on a box, change the text color and that sort of stuff, and change the lines. Such things are a whole lot easier in Visio, but once you get a theme organized in erwin, you can apply that theme to all of the objects. So, it becomes easier, but you do have to set up that theme."

    What is our primary use case?

    In one of the companies, we used it as an information tool. We created a logical model so that the business would know what was in the offices down to the warehouse. The current use case is also the same. We have some places for information, so we can do a logical data model for them, but, usually, it would go towards building an actual database, which also involves reverse engineering of an existing one because people don't know what's in there. It is currently on-prem, but we still have a separate server.

    How has it helped my organization?

    We want to bring different erwin components together and tell a business user story. Having all of it on one platform to be able to tell one story makes it not as fragmented as the components have been in the past.

    In my previous company, when we had 1,000 tables, 6,000 columns, and 14 subject areas, trying to explain it to people in the organization was difficult. Without the tool, it would have been impossible. With the tool, it was a lot easier because you could show a steward how this is his or her domain. For each steward, you could say, "Well, this is your domain over here." Once they had that, they could understand what you were talking about. So, it improved communication. We had a point where two stewards were looking at the models, and one of them said, "I think that one that you've got over there is actually mine." The other one said, "I think you're right." So, we actually moved an entity from one subject area to another because now they had the ability to see what was in their subject area. They could go and see what wasn't theirs and should be someone else's. If we didn't have the tool, we wouldn't have that visibility and wouldn't have been able to recognize that sort of situation.

    Its ability to generate database code from a model for a wide array of data sources cuts development time. You don't have to re-key things. You put in the information at one spot, and it flows out from there. There are so many parameters you can put on the physical side. You can put in your indexes, and you can put in expected size changes. You can store all sorts of information within the model itself. It is a really good repository of all that sort of information, and then you just push a button, and it generates the other end. It works really well. In terms of time-saving, if you had to write it all out by hand, it would take weeks. It would probably take three or four times longer without the tool.

    It certainly improves accuracy for the generation of target databases because you're only putting information in one spot. You don't have to retype it. For example, I saw the words "conceptual model" misspelled today. If you have to re-key something, no matter how careful you are, you're going to misspell things, which would cause problems down the track, whereas if you make a mistake in DM, there is only one place you have to go and fix it, and then you would regenerate the downstream stuff. This means that you don't have to touch anything physical. You generate it, and then you can use it.

    What is most valuable?

    Being able to point it to a database and then pull the metadata is a valuable feature. Another valuable feature is being able to rearrange the model so that we can display it to users. We are able to divide the information into subject areas, and we can divide the data landscape into smaller chunks, which makes it easier to understand. If you had 14 subject areas, 1,000 entities, and 6,000 columns, you can't quite understand it all at once. So, being able to have the same underlying model but only display portions of it at a time is extremely useful.

    I am currently trying to compare and synchronize data sources with data models, and it is pretty good. It shows you all the differences between the two systems. After that, it is a matter of what you want to do with them. It is certainly helpful for bringing models in and being able to compare. At the moment, I'm comparing something that's in a database with something that was in a DDL statement. These are two different sets of sources, and I can bring different sources together and compare them in the one model, which is really helpful.

    What needs improvement?

    I still use Visio for conceptual modeling, and that's mainly because it is easier to change things, and you can relax some of the rules. DM's eventual target is a database, which means you actually have to dot all the Is and cross all the Ts, but in a conceptual model, you don't often know what you're working with. So, that's probably a constraint with erwin. They have made it a lot easier, and they've done a lot, but there is probably still room for improvement in terms of the ease of presentation back to the business. I'm comparing it with something like Visio where you can change colors on a box, change the text color and that sort of stuff, and change the lines. Such things are a whole lot easier in Visio, but once you get a theme organized in erwin, you can apply that theme to all of the objects. So, it becomes easier, but you do have to set up that theme.

    I think they've got three to four initial themes. There is a default theme, and then there are two or three others that you can pick from. So, having more color themes would help. In Visio, you have a series of themes where someone who knows about color has actually matched the colors to each other. So, if you use the colors in the theme, they will complement each other. So, erwin should provide a couple more themes.

    They could perhaps think of having an entry-level product that is priced a bit lower. For extra features, the users can pay more.

    For how long have I used the solution?

    I have been using it at least since 2003. I have used it at multiple organizations.

    What do I think about the stability of the solution?

    It has always been really stable in the different organizations that I've used it in. It has always been a pretty good product.

    What do I think about the scalability of the solution?

    It works fine with the number of people who have been using the product. We're talking about 10 to 12 people, not thousands of people. I haven't ever been in an organization where thousands of people even needed to get to the product. Probably the biggest drawback in scalability is the cost per seat rather than the actual product. The product works fine.

    Our current organization has probably about 5 to 10 people using it. We're a consultancy, so we're using it in various roles. A lot of it is to do with understanding. As consultants, we try to understand what a client has in the organization and what sort of data they have to make sure there is actually data in the system that can answer their business questions. So, that's the sort of thing we use it for. We can turn around and give them designs. We can show what it is, and then we can turn around and make it what it would be.

    It is used by analysts and developers. They are not developing software. They are probably developing the database, but then other people would develop the software. I've used it on all the projects I've been on so far. I've been with this company for a short time, and it has come into play for pretty much all of the projects that I've been on.

    We want to use it more extensively. We want to use the erwin suite. We've got the modeler, but we also want to use their BI tool. We would like to evolve and come up with a story that links all of them together. We have only just got the BI suite installed. We're starting to play around with it and see what we can do with it. We're doing some training on it at the moment. In a previous company, somebody from erwin came to show it to us, and it was reasonably new at that point. That was last year. It is a reasonably new product, so getting the components to talk to each other has also been fairly new. erwin has only done it in the last couple of years.

    How are customer service and support?

    I haven't had dealings with them, but the dealings I've had with erwin as a company have always been really good. So, I would rate them a nine or 10 out of 10.

    Which solution did I use previously and why did I switch?

    I use Visio on the conceptual side. We've got Informatica, and I think it has got a modeling component in there. We try to get a range of products because we're doing consulting in various organizations, and they have got various tools. Usually, it depends on what a client has already installed. Sometimes, it also depends on their budget. Something like Informatica is usually in the top-right corner of the Gartner Magic Quadrant, but it could also be overkill for smaller organizations because the benefit may not be there. So, a lot of the time, it is horses for courses. You have to sort of tailor any solution to meet a client's needs.

    How was the initial setup?

    I haven't ever really installed erwin. One of the other guys has done that. Most of the places had it installed already. Usually, the complexity depends on how the organization does its software deployment. You have to go and request the software, and then somebody has to give you the package. Once you get the package, it is pretty straightforward. It is usually less of a problem on erwin's side and more of an issue with how an organization deploys any erwin software, but once you deploy it, it works fine.

    Some places that I've worked with were very strict about doing testing on COTS products to make sure that there are no viruses and also to make sure that they play nicely with the rest of the system. So, those sorts of organizations may take longer in terms of testing. You put it on a test machine first and make sure it is not going to kill anything. They might have to repackage some stuff before they put it out to the network. To deploy a vanilla installation, I would think that it would only take a couple of hours.

    In terms of maintenance, at the moment, I think we've got one person. The main thing is deploying new versions. You've got a server stood up, and you have to put the software out there. I don't know if there is anything else beyond that.

    What was our ROI?

    We haven't done an ROI for the current version. When you look at the total cost of creating or understanding what you've currently got through reverse engineering, and you look at the total cost of creating new products and new databases and maintaining them over time, and then you put that into the return on investment model, it is well worth it. The accuracy and speed of the solution in transforming complex designs into well-aligned data sources make the cost of the tool worth it. If you didn't have the tool and a single developer or a single modeler was trying to do the same thing, the speed would be three or four times slower. If you multiply that by the cost of that person and then you also consider the cost of the other people who are waiting for that person to create a database design, it multiplies out. So, it is well worth it.

    What's my experience with pricing, setup cost, and licensing?

    It has increased in price a fair amount over the years. It has always been expensive because it is a comprehensive product, and presumably, they have to do a tremendous amount of testing to make sure that everything works. It has always been dear because, usually, only a very specific target audience of data architects needs a modeler, and not everyone in the organization would need a copy of it. Only people who are actually working in the database space need it. So, it has always been a very specialized piece of software, and it has been priced accordingly.

    I don't specifically know what we're paying now. About three years ago, in another organization, I have this memory of 6,000 AUD a seat or something like that, but I am not sure. In the mid-2000s, it was something like 1,200 AUD a seat. I get the impression that there was a price jump when it was spun off from CA as a separate company, which is understandable, but it could sometimes be a barrier to some organizations picking it up.

    I haven't talked to erwin people yet, but I'm going to suggest to them that they could perhaps think of having an entry-level product that is priced a bit lower, and then you can buy the extra suite. That's what Microsoft does. They package a few things so that you have something, but if you want the extra stuff that has enterprise features, such as the components talking to each other and great bits and pieces, you have to pay more.

    I don't think there are any additional costs. It is per product, and there are different license levels.

    What other advice do I have?

    Oracle Data Modeler, which is free, is one of the competitors that erwin has. You can't argue with the price point on that one, but erwin is much more comprehensive and easier to use. It is easier to display information and models to business people than something like Oracle Data Modeler, which does the job, but erwin does it a lot better. So, my advice would be that if you can afford it, get it.

    Its visual data models have certainly improved over time in terms of overcoming data source complexity and enabling understanding and collaboration around maintenance and usage. It was originally designed as a tool to build databases with, and it retains a lot of that. It still looks like that in a lot of cases, but it has also been made more business-friendly with a sort of new front end. It used to be all or nothing: when you wanted to show somebody just the entity names or just the entity descriptions, you had to switch all of the entities on your diagram just to show names. Now, you can show some of them. You can shrink down some of them, and you can keep some of them expanded. So, it has become a more useful information-sharing tool over time. It is extremely helpful.

    In my previous company, it was the enterprise data model, and you could paper a room with it if you printed the information out. To present that information to people, we had to chunk it down into subject areas. We had to present smaller amounts of information. Because it was linked to the underlying system, we could reuse the information that we had in one model in other models. The biggest lesson was to chunk the information down and present it in a digestible form rather than trying to show the entire thing because otherwise, people would run away screaming.

    One of the places didn't have a modeling tool, and they were trying to do the documentation using Confluence. It was just a nightmare trying to keep it maintained with different developers using different tables and then needing to throw something into one and adding something into another one. If they had one tool where they could put it all in one place, it would have been so much easier than the mess they had.

    I would rate erwin Data Modeler a nine out of 10.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: My company has a business relationship with this vendor other than being a customer: Partner
    Pam Rivera - PeerSpot reviewer
    Independent Consultant at a tech consulting company with 1-10 employees
    Real User
    Top 5 Leaderboard
    Complete Compare is good for double checking your work and ensuring that your model reflects the database design
    Pros and Cons
    • "The generation of DDL saved us having to write the steps by hand. You still had to go in and make some minor modifications to make it deployable to the database system. However, for the data lineage, it is very valuable for tracing our use of data, especially personal confidential data through different systems."
    • "The report generation has room for improvement. I think it was version 8 where you had to use Crystal Reports, and it was so painful that the company I was with just stayed on version 7 until version 9 came out and they restored the data browser. That's better than it was, but it's still a little cumbersome. For example, you run it in erwin, then export it out to Excel, and then you have to do a lot of cosmetic modification. If you discover that you missed a column, then you would have to rerun the whole thing. Sometimes what you would do is just go ahead and fix it in the report, then you have to remember to go back and fix it in the model. Therefore, I think the report generation still could use some work."

    What is our primary use case?

    The use case was normally to update data model designs for transaction processing systems and data warehouse systems. Part of our group also was doing data deployment, though I personally didn't do it. The work I did was mostly for the online transaction systems and for external file designs.

    I didn't use it for data sources. I used the solution for generation of code for the target in the database. Therefore, I went from the model to the database by generating the DDL code out of erwin.

    We had it on-premises. There was a local SQL database server, and we each had a client that we installed on our machines.

    How has it helped my organization?

    At one of my previous jobs, we had a lot of disparate databases that people had built on their PCs, which were under their desks. We were under a mandate to bring all of that into a controlled environment that our DBAs could monitor, tune, etc. Therefore, this was a big improvement. I put the data that was in whatever source into an Excel spreadsheet, converted it into a SQL file (putting in the commas), and then I could reverse engineer that SQL into a data model. That saved us a tremendous amount of time compared to building the data model from scratch.
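
    The spreadsheet-to-DDL step described here can be scripted. Below is a minimal sketch of that intermediate step in Python; the CSV layout, file name, and column headings are hypothetical examples, and the output is plain DDL that a modeling tool such as erwin DM could then reverse engineer.

```python
# Sketch: turn a spreadsheet of column definitions into CREATE TABLE DDL
# that a data modeling tool could reverse engineer. Assumed input layout:
# one row per column with table, column, datatype, and nullable fields.
import csv
from collections import defaultdict

def spreadsheet_to_ddl(path: str) -> str:
    """Read rows of (table, column, datatype, nullable) and emit DDL text."""
    tables = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            null_sql = "NULL" if row["nullable"].upper() == "Y" else "NOT NULL"
            tables[row["table"]].append(f'    {row["column"]} {row["datatype"]} {null_sql}')
    return "\n\n".join(
        f"CREATE TABLE {table} (\n" + ",\n".join(cols) + "\n);"
        for table, cols in tables.items()
    )

if __name__ == "__main__":
    print(spreadsheet_to_ddl("desk_databases.csv"))  # hypothetical input file
```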

    I educated a number of my colleagues who were in data architecture and writing the DDL by hand. I showed them, "You do it this way from the model." That way, you never have to worry about introducing errors or having a disconnect between what is in the model and the database. I was able to get management support for that. We enhanced the accuracy of our data models.

    What is most valuable?

    I do like the whole idea of being able to identify your business rules. In my last position, I got acquainted with using it for data lineage, which is so important now with the current regulatory environment because there are so many laws or regulations that need to be adhered to. 

    If you're able to show where the data came from, then you know the source. For example, I was able to use user-defined properties (UDPs) on one job where we were bringing in the data from external XML files. I would put it at the UDP level, where the data came from. On another job, we upgraded a homegrown database that didn't meet our standards, so we changed the naming standards. I put in "formerly known as" UDPs so I could run reports, because our folks in MIS who were running the reports were more familiar with the old names than the new names. Therefore, I could run the report so they could see, "This is where you find what you used to call X, and it is now called Y." That helped.

    The generation of DDL saved us having to write the steps by hand. You still had to go in and make some minor modifications to make it deployable to the database system. However, for the data lineage, it is very valuable for tracing our use of data, especially personal confidential data through different systems.

    Complete Compare is good for double checking your work, how your model compares with prior versions, and making sure that your model reflects the database design. At my job before my last one, every now and then the DBAs would go in and make updates to correct a production problem, and sometimes they would forget to let us know so we could update the model. Therefore, periodically, we would go in and compare the model to the database to ensure that there weren't any new indexes or changes to the sizes of certain data fields without our knowing it. However, at the last job I had, the DBAs wouldn't do anything to the database unless it came from the data architects so I didn't use that particular function as much.
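
    Complete Compare is an erwin feature; the sketch below only illustrates the underlying idea of a model-versus-database drift check in Python. The two dictionaries are hypothetical stand-ins; in practice the "database" side would come from the catalog (for example, INFORMATION_SCHEMA queries).

```python
# Sketch of a model-vs-database drift check: diff the column definitions
# recorded in the model against those read from the live database.

def diff_schemas(model: dict, database: dict) -> list:
    findings = []
    for table, cols in database.items():
        if table not in model:
            findings.append(f"table {table} exists in the database but not in the model")
            continue
        for col, dtype in cols.items():
            model_dtype = model[table].get(col)
            if model_dtype is None:
                findings.append(f"{table}.{col} was added in the database")
            elif model_dtype != dtype:
                findings.append(f"{table}.{col}: model says {model_dtype}, database says {dtype}")
    return findings

model = {"CUSTOMER": {"CUSTOMER_ID": "INTEGER", "NAME": "VARCHAR(50)"}}
database = {"CUSTOMER": {"CUSTOMER_ID": "INTEGER", "NAME": "VARCHAR(100)"}}
print(diff_schemas(model, database))  # flags the resized NAME column
```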

    If the source of the data is an OLTP system and you're bringing it into a data warehouse, erwin's ability to compare and synchronize data sources with data models, in terms of accuracy and speed, is excellent for keeping them in sync. We did a lot of our source-to-target work with Informatica. We used erwin to sometimes generate the spreadsheets that we would give our developers. This was a wonderful feature that isn't very well-known nor well-publicized by erwin.

    Previously, we were manually building these Excel spreadsheets. By using erwin, we could click on the target environment, which is the table that we wanted to populate. Then, it would automatically generate the input to the Excel spreadsheet for the source. That worked out very well.
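
    The source-to-target spreadsheets mentioned here can be pictured as a simple export keyed off the target table definition. Below is a minimal sketch with hypothetical table and column names; the source columns are left blank for analysts to fill in, which mirrors how such mapping templates are typically handed to developers.

```python
# Sketch: generate a skeleton source-to-target mapping spreadsheet (CSV) for a
# chosen target table. Table and column names are hypothetical examples.
import csv

target_table = {"name": "DW_CLAIM",
                "columns": ["CLAIM_ID", "MEMBER_ID", "CLAIM_AMT", "SERVICE_DT"]}

with open("dw_claim_mapping.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Target Table", "Target Column", "Source System",
                     "Source Table", "Source Column", "Transformation Rule"])
    for col in target_table["columns"]:
        # Source side is left blank for the developers/analysts to complete.
        writer.writerow([target_table["name"], col, "", "", "", ""])
```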

    What needs improvement?

    When you do a data model, you can detect the table. However, sometimes I would find it quicker to just do a screenshot of the tables in the data model, put it in a Word document, and send it to the software designers and business users to let them see that this is how I organized the data. We could also share the information on team calls, then everybody could see it. That was quicker than trying to run reports out of erwin, because sometimes we got mixed results which took us more time than what they were worth. If you're just going in and making changes to a handful of tables, I didn't find the reporting capabilities that flexible or easy to use. 

    The report generation has room for improvement. I think it was version 8 where you had to use Crystal Reports, and it was so painful that the company I was with just stayed on version 7 until version 9 came out and they restored the data browser. That's better than it was, but it's still a little cumbersome. For example, you run it in erwin, then export it out to Excel, and then you have to do a lot of cosmetic modification. If you discover that you missed a column, then you would have to rerun the whole thing. Sometimes what you would do is just go ahead and fix it in the report, then you have to remember to go back and fix it in the model. Therefore, I think the report generation still could use some work.

    I don't see that it helped me that much in identifying data sources. Instead, I would have to look at something like an XML file, then organize and design it myself.

    For how long have I used the solution?

    I started working with Data Modeler when I was in the transportation industry. However, that was in the nineties, when it was version 1 and less than $1,000.

    What do I think about the stability of the solution?

    I found it pretty stable. I didn't have any problems with it. 

    Sometimes, when you're working with model Mart, once in a while the connection would drop. What I don't like is that if you don't consistently save, you could lose a lot of changes. That's something that I think should work more like Word. If for some reason your system goes down, there's an interruption, or you just forget or get distracted by a phone call, then you go back and something happened. You might have lost hours worth of work. That was always painful.

    What do I think about the scalability of the solution?

    I have worked on databases that had as many as a thousand tables. In terms of volume and versioning, it is fine. We've used the model Mart to house versions that introduce another level of complexity to keep the versioning consistent. 

    There is a big learning curve with using model Mart. Therefore, a lot of groups don't really fully utilize it the way they should. You need somebody to go in there every now and then to clean things up. We had some pretty serious standards around when you deployed it to production and how you moved it in model Mart. We would use Complete Compare there. It scaled well that way. 

    In terms of the number of users, we had 20 to 30 different data architects using it. I don't know that everybody was on it full-time, all the time. I never saw a conflict where we were having trouble because too many people were using it. From that point, it was fine.

    I think the team got as large as it was going to get. In fact, right now they're on a hiring freeze because of COVID-19.

    How are customer service and technical support?

    Over a period of five or 10 years, the few times I've had to go all the way through to erwin, I talked to the same young lady, who is very good. She understood the problem, worked it, and would give me the solution within two phone calls. This was very good.

    Which solution did I use previously and why did I switch?

    Prior to erwin, I had used Bachman and IEF. Bachman I liked better, but IEF was way too cumbersome. 

    Bachman was acquired by another company and disappeared from the marketplace. The graphics were very pretty on Bachman. Its strongest feature was reverse engineering databases. I found erwin just as robust with its reverse engineering. 

    IEF also disappeared from the marketplace, and I didn't use it very much. I didn't like it, as it was way too cumbersome. You needed a local administrator. It was really tough. It promised to generate code and databases and was supposed to be an all-encompassing CASE tool. I just don't think it really delivered on that promise.

    It could very well be that the coding of those solutions didn't keep up with the latest languages. There was a real consolidation of data modeling tools in the last 15 to 18 years. Now, you've only got erwin and maybe Embarcadero. I don't think there's anything else. erwin absorbed a lot of the other solutions but didn't integrate them very well. We were suffering when it didn't work. However, with the latest versions, I think they've overcome a lot of those problems.

    How was the initial setup?

    Usually, the companies already had erwin in place. We had one company where the DBAs would sort of get us going.

    The upgrades were complex. They required a lot of testing. About a year ago, we held off on doing them because we wanted to upgrade to the latest version and we were in the midst of a very big system upgrade. Nobody wanted to take the time. It took one of our architects working with other internal organizations, and then there were about three or four of us who tried to do the testing of the features. It was a big investment of time, and I thought that it should have been more straightforward. I think companies would be more willing to upgrade if it wasn't so painful.

    The upgrade took probably two months because nobody was working on it full-time. They would work on it while they could. One of the architects ended up working late, over the weekends, and everything trying to get it ready before we could roll it out to the entire team.

    For the upgrades, there were at least half a dozen people across three different groups. There were three or four data architects in our group, and then we had two or three desktop support and infrastructure people for the server issues.

    What about the implementation team?

    I think they used Sandhill for the initial installation.

    If it's the first time, I recommend engaging a third-party integrator, like Sandhill, whom I found very good and responsive.

    What's my experience with pricing, setup cost, and licensing?

    We always had a problem keeping track of all the licenses. All of a sudden you might get a message that your license expired and you didn't know, and it happened at different times. At GM Finance, they engaged Sandhill to help us manage it. I was less involved because of the use of Sandhill, who was very helpful when we had trouble with our license. I remember you had to put in a long string of characters and be very careful that you didn't cut and paste it from an email, but that you entered it as generated. It was so sensitive and really difficult until the upgrades.

    If there was a serious problem, then it was usually around the licensing, where there was some glitch in the licensing. Then, we would call Sandhill, who would help us out with it. That's something where we had to involve a third party for any technical difficulties.

    I wish it wasn't so expensive. I would love to personally buy a copy of my own and have it at home, because the next job that I'm looking at is probably project management and I might not have access to the tool. I would like to keep my ability to use the tool. Therefore, they should probably have a pricing for people like me who want to just use the solution as an independent consultant, trying to get started. $3,000 is a big hit.

    I think you buy a block of users because I know the company always wanted to manage the number of licenses. 

    Which other solutions did I evaluate?

    I really haven't spent a lot of time on other data modeling tools. I have heard people complain about erwin quite a bit, "Oh, we wish we had Embarcadero," or something like that. I haven't worked with those tools, so I really can't say that they're better or worse than erwin, since erwin is the only data modeling tool that I've used in the last 15 years.

    What other advice do I have?

    There might be some effort to do some cloud work at my previous place of employment, but I wasn't on those projects. I don't think they've settled on how they're going to depict the data.

    Some of the stuff in erwin Evolve, and the way in which it meshes with erwin Data Modeler, was very cool.

    Sometimes, your model would get corrupted, but you could reverse engineer it and go back in, then regenerate the model by using the XML that was underlying the model. This would repair it. When I showed this to my boss, he was very impressed. He said, "Oh man, this is where we used to always have to call Sandhill." I replied, "You don't have to do that. You need to do this." That worked out pretty well.
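
    The "regenerate the model from the underlying XML" repair described above assumes the model can be exported or saved in an XML-based format. Below is a minimal, hypothetical sketch of the first sanity check one might run on such an export before re-importing it.

```python
# Sketch: confirm an XML export of a model is readable before re-importing it.
# The file name is a hypothetical example.
import xml.etree.ElementTree as ET

def check_model_xml(path: str) -> bool:
    try:
        root = ET.parse(path).getroot()
    except (ET.ParseError, OSError) as err:
        print(f"{path} could not be parsed: {err}")
        return False
    print(f"{path} parsed OK, root element <{root.tag}>, "
          f"{len(list(root.iter()))} elements in total")
    return True

check_model_xml("enterprise_model_export.xml")
```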

    Biggest lesson learnt: The value of understanding your data in a graphical way has been very rich in communicating to developers and testers when they recognize the relationships and the business rules. It made their lives so much easier in the capturing of the metadata and business English definitions, then generating them. Everybody on the team could understand what this data element or group of data elements represented. This is the biggest feature that I've used in my development and career.

    I would rate this solution as an eight out of 10. 

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: I am a real user, and this review is based on my own experience and opinions.
    Beverly King De Loach - PeerSpot reviewer
    Architecture Manager at CIGNA Corporation
    Real User
    Top 5 Leaderboard
    The ability to generate database code from a model for a wide array of data sources cuts development time
    Pros and Cons
    • "We find that its ability to generate database code from a model for a wide array of data sources cuts development time. The ability to create one model in your design phase and then have it generate DDL code for Oracle or Teradata, or whichever environment you need is really nice. It's not only nice but it also saves man-hours of time. You would have to take your design and just type in manually. It has to take days off out of the work."
    • "I love the product. I love the ability to get into the code, make it automated, and make it do what I want. I would like to see them put some kind of governance over the ability to make changes to the mart tables with the API, so that instead of just using the modeler's rights to a table -- it has a separate set of rights for API access. That would give us the ability to put governance around API applications. Right now a person with erwin and Excel/VBA has the ability to make changes to models with the API if they also have rights to make changes to the model from erwin. It's a risk."

    What is our primary use case?

    We have a couple of really important use cases for erwin. One of them is that we automate the pull of metadata from the repository itself, so that we have all the model metadata that we can then put into a centralized hub that we can access with other applications. Another reason we pull all the metadata out of the model is to run it through our model validation application, which tells us if this model is healthy or not and if it meets our standards or not.

    The other use case that's really important is managing the abbreviations file that erwin uses to convert logical terms into physical terms. The way that you manage it today within erwin is very manual: you go from a spreadsheet, make changes, upload, et cetera. We've created an API application where we take the main standards file and keep it in the database, make the changes in the database, and then an application goes out into the Mart, deletes the glossary, and replaces it with the table from the database. It's all automated at the push of a button. Changes that would take us days to make in the standards files, with updates in eight different files, now happen automatically.
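
    erwin's API itself isn't shown in the review, so the sketch below only illustrates the data flow described above: read the naming-standards table from a database and rebuild the glossary mapping that would be pushed into the Mart. The table, columns, and the push step are hypothetical placeholders, not erwin API calls.

```python
# Sketch of the data flow only: load naming standards from a database and
# rebuild the {logical term: abbreviation} glossary to hand off to whatever
# application actually writes it into the Mart. Uses an in-memory SQLite
# table purely as a stand-in for the real standards database.
import sqlite3

def load_standards(conn) -> dict:
    """Return {logical_term: abbreviation} from the standards table."""
    rows = conn.execute("SELECT logical_term, abbreviation FROM naming_standards")
    return {term: abbr for term, abbr in rows}

def push_glossary_to_mart(glossary: dict) -> None:
    # Placeholder for the API application that deletes and reloads the Mart
    # glossary; here we just show what would be sent.
    for term, abbr in sorted(glossary.items()):
        print(f"{term} -> {abbr}")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE naming_standards (logical_term TEXT, abbreviation TEXT)")
conn.executemany("INSERT INTO naming_standards VALUES (?, ?)",
                 [("customer", "CUST"), ("number", "NBR"), ("amount", "AMT")])
push_glossary_to_mart(load_standards(conn))
```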

    How has it helped my organization?

    Data warehousing is the best example of how this product can make a huge difference because it's an integration of a lot of different source systems. You have to be able to visualize how you are going to make the information from sources A, B, and C merge together. It makes it very important.

    The ability to automatically generate DDL and to do it in different flavors (Teradata DDL or Oracle, et cetera), and to be able to fine-tune the forward engineering file so that it comes out the way your shop likes to see the DDL done, is critical. It's soup to nuts from the design all the way to implementation. It's really critical.

    We find that its ability to generate database code from a model for a wide array of data sources cuts development time. The ability to create one model in your design phase and then have it generate DDL code for Oracle or Teradata, or whichever environment you need is really nice. It's not only nice but it also saves man-hours of time. You would have to take your design and just type in manually. It has to take days off out of the work.

    The code generation ensures accurate engineering of data sources especially because you can tweak it.

    Development time is another critical issue. If you had to tweak every single piece of code that comes off the line because there's only a one-size-fits-all solution, then the product would not be worth anywhere near as much as it is. It has the ability to create a customized forward engineering file that you can use to generate your code for your shop so that it always comes out the way you want it.
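
    Here is a minimal sketch of the "one model, multiple DDL flavors" idea in Python. The logical-to-physical type mappings are illustrative assumptions, not a complete or authoritative map for either platform.

```python
# Sketch: render the same modeled column in different DDL "flavors", the idea
# behind forward engineering one design to Oracle vs. Teradata. Type mappings
# below are simplified examples.
TYPE_MAP = {
    "oracle":   {"string": "VARCHAR2({n})", "integer": "NUMBER(10)", "date": "DATE"},
    "teradata": {"string": "VARCHAR({n})",  "integer": "INTEGER",    "date": "DATE"},
}

def column_ddl(name: str, logical_type: str, length: int, dialect: str) -> str:
    physical_type = TYPE_MAP[dialect][logical_type].format(n=length)
    return f"{name} {physical_type}"

for dialect in ("oracle", "teradata"):
    print(dialect, "->", column_ddl("MEMBER_NAME", "string", 60, dialect))
```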

    What is most valuable?

    The product itself is fantastic and it's about the only way to get an enterprise view of the data that you're designing. It's a design tool, obviously. Once you add the API to that where you can automate things, you can make bulk changes. You can integrate your data from erwin into another in-house application that doesn't have access to the data because the erwin data is encrypted. It's been quite a boon to us because we're very heavy into automation, to have the ability to create these ad hoc programs, to get at the data, and make changes on the fly. It's been a wonderful tool.

    A data modeling case tool is a key element if you are a data-centric team. There is no way around it. It's a communication tool. It's a way of looking at data and seeing visually how things fit together, what is not going to fit together. You have a way of talking about the design that gets you off of that piece of paper, where people are sitting down and they're saying, "Well, I need this field and I need that field and we need the other field." It just brings it up and makes it visible, which is critical.

    What needs improvement?

    I love the product. I love the ability to get into the code, make it automated, and make it do what I want. I would like to see them put some kind of governance over the ability to make changes to the mart tables with the API, so that instead of just using the modeler's rights to a table -- it has a separate set of rights for API access. That would give us the ability to put governance around API applications. Right now a person with erwin and Excel/VBA has the ability to make changes to models with the API if they also have rights to make changes to the model from erwin. It's a risk.

    We have a really good relationship with erwin and whenever we come across something and we contact the product developers and the contacts that we have, they immediately put fixes in, or they roll it into the next product. They're very responsive. I really don't have any complaints.

    It's a wonderful product and a great company.

    For how long have I used the solution?

    I've been using erwin since version 3.0 in the '80s.

    What do I think about the stability of the solution?

    It's very stable. It's very mature.

    What do I think about the scalability of the solution?

    We have about 70 licenses and we have about 70 people using the product full time. I've worked in shops where there were two or three to a dozen. Besides these 70, we also have other parts of the world, shops that have it. It scales right up. I have not worked in a shop where it was either too small or too large.

    We have full-time data modelers. We have architects. We don't make a distinction between the data architect and the data modeler. The data architect is designing the enterprise-level view of data and how we use it as a business and then modelers work on specific projects. They'll take this enterprise view and they'll create a project model for whatever it is that we're rolling out.

    We've got an architecture person and a modeler person, and we also have some developers who do some smaller database modeling when they have to get out something that's just used in-house. It's not used downstream by the end user. We also use a portal product. Everybody at the company has access to the web portal product. They can go in and see what data has been designed and do impact analysis.

    The business analysts will look at it in the web portal to see what the downstream impact would be for them to change a particular name that the company uses for something. They check what the downstream and upstream implications are. Then, the developers use our DI tools for creating the mapping from the source system to the target system. Our data stewards use the tool for the business glossary and for how we define things. Every part of the company that deals with data uses erwin.

    How are customer service and technical support?

    Customer service is fantastic. I know a lot of the guys by first name that work in tech support.

    When we have a problem, typically something is really broken, because we have people here on staff who answer most of the questions and handle most of the problems. If we have a problem, it's a big problem. They put us straight through and they handle us right away.

    Which solution did I use previously and why did I switch?

    I've used four different data modeling tools. Every modeling tool has its strong points, but none of them are as robust to me as erwin. If I have to choose one tool, it's going to be erwin, especially since I've gotten into the API and I know how to use it. Some of the things the other tools offer in terms of being able to manipulate the underlying metadata, erwin has with that API. I won't say they only now have it; they've had it since day one, but I've just picked it up in the last year or so.

    How was the initial setup?

    The setup gets more complex every time. Especially with 2019, they completely changed the interface and that was another learning curve. But for the most part, if you know data modeling, you can find the logical task that you want to do within the physical form and menus of the product. I didn't find the learning curve so bad because I was already a data modeler.

    I started the upgrade process today, as a matter of fact. We just got the software installed on a Mart, and I'm going through the new features. I'll play with it for a week. Then we'll get other testers to actually do some formal testing for a week. And then we'll put in our change because we're a large shop. It's around a month-long cycle to get an upgrade in place. That's if there are no problems. If we come across something that tells us that we can't use this product until something is changed or fixed, then it's a stop. For the most part, a happy path takes around a month in a large shop like ours.

    As far as the upgrade itself on dev, it took maybe an hour to upgrade the Mart. And it took me maybe an hour to upgrade the desktops that we use for testing.

    We've been doing upgrades for years. I've been involved in them with multiple companies, and it's what I do here. We have a cycle, a strategy, and a checklist that we go through for every upgrade.

    The first thing we do is we have a development system. We have virtual machines that we set up, so it's not on anybody's particular desktop. We upgrade the product and then one person will go through and I'll look at the new features and I'll see, number one, if we need the new features. Number two, if there is anything in these features that will break what we're doing today. Those are the first things I look at. If we pass those first two tests, then I start looking at the features and check what we are going to have and what it is going to involve in terms of training the user. We check how it is going to impact the modeler that's actually down in the trenches.

    I've got to do the training materials and then the next thing is we have a warranty period. We have a group that pushes the software to the desktop. We have a special day that we roll it out. And then we have a warranty period where we set a virtual call that anybody could sit in if they have a problem. We have a virtual call so that if anybody, when they come in on Monday morning, can't get into the product, or if they're having any problems with that at all, we're right there to answer their questions. We allow for that for the first week. After that, we turn everybody loose. Of course, it doesn't account for the physical part of backing up the database, doing the install, validating over the weekend, and all that stuff. It's just the standard software upgrade stuff.

    What about the implementation team?

    We implement in-house, but we always have access to a world-class vendor.

    What was our ROI?

    I wouldn't know how to measure ROI. I can only say that the alternative is spreadsheets, typing, visually inspecting things, never being able to integrate, never being able to communicate. I can't give an ROI, but I can say that I wouldn't want to work in a shop that didn't have a data modeling data tool.

    erwin's my first love. I know that I have been using it long enough that I am under the covers and I know it backward and forwards. It's the one I prefer.

    What's my experience with pricing, setup cost, and licensing?

    I don't deal with pricing or licensing here. I know that you can get a per-seat license. You can get concurrent licenses. To me, if you're a full-time modeler, you need a per-seat license. If you're a developer or a data steward, you use it a couple of times a day, maybe a couple of times a week, you can have concurrent licenses so that a group of five people will share one license. If someone's using it you can't, but if it's free then you can go ahead and use it, or you can lock it, or whatever. There are different ways of licensing it.

    What other advice do I have?

    The one thing that having a CASE tool does is it takes the drudge away from modeling. You get to actually think of what you're doing. You think about the solution and not how you are going to keep track of what you're doing. It frees you from a lot of mechanical things that are part of keeping track of data modeling, and it allows you to do the thinking part.

    There's not a lot of documentation on the API. You're pretty much going to have to teach yourself. If you have a specific problem where you've gotten to a certain point, you can always touch base with the guys at erwin and they will help you to get little snippets of code. But if you're doing things like we have, which is to write a full-blown application to extract the data or to make changes to the model, you're pretty much going to have to learn it on your own. That's just the one drawback of the API but if you're a programmer and you want to DM like me, it's a lot of fun.
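
    For anyone exploring that API from Python, the entry point is erwin's COM automation interface on Windows. The snippet below is only a hedged sketch of how one might reach it with pywin32; the ProgID shown is a placeholder that varies by product version, so check the installed version's API documentation before building on it.

```python
# Sketch only: connect to the erwin COM automation API from Python on Windows.
# Requires pywin32 and a local erwin DM installation. The ProgID below is a
# placeholder/assumption; confirm the correct identifier and object model in
# the vendor's API documentation for your version.
import win32com.client

ERWIN_PROGID = "erwin9.SCAPI"  # placeholder; differs across erwin versions

api = win32com.client.Dispatch(ERWIN_PROGID)
print("Connected to the modeling API object:", api)
# From here, the API's session and persistence-unit objects (see the vendor
# documentation) are used to open a model and read or modify its metadata.
```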

    It's a challenge but it's very rewarding to be able to automate stuff that people are doing manually and to be able to hand them a solution.

    On a scale of one to ten, I'd give erwin a 9.99. Everything has flaws. Everybody's got these little quirks, like I mentioned about the ability to make changes that you shouldn't make. But as far as the product itself, I love it. It's right up there with a 10.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    Sr. Data Engineer at a healthcare company with 10,001+ employees
    Real User
    Provides the ability to document primary/foreign key relationships and standardize them
    Pros and Cons
    • "What has been useful, I have been able to reverse engineer our existing data models to document explicitly referential integrity relationships, primary/foreign keys in the model, and create ERDs that are subject area-based which our clients can use when working with our databases. The reality is that our databases are not explicitly documented in the DDL with primary/foreign key relationships. You can't look at the DDL and explicitly understand the primary/foreign key relationships that exist between our tables, so the referential integrity is not easily understood. erwin has allowed me to explicitly document that and create ERDs. This has made it easier for our clients to consume our databases for their own purposes."
    • "erwin generally fails to successfully reverse engineer our Oracle Databases into erwin data models. The way that they are engineered on our side, the syntax is correct from an Oracle perspective, but it seems to be very difficult for erwin to interpret. What I end up doing is using Oracle Data Modeler to reverse engineer into the Oracle data model, then forward engineer the DDL into an Oracle syntax, and importing that DDL into erwin in order to successfully bring in most of the information from our physical data models. That is a bit of a challenge."

    What is our primary use case?

    I am responsible for both a combination of documenting our existing data models and using erwin Data Modeler as a primary visual design tool to design and document data models that we implement for our production services.

    My primary role is to document our databases using erwin to work with people and ensure that there is logically referential integrity from the perspective of the data models. I also generate the data definition language (DDL) changes necessary to maintain our data models and databases up to our client requirements in terms of their data, analytics, and whatever data manipulation that they want to do. I use erwin a lot.

    It is either installed locally or accessed through a server, depending on where I have been. I have had either a single application license or pooled license that I would acquire when I open up erwin from a server.

    How has it helped my organization?

    We get data from many different sources where I work. We have many clients. The data is all conceptually related. There are primary subject area domains common across most of our clients. However, the physical sources of the data, or how the data is defined and organized, often vary significantly from client to client. Therefore, data modeling tools like erwin provide us with the ability to create a visual construct from a subject area perspective of the data. We then use that as a source to normalize the data conceptually and standardized concepts that are documented or defined differently across our sources. Once we get the data, we can then treat the data that has been managed somewhat disparately from a common conceptual framework, which is quite important.

    At the moment, for what I'm doing, the interface to the physical database is really critical. erwin generally is good for databases. It is comfortable in generating a variety of versions of data models into DDL formats. That works fine.

    What has been useful, I have been able to reverse engineer our existing data models to document explicitly referential integrity relationships, primary/foreign keys in the model, and create ERDs that are subject area-based which our clients can use when working with our databases. The reality is that our databases are not explicitly documented in the DDL with primary/foreign key relationships. You can't look at the DDL and explicitly understand the primary/foreign key relationships that exist between our tables, so the referential integrity is not easily understood. erwin has allowed me to explicitly document that and create ERDs. This has made it easier for our clients to consume our databases for their own purposes.

    What is most valuable?

    Its visualization is the most valuable feature, along with the ability to make global changes throughout the data model. Data models are reasonably large: hundreds, and in some cases thousands, of tables and attributes. With any data model, there are many attributes that are common from a naming perspective and a data type perspective. It is possible with erwin to make global changes across all of the tables, columns, or attributes, whether you are doing it logically or physically. Also, we use it to set naming standards, then attempt to enforce naming standards and changes in naming between the logical version of the data models and the physical versions of the data models, which is very advantageous. It also provides the ability to document primary/foreign key relationships and standardize them, along with being able to review conceptually the data model names and data types, and then visualize that across fairly large data models.
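
    The logical-to-physical naming-standard enforcement described above is essentially a glossary-driven transformation. Here is a minimal sketch with a hypothetical abbreviation glossary.

```python
# Sketch: derive physical column names from logical names via an abbreviation
# glossary, the kind of naming-standard transformation the review describes.
# Glossary entries are hypothetical examples.
GLOSSARY = {"customer": "CUST", "number": "NBR", "description": "DESC", "amount": "AMT"}

def to_physical(logical_name: str) -> str:
    words = logical_name.lower().split()
    return "_".join(GLOSSARY.get(w, w.upper()) for w in words)

for logical in ("Customer Number", "Claim Amount", "Product Description"):
    print(f"{logical:25s} -> {to_physical(logical)}")
```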

    The solution's visual data models are very important for helping to overcome data source complexity and enabling understanding and collaboration around maintenance and usage, because you can define and document subject areas within enterprise data models. You can create smaller subsets, document them visually, and assess and review the integrity of the data models with the primary clients or users of the data. It can also be used to establish communication that is logically and conceptually correct from a business expert perspective, while maintaining the physical and logical integrity of the data from a data management perspective.

    What needs improvement?

    We are not using erwin's ability to compare and synchronize data sources with data models in terms of accuracy and speed for keeping them in sync to the fullest extent. Part of it is related to the sources of the data and databases that we are now working with and the ability of erwin to interface with those database platforms. There are some issues right now. Historically, erwin worked relatively well with major relational databases, like Oracle, SQL Server, Informix, and Sybase. Now, we are migrating our platforms to the big data platforms: Hadoop, Hive, and HBase. It is only the more recent versions of erwin that have the ability to interface successfully with the big data platforms. One of the issues that we have right now is that we haven't been able to upgrade the version that we currently have of erwin, which doesn't do a very good job of interfacing with our Hive and Hadoop environments. I believe the 2020 version is more successful, but I haven't been able to test that. 

    Much of what I do is documenting what we have. I am trying to document our primary data sources and databases in erwin so we have a common platform where we can visually discuss and make changes to the database. In the past couple of years, erwin has kind of supported importing or reverse engineering data models from Hive into erwin, but not necessarily exporting data models or forward generating the erwin-documented data models into Hive or Hadoop (based on my experience). I think the newest versions are better adapted to do that. It is an area of concern and a bit of frustration on my part at this time. I wish I had the latest version of erwin, either the 2020 R1 or R2 version, to see if I could be more successful in importing and exporting data models between erwin and Hive.

    erwin generally fails to successfully reverse engineer our Oracle Databases into erwin data models. The way that they are engineered on our side, the syntax is correct from an Oracle perspective, but it seems to be very difficult for erwin to interpret. What I end up doing is using Oracle Data Modeler to reverse engineer into an Oracle data model, then forward engineer the DDL into Oracle syntax, and then import that DDL into erwin in order to successfully bring in most of the information from our physical data models. That is a bit of a challenge.

    There are other characteristics of erwin, as far as interfacing directly with the databases, that we don't use. The problem, for as long as erwin has existed here, is that the people I work with, who have done most of the data management and database creation, are engineers. Very few of them have any understanding of data modeling tools, and they don't work conceptually from that perspective. They know how to write DDL syntax, whether it's for SQL Server, Oracle, or Sybase, but they don't have much experience using a data modeling tool like erwin. They don't trust erwin, nor would they trust any of its competitors. I trust erwin a lot more than our engineers do. The most that they trust the solution to do is document and show characteristics of the database, which is useful for discussing the database from a conceptual perspective and with clients, rather than directly engineering the database via erwin.

    erwin is more of a tool to document what exists and what potentially will exist, and to create code that engineers can then harvest and manage or manipulate to their satisfaction. They can then use it to make changes directly to our databases. Currently, the primary focus is on Hive databases and the Hadoop environment, where there is no direct engineering between erwin and those databases at this point; any direct or indirect engineering at the moment is still with our Oracle Database.

    For how long have I used the solution?

    I have been using the solution on and off for 20 to 30 years.

    What do I think about the stability of the solution?

    It is pretty stable. Personally, I haven't run into any real glitches or problems with the output, the ability to import data when it does work correctly, the export/creation of DDL, or generation of reports.

    We are trying to upgrade. This has been going on now for several months. We're trying to upgrade to the 2020 version. Originally, it was 2020 R1, but I think at this point people are talking about the 2020 R2 version. I'm not part of our direct communications with erwin in regards to Data Modeler, but there are some issues that erwin is currently working on that are issues for my company. This has prevented us from upgrading immediately to the 2020 version.

    What do I think about the scalability of the solution?

    This gets down to how you do your data modeling. If you do your data modeling in a conceptually correct manner, scaling isn't an issue. If you don't do your data modeling very well, then you are creating unnecessary complexities. Things can get a bit awkward. This isn't an erwin issue, but more a consequence of who is using the product.

    In the area that I'm working right now, I'm the only user. Within the company, there are other people and areas using the solution probably far more intimately in regards to their databases. I really don't know the number of licenses out there.

    How are customer service and technical support?

    The problem is that our issues are related to interfacing erwin Data Modeler with the Hadoop Hive environments. The issues have always been that what I was trying to do was not fully supported by our version of erwin Data Modeler. People have certainly tried to help, but there's only so much that they could tell me. So, it's been difficult. I am hoping that I can get back to people with some better answers once the newest version of erwin is available to us.

    Which solution did I use previously and why did I switch?

    The people who were previously responsible for the database development were very good engineers who knew how to write SQL. They could program anything themselves that they wanted to program. However, I don't think that they really understood data modeling as such. They just wrote the code. Our code and models are still developing and do not necessarily conform to good data modeling practices.

    How was the initial setup?

    In the past, I was involved in the initial setup. In traditional environments, it sets up pretty easily. In my current environment, where I'm trying to get it as intimately integrated with our big data platforms as possible, I'm finding it quite frustrating. However, I'm using an older version and think that is probably a significant part of the problem.

    What was our ROI?

    In other environments where I've worked, the solution's ability to generate database code from a model for a wide array of data sources cuts development time. In this environment, erwin is not very tightly integrated into the development cycle. It is used more for documentation purposes at this point and for creating nascent code which, down the road, potentially gets implemented. While it's not used that way at my current company, I think it would be better if it were, but there is a culture here that will probably prevent that from ever occurring.

    What's my experience with pricing, setup cost, and licensing?

    An issue right now would be that erwin doesn't have a freely available browser (that I am aware of) for people who are not data modelers or data engineers that a consumer could use to look at the data models and play with it. This would not be to make any changes, but just to visually look at what exists. There are other products out there which do have end user browsers available and allow them to access data models via the data modeling tool.

    Which other solutions did I evaluate?

    There is another tool now that people are using. It is not really a data modeling tool; it is more of a data model visualization tool, and that's SchemaSpy. We don't do data modeling with that. You get a visualization of the existing physical database. But that's where the engineers live, and that's what they think is great. This is a cultural and conceptual issue, due to a lack of understanding and appreciation of what good data modeling tools do, that I can't see changing given the current corporate organization.

    What other advice do I have?

    It is the only meaningful way to do any data modeling. It is impossible to conceptualize and document complex data environments and the integration between different data subject areas without it. You can write all the code or DDL you want, but it's absolutely impossible to maintain any sort of conceptual or logical integrity across a large, complex enterprise environment without using a tool like erwin.

    You want to look at what you are trying to accomplish with erwin before implementing it.

    • Does the product have the ability to support or accomplish that?
    • Based on the technologies that you have decided you want to use to manage your data, how intimately does it integrate with those technologies? 

    From my perspective of using the traditional relational databases, I think erwin probably works pretty well. 

    For the newer database technologies, such as the Hadoop environment databases, it's not clear to me how successful erwin is. However, I'm not talking from the perspective of somebody who has been aggressively using the latest version. I don't have access to it, so I'm afraid my concerns or issues may not be valid at this point. I will find out when we finally implement the latest erwin version.

    I would give the solution a seven or eight (out of 10).

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: I am a real user, and this review is based on my own experience and opinions.
    Data Modeler at a government with 10,001+ employees
    Real User
    The data comes to life to where customers understand exactly what they're asking for
    Pros and Cons
    • "It's a safeguard for me because I'm always concerned that somebody is free handing it and will forget a key coming from the parent. The migrating keys are a great feature. Identifying relationships, non-identifying relationships, and being visually right there to understand the differences are great features. erwin is key to being able to visually understand whatever the customer is requesting. They'll give you words on a paper, but once they can actually view it as a picture, it really comes to life. The data comes to life to where they understand exactly what they're asking for."
    • "I'd really like to see the PDF function become available. It would make my life much easier than what it is at the moment because whenever I need to collaborate with people that do not have erwin, I have to go through the wonkiness of going to Word and then save it from Word into PDF. There's a lot of differences between erwin 4.4 and 2020."

    What is our primary use case?

    When I work from home, my use case for erwin is when I get a request for a database upgrade. Usually, the request comes in with a whole bunch of tables and names, so I'll go into DM and start building out what they're asking for. Once we actually get them to be able to view it and understand it, we'll go back and forth with the developers and the requesters to make sure that it's exactly what they're looking for. We'll spend a few days making sure everything looks correct. Once that's finished, I'll send it out.

    Unfortunately, I can't do a PDF straight from erwin so I'll copy everything into Word and then save my Word as a PDF. With that PDF, I'll be able to send it off to all the stakeholders, not just the developers and the requesters, so that everybody can see it, even the ones that don't have erwin itself.

    My office use case is pretty much the same, except that in the office we add in Model Mart. We have our entire network, all the databases, and everything in Model Mart, and it's over 1,500 different tables, relationships, attributes, and things like that. It's a really large model. Then, we break down that model into individual subject areas and we work through those. When we get any new requests, we'll build them in Data Modeler and go back and forth with the requesters, making sure everything looks like what they're expecting. They'll usually just send us a spreadsheet of names and data types, and then we build from there.

    How has it helped my organization?

    erwin brings data to life. We're currently working with a requester who provided us with a spreadsheet of their ideas of tables and attributes, with the metadata associated with each. Then they provided us a rudimentary diagram with tables and keys. I was able to put it into erwin along with the metadata that they were asking for, and it really brought questions to life. The people said, "We didn't realize the relationships were going to bring in these extra keys." And they didn't realize there were a lot of extra pieces coming in as well. Once we did that, we were able to show them exactly what they were asking for, and it brought about much more conversation between us.

    We don't use DM's modeling support for Snowflake cloud yet. I am interested in cloud technology and I just came across that support that erwin has. It made me even more interested in cloud technology. 

    Its ability to generate database code from a model for a wide array of data helps another office in my company that uses it quite a bit. 

    What is most valuable?

    The automatic build to the physical is a really nice feature. I like the fact that it will bring the keys down from one table to the next, from a parent to child table. Those two things make erwin a very easy to use product. 

    It's a safeguard for me because I'm always concerned that somebody is free handing it and will forget a key coming from the parent. The migrating keys are a great feature. Identifying relationships, non-identifying relationships, and being visually right there to understand the differences are great features.

    erwin is key to being able to visually understand whatever the customer is requesting. They'll give you words on a paper, but once they can actually view it as a picture, it really comes to life. The data comes to life to where they understand exactly what they're asking for.

    What needs improvement?

    I'd really like to see the PDF function become available. It would make my life much easier than what it is at the moment because whenever I need to collaborate with people that do not have erwin, I have to go through the wonkiness of going to Word and then save it from Word into PDF. There's a lot of differences between erwin 4.4 and 2020. It's a learning curve for me. It could be easier to use, but it's not a Windows/Microsoft type of application. It's close to it but it's also not. Once I've used it enough and learned it, then I'll know where all the pieces are.

    For how long have I used the solution?

    I've been a data modeler in my office for six years so I've been using erwin for six years. My office has been using erwin since the beginning of time. I'm not exactly sure when they started using it, but the office has been around for 20 years so they've probably been using it since erwin started.

    It's on our secret network, and I believe they've been going back and forth quite a bit with erwin's tech teams as far as getting it to work, because I think our workstations are virtual workstations and there were some issues with the licensing and the license server. I've been watching that from the periphery but not really getting into the weeds with them. I'm not sure exactly what they're doing.

    What do I think about the stability of the solution?

    I've only had it crash on me once. I can't remember what I was doing and when or how it crashed. It was one of those inconvenient times and so I started again. I don't think an auto-save was done. That happened three weeks ago.

    What do I think about the scalability of the solution?

    I use it at home every day and there are days where I've used it almost an entire eight hour day. I'm using it quite heavily right now.

    How are customer service and technical support?

    The only time I've had to use erwin technical support was when I requested an extension on my trial license. They were really quick and good about it.

    How was the initial setup?

    The initial setup was straightforward. I was able to install it at home without a problem whatsoever. Within a few seconds, I was able to figure out how to start building a table. I had no problems whatsoever. I think my colleagues who are going into work might have a little bit of a different answer because of issues with service, license keys, and what have you.

    The deployment took five to ten minutes. There wasn't a lot of customization necessary. It's been a couple of months now since I've started doing it. I can see from the tab that I'm on that I need to just click on the table, click the area there, and start building tables. I've also had experience with it, so that makes it easier as well. It's intuitive.

    At the office, there's quite a bit of strategy on how they needed to deploy it and how they needed to have it totally set up in the virtual world. They were upgrading from an older version.

    At our office, we have two or three different people that were truly involved, but we did have one main person doing the going back and forth with erwin as far as getting help and setting it up. That took a couple of weeks, if not longer, to actually get it set up working correctly.

    We bought a total of 10 licenses, although I'm not so sure. It's less than 25.

    What was our ROI?

    I would definitely say that it's a time saver once you learn how to use the application. It takes a little while to teach people how to use it just like with any other application, but as far as time-saving afterward, it's invaluable. As far as taking the time to truly show a person the end result, we can show them exactly what we're talking about and that's really invaluable. I'm sure the deployment would say the same thing as far as being able to build the database off of it.

    The accuracy and speed in transforming complex designs into well-aligned data sources make the cost of the tool worth it. At the same time, I don't do that.

    It saves us a couple of hours of actually trying to build something. It's not something that my office does every day. However, when we do it, I could not imagine building tables or building a diagram from any other tools that are currently in the office. It's impossible to do it from PowerPoint or Word. 

    What's my experience with pricing, setup cost, and licensing?

    I don't think that the pricing for my office is horrible. However, from my home, there's absolutely no way I could afford erwin on my own as far as doing my own work.

    There have been discussions between my office and the actual company that I work for and trying to decide on who would actually pay the bill. I'm the person stuck in the middle saying that I can't do my work here and luckily, I've been able to get one or two extensions on my free trial license from erwin. However, I'm afraid that I won't be able to get my company to pay for it and fairly soon the trial license will end up expiring on me.

    I decided to build physical only, but later on that kind of bit me, so now I will build the logical first and then the physical. It would be nice to be able to build out my own set of tables and maybe a Model Mart type of situation, but I don't see myself being able to afford a copy at home. I won't be able to keep a trial copy forever, or even until COVID is over.

    Which other solutions did I evaluate?

    When COVID started, I did start looking at home versions of other freeware because I had time to actually do some research. I found that most of the freeware wasn't really free. It was also still kind of clunky, and one of the applications that I was using didn't automatically bring the keys down, and for me, that was a killer right there. I would not suggest that application to anyone. From the trial copies of the other applications I used, I think that's where erwin really comes out ahead of the other applications.

    What other advice do I have?

    The biggest lesson I have learned from erwin is the old cliche that a picture is worth a thousand words. That is truly erwin in a nutshell. When a person asks for a set of tables and they actually see that diagram visually, it really assists in any meeting that you will have. It is key to any meeting you have.

    I would rate Data Modeler an eight out of ten. The reason for this rating is that I created a couple of dumb attributes and it took me forever to figure out how to truly delete them. It was a parent-child relationship; I deleted the parent and did not answer the question from the box that popped up next correctly. So I had an attribute hanging out in a table, and it took me forever to find the dangling relationships. Because it took me such a long time to find that, I knocked the rating down.

    I'm quite happy with the modeling tool. It does just about everything that I need it to do. I can't really think of what it doesn't do that I would need other than the PDF. I'm really happy with it.

    Disclosure: I am a real user, and this review is based on my own experience and opinions.
    Director of BI & Analytics at a logistics company with 10,001+ employees
    Real User
    Standardizes our practices, supports a wide variety of databases, and allows us to create logical data models
    Pros and Cons
    • "It allows us to create logical data models. We can represent a database model in business terms, which is very useful for us."
    • "It supports a wide variety of databases, including the latest ones. We have chosen to go for a cloud-based database, and it supports that, which is very useful."
    • "In terms of improvements, support could have been better in terms of installation, especially of workgroups. We struggled quite a bit to get it up and running. Collaboration could have been better from an installation perspective, but it is trivial as compared to what we use it for. Other than that, I don't have much feedback. It works pretty well, and the fact that we've been using it for more than a decade shows that it is quite solid."
    • "In terms of new features, it would be great to have a cloud base. We should be able to put it on the cloud for better collaboration and data models sharing."

    What is our primary use case?

    We are using it for a very specific use case, and it works pretty well for us. We do all of our database modeling based on this tool, and it is a repository of all data models in our business intelligence ecosystem. The logical representation of our metadata and anything that is created in a database, such as tables, is in it. 

    It is an on-prem workgroup. We have a workgroup server that hosts our model.

    How has it helped my organization?

    We utilize it for its cross-database capability and logical representation of the data model. We have recently started to use its collaboration features, and we also use it to define all our relationship constraints and referential integrity within our data model. So, a lot goes out of it.

    It has standardized our practices. For example, all customer-related entities and attributes have to follow a certain naming convention. It has helped in standardizing the process of creating our data models so that when we go and explore the data, we can combine them in a way in which we are confident of producing the right results. It has made a lot of difference in terms of naming standards, processes around our metadata, and the schema in which we create a database. We have a proper template to put the information through a well-structured data model. It helps users in getting the maximum value of the information that is available in the BI ecosystem. erwin Data Modeler makes it very simple and easy to navigate our very complex data.

    Its visual data models are very good and helpful for overcoming data source complexity and enabling understanding and collaboration around maintenance and usage. We have a complex business environment where we have retail and supply chain space for distribution. There are a lot of cases where we use the models for customer promotions and events and loyalty systems. Different data modelers can do their own subject areas, and then they can bring them together in a workgroup workspace. It has allowed us to collaborate and distribute the data modeling work. Previously, it used to be very single-threaded. Now, a lot of different teams can run their own modelers, and then, later on, integrate them, which is very useful. It is also very useful in the database migration process. You can take a logical model and seamlessly transfer it over to the database. That's very useful as well. 

    We use its modeling support for Snowflake Cloud. We don't use it in any special way. We use it the way we use an existing on-prem database. It just needs to follow Snowflake conventions, which it does. We have a standard logical model that can then translate to a physical model for any database we choose, and that's where erwin has been very helpful. We can set those naming standards, and it also does the logical-to-physical translation seamlessly. This support for Snowflake is helpful. It has given us enough help to port our model from DB2 to Snowflake in terms of model creation. It has proven very helpful that way.
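
    As a rough, hypothetical sketch of what that logical-to-physical translation means (the table and types below are illustrative, not our actual model), the same logical entity can be forward engineered with the conventions of either target database without re-modeling it:

        -- Physical model generated for DB2 from the logical entity "Store Sale"
        CREATE TABLE STORE_SALE (
            SALE_ID  BIGINT        NOT NULL,
            SALE_TS  TIMESTAMP     NOT NULL,
            SALE_AMT DECIMAL(12,2),
            PRIMARY KEY (SALE_ID)
        );

        -- Physical model generated for Snowflake from the same logical entity
        CREATE TABLE STORE_SALE (
            SALE_ID  NUMBER(38,0)  NOT NULL,
            SALE_TS  TIMESTAMP_NTZ NOT NULL,
            SALE_AMT NUMBER(12,2),
            PRIMARY KEY (SALE_ID)  -- informational only; Snowflake does not enforce it
        );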

    It can create table structures across a wide variety of sources, which is very useful for us. It cuts the development time of our database code quite a bit. Otherwise, we would have to rely on Excel sheets. Currently, our average project size is anywhere from 3,000 to 4,000 hours, and out of that, we spend around 5% on data modeling. If we didn't have this tool, it would take almost twice as much time for any project.

    What is most valuable?

    It allows us to create logical data models. We can represent a database model in business terms, which is very useful for us. 

    It supports a wide variety of databases, including the latest ones. We have chosen to go for a cloud-based database, and it supports that, which is very useful. 

    It is very useful for maintaining relationships between tables. We can put constraints and foreign key-primary key relationships into the model, and it gets translated into the physical database seamlessly. 

    Workgroup is another useful feature to store and share the models with the team for collaboration. 

    What needs improvement?

    In terms of improvements, support could have been better in terms of installation, especially of workgroups. We struggled quite a bit to get it up and running. Collaboration could have been better from an installation perspective, but it is trivial as compared to what we use it for. Other than that, I don't have much feedback. It works pretty well, and the fact that we've been using it for more than a decade shows that it is quite solid. 

    In terms of new features, it would be great to have a cloud base. We should be able to put it on the cloud for better collaboration and data models sharing.

    For how long have I used the solution?

    I have been using this solution for more than a decade.

    What do I think about the stability of the solution?

    It is very stable.

    What do I think about the scalability of the solution?

    It is fairly scalable. We really haven't pushed it to the limit with respect to scalability, but we haven't found any issues.

    Currently, we have around 20 users. They are mostly data modelers and data engineers. We have plans to increase its usage as we deploy additional systems in our business unit. So, there are plans to scale up, but not in the immediate future.

    How are customer service and technical support?

    I have interacted with their technical support. I would rate them an eight out of 10.

    Which solution did I use previously and why did I switch?

    I have been in this company only for two years, but from the licensing, I know it has been more than 10 years. I am not aware of any other tool being used previously.

    How was the initial setup?

    There wasn't a lot to it. When things didn't work, we had to go and figure out why it wasn't working and which ports we should open. There was a lot of back and forth communication with their support, and they were very helpful, but it gets pretty difficult when something that could be done in one to two hours takes you longer than that. It took us a few weeks to get it right, but once it started working, it was pretty seamless.

    There was no implementation strategy. You just download an installable and install it. The problem is that it requires a database, and it requires a particular configuration. All this is documented, but it doesn't work the way it is documented. So, it took time for us to figure out, "Hey, this thing is not working. Why is it not coming up?"

    For maintenance, we don't have anyone. For the deployment of the workgroup, it took just one person. My data engineering lead just went and did it all by himself. It is a pretty simple product. It just took us a while to figure it out, especially the collaborative tool. Generally, it is supposed to take half an hour for one person.

    What about the implementation team?

    We installed it ourselves. We did not use anybody to install it, maybe that's why it took us longer.

    What was our ROI?

    I don't have the metrics, but I would say we have seen an ROI. It has brought down the cost of implementation in terms of manpower. It might have saved us thousands of hours. It could also be more than a hundred thousand hours. 

    The accuracy and speed of the solution in transforming complex designs into well-aligned data sources make the cost of the tool totally worth it.

    What's my experience with pricing, setup cost, and licensing?

    There are no costs in addition to the standard licensing fees.

    What other advice do I have?

    In general, for its purpose or use cases, it is the best tool in the market. It does its part in terms of metadata, but we have other challenges that erwin cannot resolve. We have a large pool of legacy data sources that are not labeled, and erwin really can't help there. I don't see any other tool filling that space unless we go for a catalog, which is a different product space altogether. erwin can process the legacy files, but we're just not using it for that because we don't have the bandwidth.

    You need a skilled modeler to start off. It really depends on what kind of organization is implementing it: small scale, mid scale, or big scale, but collaboration really works. It is a very good tool, but proper training would be required to take full advantage of the tool. It helps to do a lot more on the job. You would need a lot of discipline before you start using the product. The standards and governance should be put up front before it can be utilized effectively.

    The biggest lesson that I have learned from using this solution is that it cannot resolve governance issues. You need to have proper standards in place before you start using this tool. Bad processes lead to bad outcomes. The tool will help you shepherd those processes, but it doesn't solve them. So, you need to have proper process governance and standards. You need to make the tool enforce those processes and standards. You should have proper controls on the data inside in order to get the best results. Governance and process discipline are pretty important. 

    On the database side, I come from organizations where some people follow one standard, and other people follow another set of standards, and if we use the same database and tools, then you get a mess. That's where the process discipline comes in for unified governance, which has got nothing to do with the tool. It has everything to do with how the organization is structured. The tool will help you to control that.

    I would rate erwin Data Modeler a nine out of 10. If it can be on the cloud without any installs, that would make it a 10.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    Senior Project Manager at a tech services company with 51-200 employees
    Real User
    Stable, scales well, satisfactory support, and saves time during project reengineering
    Pros and Cons
    • "There is absolutely no problem with the stability."
    • "The erwin ETL functionality has room for improvement when it comes to mapping databases with a classic entity-relationship model to a data warehouse model."

    What is our primary use case?

    For the first 30 years of my career, I worked on many small projects. Since erwin was released, I used it to help develop projects up until about two years ago. At that time, I moved to a new company and I still use erwin in my current role.

    When I moved to the new company, I recommended erwin and explained it to my colleagues and my clients. When the most recent version was released, I looked at the licensing and became familiar with its new features and benefits.

    I have developed a couple of projects myself in the past two years, including one that had to do with mail in Serbia, which was an interesting project, and another that had to do with handling automotive equipment maintenance. One of the projects is something that I started from the beginning, whereas the other was reengineered, with changes made and new features added.

    I have also worked with erwin from a higher-level role. Rather than developing smaller projects, I have taken responsibility for a much larger project worth several million Euros.

    How has it helped my organization?

    In general, if you start using erwin from the beginning of a project then it provides a lot of benefits. You have to start with the process modeling, and then find data and create an entity, and the process continues. Essentially, you have to have something before you create the data model. However, if you're talking about reengineering a project that has existing data models or existing processes, then the benefits of using erwin are really big. You can save 50% of the time if you're working on reengineering existing processes or existing data models.

    The visual data models are okay for helping to overcome data source complexity. If the project is started with erwin from the beginning then I can create the database, stored procedures, and everything that I need. However, when it comes to reengineering an existing product, and if the database changes then some of the stored procedures, as well as other things also need to change. For example, in one project, the original database was Informix and the new one is Microsoft SQL Server.

    What needs improvement?

    The erwin ETL functionality has room for improvement when it comes to mapping databases with a classic entity-relationship model to a data warehouse model. If you have a legacy database like Informix, Oracle, SQL Server, or something similar, then you need to create a data warehouse database. These use completely different logic and you need to create some procedures to map the tables.
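
    As a minimal, hypothetical sketch of the kind of mapping involved (the tables below are made up for the example), normalized source tables have to be denormalized into warehouse dimensions, and that mapping logic currently has to be written by hand as procedures or scripts:

        -- Normalized source: customer(customer_id, name, city_id), city(city_id, city_name, country)

        -- Denormalized dimension in the warehouse model
        CREATE TABLE dim_customer (
            customer_key  INTEGER NOT NULL,
            customer_id   INTEGER NOT NULL,
            customer_name VARCHAR(100),
            city_name     VARCHAR(100),
            country       VARCHAR(100)
        );

        -- Hand-written mapping logic, simplified here to a single statement
        INSERT INTO dim_customer (customer_key, customer_id, customer_name, city_name, country)
        SELECT ROW_NUMBER() OVER (ORDER BY c.customer_id), c.customer_id, c.name, ci.city_name, ci.country
        FROM customer c
        JOIN city ci ON ci.city_id = c.city_id;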

    The number of supported databases should be extended.

    To have more documentation or available knowledge on how to connect is very important. This is probably the most important issue that I have experienced. Specifically, I would like more information on how to connect, how to transfer, and how to do the mapping from a legacy database.

    If you try to open a file from an older version of erwin, you can only open files from one version back. This is all that they support, so they need to add the option of opening all older versions. As it is now, they push people to buy a new version every year.

    For how long have I used the solution?

    We have been using erwin since the beginning when it was first released by Logic Works in 1993.

    What do I think about the stability of the solution?

    There is absolutely no problem with the stability.

    What do I think about the scalability of the solution?

    In terms of scalability, there is not enough long-term support for each version of erwin. In the past, the extension of some erwin model files was ER1. After that, the file extension was ERW, and now it is ERAN, which created some confusion.

    In my current company, I am the only person using erwin because we are not specialists in development. In my previous company, five or six people were using it.

    How are customer service and support?

    The support is okay and I am satisfied with it. However, it's a little slower getting support for the role that I'm in now, as compared to when I was at my previous company.

    In the past, the support was always okay. Within a few hours, I either had an answer or was at least speaking with them. We sent emails to discuss how to solve the problem.

    Overall, I'm really satisfied with the support.

    Which solution did I use previously and why did I switch?

    I have used several other modeling tools in the past, including SAP PowerDesigner and Bizagi. My experience with them has depended on what I needed to do. For example, Bizagi has a completely different way of developing a model. I am not satisfied with it because they don't follow the rules for relational modeling.

    On the other hand, PowerDesigner is quite a good tool that works well. It's a complex tool that can be used for data modeling and process modeling. It uses BPMN methodology and, in terms of functionality, it has enough. From a cost perspective, it is cheaper than erwin.

    How was the initial setup?

    The initial setup is straightforward, it was no problem.

    The installation can be done in five minutes. The new version may take a little longer, but it is very fast.

    What about the implementation team?

    We implemented it ourselves. When the process analysis is complete, we start using erwin.

    We start with the global entities, looking at the system from a higher level without yet dealing with the relationship model. I then look for the relationships and foreign keys, and after that we search for the stored procedures and functions.

    We look first at creating the keys, the primary and alternative keys in the tables and entities, and at the end, we develop the indexing. The indexing requires daily analysis once you put the database into operation, looking at the speed of everything; you can change the indexing to make your database faster.

    What was our ROI?

    In my previous company, we had a really large return on investment from using erwin. In one of the systems that we re-engineered, there were more than 2,000 tables. If these had to be created from the beginning, then it would have taken a really long time to collect all of the information. When it comes to reengineering, the database usually stays the same, with perhaps 20% to 30% of the model being modified.

    In my current company, we are trying to educate our clients on using erwin. Many of them are not using it in their everyday business. The problem is that bigger organizations, like government departments, usually want to have somebody from outside their own organization develop the solution.

    What's my experience with pricing, setup cost, and licensing?

    The price of erwin Data Modeler is very expensive, in particular for this part of the world. I think that for the United States and Europe, the price is probably okay. However, in Serbia, the salary of an IT engineer is perhaps 50% of what it is in the United States. Because of this, erwin needs to have a different pricing model for different countries.

    For example, you cannot sell products in places like Serbia, Croatia, Bosnia, Bulgaria, Romania, and other places in this part of Europe at the same price as countries like Germany, Norway, or the United States. This is something that needs to change from a licensing perspective.

    What other advice do I have?

    In terms of erwin's code generation and the accurate engineering of data sources, for some of the databases, it is quite okay. However, in others, it is not exactly following the rules of the database in the way that I want to generate the model.

    There are two ways to generate a model. The first is to create a schema, which is a textual file that contains everything needed to create the complete database structure. The second is to have erwin connect to the database directly; in this case, erwin creates the database objects itself.

    In some cases, it is better to first create a DB schema, which is an SQL file where you can look for syntax errors or other problems in the code. Once complete, you can create the database, including the tables and everything else.

    When I start to use erwin in a project, it is normally right after I analyze the process. The second thing I do is look at the global entities, so I can view the system from a high level without dealing with the relationship model. After that, I start looking for relationships, creating the primary and alternative keys in the table. I then start looking for foreign keys. At that stage, I begin to look for stored procedures and functions. After this, I work on the creation of indexes.

    The indexing needs to be analyzed daily, once the database is put into operation. This helps with database performance. When you change the indexing, the database gets faster.
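
    As a hypothetical illustration of that daily tuning (the table and query are made up for the example), if the analysis shows a frequently run query filtering on a column without an index, adding one is the kind of change involved:

        -- A query found to be slow during daily analysis of the operational database
        SELECT order_id, order_date, total_amount
        FROM customer_order
        WHERE customer_id = 12345;

        -- Adding an index on the filter column to speed it up
        CREATE INDEX ix_customer_order_customer_id
            ON customer_order (customer_id);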

    My advice for anybody who is planning to use erwin is that sometimes, it should be used to develop models right from the beginning. It will depend on the project, as well as the organization and the experience that they have with erwin. It is also possible to have different people and different teams from the same company working on one model. For example, we have three development centers that are all working on the same model.

    The biggest lesson that I have learned from using erwin DM is that it pushes you to use the notation and methodology exactly. You must follow the rules. Several years ago, they started adding tools and options that are used to verify a model, and this functionality helps to point out mistakes in the models. Once the model is correct, you can move on to working with the databases and the specifics of each one. You can move very easily between databases such as Informix, Oracle, and MySQL, without losing much time.

    I would rate this solution a ten out of ten.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: I am a real user, and this review is based on my own experience and opinions.
    Mike Matthews - PeerSpot reviewer
    IT Specialist at a government with 10,001+ employees
    Real User
    The fact that you can generate the DDL correctly from the model saves us a bunch of time
    Pros and Cons
    • "The modeling portion of the tool is the most valuable. There are some notes, naming standards, and other functions that we use as well. There's a whole boatload of functionality in this thing and we use maybe 10% of it. It seems to be pretty common that not all the functionality is fully utilized. But it's just got gobs and gobs of stuff that you can implement if you so choose to."
    • "The only real complaint I have is the time it takes to do a database comparison on a large model. If they could speed that up, that would be the only thing I can think of that needs improvement."

    What is our primary use case?

    erwin is deployed on individual desktops and the individual users install it or have a help desk person install it for them.

    Our primary use case is that during any type of project development or application maintenance, we go through a process of modeling our data before it gets put into the database. We interact with the application development teams to determine what their requirements are, build the data models, and then turn them into actual physical database items.

    How has it helped my organization?

    erwin has definitely helped us improve our enforcement of standards and database design best practices. Before we really started using the tool or having a data modeling type of team, application development efforts all had their own database structures. Developers tend not to be too concerned with the data. They just want to make everything work for their application as easily as possible. Having the tool and having a team built around it has really helped us make sure that we're following the best normalization practices, we're not duplicating data, and we have a standard naming scheme that everybody has to follow.

    What is most valuable?

    The modeling portion of the tool is the most valuable. There are some notes, naming standards, and other functions that we use as well. There's a whole boatload of functionality in this thing and we use maybe 10% of it. It seems to be pretty common that not all the functionality is fully utilized. But it's got gobs and gobs of stuff that you can implement if you so choose to.

    We've definitely expounded on the amount of features we use. They've built in some automated naming standards that have been really helpful for us. That's probably the biggest leap we've used. We've always used the comments and notes features, but the automated naming features have been very helpful.

    Its ability to overcome data source complexity and enable understanding and collaboration around maintenance and usage is extremely helpful because it gives a visual not only to developers and database administrators, but to the user base itself. The typical user isn't going to understand database functionality. Being able to show them a picture of how their data is actually going to look in the database is very helpful for their understanding of what we're trying to do with their data.

    erwin's ability to compare and synchronize data sources with data models in terms of accuracy and speed for keeping them in sync is very good. We utilize that service quite a bit. The one drawback is if you have an extremely large complex model, the compare process can take quite a bit of time, more than four hours. 

    Its ability to generate database code from a model for a wide array of data sources cuts development time. The fact that you can generate the DDL correctly from the model saves us a bunch of time. I would say it saves us around 40% to 50%. So even though you can generate the DDL, you still have to go in and tweak it a little bit. 
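
    Hypothetically, the kind of tweak involved might be adding environment-specific physical details to the generated statements, for example (the table and tablespace names below are made up for the illustration):

        -- DDL as generated from the model
        CREATE TABLE invoice (
            invoice_id   NUMBER(10) NOT NULL,
            invoice_date DATE       NOT NULL,
            CONSTRAINT pk_invoice PRIMARY KEY (invoice_id)
        );

        -- The same statement after a manual tweak, here adding a tablespace clause
        CREATE TABLE invoice (
            invoice_id   NUMBER(10) NOT NULL,
            invoice_date DATE       NOT NULL,
            CONSTRAINT pk_invoice PRIMARY KEY (invoice_id)
        ) TABLESPACE app_data;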

    What needs improvement?

    The only real complaint I have is the time it takes to do a database comparison on a large model. If they could speed that up, that would be the only thing I can think of that needs improvement.

    For how long have I used the solution?

    We've been using erwin since about 2000. We were using another product before, but it was way too cumbersome, so we switched to erwin.

    What do I think about the stability of the solution?

    Stability is excellent. It's been a solid product for years and I don't expect it to change.

    What do I think about the scalability of the solution?

    It's extremely scalable. Our environment has hundreds of tables. 

    We have five data modelers using the tool. That's the team that actually works with the app dev and DBAs to actually come up with the database design. Then we have another five users that act more in a read-only type of mode. They just want to look at the data models, but they don't actually do any of the design work.

    How are customer service and technical support?

    Their support was excellent. Typically it has to do with going through the upgrade process. If we have an issue, we'll reach out to them. The other thing we've had to reach out to them about was the time it was taking to do a data comparison on our extremely large model to the actual physical database. They were very helpful and very professional.

    We don't typically have problems transitioning between the models. We did last time, but it was actually an error on our end. It wasn't an error on the erwin end.

    Which solution did I use previously and why did I switch?

    We were using Cayenne. We switched because it was cumbersome.

    How was the initial setup?

    The initial setup is typically straightforward. Just follow their instructions and everything goes pretty smoothly.

    For the Data Modeler portion itself, on each desktop, the setup took around half an hour, and we have around 10 desktops.

    We didn't necessarily have a deployment strategy. We just gave the product to anybody that thought they needed it and let them run with it.

    For maintenance, we need one person, but it's definitely not a full-time job. It's just adding and subtracting users and going through the upgrade process when we do that. As far as installation, everybody basically installs it themselves. We don't require a full-time person for that either.

    We have a team around it, so if we add our data modeling team up, we use it about six hours a day per person. That would be about 18 hours a day for those guys. The read-only users rarely use it, so they're pretty insignificant. 

    We probably only use 10% to 20% of the functionality and I don't see us expanding on that a whole lot. There's a lot of neat little things in there, but we don't have time to implement them all. There's some overhead that goes with those functions that we choose not to undertake.

    Since we got a new guy on our team, he's gotten into some of those functions and has been able to utilize some of that stuff some more. We're actually probably closer to 30% or 40% of the functions at this point. We're not thinking about expanding because of the overhead. 

    What's my experience with pricing, setup cost, and licensing?

    I don't remember what our costs are. I know they just recently switched from a per seat type of licensing to a concurrent user type of licensing agreement, which is neither here nor there. I don't think it has increased or decreased the cost at all, but it's not obtrusive or invasive as far as the cost goes. It's fairly affordable.

    There are also internal costs if you have hosted on-prem because you have to have a server and database to stand it up on.

    Which other solutions did I evaluate?

    We didn't evaluate another solution because I had used erwin at another location and was extremely familiar with it. I had also used Visio and some other more manual-type methods. At the point that we decided to switch over, I was confident that erwin was the best solution out there.

    What other advice do I have?

    erwin is by far the best tool I've ever used. 

    My advice to somebody considering this solution is to go for it. It's easy. The functionality is fantastic. It's easy to pick up. It does basically everything you could want it to do.

    The automation of reusable design rules and standards has helped us immensely since we implemented it. With the automated naming standards and things like that, we don't have to go in and think about it, and we don't have to go in and physically type it. Between generating the DDL and getting it into physical implementation, we were saving 40% to 50% of the time. It's because of those automated features that this happens, as opposed to having to sit there and type out the DDL from scratch; it saves a ton of time.

    It produces a time savings of about 40%.

    The accuracy and the speed of this solution in transforming complex designs into well-aligned data sources absolutely make the cost of the tool worth it.

    My advice would be to let things evolve over time. Start with the basics first. Just get into the ERD functions first and then start implementing some of the automated naming standards and things like that as you go. Otherwise, if you try to dive into the whole thing, you're just going to get overwhelmed because the product is so deep as far as features go. It's extremely intuitive. As far as the basics go, as far as getting your ERDs established, it's probably the easiest tool I've ever used. If you understand the basics of database design, it's extremely natural. If you have no clue about database design, then your learning curve is going to be large no matter what tool you pick. But erwin definitely cuts that learning curve down just because of its intuitiveness.

    Once you start diving into the automated feature sets like naming standards and things like that, the learning curve there is a little steeper, but it's still not too bad. For a brand new person, if you try to delve into the automated stuff and all the additional functionality, you're just going to get overwhelmed and feel that there is too much overhead. But you don't need to implement all those features right off the bat.

    I would rate erwin a ten out of ten.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.