What is our primary use case?
We are using it for a very specific use case, and it works pretty well for us. We do all of our database modeling with this tool, and it is the repository of all data models in our business intelligence ecosystem. It holds the logical representation of our metadata and of anything that is created in a database, such as tables.
It is an on-prem Workgroup deployment. We have a Workgroup server that hosts our models.
How has it helped my organization?
We utilize it for its cross-database capability and the logical representation of the data model. We have recently started to use its collaboration features, and we also use it to define all our relationship constraints and referential integrity within our data model. So, we get a lot out of it.
It has standardized our practices. For example, all customer-related entities and attributes have to follow a certain naming convention. It has helped standardize the process of creating our data models so that when we go and explore the data, we can combine them in a way that we are confident will produce the right results. It has made a lot of difference in terms of naming standards, processes around our metadata, and the schema in which we create a database. We have a proper template for putting information into a well-structured data model. It helps users get the maximum value from the information that is available in the BI ecosystem. erwin Data Modeler makes it very simple and easy to navigate our very complex data.
Its visual data models are very good and helpful for overcoming data source complexity and enabling understanding and collaboration around maintenance and usage. We have a complex business environment covering retail and the supply chain space for distribution. There are a lot of cases where we use the models for customer promotions, events, and loyalty systems. Different data modelers can work on their own subject areas and then bring them together in a Workgroup workspace. It has allowed us to collaborate and distribute the data modeling work. Previously, it used to be very single-threaded. Now, a lot of different teams can work on their own models and then integrate them later, which is very useful. It is also very useful in the database migration process. You can take a logical model and seamlessly transfer it over to the database. That's very useful as well.
We use its modeling support for Snowflake Cloud. We don't use it in any special way; we use it the way we use an existing on-prem database. It just needs to follow Snowflake conventions, which it does. We have a standard logical model that can then translate to a physical model for any database we choose, and that's where erwin has been very helpful. We can set those naming standards, and it also does the logical-to-physical translation seamlessly. This support for Snowflake is helpful. It has given us enough help to port our model from DB2 to Snowflake in terms of model creation, and it has proven very helpful that way.
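As a concrete illustration of that logical-to-physical translation, here is a minimal, hypothetical Python sketch. It is not erwin's actual engine or API; the entity, rule table, and function names are all invented for the example. It just shows one logical entity being rendered as DDL for two targets by applying a naming standard and a per-target type mapping.

# Hypothetical sketch of logical-to-physical translation (not erwin's engine or API).
# A logical entity uses business-friendly names and abstract types; each target
# database gets its own naming case rule and type mapping.

LOGICAL_ENTITY = {
    "name": "Customer Order",
    "attributes": [
        ("Order Identifier", "integer"),
        ("Order Date", "date"),
        ("Order Total Amount", "decimal"),
    ],
}

# Illustrative per-target conventions only.
TARGET_RULES = {
    "snowflake": {"case": str.upper,
                  "types": {"integer": "NUMBER(38,0)", "date": "DATE", "decimal": "NUMBER(18,2)"}},
    "db2":       {"case": str.upper,
                  "types": {"integer": "INTEGER", "date": "DATE", "decimal": "DECIMAL(18,2)"}},
}

def physical_name(logical_name, rules):
    # Naming standard: replace spaces with underscores, then apply the target's case rule.
    return rules["case"](logical_name.replace(" ", "_"))

def to_ddl(entity, target):
    # Render the logical entity as CREATE TABLE DDL for the chosen target.
    rules = TARGET_RULES[target]
    columns = ",\n  ".join(
        f"{physical_name(attr, rules)} {rules['types'][abstract_type]}"
        for attr, abstract_type in entity["attributes"]
    )
    return f"CREATE TABLE {physical_name(entity['name'], rules)} (\n  {columns}\n);"

if __name__ == "__main__":
    # The same logical model yields target-specific physical DDL.
    print(to_ddl(LOGICAL_ENTITY, "snowflake"))
    print(to_ddl(LOGICAL_ENTITY, "db2"))

In our case, the tool handles this for us: we maintain the logical model, set the naming standards, and let it generate the physical model for whichever database we choose.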
It can create table structures across a wide variety of sources, which is very useful for us. It cuts the development time of our database code quite a bit; otherwise, we would have to rely on Excel sheets. Currently, our average project size is anywhere from 3,000 to 4,000 hours, and out of that, we spend around 5% on data modeling. If we didn't have this tool, it would take almost twice as much time on any project.
What is most valuable?
It allows us to create logical data models. We can represent a database model in business terms, which is very useful for us.
It supports a wide variety of databases, including the latest ones. We have chosen to go for a cloud-based database, and it supports that, which is very useful.
It is very useful for maintaining relationships between tables. We can put constraints and foreign key-primary key relationships into the model, and it gets translated into the physical database seamlessly.
Workgroup is another useful feature for storing and sharing the models with the team for collaboration.
What needs improvement?
In terms of improvements, support could have been better for installation, especially of the Workgroup edition. We struggled quite a bit to get it up and running. Collaboration could have been better from an installation perspective, but that is trivial compared to what we use it for. Other than that, I don't have much feedback. It works pretty well, and the fact that we've been using it for more than a decade shows that it is quite solid.
In terms of new features, it would be great to have a cloud-based option. We should be able to put it on the cloud for better collaboration and data model sharing.
For how long have I used the solution?
I have been using this solution for more than a decade.
What do I think about the stability of the solution?
It has been stable for us. The fact that we have been using it for more than a decade shows that it is quite solid.
What do I think about the scalability of the solution?
It is fairly scalable. We really haven't pushed it to the limit with respect to scalability, but we haven't found any issues.
Currently, we have around 20 users. They are mostly data modelers and data engineers. We have plans to increase its usage as we deploy additional systems in our business unit. So, there are plans to scale up, but not in the immediate future.
How are customer service and support?
I have interacted with their technical support. I would rate them an eight out of 10.
Which solution did I use previously and why did I switch?
I have been with this company for only two years, but from the licensing history, I know the tool has been in use for more than 10 years. I am not aware of any other tool being used previously.
How was the initial setup?
There wasn't a lot to it. When things didn't work, we had to figure out why they weren't working and which ports we should open. There was a lot of back-and-forth communication with their support, and they were very helpful, but it gets pretty difficult when something that could be done in one to two hours takes much longer than that. It took us a few weeks to get it right, but once it started working, it was pretty seamless.
There was no implementation strategy. You just download the installer and install it. The problem is that it requires a database and a particular configuration. All of this is documented, but it doesn't work the way it is documented. So, it took time for us to figure out, "Hey, this thing is not working. Why is it not coming up?"
For maintenance, we don't have anyone. For the deployment of the workgroup, it took just one person. My data engineering lead just went and did it all by himself. It is a pretty simple product. It just took us a while to figure it out, especially the collaborative tool. Generally, it is supposed to take half an hour for one person.
What about the implementation team?
We installed it ourselves; we did not use anybody to install it, and maybe that's why it took us longer.
What was our ROI?
I don't have the metrics, but I would say we have seen an ROI. It has brought down the cost of implementation in terms of manpower. It might have saved us thousands of hours. It could also be more than a hundred thousand hours.
The accuracy and speed of the solution in transforming complex designs into well-aligned data sources make the cost of the tool totally worth it.
What's my experience with pricing, setup cost, and licensing?
There are no costs in addition to the standard licensing fees.
What other advice do I have?
In general, for its purpose or use cases, it is the best tool in the market. It does its part in terms of metadata, but we have other challenges that erwin cannot resolve. We have a large pool of legacy data sources that are not labeled, and erwin really can't help there. I don't see any other tool filling that space unless we go for a catalog, which is a different product space altogether. erwin can process the legacy files, but we're just not using it for that because we don't have the bandwidth.
You need a skilled modeler to start off. It really depends on what kind of organization is implementing it: small-scale, mid-scale, or large-scale, but the collaboration really works. It is a very good tool, but proper training is required to take full advantage of it. It helps you do a lot more on the job. You need a lot of discipline before you start using the product. The standards and governance should be put in place up front before it can be utilized effectively.
The biggest lesson that I have learned from using this solution is that it cannot resolve governance issues. You need to have proper standards in place before you start using this tool. Bad processes lead to bad outcomes. The tool will help you shepherd those processes, but it doesn't fix them. So, you need to have proper process governance and standards, and you need to make the tool enforce them. You should have proper controls on the data inside it in order to get the best results. Governance and process discipline are pretty important.
On the database side, I have come from organizations where some people follow one standard and others follow a different set of standards, and if they use the same database and tools, you get a mess. That's where process discipline comes in for unified governance, which has nothing to do with the tool; it has everything to do with how the organization is structured. The tool will help you control that.
I would rate erwin Data Modeler a nine out of 10. If it can be on the cloud without any installs, that would make it a 10.
Which deployment model are you using for this solution?
On-premises
*Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.