The DevOps architect and I think erwin Data Intelligence is a better product technically because it's designed more for a technical user. However, it failed a penetration test, and in the federal government, a single issue like that is enough to lose their confidence. The only area where it really had room for improvement was very large datasets where the logical names or lexicons weren't well groomed or maintained. A huge dataset would cause erwin to crash; with half a million or a million tables, erwin would hang. And then, when the metadata came in, it would need a lot of manual work to clean it up.