Share your experience using Innovative Systems FinScan

PeerSpot user
Data Quality Consultant at USAA
Real User
Top 10
Data rules and column analysis are valuable, but the interface is not intuitive.

What is our primary use case?

Data profiling and data quality reporting.

How has it helped my organization?

Sometimes a project knows little about a particular set of data. IA is good at data profiling and data discovery: it can give insight into a dataset's data types, formats, uniqueness, completeness, frequency distributions, and so on. The other powerful feature of IA is its ability to check data against business rules and report statistics on how many records violate each rule.
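The kind of column statistics described here can be illustrated with a short, hypothetical Python sketch (the data and field names are invented for illustration; this is not IA's API):

```python
from collections import Counter

def profile_column(values):
    """Compute basic profiling stats of the kind a column analysis reports:
    completeness, uniqueness, an inferred type, and a frequency distribution."""
    non_null = [v for v in values if v is not None]
    freq = Counter(non_null)
    return {
        "completeness": len(non_null) / len(values) if values else 0.0,
        "uniqueness": len(freq) / len(non_null) if non_null else 0.0,
        "inferred_type": "numeric" if non_null and all(str(v).isdigit() for v in non_null) else "string",
        "frequency_distribution": freq.most_common(3),
    }

stats = profile_column(["TX", "CA", "TX", None, "TX"])
# completeness 0.8, uniqueness 0.5, most frequent value ("TX", 3)
```

Rule checking is then just counting how many records fail a predicate over these columns.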

What is most valuable?

Data rules, column analysis, and virtual tables.

What needs improvement?

The interface is not the friendliest, and performance could be better.

The following features are documented in the user guide but do not work:

1. Global Logical Variables (GLVs)

2. Migrating projects. Neither the internal method (Export/Import) nor the command line interface (CLI) method work 100%. They always error out.

3. When you open a data rule and close it without making any modifications, IA still asks if you want to save your changes. It is a bit unsettling when you know you did not change anything, yet you start to doubt what you think you know.

My wish list for new features:

1. Ability to use functions on data source fields. I do not understand how IBM could miss this. Data source fields are not visible when coding custom expressions: if you have a field called CUSTOMER.ACCOUNT_NUM, you cannot code TRIM(ACCOUNT_NUM). Functions can only be applied to variables, not directly to fields, so my workaround is to create a variable in the rule definition and then bind it in the data rule. In one rule I apply functions to about 12 fields - concatenate, substring, length, coalesce, etc. - and I had to add 12 lines to the definition that do nothing but reference these variables, coding seemingly useless conditions like address1 = address1 just to get a variable for each field I want to apply functions to. A huge oversight on IBM's part.

2. Copy a data rule and modify the copy. Right now only rule definitions can be copied, not data rules. Sometimes I need two or more versions of the same rule, and IA forces me to build each one from scratch. This is annoying when version 2 differs only slightly from version 1: if the original took an hour to code, the new version takes nearly as long, whereas copy-and-modify would take maybe five minutes.

3. The date of last modification. IA only shows the creation date, which is generally useless. The last modification date is far more important and needs to be available and visible.

4. A file manager, a la Windows Explorer. I may want to see the list of rules and sort them by date of modification.

5. Enhanced dedup on output. Currently, IA can only exclude duplicates based on the entire record. It should allow deduping on a select set of columns.

6. Feature to select one record from multiple matches in a join. For instance, in Oracle SQL, one can FETCH FIRST ROW ONLY or use ROWNUM or TOP 1.

7. Ability to sort the output.

8. New virtual tables take a while to appear. You create one, but it does not show up in the list. Wait 15 minutes or so and it may appear, or log out and log back in.
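Wish-list items 5 through 7 describe standard dataset operations. As a sketch of the behavior being asked for (plain Python with made-up column names, not IA functionality), deduping on a chosen subset of columns while keeping one row per match - the equivalent of Oracle's FETCH FIRST ROW ONLY per group - might look like:

```python
def dedupe_on_columns(records, keys):
    """Keep the first record seen for each combination of the selected
    key columns, instead of deduping on the entire record."""
    seen = set()
    out = []
    for rec in records:
        k = tuple(rec[c] for c in keys)
        if k not in seen:          # first record for this key wins
            seen.add(k)
            out.append(rec)
    return out

rows = [
    {"acct": "A1", "addr": "1 Main St",  "src": "crm"},
    {"acct": "A1", "addr": "1 Main St.", "src": "billing"},  # dup on acct only
    {"acct": "B2", "addr": "9 Oak Ave",  "src": "crm"},
]
# dedupe on acct alone, then sort the output (wish-list item 7)
deduped = sorted(dedupe_on_columns(rows, ["acct"]), key=lambda r: r["acct"])
# two records survive: one per acct, each the first one encountered
```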

For how long have I used the solution?

Since 2008. 

What do I think about the stability of the solution?

The tool sometimes crashes or freezes, but the latest version, 11.7, is more stable than previous ones.

How are customer service and support?

Customer Service:

On a scale of 1 to 10: 8. While IBM is excellent at responding to inquiries, it is slow to implement much-needed software fixes. That is common in the industry, but I would still like to see IBM fix software bugs sooner.

Technical Support:

Same as customer service.

Which solution did I use previously and why did I switch?

No, I never had the chance.

How was the initial setup?

I have not been involved in setup but I understand it is very complex, not for the faint of heart.

What was our ROI?

Excellent!

Which other solutions did I evaluate?

I was not involved in the selection. 

What other advice do I have?

Get the latest version. Compare it with competing products. Know that there are not many experts in this product, and you may pay a premium to hire them.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Data Architect at World Vision
Real User
Top 5Leaderboard
SSIS MatchUp Component is Amazing
Pros and Cons
  • "The high value in this tool is its relatively low cost, ease of use, tight integration with SSIS, superior performance (compared to competitors), and attribute-level advanced survivor-ship logic."
  • "The tool needs to provide resizable forms/windows like all other SSIS windows. The vendor claims it's an SSIS limitation, but all other SSIS components are resizable, so that isn't true. This is just an annoyance, but a needless one."

What is our primary use case?

We use this tool for B2B and B2C customer de-duplication/matching, generating a golden version of our customers and for householding. 

How has it helped my organization?

We use Melissa Data Matchup for SSIS to de-duplicate our customer data on a daily basis so that we were able to reduce marketing costs and increase the quality of communication with customers.

It replaced a weekly primitive custom de-duplication (record level) matching process.

Its survivor-ship logic handles very complex column-level rules efficiently, providing us, for the first time, with a single version of the truth for our customer data. Its inherent intelligence in name and address parsing provides very accurate exact matching with no false positives and no unexpected false negatives. We are continually impressed by its sophistication and ease of use. The tool does not require a middle tier or specialized staff like every other tool on the market.
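Column-level survivorship means the golden record is assembled field by field across a match group, rather than by picking one whole winning record. A rough, hypothetical Python sketch of the idea (the rules and field names are invented for illustration; this is not Melissa Data's API):

```python
def build_golden_record(group, rules):
    """Assemble a golden record by applying a per-column survivorship
    rule across all records in a match group."""
    golden = {}
    for col, pick in rules.items():
        candidates = [r[col] for r in group if r.get(col)]  # non-null values only
        golden[col] = pick(candidates) if candidates else None
    return golden

group = [
    {"name": "J. Smith",   "email": None,             "updated": 2021},
    {"name": "John Smith", "email": "js@example.com", "updated": 2023},
]
rules = {
    "name":    lambda vals: max(vals, key=len),  # prefer the most complete name
    "email":   lambda vals: vals[0],             # first non-null email wins
    "updated": lambda vals: max(vals),           # most recent value
}
golden = build_golden_record(group, rules)
# {"name": "John Smith", "email": "js@example.com", "updated": 2023}
```

The point of doing this per attribute is that the best name and the best email can come from different source records.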

What is most valuable?

The high value in this tool is its relatively low cost, ease of use, tight integration with SSIS, superior performance (compared to competitors), and attribute-level advanced survivor-ship logic. There's no separate server needed and no separate application to maintain.

This vendor offers a large variety of components from on-prem to cloud SaaS as well as hybrid of cloud and on-prem. This review is specific to the "MatchUp for SSIS" component.

For us, this tool had very high value because we didn't have to become experts in an overly complicated DQ tool, and because it is fully integrated with our EDW ETL rather than requiring us to build and integrate an external application.

We are using it for daily 1) direct matching, 2) column-level survivor-ship, and 3) mail house-holding. We started with B2C customers and later added B2B customers. The tool supports matching specific to organization names and individual names (as well as a variety of other specialized types of data values) and works well in both cases. For example, it can pull out nicknames and match on those.

One of the business and operational benefits for us is feeding the end result to Adobe Campaign for marketing automation. But the primary output is simply creating and managing an analytical golden record for our customer data. This has provided a very effective, holistic, maintenance-free, and extremely cost effective solution for us.

The initial POC was up and running in just a few days with no training needed. The plug-in into our ETL tool was seamless and fully integrated into our existing processes. Most of our effort was due to the need to identify customer survivor-ship requirements and validation. Any needed adjustment changes could be done very quickly allowing us to focus on business requirements instead of implementing technology.

What needs improvement?

- Scalability is a limitation, as the tool is single-threaded. You can bypass this by partitioning your data (say, by alphabetic ranges) into multiple dataflows, but even within a single dataflow the tool starts to really bog down if you are doing survivorship on a lot of columns. It's very old technology that's starting to show its age, since it's been fundamentally the same for many years. To stay relevant they will need to replace it with an ADF- or SSIS-IR-compliant version.

- Licensing could be greatly simplified. As soon as a license expires (licenses are specific to each server), the product stops functioning without prior notice and requires contacting the vendor for a new license. Updating the license is also overly complicated.

- The tool needs to provide resizable forms/windows like all other SSIS windows. The vendor claims it's an SSIS limitation, but that isn't true, since pretty much all SSIS components are resizable except theirs. This is just an annoyance, but a needless impact on productivity when developing new data flows.

- The tool needs to provide for incremental matching in the MatchUp for SSIS component (they provide this for other solutions, such as the standalone tool and the MatchUp web service). We had to code our own incremental logic to work around this.

- The tool needs the ability to sort mapped columns in the GUI when using advanced survivorship (sorting is currently only allowed when not using column-level survivorship).

- It should provide an option for a procedural language (such as C# or VB) for survivor-ship expressions rather than relying on SSIS expression language.

- It should provide a more sophisticated ability to concatenate groups of data fields into common blocks of data for advanced survivor-ship prioritization (we do most of this in SQL prior to feeding the data to the tool).

- It should provide the ability to only do survivor-ship with no matching (matching is currently required when running data through the tool).

- The tool should provide a component similar to BDD to split matching and survivor-ship across multiple threads based on data partitions, rather than requiring a custom-coded parallel solution. We broke customer data into ranges by the first letter of the last name so we could run parallel data flows.

- Documentation specific to MatchUp for SSIS needs to be provided. Most of their wiki pages were written for the MatchUp Object web service API rather than the SSIS component.

- They need to update their wiki site documentation, as much of it is not kept current. It's also very basic, offering little in the way of guidelines. For example, the tool is single-threaded, so getting great performance requires running multiple parallel data flows or a BDD in a data flow; you can figure this out on your own, but many SSIS practitioners aren't familiar with these techniques.

- The tool can hang or crash on rare occasions for unknown reasons. Restarting the package resolves the problem. I suspect it has something to do with running on a VM (the vendor doesn't recommend running on a VM), but I have no evidence to support that. When it crashes, it creates a dump file with just a vague message saying the executable stopped running.
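The partitioning workaround mentioned above - splitting customer data into last-name ranges so several data flows can run in parallel - can be sketched outside of SSIS. A hypothetical Python illustration of just the bucketing logic (the ranges are made up; in SSIS each bucket would feed its own data flow):

```python
def partition_key(last_name, ranges=("AF", "GL", "MR", "SZ")):
    """Map a last name to a partition based on its first letter, so each
    partition can be matched by its own parallel data flow."""
    first = (last_name or "Z")[0].upper()
    for r in ranges:
        if r[0] <= first <= r[1]:
            return r
    return ranges[-1]  # non-alphabetic names fall into the last bucket

buckets = {}
for name in ["Adams", "Garcia", "Nguyen", "Smith", "Brown"]:
    buckets.setdefault(partition_key(name), []).append(name)
# {"AF": ["Adams", "Brown"], "GL": ["Garcia"], "MR": ["Nguyen"], "SZ": ["Smith"]}
```

One caveat with any range scheme: matches can only be found within a bucket, so the partition key must be coarse enough that true duplicates always land in the same range.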

For how long have I used the solution?

We have been using this product for over 7 years.  

What do I think about the stability of the solution?

It is stable as long as you don't try to match on null last names, match lots of duplicate (exact-match) records, or run it in the default 64-bit mode of SSIS (that last issue affects only newer versions).

What do I think about the scalability of the solution?

We can run exact matches on 9 million customer records in 10 minutes using 5 partitions/parallel dataflows; survivorship takes another 50 minutes. I'm sure you could run faster with dedicated hardware and more parallel dataflows. The tool starts to slow down exponentially once you pass about 2 million customers in a single dataflow, so it's best to stay at or under that number, although mileage will vary depending on the complexity of your matching. It's unfortunate that the vendor hasn't built in parallelism, which would eliminate the need to do this yourself; they should be able to auto-scale based on the number of CPUs you're running.

Even with that limitation, this tool is orders of magnitude faster than the last matching tool I used, and that one wasn't a simple plug-in to an ETL tool. I recently heard of a competing tool that takes longer to match just a few thousand customers than this tool takes to run millions.

Note:

We probably run higher volumes than many organizations. For B2B and daily matching, you could probably process a delta in a matter of minutes with this tool.

Note: I suspect an essential consideration for scalability is whether you're calling a web service for matching or running on-prem. Their SSIS component is on-prem only, but they also offer a web service, which we have not tested.

Combining survivorship and matching in the same data flow slows performance. We got much better performance by running two separate dataflows - the first for matching only, and a second for survivorship only (re-using the grouping numbers from the first) - which made it perform to our requirements.
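This two-pass split can be sketched in plain Python (the functions and fields are hypothetical; the point is that pass two reuses the group numbers stamped in pass one instead of re-matching):

```python
def assign_groups(records, match_key):
    """Pass 1: matching only - stamp each record with a group number."""
    groups = {}
    for rec in records:
        k = match_key(rec)
        rec["group_id"] = groups.setdefault(k, len(groups) + 1)
    return records

def survive(records):
    """Pass 2: survivorship only - reuse the group numbers from pass 1
    and keep the most fully populated record from each group."""
    by_group = {}
    for rec in records:
        by_group.setdefault(rec["group_id"], []).append(rec)
    return [max(g, key=lambda r: sum(v is not None for v in r.values()))
            for g in by_group.values()]

recs = assign_groups(
    [{"email": "a@x.com", "phone": None},
     {"email": "a@x.com", "phone": "555"}],
    match_key=lambda r: r["email"],
)
golden = survive(recs)  # one record per group; the fuller one wins
```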

How are customer service and support?

Customer Service:

Fairly typical vendor support. They are immediately attentive to problems and provide email notifications of software versions. The main technical contact we work with has been there for the last decade which is very refreshing!

Technical Support:

They regularly release new versions of the product with bug fixes and enhancements although just the matchup tool itself has changed very little in the past 5 years. 

However, unless you can interact directly with the development team, problems may not get resolved in a timely manner. I have usually ended up devising my own solution in the time I spent waiting for answers from their support team.

Which solution did I use previously and why did I switch?

I have used Datamentors and SAS Dataflux in the past with good success, although I would easily take this product over those for matching/survivorship purposes. We tested Oracle's cloud-based Fusion product, which wasn't actually a functioning product at the time. The MelissaData tool is light-years ahead of Datamentors, far easier to use, and the price can't be compared. The SAS tool was very expensive. All other matching tools require a separate middle-tier application, versus this product, which is just a plug-in to SSIS.

How was the initial setup?

Initial setup on the first install was VERY easy. Propagating the matching rules to the next server was easy IF you know which file to copy, which isn't well documented. The tool is extremely easy to use once you know a few little things that aren't documented. Their development staff were very helpful in providing simple setup tips.

What about the implementation team?

This was in-house implementation. The vendor was very responsive in answering questions.

What was our ROI?

I have no numbers for ROI, but it avoided having to spend six figures for similar functionality in another tool. Plus, since it's fully integrated with SSIS, there is no need for a separate server - more money saved.

What's my experience with pricing, setup cost, and licensing?

This vendor has no equal in pricing for equivalent functionality. First, no one else offers this level of integration with SSIS. Second, other vendors with equal functionality all cost many times as much. Third, it doesn't require a separate server or a large learning curve for new software. Fourth, this is one of the "go to" vendors for matching purposes: some master data and data quality tools actually call the MelissaData MatchUp object on the backend, then charge you a lot for the pretty GUI that does this for you.

Which other solutions did I evaluate?

I evaluated Microsoft's DQS, which could not scale past 100,000 customer records. DQS actually supported calling MelissaData MatchUp via the old Microsoft Marketplace (no longer available) to use its more sophisticated matching, but that was a moot point since DQS couldn't handle the volume.

What other advice do I have?

This tool is a dream compared to my previous experience with batch matching/de-duplication tools, and the pricing is incredible given its functionality and simplicity. High value and very low cost. If you're an SSIS shop (though they support other ETL tools as well) and you need to de-duplicate, household, and/or do column-level survivorship, then this tool can't be beat.

I highly advise running parallel threads by splitting your dataflow into multiple paths. This allows parallel matching and increases throughput significantly.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.