UnderwritingAI — a mortgage and insurance solution for the future
There is a new way of viewing data for the purpose of underwriting.
In the past, an underwriter would receive documents, analyze them, check the integrity of both the documents and the information in them, and then come to a determination, based on that documentation, on whether or not to offer credit or insurance to the applicant.
Our view is that this method was effective in the documentation age, but it doesn't work in the digital age. Or rather, it doesn't work when gauged against the options that now exist, in the same sense that travel by horseback doesn't work: it still exists, but it is no longer the best solution.
So we take a different approach to the data. Rather than considering the validity of a document, or of the data in that document, we look at how that data relates to other data points, which gives it relevance. In this way you can get a much deeper understanding of the data.
For instance, imagine you’re trying to verify a person’s name.
You could go get a system that is world class at interrogating the ID document provided, but even the best system in the world doesn't place that ID into a sequence of relevant contingencies. It may as well be a standalone item with near-zero context.
We know any underwriter will check for this, but what if there was a different approach?
What if you had a system that checks the name the person entered on their application against the name on their identification documents? OK, so you have one check. No biggie, right?
Wrong. You keep going, and this layering of checks is what creates the multiple verification in our DAVV-Ops.
What we do is ask where each instance of a name appears in this application, find every one of them, and then refer each of them to each other to determine integrity, rather than looking at any one in its own right.
In this instance it results in many cross checks. You have the name the person entered on the system, the name on their ID, the name on their bank statements, the name on their utility bill, and the name that appears on certificates and tax returns demonstrating the person's income.
On every data document we look for that name, and compare it against every other instance of that name in the document set. So instead of doing one or two checks, you can run 50 different cross checks on just one data point, and this gives a far deeper interrogation of the data itself.
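The pairwise comparison described above can be sketched in a few lines. This is a minimal illustration, not our production system: the document labels, names, and the simple whitespace-and-case normalization are all assumptions made for the example.

```python
from itertools import combinations

def normalize(name: str) -> str:
    """Lowercase and collapse whitespace so trivial formatting differences still match."""
    return " ".join(name.lower().split())

def cross_check_names(instances: dict) -> list:
    """Compare every instance of a name against every other instance.

    `instances` maps a document label to the name extracted from that document.
    Returns one result per document pair: (doc_a, doc_b, matched?).
    """
    results = []
    for (doc_a, name_a), (doc_b, name_b) in combinations(instances.items(), 2):
        results.append((doc_a, doc_b, normalize(name_a) == normalize(name_b)))
    return results

# Hypothetical document set: four documents yield C(4, 2) = 6 pairwise checks,
# rather than a single standalone check of one document.
checks = cross_check_names({
    "application": "Jane Q. Public",
    "passport": "Jane Q Public",        # punctuation differs: flagged for review
    "bank_statement": "jane q. public",
    "utility_bill": "Jane Q. Public",
})
mismatches = [(a, b) for a, b, ok in checks if not ok]
```

Note how the number of checks grows quadratically with the number of name instances, which is how a handful of documents can produce dozens of cross checks on a single data point.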
Once you have interrogated the data, you can be very sure of what it is, and you can then make quicker decisions with the same data.
This is where DAVV-Ops meets 'UnderwritingAI'. Creating that large, multi-verified set of data points gives you better data integrity. Knowing where to look, and how, means less time is spent on a case from start to finish: you examine the parts that don't make sense, while being confident, thanks to the number of checks, in the parts that do.
Now when an underwriter gets a mortgage application, they know that out of 150+ cross checks there might be two things that don't make sense (for instance, the machine wasn't able to read a certain document) and one that seems 'wrong' (for instance, a person's ID had a middle name included, so you correct for that, but the data is still good).
If you catch only the things that are questionable or wrong, and the rest is prepared for analysis, then you can underwrite on the back of that. This is where robo-decisioning becomes possible.
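The triage step above can be sketched as a simple gate over the check results. Again this is a hedged illustration, assuming made-up check labels and a three-way outcome ("pass", "unreadable", "mismatch"); the actual decisioning rules would be far richer.

```python
def triage(check_results: dict) -> dict:
    """Split cross-check outcomes into items needing human review vs. a clean set.

    `check_results` maps a check label to one of "pass", "unreadable", "mismatch"
    (hypothetical labels for this sketch). A case is eligible for robo-decisioning
    only when nothing is flagged; otherwise the flagged items go to an underwriter.
    """
    flagged = [label for label, outcome in check_results.items() if outcome != "pass"]
    decision = "robo-decision" if not flagged else "underwriter review"
    return {"decision": decision, "review_items": flagged}

# Hypothetical case: 150+ checks in practice, abbreviated to four here.
result = triage({
    "name:application-vs-passport": "pass",
    "name:application-vs-bank_statement": "pass",
    "income:tax_return-vs-certificate": "unreadable",  # machine could not read it
    "name:application-vs-utility_bill": "mismatch",    # middle name included
})
```

The design point is that the underwriter's attention is spent only on `review_items`; everything that passed its cross checks is already prepared for analysis.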
‘Possible’ is only the start though, with ongoing work it can also become ‘better’.