Artificial intelligence is a new tool in the toolbox for companies that want to make better predictions. There are pitfalls, however, says Olle Engdegård, head of predictive modelling at Swedish business and credit information company UC – Part of Enento Group.
In the wake of the coronavirus pandemic, we are in the middle of a two-part economic crisis. A good knowledge of customers’ creditworthiness is more important than ever.
“This is not the time for companies to just lean back. Instead they should be making sure that their credit processes are up to date,” says Olle Engdegård.
What risks can we expect?
Olle Engdegård, whose background is in physics research, specialises in advanced predictive modelling at UC. Much can be calculated, but not everything.
“We work with risks that you can calculate. But there is also a class of risk that you cannot arrive at through calculations. These are known as black swans – unexpected events that have a disproportionately large impact.”
Even for unexpected events, however, there also needs to be some kind of readiness. For those working in risk management, data-driven modelling makes it possible to get an overview of major trends quickly – be it dramatic changes in consumption patterns or sudden problems in specific parts of the economy.
We’re always hearing about AI – but what does it mean?
“Data-driven modelling based on large volumes of data can allow us to react more quickly when the unexpected occurs. Such models are sometimes referred to as AI, or artificial intelligence, but the term is problematic,” says Olle Engdegård.
“It’s used about so many things – from relatively simple statistical predictive models to chatbots and advanced machine learning algorithms.”
AI is a popular buzzword right now. It has become part of consultants’ vocabulary – a way of raking in money and selling expensive solutions. And as with many hyped terms, it is sometimes unclear what is actually meant by it. Olle Engdegård emphasises that it is important to clearly define the problems you want to solve.
“There is no reason why this particular technology should be hidden behind a smokescreen.”
For a while, large amounts of money were invested in AI projects without people really knowing what they were for. Now, however, things are being tightened up.
“People are going to be more specific about what they expect from it and what they are paying for.”
Effective AI needs large volumes of data
To create an effective AI system you need large amounts of data. Olle Engdegård thinks many people underestimate the work involved in preparatory data handling.
“A common perception is that the biggest job is developing the predictive model, but in practice you always put more resources into building the infrastructure. Data has to be collected and processed, cleansed and filtered before it can be used.”
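To make that point concrete, here is a minimal data-preparation sketch in Python with pandas. It assumes a hypothetical CSV of application records with columns such as applicant_id, income, requested_amount and application_date – all file and column names are invented for illustration, not taken from UC’s systems:

```python
import pandas as pd

# Hypothetical raw export of credit application records (invented file and column names).
raw = pd.read_csv("applications_raw.csv")

# Remove exact duplicates that often appear when several sources are merged.
clean = raw.drop_duplicates()

# Drop records missing the fields a credit model cannot do without.
clean = clean.dropna(subset=["applicant_id", "income", "requested_amount"])

# Filter out obviously invalid values before they can distort the model.
clean = clean[(clean["income"] >= 0) & (clean["requested_amount"] > 0)]

# Normalise types so downstream feature engineering behaves predictably.
clean["application_date"] = pd.to_datetime(clean["application_date"], errors="coerce")
clean = clean.dropna(subset=["application_date"])

clean.to_csv("applications_clean.csv", index=False)
```

Even in this toy version, every line deals with the data rather than with any model – which is exactly the imbalance the quote above describes.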
And you have to understand your data – whether that is structured data points in Excel spreadsheets or unstructured data in the form of text, speech and images.
“When building a predictive model you can never get away from the fact that understanding the information is everything. You can’t just pour in your data and get out what you want.”
Important to know what the system will be used for
Advanced predictive models are not necessarily the best choice in a given situation. That’s why it’s important to be clear about what you actually want the system for. Do you really need a big machine learning system, or would it be enough to have a simpler model that supplements the statistical systems you already have?
“The simpler you make the systems, the easier it is to grasp them. The more complex the systems that you develop, the more important it is to use the information in the right way. Data is tricky and demands a high level of understanding.”
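As a rough illustration of the “simpler model” end of that spectrum, the sketch below fits a plain logistic regression with scikit-learn on entirely made-up data. Every coefficient can be read and explained, in contrast to the inner workings of a large black-box model; the feature names and figures are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Invented applicants: yearly income (kSEK), debt-to-income ratio, prior payment remarks.
X = np.array([
    [420, 0.35, 0],
    [180, 0.80, 2],
    [550, 0.20, 0],
    [230, 0.65, 1],
    [310, 0.50, 0],
    [120, 0.90, 3],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = defaulted, 0 = repaid (made-up outcomes)

# A simple, transparent model: a handful of coefficients instead of a black box.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

coefs = model.named_steps["logisticregression"].coef_[0]
for name, coef in zip(["income", "debt_ratio", "payment_remarks"], coefs):
    print(f"{name}: {coef:+.3f}")

# Estimated probability of default for a new, equally fictional applicant.
print(model.predict_proba([[300, 0.45, 1]])[0, 1])
```

If a model this simple already captures most of the predictive power in the data, the added complexity of a large machine learning system may not pay for itself.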
An advanced machine learning algorithm is a bit like a black box: difficult to see all the way into. And an AI system is only ever as good as the data fed into it. In the best of worlds AI systems are tools that help us make entirely unprejudiced, objective decisions – but the opposite can also be true. In the worst case, AI systems can reproduce discrimination.
“For example, one facial recognition system proved to be very good at recognising white men, but not so good at recognising women of colour. The white male engineers that created the system hadn’t thought about that. You need to be sure that the data you are basing your system on is representative. And you must always take responsibility for what the system does.”
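The kind of check implied here can start very simply. The sketch below, with invented data and column names, looks at how well each group is represented in an evaluation set and whether the model’s error rate differs between groups:

```python
import pandas as pd

# Invented evaluation data: the model's prediction, the true outcome, and a
# demographic attribute used only to audit the model, not as a model input.
eval_df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "A", "B", "B", "B"],
    "prediction": [0, 1, 0, 0, 1, 1, 0, 1],
    "actual":     [0, 1, 0, 0, 1, 0, 0, 0],
})

# How well is each group represented in the data?
print(eval_df["group"].value_counts(normalize=True))

# Does the model make errors at roughly the same rate for every group?
eval_df["error"] = eval_df["prediction"] != eval_df["actual"]
print(eval_df.groupby("group")["error"].mean())
```

In this toy example the model is far less accurate for the under-represented group – exactly the pattern the facial recognition example describes.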
Three trends in credit assessment
- Increased competition
The Swedish credit market is becoming crowded with more and more players. Many are sharpening their processes through credit scoring and advanced predictive modelling, sometimes using AI. The pandemic has forced many to review their strategies.
- Greater understanding of personal data
GDPR has pushed everyone into a greater understanding of personal data and how it is used. More and more people are realising that consent is not enough – that there are various other rules that must be complied with, such as protecting the information and using it only for clearly defined purposes.
- Increased use of predictive modelling in risk monitoring
As the lending market goes through a period of increased uncertainty, tighter control of portfolio risk is often needed. Continuous data-driven risk assessment at the pre-collection stage, supported by predictive modelling, can be key to reducing later costs – see the sketch after this list.
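As a rough picture of what that kind of continuous monitoring can look like, here is a sketch that re-scores a completely invented loan portfolio and flags accounts whose estimated probability of default has crossed an assumed action threshold – an illustration only, not UC’s actual method:

```python
import pandas as pd

# Invented monthly snapshot: each account's balance and the model's
# estimated probability of default (PD).
portfolio = pd.DataFrame({
    "account_id": [101, 102, 103, 104, 105],
    "balance":    [12_000, 54_000, 8_500, 23_000, 41_000],
    "pd":         [0.02, 0.18, 0.07, 0.31, 0.04],
})

# Assumed threshold above which an account is routed to pre-collection measures.
THRESHOLD = 0.15
at_risk = portfolio[portfolio["pd"] >= THRESHOLD].sort_values("pd", ascending=False)
print(at_risk)

# A crude expected-loss proxy for tracking portfolio risk month over month.
print("expected loss proxy:", (portfolio["pd"] * portfolio["balance"]).sum())
```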
Interested in talking more about this topic? Please get in touch!