What is Explainable Artificial Intelligence (XAI) and why is it so important?
Almost every current entertainment platform has a recommendation system: YouTube recommends new videos, Instagram recommends new profiles to follow, and Spotify recommends new songs. All of this is powered by artificial intelligence.
These algorithms use metadata and study patterns in user behavior. But how do these recommendations actually work? In practice, it is quite difficult to know: most companies rely on internal processes and algorithms that are never published, precisely because these intangible assets are so valuable to them.
But is it really worth knowing how these recommendations are produced?
The ordinary user quite likely pays no attention to how artificial intelligence operates, yet many algorithms make decisions in our day-to-day lives that we are not aware of and that have important effects on us. In the banking sector, companies like BBVA use AI algorithms to calculate whether a client will be able to afford a mortgage or other types of loans.
For this reason, it is important to understand how AI systems work in order to understand the decisions they make and, even more importantly, the reasons why they make them.
AI models and machine learning
Traditional algorithms depend on a person who decides how the different variables interact: the rules that turn inputs into a result are written by hand. When an algorithm is designed this way, it can end up biased through simple human error.
To solve this problem, machine learning models come into play. In these models, predictions and decisions are learned from data with little human intervention, drastically reducing the chance of a bias introduced by a programmer. This is not without its own problems, however: the algorithm can be trained on a data set that does not reliably represent the population it was designed for.
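The contrast can be sketched in a few lines. This is a minimal illustration, not a real credit model: the loan data, the "true" repayment rule and the hand-picked threshold are all invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical loan data: columns are [monthly_income, existing_debt],
# both in thousands of euros.
X = rng.uniform([1, 0], [6, 3], size=(200, 2))
# Hidden "true" repayment rule, used only to label the synthetic data.
y = (X[:, 0] - 1.5 * X[:, 1] > 1.5).astype(int)

def traditional_rule(income, debt):
    # Hand-coded decision: the threshold of 3 is a human judgment call,
    # and a possible source of bias if it was chosen badly.
    return income - debt > 3

# Machine learning alternative: the decision boundary is learned from data.
model = LogisticRegression(max_iter=1000).fit(X, y)

print(traditional_rule(4, 0.5))         # decision by the hand-written rule
print(model.predict([[4, 0.5]])[0])     # decision by the learned model
```

Both approaches answer the same question, but in the first one the boundary was chosen by a person, while in the second it was fitted to the training data, so any bias now lives in that data rather than in a line of code.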
Returning to the loan example: when an algorithm analyzes an applicant's repayment capacity, we must guarantee its impartiality and transparency. If we do not manage these biases correctly, we could end up with an algorithm that discriminates unethically, for example by gender, race, age or place of residence.
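One simple way to surface this kind of discrimination is to compare approval rates across groups. The sketch below uses invented outcomes for two hypothetical applicant groups and the "four-fifths rule", a common heuristic (not a legal standard everywhere) that flags a ratio of approval rates below 0.8.

```python
import numpy as np

# Invented loan outcomes for two groups of applicants (1 = approved).
group_a = np.array([1, 1, 1, 1, 1, 1, 1, 0, 0, 0])  # 70% approved
group_b = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])  # 40% approved

rate_a, rate_b = group_a.mean(), group_b.mean()
# Four-fifths rule heuristic: a ratio below 0.8 is a red flag.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"A={rate_a:.0%}  B={rate_b:.0%}  ratio={ratio:.2f}  flagged={ratio < 0.8}")
```

A check like this says nothing about *why* the rates differ; that is exactly where the explanation techniques described next come in.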
How can we solve these problems? One possible answer is so-called Explainable Artificial Intelligence, or XAI.
What is explainable AI?
An AI is explainable when we can understand how and why the algorithm makes its decisions or predictions, and when we can justify the results it produces.
There are two types of explanations: global and local.
First come global explanations, which describe the behavior of the algorithm in general. On a streaming platform like Netflix or Prime Video, the recommendation engine won't suggest an excessively long movie just before bedtime. In the loan example, the algorithm will assume that a customer who already carries a high debt will struggle to take on more payment commitments.
On the other hand we have local explanations, which describe the behavior of the algorithm in specific, personalized cases. Here we take into account the individual profile of each user, which may deviate from the global pattern: our Netflix user may be a night owl who happily watches a long movie into the early hours, or our bank client may have a stable, secure job and would have no trouble repaying the debt.
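The global/local distinction is easiest to see on a linear model, where both kinds of explanation can be read off directly. The feature names and synthetic data below are invented for illustration; tools such as SHAP generalize this kind of additive attribution to more complex models.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
features = ["income", "job_stability", "debt"]

# Synthetic applicant data with a known, noise-free linear relationship,
# so the learned coefficients are easy to verify.
X = rng.normal(size=(300, 3))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] - 3.0 * X[:, 2]

model = LinearRegression().fit(X, y)

# Global explanation: the coefficients describe the model's behavior for
# everyone ("more debt lowers the score, a stable job raises it").
for name, coef in zip(features, model.coef_):
    print(f"global {name}: {coef:+.2f}")

# Local explanation: the additive contribution of each feature to ONE
# client's prediction, here simply coefficient * feature value.
client = np.array([0.5, 1.5, 2.0])   # modest income, stable job, high debt
for name, contrib in zip(features, model.coef_ * client):
    print(f"local  {name}: {contrib:+.2f}")
```

For this client the global story ("debt always hurts") still holds, but the local breakdown shows how much each feature contributed to *their* score in particular, which is exactly the per-case view the bank example calls for.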
To take advantage of the opportunities AI offers, we must be able to explain the decisions behind an algorithm; only then can we offer our clients unbiased advice that helps them overcome their difficulties and improve their quality of life.