Are we moving towards an Explainable Artificial Intelligence?
As we gain predictive accuracy through the use of complex models, we lose the ability to explain both the variables and the results. But AI has no future if its decisions cannot be explained and understood.
In highly regulated industries, such as finance, insurance, telecommunications, health, and pharmaceuticals, black-box models can be very problematic because there is no transparency about why decisions are being made.
Interpretability is increasingly essential within Artificial Intelligence. This paper examines the different methods and approaches for achieving interpretable or, at least, explainable models.
xAI can have a great impact on companies that are starting their journey towards AI, and it can even be fundamental for companies already experienced with more complex models, deepening the application of AI and reaching horizons not previously attainable.