AI in Production: Does AI Owe You an Explanation?

Alexander Engels
January 21, 2022
4 min read

AI is increasingly used in production because it enables efficiency gains at the machine, process, and organizational levels. Methods such as neural networks are being deployed more and more often. While they learn complex relationships particularly well, they cannot explain why they reach a particular decision. This is a typical "black box" characteristic.

The traceability of decisions is particularly important in production, because incorrect decisions can lead to high costs or, in the worst case, to dangerous situations for people. To improve an AI effectively and fix errors, it must therefore present its results in a way that users can understand.

Based on the study "Explainable AI in Practice" published in March 2021 by the AI Progress Center in the "Learning Systems" series, this article provides an overview of how you can find suitable explanation methods for your AI use case.

Why should AI in production be explainable?

Every AI method falls into one of two classes: white box or black box. In a white box method, all processing steps are transparently traceable; a black box method requires additional effort to understand its results. Neural networks in particular, for all their popularity, are black box methods and cannot be understood easily.

In an economically or safety-critical application context, this is a problem: companies need AI systems whose results are traceable. According to a study by IIT Berlin, explainability is of great importance in production, and in a 2020 Bitkom survey, 85% of participants favored a thorough review of AI systems in Germany. Explainable AI can therefore significantly lower the hurdle to deploying AI successfully in production.

In production use, explainability can also help to extract the learned AI knowledge and the chains of effects in the application field and make them usable for users. This can lead to entirely new improvement opportunities.

Finding the right explanation method

Suppose your AI use case requires results that are not only accurate but also traceable. Testing various white box and black box methods shows that a neural network cannot be beaten on accuracy. An additional explanation method is therefore needed that presents the network's results in an understandable way.
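One widely used, model-agnostic way to add explanations on top of a black box model is permutation feature importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The study does not prescribe a specific technique; this is a minimal sketch chosen for illustration, with a toy rule-based classifier standing in for a trained neural network.

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Score each feature by how much shuffling it degrades accuracy.

    Model-agnostic: `predict` can be any black box, including a
    trained neural network's prediction function.
    """
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)  # accuracy on intact data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])    # destroy feature j's information
            drops.append(baseline - np.mean(predict(X_perm) == y))
        importances[j] = np.mean(drops)
    return importances

# Toy stand-in for a trained black box classifier: only feature 0 matters.
predict = lambda X: (X[:, 0] > 0).astype(int)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)

imp = permutation_importance(predict, X, y)
print(imp)  # feature 0 clearly dominates; features 1 and 2 are irrelevant
```

The output is a weight per input feature, which is exactly the kind of "weighting of individual influencing factors" display discussed below. Libraries such as SHAP or scikit-learn's `permutation_importance` offer production-ready versions of this idea.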

The selection of possible explanation methods is large, but a systematic approach lets your AI developers quickly narrow down the best candidates. In the first step, filter out every method whose requirements on the type of data processed (time series data, spatial data, etc.) or whose form of explanation (weights for individual influencing factors, the influence of certain combinations of factors, etc.) does not fit the use case.
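The first filtering step can be thought of as matching a catalogue of methods against the use case's data type and desired explanation form. The following sketch is purely illustrative: the catalogue entries and property labels are hypothetical placeholders, not taken from the study.

```python
# Hypothetical catalogue of explanation methods and their properties.
CANDIDATES = [
    {"name": "Saliency maps",      "data": {"image"},                  "output": "feature_weights"},
    {"name": "SHAP",               "data": {"tabular", "time_series"}, "output": "feature_weights"},
    {"name": "Counterfactuals",    "data": {"tabular"},                "output": "example_based"},
    {"name": "Attention heatmaps", "data": {"time_series", "text"},    "output": "feature_weights"},
]

def shortlist(data_type, output_type):
    """Step 1: drop every method whose data or explanation type doesn't fit."""
    return [m["name"] for m in CANDIDATES
            if data_type in m["data"] and m["output"] == output_type]

print(shortlist("time_series", "feature_weights"))  # → ['SHAP', 'Attention heatmaps']
```

Only the surviving shortlist then moves on to the prototype evaluation in step 2, which keeps the expensive part of the selection small.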

In the second step, the remaining candidates are implemented as prototypes and evaluated for practical suitability, using both objective performance metrics and feedback from domain experts. Final certainty comes from user studies in the actual field of application. These steps can be time- and cost-intensive, but they minimize the far more painful risk that an expensively developed AI works technically yet creates no added value in day-to-day operations.

Explainability as a key factor in human-centered AI

When AI is newly introduced into an environment where people play the main role, those people typically decide whether the AI can ever realize its potential. In the worst case, AI is treated like a troublemaker and, figuratively speaking, "mobbed" — that is, ignored or even sabotaged. To head off a potential conflict, it can make sense to introduce human-centered AI into production. In human-centered AI, explainability of AI results is a central factor. 

In our experience, users do not assume responsibility for work results that are produced in collaboration with an AI whose functionality they do not understand or cannot fully control. The need for understanding is often role-dependent and thus places different demands on explainability. We align ourselves here with the recommendation of the study "Human-Centered AI Applications in Production" by the AI Progress Center, to involve users in the concept phase so that their needs for the AI application are appropriately considered early on.

Conclusion

Whether an AI owes its users an explanation certainly depends on the concrete circumstances. But it can be assumed that the need for explanation increases the deeper AI functionality intervenes in day-to-day work and the greater its influence on work success and work safety. The development and implementation of human-centered AI is therefore a key issue for sustainable value creation through AI in production and is high on the list at aiXbrain. 
