AI systems are becoming increasingly intelligent and capable of performing tasks with little or no human support. However, this increase in behavioral complexity has come at the cost of transparency: systems perform well but cannot explain their outputs. The field of eXplainable AI (XAI) studies and develops methods and techniques that equip intelligent systems with the ability to explain their behavior in a way that fosters human understanding and trust, leading to more effective human-AI teamwork.
In our HART team, we develop and evaluate various methods for creating intelligent systems that can explain their decisions within the domains of Healthcare and Defense. One example is methods that enable explanations in terms of causal relationships: a human team member might ask a system that supports planning what will happen if particular situational circumstances differ from what was expected (e.g., the travel distance is much larger). The system can respond by informing the human about the consequences of this new information for the current plan and by providing alternative plans.
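The travel-distance scenario above can be illustrated with a minimal what-if sketch. All names, plan options, and numbers here are hypothetical illustrations, not the team's actual system: a `what_if` query re-evaluates each candidate plan under a changed circumstance and reports the consequences, from which feasible alternatives can be suggested.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    speed_kmh: float      # assumed travel speed
    fuel_range_km: float  # maximum distance without resupply

def what_if(plans, distance_km, deadline_h):
    """Hypothetical what-if query: given a changed travel distance,
    report each plan's estimated travel time and whether it remains
    feasible (meets the deadline and stays within range)."""
    report = []
    for p in plans:
        time_h = distance_km / p.speed_kmh
        feasible = time_h <= deadline_h and distance_km <= p.fuel_range_km
        report.append((p.name, round(time_h, 1), feasible))
    return report

# Illustrative candidate plans.
plans = [
    Plan("ground convoy", speed_kmh=60, fuel_range_km=400),
    Plan("helicopter", speed_kmh=220, fuel_range_km=600),
]

# "What if the travel distance is much larger than expected?"
print(what_if(plans, distance_km=500, deadline_h=6))
# → [('ground convoy', 8.3, False), ('helicopter', 2.3, True)]
```

The output explains the consequence of the new information (the ground convoy no longer meets the deadline or range constraint) and surfaces an alternative plan that does, mirroring the explanatory dialogue described above.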