Research on AI: New ways to improve comprehensibility in the explanation process!

The Collaborative Research Center “Constructing Explainability” at Paderborn University investigates extended explanatory models for AI.

On September 10, 2025, the Collaborative Research Center/Transregio 318 “Constructing Explainability” at the universities of Bielefeld and Paderborn will take stock after four years of intensive research. Under the direction of Prof. Dr. Katharina Rohlfing and Prof. Dr. Philipp Cimiano, the center pursues the goal of researching the understandability and explainability of artificial intelligence (AI). Particular attention is paid to involving users in the explanation process, an aspect that gives new impetus to the research. The collaboration comprises a total of 20 projects and six synthesis groups; its first funding phase ends at the end of the year.

A key result of the research is the finding that current “explainable AI” systems often treat explanations as a one-way street, whereas understanding is increasingly seen as a mutual exchange. The researchers therefore developed a new framework for “Social Explainable AI” (sXAI), which focuses on adapting explanations to user reactions in real time. These developments are driven by interdisciplinary teams combining computer science, linguistics and psychology.
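
To make the idea concrete, the following minimal sketch shows how an explanation loop could adapt to user reactions turn by turn. The class, function and signal names are purely illustrative assumptions and do not represent the actual sXAI framework.

```python
# Illustrative sketch only: a minimal loop in the spirit of "Social Explainable
# AI" (sXAI), where the explainer adapts to the partner's reactions turn by
# turn. All names here (ExplanationState, adapt, the feedback labels) are
# assumptions for illustration, not the research center's framework.
from dataclasses import dataclass

@dataclass
class ExplanationState:
    detail_level: int = 1      # how fine-grained the current explanation is
    understood: bool = False   # whether the partner has signaled understanding

def adapt(state: ExplanationState, feedback: str) -> ExplanationState:
    """Choose the next explanation move based on the partner's reaction."""
    if feedback == "confusion":        # partner did not follow: add detail
        state.detail_level += 1
    elif feedback == "understood":     # partner signals comprehension
        state.understood = True
    elif feedback == "question":       # partner asks a follow-up question
        pass                           # keep the level, answer the question first
    return state

# Example dialogue: the explanation is co-constructed rather than a monologue.
state = ExplanationState()
for feedback in ["question", "confusion", "understood"]:
    state = adapt(state, feedback)
print(state)   # ExplanationState(detail_level=2, understood=True)
```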

Analysis of the explanation process

Rohlfing's team examined real conversations to determine how the explanation process unfolds. It turned out that explanations often begin as a monologue, but explainees become actively involved by asking questions or signaling confusion. The analysis also considered how language and gestures are used to demonstrate understanding. These findings showed that the concept of “scaffolding”, that is, gradual support for learning, is helpful in optimizing the explanation process.

An example of such a development is the SNAPE system, designed in project A01. It reacts sensitively to its conversation partner's reactions and adapts its explanations individually to the respective situation. Building on this, researchers are increasingly focusing on cooperation, social appropriateness and the explanation process itself in order to make AI systems both more effective and more user-friendly.
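
The scaffolding principle behind such systems can be sketched in a few lines: support is faded out as the explainee shows understanding and stepped back up when confusion appears. The support levels and the next_support function below are hypothetical illustrations, not the SNAPE implementation.

```python
# Toy sketch of the scaffolding idea: support is gradually withdrawn as the
# explainee demonstrates understanding and stepped back up otherwise. The
# levels and next_support below are hypothetical, not the SNAPE system.
SCAFFOLD_LEVELS = [
    "full walkthrough with worked examples",   # maximum support
    "step-by-step explanation",
    "short summary",
    "keyword hint only",                       # minimal support
]

def next_support(level: int, showed_understanding: bool) -> int:
    """Fade the scaffold out after signs of understanding, otherwise reinforce it."""
    if showed_understanding:
        return min(level + 1, len(SCAFFOLD_LEVELS) - 1)   # reduce support
    return max(level - 1, 0)                              # increase support again

level = 0
for signal in [True, True, False, True]:   # observed comprehension signals
    level = next_support(level, signal)
    print(SCAFFOLD_LEVELS[level])
```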

Legal and technical challenges

The challenges are diverse, especially in the legal context. A growing body of legislation on companies' responsibility for their AI systems, such as the General Data Protection Regulation (GDPR) in Europe, requires transparent information about decision-making processes. Explainable Artificial Intelligence (XAI) aims to make these processes more understandable in order to promote trust and facilitate user interaction.

Techniques for achieving explainability, such as feature attribution, counterfactual explanations and saliency maps, are already established in research. But the complexity of AI models remains a hurdle: many of them are still perceived as “black boxes”, which significantly limits the transparency of decision-making. In addition, the need for clear guidelines to promote the responsible use of AI is becoming increasingly apparent.
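
As a concrete example of one of these techniques, the following sketch computes a feature attribution with permutation importance in scikit-learn; the model and dataset are generic placeholders and are not drawn from the research center's projects.

```python
# Minimal sketch of feature attribution via permutation importance, one of the
# established XAI techniques mentioned above. The dataset and model are generic
# scikit-learn examples, not taken from the Collaborative Research Center.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much the test score drops:
# a model-agnostic estimate of how strongly each feature drives predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: "
          f"{result.importances_mean[idx]:.3f} ± {result.importances_std[idx]:.3f}")
```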

Overall, research in the Collaborative Research Center shows that the explainability of AI has not only technical but also legal and social dimensions. The view of explainability needs to evolve in order to encourage active user interaction and address the diverse needs of end users.