Trust in AI: This is how artificial intelligence becomes understandable and safe!
With TRR 318, the University of Paderborn is researching understandable AI explanations in order to promote users' trust and their ability to act.

Artificial intelligence (AI) has found its way into many areas of life, whether in the form of voice assistants or decision-making systems. Despite their widespread use, however, the decision-making processes of these systems are often difficult to understand. The Collaborative Research Center/Transregio (TRR) 318 “Constructing Explainability” at the universities of Paderborn and Bielefeld conducts intensive research into how explanation processes can be made more understandable for users. In particular, the “Understanding” synthesis group within TRR 318 asks the central question of what information users need in order to better understand an AI’s decisions (uni-paderborn.de).
Prof. Dr. Heike M. Buhl from the University of Paderborn emphasizes that the goal of such explanations is to give people a clear understanding. This includes the distinction between conceptual knowledge, i.e. simply knowing a piece of information, and agency, i.e. knowing how to apply that knowledge in practice. The research also examines the dynamics of superficial and deep understanding, both of which depend heavily on users' prior knowledge and interest (uni-paderborn.de).
The need for explainability
The concept of explainable artificial intelligence (XAI) is becoming increasingly important. XAI comprises procedures and methods that enable users to understand the results of machine-learning algorithms and to build trust in these systems. AI models are often viewed as “black boxes” whose decision-making processes remain opaque even to their developers. It is therefore crucial that companies understand and continuously monitor the decision-making processes of their AI systems in order to avoid bias and performance drift. XAI also supports the characterization of accuracy, fairness and transparency, which is essential for responsible deployment (ibm.com).
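To give a concrete impression of what such monitoring can look like, the following minimal sketch computes one simple group fairness indicator, the demographic parity difference, over a batch of model predictions. The choice of metric, the toy data and the variable names are assumptions for illustration only; they are not prescribed by the article or by IBM's description of XAI.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between two groups.

    y_pred : array of 0/1 model predictions
    group  : array of 0/1 group membership (e.g. a protected attribute)
    A value near 0 means both groups receive positive decisions at similar
    rates; larger values flag potential bias that should be investigated.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical monitoring snapshot: predictions and group labels
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
grp   = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(f"Demographic parity difference: {demographic_parity_difference(preds, grp):.2f}")
```

In a production setting, such indicators would typically be tracked continuously alongside accuracy so that drifts in either dimension become visible early.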
The benefits of explainability are manifold. They include ensuring system functionality, compliance with legal requirements and the ability to contest decisions. Important techniques within explainable AI include decision traceability and the improvement of prediction accuracy. Methods such as Local Interpretable Model-Agnostic Explanations (LIME) and DeepLIFT are used to attribute individual predictions to input features or neuron activations and thus contribute to a better understanding of AI decisions (ibm.com).
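To make the idea behind LIME more tangible, the sketch below illustrates the general principle of a local surrogate explanation: the prediction of an arbitrary “black box” model is approximated around a single input by a simple linear model fitted on perturbed samples, whose weights then serve as local feature attributions. The dataset, model and parameters are assumptions chosen for illustration; the actual LIME library additionally uses distance weighting, feature selection and other refinements.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge
from sklearn.datasets import load_breast_cancer

# Train an arbitrary "black box" classifier (any probabilistic model would do)
data = load_breast_cancer()
X, y = data.data, data.target
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def local_surrogate_explanation(model, x, n_samples=1000, scale=0.1, rng=None):
    """Approximate the model around one instance x with a linear surrogate.

    Perturbs x with Gaussian noise, queries the black box for class-1
    probabilities, and fits a ridge regression whose coefficients act as
    local feature attributions (a simplified LIME-style explanation).
    """
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(0.0, scale * X.std(axis=0), size=(n_samples, x.shape[0]))
    samples = x + noise
    targets = model.predict_proba(samples)[:, 1]
    surrogate = Ridge(alpha=1.0).fit(samples, targets)
    return surrogate.coef_

# Explain one prediction and show the five most influential features
x0 = X[0]
weights = local_surrogate_explanation(black_box, x0)
top = np.argsort(np.abs(weights))[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]:<25} weight = {weights[i]:+.4f}")
```

The surrogate's weights only describe the model's behavior in the neighborhood of the chosen instance, which is precisely the kind of locally faithful, human-readable explanation that XAI methods aim to provide.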
Trust in AI systems
An equally central aspect of the discussion about AI is trust in these technologies. The German Research Center for Artificial Intelligence (DFKI) is currently working on methods for testing and certifying intelligent systems. The aim is to reconcile technological excellence with social responsibility. This is particularly important in safety-critical areas such as healthcare or finance, where transparency is a cornerstone of trust (dfki.de).
The DFKI develops testing criteria and certification procedures that aim to ensure trustworthy AI models. Initiatives such as MISSION AI, which strengthens trustworthy systems through efficient auditing standards, complement these efforts. In addition, CERTAIN promotes standards for the validation and certification of AI systems in Europe. These developments are necessary to enable companies and institutions to deploy trustworthy technologies and ultimately increase the acceptance of AI (dfki.de).
Overall, it is clear that the continued development of trustworthy AI requires both scientific excellence and social responsibility. Europe has the opportunity to become a global leader with a human-centered approach to AI development. Active participation in shaping these technologies is necessary in order to meet the challenges of the future and win the trust of users.