The secrets of AI: Why explainability is crucial!
Prof. Dr. Jürgen Bajorath from the University of Bonn examines the challenges and opportunities of explainable AI in research.

On April 4, 2025, the importance of explainable artificial intelligence (AI) is once again being highlighted in scientific circles. Learning algorithms have made significant progress in recent years, but they face a central problem: their lack of transparency. This particularly affects decision-making processes, which are often perceived as a “black box”. In vehicle recognition, for example, it is often unclear which features the algorithms base their decisions on. Prof. Dr. Jürgen Bajorath, who heads the AI in the Life Sciences area at the Lamarr Institute and the Life Science Informatics program at the University of Bonn, emphasizes that these models should not be trusted blindly.
Research into the explainability of AI is central to understanding when algorithms can be relied on in their decision-making. Explainability describes the ability of an AI system to make transparent which criteria were decisive for its results. In chemistry and drug discovery, these requirements are particularly challenging because chemical language models often suggest new molecules without explaining why those suggestions are made.
The concept of explainable AI
The Explainable AI (XAI) initiative aims to unravel the often complex decision-making processes of AI algorithms. When an AI system rejects a loan application, the result is often frustration and distrust because the reason for the decision remains unclear. XAI can help overcome these challenges. Key aspects are transparency and trust, especially in critical areas such as healthcare, banking and autonomous driving, where decisions can have serious consequences for people.
XAI methods include techniques that identify the features most influential for a prediction, as well as local surrogate models that explain individual predictions (such as LIME). These methods are important for detecting and minimizing potential biases in AI models. Current applications of XAI range from explanations of medical diagnoses to transparent decisions in manufacturing processes.
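To make this concrete, the following is a minimal sketch of a local explanation with LIME, assuming the open-source lime and scikit-learn Python packages are installed; the breast-cancer dataset and random-forest classifier are illustrative stand-ins, not the models discussed in this article.

```python
# Minimal sketch: explaining a single prediction with LIME (illustrative setup).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

# Illustrative data and model; any tabular classifier with predict_proba works.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# LIME fits a simple, interpretable surrogate model around one prediction.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)

# Lists the five features that most influenced this single prediction.
print(explanation.as_list())
```

Because LIME perturbs the input around the chosen example and fits a weighted linear model to the perturbed predictions, the reported feature weights describe only that one prediction, not the behaviour of the model as a whole.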
Challenges of explainability
Despite this progress, a number of challenges remain. There is a trade-off between the model complexity that often drives performance, which tends to produce a “black box”, and the explainability of its decisions. Large language models (LLMs) are also affected, as their complex structures can make a simple explanation impossible. Furthermore, development resources for user-friendly XAI tools need to be allocated to promote wider adoption.
Bajorath warns that the features an AI deems relevant do not necessarily have a causal influence on the desired outcome. To validate proposed molecular structures, chemical experiments are needed to establish whether those features actually matter. Plausibility checks are therefore indispensable.
In summary, the explainability and transparency of AI systems are not just a technical requirement but a necessity for responsible and ethical use in society. As ranktracker.com notes, clear explanations promote trust and are crucial for compliance with legal standards.
The use of adaptive algorithms therefore has the potential to significantly advance research in the natural sciences. Nevertheless, this requires a deep understanding of their strengths and weaknesses to ensure that developments are both ethical and effective. Fraunhofer describes how relevant explainability methods can help not only to improve technologies, but also to ensure the transfer of responsibility in decision-making processes.
Given the diverse areas of application, the discussion about the explainability of AI decisions remains a central topic in the scientific community.