Students at TU Braunschweig are revolutionizing explainable AI!

In 2025, TU Braunschweig hosted the Deep Learning Lab, in which students developed explanation models for AI and put them into practice.

The Deep Learning Lab took place at the Technical University of Braunschweig in the summer semester of 2025, marking the eighth edition of the event. Students worked intensively on developing methods to explain deep learning models. A central goal was to make neural networks, which are often criticized as “black boxes,” more transparent, especially in application areas such as medical image analysis and autonomous driving, where the traceability of decisions is essential. According to TU Braunschweig, a particular focus was the visualization of object recognition, one example being the identification of a bus by its contours, windshield and front apron.

In the context of this discussion about the explainability of artificial intelligence (AI), the research field of “Explainable AI” (XAI) is of crucial importance. Fraunhofer IESE emphasizes that the interpretability of models matters beyond its ethical implications, since it is key to fully exploiting the potential of these technologies. The students in the Deep Learning Lab accordingly worked on developing “saliency maps,” which act as heatmaps and highlight the image areas most important for a classification. The technique is particularly useful for diagnosing errors and for driving innovative approaches in AI.
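To make the idea concrete, here is a minimal sketch of a gradient-based saliency map in PyTorch. The pretrained ResNet-18, the preprocessing pipeline and the function name vanilla_saliency are illustrative assumptions rather than the students' actual setup; any differentiable image classifier could be substituted.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Illustrative assumption: a pretrained ResNet-18 stands in for the
# students' classifier; any differentiable image model would work.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def vanilla_saliency(image_path: str) -> torch.Tensor:
    """Gradient of the top class score w.r.t. the input pixels."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    x.requires_grad_(True)

    scores = model(x)                       # shape: (1, num_classes)
    top_class = scores.argmax().item()
    scores[0, top_class].backward()         # populates x.grad

    # Maximum absolute gradient over the color channels: one heat value per pixel.
    saliency = x.grad.abs().max(dim=1).values.squeeze(0)
    return saliency / saliency.max()        # normalize to [0, 1]
```

The resulting per-pixel map can be rendered as a heatmap overlay on the original image, which is exactly the kind of visualization the lab used to show which regions drive a classification.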

Practical applications and successes

The participants in the Deep Learning Lab worked with the PASCAL VOC 2012 image data set and evaluated the models' explanations based on two criteria: the similarity to human explanations and the comprehensibility of the decisions made. The efficiency of the models was also important, and a special prize, the “Environmental Prize”, was awarded for particularly low computing requirements.
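The lab's exact scoring protocol is not published, so the following sketch only illustrates plausible proxies for the stated criteria: intersection over union between a thresholded saliency map and a human-annotated PASCAL VOC segmentation mask as a stand-in for similarity to human explanations, and a simple timer around the explanation call as a stand-in for computing requirements. The helper names saliency_iou and timed are assumptions introduced for this example.

```python
import time
import numpy as np
import torch

def saliency_iou(saliency: np.ndarray, object_mask: np.ndarray,
                 threshold: float = 0.5) -> float:
    """Agreement between a thresholded saliency map and a human-annotated
    object mask (e.g. a PASCAL VOC segmentation), as intersection over union."""
    pred = saliency >= threshold               # region the model "points at"
    gt = object_mask.astype(bool)              # region a human annotator marked
    union = np.logical_or(pred, gt).sum()
    return float(np.logical_and(pred, gt).sum() / union) if union else 0.0

def timed(fn, *args):
    """Wall-clock time of one explanation call; synchronizes if CUDA is used,
    so GPU work is fully counted (relevant for an efficiency-style prize)."""
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    result = fn(*args)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return result, time.perf_counter() - start
```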

The winning team, consisting of Fabian Kollhoff, Jennifer Ly and Aruhan, received the main prize of 600 euros for their impressive results. A further prize of 450 euros went to Mohammad Rezaei Barzani and Nils-André Forjahn for keeping GPU computing time low while maintaining high performance. At the final event on July 11, 2025, the participants presented their results and exchanged ideas with sponsors and experts. Discussions about the future of explainable AI were another highlight of the event.

Insights into technology

Creating saliency maps requires a deep understanding of the underlying technologies. These maps measure the spatial support of a given class in an image and are an essential tool for understanding what the convolutional layers of a vision model respond to. According to Medium, saliency maps highlight crucial image areas and offer valuable insights into how the selected models work. One example is the development of a binary classification model to differentiate between cats and dogs, which achieved an accuracy of 87% thanks to sophisticated techniques.
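As an illustration of how the spatial support of a class can be measured in a CNN's last convolutional stage, the sketch below applies a Grad-CAM-style class activation map to a pretrained ResNet-18 in PyTorch. Grad-CAM is one common technique for this; the actual method used in the cited example, and the cat/dog model itself, are not specified in the article, so everything here is an assumption for illustration.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative assumption: Grad-CAM on a pretrained ResNet-18; the cited
# example's actual model and method are not described in the article.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
store = {}

def _grab(_, __, output):
    # Save the last-stage feature maps and attach a hook for their gradient.
    store["act"] = output
    output.register_hook(lambda grad: store.update(grad=grad))

model.layer4.register_forward_hook(_grab)   # last convolutional stage

def grad_cam(x: torch.Tensor, class_idx: int) -> torch.Tensor:
    """Class activation map for `class_idx`, upsampled to the input size."""
    scores = model(x)                        # forward pass fills store["act"]
    model.zero_grad()
    scores[0, class_idx].backward()          # backward pass fills store["grad"]

    weights = store["grad"].mean(dim=(2, 3), keepdim=True)   # channel importance
    cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                        align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()              # normalize to [0, 1]
```

Upsampled to the input resolution, the resulting heatmap can be overlaid on the photo to show which regions carried the decision, for example the contours and windshield of a bus.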

Despite the challenges inherent in developing interpretable AI systems, these developments demonstrate not only technical progress but also a clear path toward the ethical use of artificial intelligence and the full exploitation of its areas of application. The students' creative approach to these problems illustrates the dynamism of the research field and the constant search for new, understandable approaches in AI. At the closing event, the participants demonstrated their deep commitment to this exciting and forward-looking topic through their presentations and discussions.