Revolution in neuroscience: AI decodes the human brain!
Researchers at FU Berlin are investigating the connection between large language models and human visual understanding; the study was published in "Nature Machine Intelligence".

A new study by a team led by Prof. Dr. Adrien Doerig of the Free University of Berlin shows that large language models (LLMs) are able to predict how the human brain responds to visual stimuli. The research was published in the renowned journal Nature Machine Intelligence under the title "High-level visual representations in the human brain are aligned with large language models", reports the Free University of Berlin.
The investigation concerns the interplay between human visual perception and the representations generated by LLMs, such as those behind ChatGPT. Until now, there has been a lack of effective tools for analyzing the highly abstract meanings that people derive from visual impressions. The research team addressed this by extracting "semantic fingerprints" from ordinary scene descriptions, which were then used to model functional MRI data collected while participants viewed everyday images.
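As a rough, hypothetical illustration of what such a pipeline could look like in code (not the authors' actual method): scene captions are embedded with an off-the-shelf sentence-embedding model to obtain "semantic fingerprints", and a regularized linear encoding model maps those fingerprints onto voxel responses. The embedding model, the extra placeholder captions, and the random stand-in fMRI data below are assumptions for illustration only.

```python
# Hypothetical sketch: "semantic fingerprints" as caption embeddings,
# mapped to fMRI voxel responses with a linear encoding model.
# Embedding model, placeholder captions, and random fMRI data are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

captions = [
    "children playing Frisbee in the schoolyard",   # examples from the article
    "a dog standing on a sailboat",
    "a cyclist waiting at a traffic light",          # placeholder captions
    "two people cooking in a small kitchen",
    "a cat sleeping on a windowsill",
    "a crowded market stall with fruit",
]

# Stand-in fMRI data: one voxel-response vector per viewed image (n_images x n_voxels).
rng = np.random.default_rng(0)
voxel_responses = rng.standard_normal((len(captions), 5000))

# 1) Extract "semantic fingerprints" from the scene descriptions.
embedder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed embedding model
fingerprints = embedder.encode(captions)             # (n_images, embedding_dim)

# 2) Fit a regularized linear encoding model: fingerprint -> voxel responses.
X_train, X_test, y_train, y_test = train_test_split(
    fingerprints, voxel_responses, test_size=0.2, random_state=0
)
encoder = Ridge(alpha=1.0).fit(X_train, y_train)

# 3) Check how well the fingerprints predict held-out brain activity.
print("encoding R^2 on held-out images:", encoder.score(X_test, y_test))
```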
Innovative methods for data analysis
The images examined included scenes such as "children playing Frisbee in the schoolyard" and "a dog standing on a sailboat." The representations generated by the LLMs yielded accurate predictions of brain activity and even allowed inferences about what had been seen. These methods proved more effective than many current image-classification systems, underscoring the relevance and potential applications of LLMs in neuroscience.
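One way to read "inferences about what had been seen" is as an identification test: for a held-out brain scan, predict the activity pattern for every candidate caption and pick the best match. The sketch below continues the hypothetical encoder from the previous snippet and is an assumption about how such decoding could work, not the study's reported procedure.

```python
# Hypothetical identification test, continuing the encoding sketch above:
# match a held-out brain scan to the candidate caption whose predicted
# voxel pattern correlates best with the measured pattern.
import numpy as np

def identify_seen_caption(measured_pattern, candidate_fingerprints, encoder):
    """Return the index of the candidate fingerprint whose predicted
    voxel pattern best matches the measured pattern (Pearson correlation)."""
    predicted = encoder.predict(candidate_fingerprints)          # (n_candidates, n_voxels)
    scores = [np.corrcoef(measured_pattern, p)[0, 1] for p in predicted]
    return int(np.argmax(scores))

# Example, reusing X_test / y_test / encoder from the encoding sketch:
best = identify_seen_caption(y_test[0], X_test, encoder)
print("identified caption index for the first held-out scan:", best)
```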
In addition, the researchers investigated whether computer vision models can predict these semantic fingerprints directly from images, which could further advance the field. The findings are highly relevant not only for neuroscience but also for the development of intelligent systems. According to Fraunhofer IKS, the importance of AI-based cognitive systems is becoming increasingly clear, as these technologies are indispensable in a growing range of application areas, including autonomous vehicles.
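The follow-up idea mentioned above, predicting semantic fingerprints directly from pixels, could be sketched as a second regression: features from a pretrained vision model are mapped onto the caption embeddings. The vision backbone (ResNet-18), the random stand-in images, and the placeholder fingerprints below are assumptions, not the models used in the study.

```python
# Hypothetical sketch: predict "semantic fingerprints" directly from images
# by mapping pretrained vision features onto caption embeddings.
# Backbone choice, random stand-in images, and placeholder fingerprints are illustrative only.
import numpy as np
import torch
from torchvision.models import resnet18, ResNet18_Weights
from sklearn.linear_model import Ridge

n_images, embedding_dim = 6, 384              # matches the fingerprints in the earlier sketch
images = torch.rand(n_images, 3, 224, 224)    # stand-in for real photographs
                                              # (real photos would need the weights' preprocessing)

# Pretrained vision backbone; drop the classification head to obtain features.
backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

with torch.no_grad():
    visual_features = backbone(images).numpy()        # (n_images, 512)

# Placeholder fingerprints; in practice these would be the caption embeddings.
fingerprints = np.random.default_rng(0).standard_normal((n_images, embedding_dim))

# Linear map from visual features to semantic fingerprints.
image_to_fingerprint = Ridge(alpha=1.0).fit(visual_features, fingerprints)
predicted_fingerprints = image_to_fingerprint.predict(visual_features)
print("predicted fingerprint shape:", predicted_fingerprints.shape)
```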
Challenges in AI and high safety requirements
However, the complexity of large language models and the continuing opacity of such systems pose significant challenges. David Bau, a computer scientist at Northeastern University, notes that traditional software allows problems to be traced and identified, whereas AI often acts as a "black box" whose exact workings are difficult to understand. The research field of explainable AI (XAI) is therefore becoming increasingly important for understanding the internal logic and decision-making of AI systems, reports Spektrum.de.
Since LLMs are used for complex tasks such as medical consultations or programming, it is essential that their decisions be understandable. The need for explanations is especially pressing in high-risk applications. Bau points out that companies such as OpenAI keep their source code secret, which hinders transparent research and thus limits the development of safe, explainable AI systems.
In the future, the synthesis of insights from brain research and AI development, as demonstrated in the current study, could be instrumental in bridging the gap between human and machine understanding. These synergies open up new perspectives for both scientific disciplines and are on the threshold of groundbreaking advances in the development of intelligent systems.