AI against disinformation: Dr. Vera Schmitt's fight for the truth!
At TU Berlin, a research group led by Dr. Vera Schmitt is developing AI-supported tools to combat disinformation and thereby strengthen democratic processes.

In a world where disinformation spreads ever more easily, the XplaiNLP research group at TU Berlin marks an important step in meeting this challenge. Under the leadership of Dr. Vera Schmitt, the group has made it its mission to strengthen society's resilience to disinformation. Backed by more than 4 million euros in third-party funding, her research aims to develop tools that help journalists and the public alike better understand disinformation narratives.
The group's work focuses on two main projects: VeraXtract and news-polygraph. The VeraXtract project analyzes complex disinformation narratives, not only to identify false information but also to decipher the narratives behind it. An AI-supported “narrative monitoring tool” is to play a central role here: the technology provides understandable explanations for its analyses and thus offers users transparency.
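The article does not describe how the tool works internally. As a purely illustrative, minimal sketch of the general idea of an explainable classifier, a model can pair each prediction with the features that pushed it toward that decision. Everything below (the toy texts, the labels, the bag-of-words model, the `explain` function) is a hypothetical stand-in, not the group's actual method:

```python
# Minimal sketch of an explainable narrative classifier. The XplaiNLP
# tool is not public; data, labels, and method here are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical toy corpus: 1 = known disinformation narrative, 0 = neutral.
texts = [
    "secret elites control the media and hide the truth",
    "the election was stolen by shadowy forces",
    "the city council approved the new bus schedule",
    "researchers published a peer-reviewed climate study",
]
labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

def explain(text: str, top_k: int = 3):
    """Return the predicted label plus the words that contributed most,
    as a simple stand-in for a human-readable explanation."""
    vec = vectorizer.transform([text])
    label = clf.predict(vec)[0]
    # Contribution of each term = its tf-idf weight * model coefficient.
    contributions = vec.toarray()[0] * clf.coef_[0]
    terms = vectorizer.get_feature_names_out()
    top = sorted(zip(terms, contributions), key=lambda t: abs(t[1]), reverse=True)
    return label, top[:top_k]

print(explain("shadowy elites hide the truth about the election"))
```

Surfacing the highest-weight terms is one of the simplest ways to make a text classifier's decision inspectable; production systems typically use far richer explanation methods.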
Current projects and their significance
The second project, news-polygraph, is aimed specifically at journalists and is designed to detect false information in text, images, audio, and video. A first demo version of the tool has been announced for April. Given that fact-checks often come too late and disinformation lingers in collective memory, developing such technologies is becoming increasingly urgent.
The XplaiNLP research group currently has 24 team members, making it one of the leading groups in disinformation detection in Germany. Alongside these goals, the group critically examines the role of AI in combating disinformation. Dr. Schmitt emphasizes AI's potential: it can act as a bridge between people and help break through filter bubbles.
The challenges of disinformation
The omnipresent spread of disinformation threatens trust in democratic institutions and can deepen social divisions. Using AI in this context brings both risks and opportunities. Beyond identifying misinformation, the team also analyzes stylistic characteristics to detect emotional and inflammatory content. These approaches are particularly relevant in the run-up to elections and during conflicts, when disinformation is increasingly deployed strategically.
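As a rough illustration of what stylistic analysis can mean in practice, the following minimal sketch counts a few surface signals of charged language: exclamation marks, all-caps words, and hits in a tiny hand-made lexicon. The lexicon, function name, and feature set are hypothetical assumptions for this example; the article does not describe the group's actual features or models.

```python
# Illustrative sketch only: scoring a text for "charged" stylistic markers.
# The word list below is a made-up placeholder, not a real research lexicon.
import re

CHARGED_WORDS = {"betrayal", "invasion", "destroy", "traitor", "outrage"}

def style_signals(text: str) -> dict:
    """Count simple surface cues often associated with emotional language."""
    tokens = re.findall(r"[A-Za-z']+", text)
    return {
        "exclamations": text.count("!"),
        "all_caps_words": sum(1 for t in tokens if len(t) > 2 and t.isupper()),
        "charged_terms": sum(1 for t in tokens if t.lower() in CHARGED_WORDS),
        "tokens": len(tokens),
    }

print(style_signals("This OUTRAGE is a betrayal of the people!!!"))
```

Real systems would combine many such signals with learned models; the point here is only that stylistic cues are measurable properties of the text itself, independent of whether its claims are true.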
The group also works with partners such as Deutsche Welle and the German Research Center for Artificial Intelligence (DFKI) to advance deepfake detection and content analysis methods. Signatories of the International Fact-Checking Network and local fact-checkers are likewise actively involved in countering misinformation. The Digital Services Act (DSA) could also play an important role in regulating content moderation on social media platforms.
Events such as the discussion on AI-supported fact-checking, planned for December 10, 2024 at TU Berlin, provide an important forum for exchange. Current challenges, the legal framework, and the continued need for human fact-checkers will be discussed there, with the aim of improving how society uses AI.
The research group's vision extends to 2030, by which time AI should be able to detect and contextualize disinformation in real time. This is meant to restore trust in information and, ultimately, to promote social cohesion. The challenges are undeniable, but the XplaiNLP research group's approaches offer new perspectives in the fight against the flood of disinformation.