Watermarks in the digital age: security or illusion?
Researchers at Ruhr University Bochum have uncovered new security gaps in semantic watermarks for AI-generated content.

The rapid development of artificial intelligence (AI) has sparked a boom in content that is often difficult to distinguish from human work. Especially for generated images and texts, there is an urgent need to clearly identify their origin. Research teams around the world are working on methods to identify AI-generated works and curb the spread of misinformation.
A central topic in this discussion is the use of watermarks: technologies that can help prove whether an image was generated by an AI. Both visible and invisible watermarks are used in image files, with semantic watermarks considered particularly robust. They are embedded deep in the image-generation process itself and are therefore regarded as harder to remove. Recently, however, cybersecurity researchers at Ruhr University Bochum discovered security gaps in these semantic watermarks. The results were presented at the Computer Vision and Pattern Recognition (CVPR) conference on June 15, 2025, in Nashville, USA. Andreas Müller, part of the research team, explains that attackers can forge or remove semantic watermarks with simple means; his team developed two new attacks that threaten these technologies.
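To make the idea of a semantic watermark concrete, the following sketch shows a simplified scheme in the style of Tree-Ring watermarking, a well-known semantic approach for diffusion models: a key pattern is written into the low-frequency Fourier band of the model's initial noise latent, and detection inverts the generation process and checks whether that pattern is present. This is a minimal illustration, not the Bochum team's target systems or attack code; the radius, key, and latent sizes are illustrative assumptions.

```python
import numpy as np

RADIUS = 8  # radius of the low-frequency region carrying the key (illustrative)

def _center_mask(shape, radius=RADIUS):
    """Boolean mask selecting a disc around the center of the shifted spectrum."""
    h, w = shape
    yy, xx = np.ogrid[:h, :w]
    return (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2

def embed(latent, key):
    """Overwrite the low-frequency Fourier band of the initial noise
    latent with the key pattern (single channel, simplified)."""
    spec = np.fft.fftshift(np.fft.fft2(latent))
    m = _center_mask(latent.shape)
    spec[m] = key[m]
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec)))

def detect(recovered_latent, key):
    """Mean distance between the recovered low-frequency band and the key;
    a small value indicates the watermark is present."""
    spec = np.fft.fftshift(np.fft.fft2(recovered_latent))
    m = _center_mask(recovered_latent.shape)
    return np.abs(spec[m] - key[m]).mean()

rng = np.random.default_rng(0)
latent = rng.standard_normal((64, 64))
# Deriving the key from a real image keeps the spectrum conjugate-symmetric,
# so the watermarked latent stays real-valued.
key = np.fft.fftshift(np.fft.fft2(rng.standard_normal((64, 64))))

marked = embed(latent, key)
print("watermarked:", detect(marked, key))  # near zero: pattern found
print("clean:      ", detect(latent, key))  # large: pattern absent
```

The vulnerability class the researchers describe follows from this design: whatever pattern detection looks for in the recovered latent is also something an attacker can estimate from watermarked outputs and transplant into their own images, or disturb enough to evade detection.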
Digital watermarks for texts
In parallel with developments in image processing, scientists from the Faculty of Media, the Fraunhofer Institute for Digital Media Technology IDMT, and Artefact Germany are concentrating on digital watermarks for written text and spoken language. Their project "Lantmark" aims to make automated content recognizable in order to strengthen transparency and trustworthiness in digital communication spaces. The research focuses primarily on text watermarking technology, which hides markings in text to make its origin and any subsequent changes traceable.
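The article does not detail how Lantmark hides its markings, but a widely studied approach in LLM watermarking research biases the model's token choices toward a pseudorandom "green list" derived from a shared key and the preceding context. The sketch below illustrates that general principle; the constants and the keying scheme are assumptions for illustration, not the project's actual method.

```python
import hashlib
import math
import random

GREEN_FRACTION = 0.5  # share of the vocabulary marked "green" per step (assumption)
BIAS = 4.0            # logit bonus given to green tokens (assumption)

def green_list(prev_token: str, vocab: list[str]) -> set[str]:
    """Derive a pseudorandom green list from the previous token, so a
    detector that knows the scheme can reconstruct it later. A real
    system would mix a secret key into the hash as well."""
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * GREEN_FRACTION)))

def biased_sample(logits: dict[str, float], prev_token: str) -> str:
    """Sample the next token after adding a constant bonus to green tokens.
    Over many steps this statistically skews output toward green tokens
    without noticeably degrading the text."""
    greens = green_list(prev_token, sorted(logits))
    adjusted = {t: v + (BIAS if t in greens else 0.0) for t, v in logits.items()}
    total = sum(math.exp(v) for v in adjusted.values())
    r, acc = random.random() * total, 0.0
    for token, v in adjusted.items():
        acc += math.exp(v)
        if acc >= r:
            return token
    return token  # fallback for floating-point edge cases
```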
A key goal of the project is to modify large language models (LLMs) so that they carry a digital signature. Branded language models should be distinguishable from unbranded ones, so that unauthorized use can be detected at an early stage. The project is funded with around 1.07 million euros by the Federal Ministry of Education and Research and is part of a set of measures for developing secure technologies in an increasingly networked world.
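Building on the sketch above, distinguishing a branded model from an unbranded one can work statistically: unwatermarked text should contain green tokens at roughly the base rate, so an excess of green tokens is evidence of the signature. The detector below, again a hedged illustration rather than Lantmark's actual procedure, scores a text with a simple z-test using the `green_list` and `GREEN_FRACTION` definitions from the previous sketch.

```python
def detect_watermark(tokens: list[str], vocab: list[str]) -> float:
    """Z-score of the green-token count. Unwatermarked text lands near 0;
    values above roughly 4 are strong evidence that the text came from
    a model branded with this key."""
    n = len(tokens) - 1  # each token after the first is scored
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev, vocab))
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std
```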
Technological context and developments
The need for watermarks is growing as the line between reality and fiction becomes increasingly blurred. Technologies such as C2PA and SynthID are gaining importance as tools for recognizing AI-generated content. C2PA records the origin and processing history of images in their metadata and is already supported by renowned camera manufacturers such as Leica and Canon. Meta, the company behind Facebook, also plans to label AI-generated images on its platforms.
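The core idea behind provenance metadata of this kind is a signed manifest that binds claims about origin and edits to the exact bytes of the asset. The stdlib sketch below illustrates only that principle: real C2PA manifests use CBOR encoding and X.509 certificate chains rather than the HMAC and JSON stand-ins used here, and all names are illustrative.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stands in for a real certificate-based credential

def make_manifest(asset: bytes, claims: dict) -> dict:
    """Bind provenance claims to the exact asset bytes via a hash,
    then sign the combined record."""
    body = {"claims": claims,
            "asset_sha256": hashlib.sha256(asset).hexdigest()}
    payload = json.dumps(body, sort_keys=True).encode()
    return {**body,
            "signature": hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()}

def verify_manifest(asset: bytes, manifest: dict) -> bool:
    """Any change to the asset bytes or the claims invalidates the manifest."""
    body = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return (hmac.compare_digest(manifest["signature"], expected)
            and body["asset_sha256"] == hashlib.sha256(asset).hexdigest())

image_bytes = b"\x89PNG...example..."  # placeholder for real file contents
manifest = make_manifest(image_bytes,
                         {"creator": "example-camera", "ai_generated": False})
print(verify_manifest(image_bytes, manifest))         # True
print(verify_manifest(image_bytes + b"!", manifest))  # False: bytes changed
```

Unlike a watermark, this protection lives alongside the pixels: stripping the metadata removes the provenance record, which is why metadata-based and pixel-based approaches are seen as complementary.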
In addition, Google is working on SynthID, a method for invisibly labeling AI-generated images that is woven directly into the pixel structure. These technologies aim to promote content authenticity and are supported by initiatives such as the Content Authenticity Initiative (CAI), launched by Adobe together with industry partners.
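SynthID's actual mechanism is proprietary and uses learned, compression-robust embeddings; the toy example below only illustrates what "woven into the pixel structure" means in the simplest possible form, hiding one bit per pixel in the least significant bit of a grayscale image. Unlike SynthID, such a naive mark would not survive resizing or recompression.

```python
import numpy as np

def embed_bits(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide one bit per pixel in the least significant bit of an
    8-bit grayscale image; each pixel changes by at most 1."""
    return (pixels & 0xFE) | bits.astype(np.uint8)

def extract_bits(pixels: np.ndarray) -> np.ndarray:
    """Recover the hidden bit plane."""
    return pixels & 1

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (4, 4), dtype=np.uint8)
msg = rng.integers(0, 2, (4, 4))

marked = embed_bits(img, msg)
assert (extract_bits(marked) == msg).all()
print("max pixel change:", int(np.abs(marked.astype(int) - img.astype(int)).max()))  # 1
```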
The development of these technologies is becoming all the more important as legal disputes over the use of AI-generated content increase. Getty Images, for example, sued Stability AI over the use of more than 12 million images from its database, and the New York Times has made similar allegations against OpenAI, underscoring the urgency of clear labeling for digital content.