AI risks in focus: Workshop in Bielefeld highlights security topics!
Workshop on AI security at Bielefeld University: Experts will discuss risks and solutions from September 24th to 26th, 2025.

Next week, an important workshop on artificial intelligence (AI) will take place at the Center for Interdisciplinary Research (ZiF) at Bielefeld University. From September 24th to 26th, experts from various disciplines will come together to discuss the risks and security aspects of AI. Dr. Alexander Koch, a renowned cryptography expert and IT security consultant, is leading the exchange together with Dr. Benjamin Paaßen and Elizabeth Herbert. The aim of the workshop is to provide a practical overview of the challenges associated with modern AI systems, such as unfair decisions or confusing AI-generated content.
The workshop brings together 15 researchers from eight countries, drawn from areas such as machine learning, neuroscience, political science, and ethics. Together they want to create a comprehensive map of the AI risk landscape, one that not only charts important dangers but also shows possible measures to reduce risks. The goal is to help individuals and institutions make informed decisions regarding AI security. Dr. Koch and Dr. Paaßen are fellows of “Young ZiF”, a network that has promoted interdisciplinary exchange among researchers since 2002.
Artificial intelligence in everyday life
The AI industry has made rapid progress in recent years, particularly through developments such as ChatGPT. Companies are increasingly relying on AI-supported chatbots that can communicate in natural language. These systems not only play a key role in IT security by detecting and averting threats more quickly, but are also used in building security, where they help reduce break-ins and other dangers such as fire.
However, the use of AI also brings risks. False alarms can cause stress, and in corporate security, poorly implemented AI systems can lead to financial losses or legal issues. Experts therefore emphasize that robust security measures and regular audits are necessary to minimize the risks of AI. Transparency of the algorithms is also crucial, because decisions made by AI systems must be understandable.
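To illustrate what such transparency can look like in practice, the following minimal Python sketch (assuming scikit-learn is available; the alarm data and feature names are purely hypothetical) trains a small decision tree whose learned rules can be printed and reviewed during an audit:

# Minimal sketch: an inspectable model for a hypothetical building-security alarm.
# Assumptions: scikit-learn is installed; the data and feature names are invented
# for illustration and do not come from the workshop or the article.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical observations: [motion_events, door_open_seconds]
X = [[0, 0], [1, 5], [3, 40], [8, 120], [2, 10], [9, 300]]
y = [0, 0, 0, 1, 0, 1]  # 0 = no alarm, 1 = alarm

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The learned decision rules are human-readable and can be checked in an audit.
print(export_text(model, feature_names=["motion_events", "door_open_seconds"]))

Such interpretable models are only one design choice among many; the point of the sketch is that a decision path a reviewer can read is easier to audit than an opaque score.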
Safety and ethical considerations
The discussion about AI is not only a technical one, but also an ethical one. The results of AI systems depend significantly on the data used and on their design. Biases in the data can have unintended but serious consequences, such as discriminatory decisions in hiring or lending. Practices such as so-called “mathwashing” can make AI-driven decisions appear objective and fact-based even though the underlying data or models are flawed or biased.
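As a minimal illustration of how such distortions can be spotted, the following Python sketch (with purely hypothetical numbers, not data from the workshop) compares approval rates between two groups. This simple demographic parity check flags large gaps, but it is only one warning sign and not a full fairness audit:

# Hypothetical loan or hiring decisions: (group, outcome), 1 = approved, 0 = rejected.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def approval_rate(group):
    # Share of positive outcomes for one group.
    outcomes = [outcome for g, outcome in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic parity difference: gap between the groups' approval rates.
gap = abs(approval_rate("group_a") - approval_rate("group_b"))
print(f"Demographic parity difference: {gap:.2f}")  # here 0.75 vs 0.25 -> 0.50

A gap this large would warrant a closer look at the training data and the model, which is precisely the kind of scrutiny that superficial “mathwashing” tends to discourage.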
Concerns about data protection and privacy cannot be ignored either. Technologies such as facial recognition or online tracking can endanger civil rights. The EU's new AI Act therefore requires complete and accurate data sets for training AI systems and regulates the use of certain applications that could potentially be harmful.
At a time when autonomous, AI-driven decisions are increasingly finding their way into everyday life, it is crucial that both the public and company stakeholders are informed about the opportunities and risks of AI systems. Experts strongly emphasize the importance of responsible use and regulation; only in this way can AI developments serve the well-being of society rather than becoming uncontrollable threats.
A comprehensive understanding of the AI landscape and its risks is therefore essential to promote informed and responsible use of these technologies. The upcoming workshop at ZiF represents a further step in this direction: with the aim of finding practical solutions, it will bring both security and ethical issues in the field of AI to the fore.
Bielefeld University reports that the workshop represents an essential forum for exchange on AI security. Gerhard Link highlights the importance of security mechanisms, while the European Parliament has analyzed the ethical dimensions of AI.