Focus on AI development: Experts demand clear rules and ethical standards!

On August 18, 2025, experts at UNI Bucerius discussed AI regulation, ethical questions and the challenges of innovation.


On August 18, 2025, central aspects of the regulation of artificial intelligence (AI) were debated in a lively panel discussion. Innovation researcher Dietmar Harhoff, former chairman of the Federal Government's Commission of Experts for Research and Innovation, pointed out that the innovative momentum in AI raises numerous questions about the distribution of power and benefits. In particular, he criticized the concentration of power in a few large companies, above all in Silicon Valley, which he considers more dangerous than the notion of autonomous superintelligences. This power asymmetry, he argued, calls for urgent action.

Mathias Risse, professor at the Harvard Kennedy School, added a moral-philosophical critique of the risk tolerance among developers. He noted that in AI development a 6% risk of "catastrophe" is accepted, a level of risk that would be unacceptable in other fields such as aviation. This perspective underscores the ethical challenges in the development and deployment of AI.

The fight against regulatory gaps

A central point of the discussion was that politics, law and civil society need to become more capable of acting. Harhoff advocated a European approach to AI regulation that does not stifle innovation while setting clear rules on transparency, liability and access to data. It also emerged that no country has yet adopted a comprehensive legal framework for AI, resulting in a patchwork of rules. These fragmented legal requirements stand in the way of effective regulation.

The regulatory questions are particularly pressing because artificial intelligence is inherently complex and involves processing large, often opaque volumes of data. As bpb.de explains, existing regulations are inadequate to minimize the associated risks and to seize the opportunities. There are no international guidelines for AI regulation, which makes it difficult to respond effectively to the challenges and risks of AI in practice.

Global perspectives and standards

Another central topic of the discussion was the international dimension of AI regulation. With its recommendation on the ethics of artificial intelligence, UNESCO has created an international foundation aimed at promoting human rights and fundamental freedoms. The recommendation provides a global frame of reference encompassing values such as privacy, transparency, explainability and non-discrimination (unesco.de).

The link between AI and sustainable development is also of particular importance. The recommendation demands that human rights not only be respected when AI is used, but actively promoted. In this context, the precautionary principle is emphasized: where there are reasonable concerns about negative consequences, certain AI applications should not be pursued further.

Ethical judgment, political will and public debate are crucial for the future use of AI. After the official discussion, participants continued to exchange thoughts on the future of AI over pretzels and wine. It became clear that education and digital literacy, as well as an active public debate about the challenges of AI, are essential.