The dilemma decision: self-driving cars in a moral dilemma!
Researchers at the University of Oldenburg are investigating the moral decisions of self-driving cars in dilemma situations.

On June 24, 2025, researchers from the Department of Psychology at the University of Oldenburg presented a study on self-driving cars, published in the journal Scientific Reports. Their research aims to increase the acceptance of autonomous vehicles, particularly when complex moral dilemmas are involved. To this end, the team examined dilemma situations in which autonomous cars must make decisions that can save or endanger lives.
A well-known example is the trolley problem, originally formulated by Karl Engisch in 1930; technical developments have given this thought experiment new facets. In their study, the Oldenburg scientists adapted MIT's Moral Machine test: test subjects were confronted with decisions made by an artificial intelligence so that discrepancies between human and machine choices became visible. Measurements of the subjects' brain waves showed that complex cognitive processes can be detected after around 0.3 seconds; these processes reflect the emotional and moral evaluation of the decision-making situation.
Moral dilemmas in focus
One of the key dilemmas the researchers assessed involves deciding whether a self-driving car should swerve to avoid a dog that has slipped its leash or run it over. Such situations make clear that there is often no unambiguously correct decision and that the algorithms programmed into autonomous vehicles require clear moral guidelines. Cultural influences also shape moral decisions: while Western cultures often prioritize "women and children first", other cultures may hold different values, for example a preference for protecting older people in Japan.
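To make this concrete, here is a minimal, purely illustrative Python sketch of how culture-specific priority weights could steer such a decision rule. The categories, weights, and scenarios are assumptions chosen for illustration and do not come from the Oldenburg study or the Moral Machine data.

```python
# Purely illustrative: culture-specific priority weights steering a toy dilemma
# decision. Categories, weights, and scenarios are assumptions, not values from
# the Oldenburg study or the Moral Machine data.

# Higher weight = stronger priority to protect this party.
PRIORITIES_WESTERN = {"child": 3.0, "woman": 2.0, "man": 1.5, "elderly": 1.0, "dog": 0.5}
PRIORITIES_ELDER_FIRST = {"elderly": 3.0, "child": 2.0, "woman": 1.5, "man": 1.5, "dog": 0.5}

def harm(affected, weights):
    """Total priority weight of everyone harmed by one course of action."""
    return sum(weights.get(party, 1.0) for party in affected)

def choose_action(options, weights):
    """Pick the option whose harmed parties carry the lowest total weight."""
    return min(options, key=lambda action: harm(options[action], weights))

# A dog slips its leash: swerving endangers an elderly pedestrian, staying in lane hits the dog.
dog_dilemma = {"swerve": ["elderly"], "stay_in_lane": ["dog"]}
print(choose_action(dog_dilemma, PRIORITIES_WESTERN))      # stay_in_lane
print(choose_action(dog_dilemma, PRIORITIES_ELDER_FIRST))  # stay_in_lane

# A dilemma where the cultural weighting flips the outcome.
person_dilemma = {"swerve": ["elderly"], "stay_in_lane": ["child"]}
print(choose_action(person_dilemma, PRIORITIES_WESTERN))      # swerve
print(choose_action(person_dilemma, PRIORITIES_ELDER_FIRST))  # stay_in_lane
```

The point of the toy example is not the specific numbers but that the same scenario can yield different "correct" actions once the underlying value weighting changes.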
The study shows that when human and machine decisions diverge, brain responses are up to two microvolts stronger. This heightened activity lasts for several milliseconds and indicates the emotional conflict that arises in such dilemmas. To further promote the acceptance of self-driving cars, the scientists are researching methods to analyze how users deal with the decisions their vehicles make in everyday life and whether these decisions match their own moral convictions.
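As an illustration of what such a comparison of brain responses might look like in code, the following sketch contrasts average EEG amplitudes in a window around 0.3 seconds for trials where the machine's decision matches or contradicts a participant's own choice. The sampling rate, analysis window, and simulated data are assumptions, not the study's actual analysis pipeline.

```python
# Minimal sketch (assumed layout, not the study's pipeline): comparing
# event-related EEG amplitudes around 300 ms after stimulus onset for
# congruent vs. incongruent machine decisions.
import numpy as np

SAMPLING_RATE = 250          # samples per second (assumed)
WINDOW = (0.25, 0.35)        # analysis window in seconds around the ~0.3 s effect

def mean_amplitude(trials):
    """Average voltage in microvolts inside the analysis window.

    `trials` has shape (n_trials, n_samples), time-locked to stimulus onset.
    """
    start = int(WINDOW[0] * SAMPLING_RATE)
    stop = int(WINDOW[1] * SAMPLING_RATE)
    return float(trials[:, start:stop].mean())

# Simulated data standing in for real recordings: incongruent trials carry an
# extra ~2 microvolt deflection in the window, mirroring the reported effect size.
rng = np.random.default_rng(0)
n_trials, n_samples = 40, 200
congruent = rng.normal(0.0, 1.0, (n_trials, n_samples))
incongruent = rng.normal(0.0, 1.0, (n_trials, n_samples))
start, stop = int(WINDOW[0] * SAMPLING_RATE), int(WINDOW[1] * SAMPLING_RATE)
incongruent[:, start:stop] += 2.0  # microvolts

diff = mean_amplitude(incongruent) - mean_amplitude(congruent)
print(f"Amplitude difference in the 250-350 ms window: {diff:.2f} microvolts")
```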
Ethics and Bias in AI
However, the ethical challenges associated with the development of artificial intelligence (AI) go far beyond autonomous vehicle control. According to an article by Cosmo Center, AI can analyze human moral judgments and make predictions, but it is subject to significant biases that call its objectivity into question. Ethical decisions made by AI are often based on historical and cultural biases hidden in the training data.
To address these challenges, it is recommended to diversify the training data and implement fair control mechanisms. A human-in-the-loop approach, for example, could ensure that human judgment and responsibility remain part of the process. The need for more transparency in AI decision-making processes is also emphasized, so that ethical judgments become understandable and algorithms are designed consciously.
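As a rough illustration of what such a human-in-the-loop control mechanism could look like in practice, the sketch below routes a model's recommendation to a human reviewer whenever its confidence is low or the case falls into a sensitive category. The function names, thresholds, and categories are assumptions rather than anything described in the cited articles.

```python
# Illustrative human-in-the-loop gate: the model's recommendation is only
# applied automatically when confidence is high and the case is not sensitive;
# otherwise a human reviewer decides. All names and thresholds are assumptions.
from dataclasses import dataclass

SENSITIVE_CATEGORIES = {"life_and_limb", "protected_group_impact"}
CONFIDENCE_THRESHOLD = 0.9

@dataclass
class ModelOutput:
    recommendation: str
    confidence: float
    category: str

def route_decision(output, human_review):
    """Return the final decision, deferring to a human when the gate triggers."""
    needs_review = (
        output.confidence < CONFIDENCE_THRESHOLD
        or output.category in SENSITIVE_CATEGORIES
    )
    if needs_review:
        # The human reviewer sees the model's suggestion but carries responsibility.
        return human_review(output)
    return output.recommendation

# Example: a borderline case is escalated to the reviewer instead of acted on.
result = route_decision(
    ModelOutput(recommendation="swerve", confidence=0.72, category="life_and_limb"),
    human_review=lambda out: f"human override of '{out.recommendation}'",
)
print(result)
```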
In summary, the development of self-driving cars and the programming of AI raise both technological and ethical questions. Kalaidos FH points out that collaboration between humans and machines is essential in order to master the moral dilemmas that arise when using AI while questioning and defining one's own values.