Revolution in the classroom: How AI is shaping the education of the future!


Bucerius Law School is actively involved in the AI discussion in Europe, promoting educational transformation and legal standards.


Bucerius Law School is actively involved in the current discussions about artificial intelligence (AI) and digital transformation in the European Union's education policy. Lezel Roddeck, Director of Foreign Language Communication at Bucerius Law School, contributed significantly to the report “Explainable Artificial Intelligence (XAI) in Education: Fostering Human Oversight and Shared Responsibility.” This report is a result of the efforts of the European Digital Education Hub (EDEH) and the European Commission to promote trustworthy AI in education. Bucerius Law School reports that Roddeck was part of a working group that developed these guidelines.

In her chapter “Navigating Compliance with the EU AI Act, the GDPR, and Related Digital Laws,” Roddeck highlights how educational institutions and developers can find legal guidance regarding the EU AI Act and the General Data Protection Regulation (GDPR). The report emphasizes transparency, fairness and the protection of learner rights as key principles for the use of AI in the education sector.

The European regulation of AI

Under the EU AI Act, many AI applications in education are classified as "high-risk systems." The AI Act sets different rules for providers and users of AI systems depending on the level of risk. Many of these systems must be evaluated before being placed on the market and throughout their life cycle. Practices posing an unacceptable risk are particularly sensitive, such as the cognitive behavioural manipulation of vulnerable groups or the social scoring of individuals.

Additionally, high-risk AI systems fall into two categories: those used in products covered by EU product safety legislation and those deployed in specific areas such as education and health. Educational institutions have the right to lodge complaints about AI systems with the relevant national authorities and must comply with the applicable transparency requirements.

Practical approaches and future challenges

At the EDEH community workshop held on October 17–18, 2024 in Brussels, which representatives of Bucerius Law School also attended, practical approaches to implementing explainability in educational contexts were developed. The workshop's outcomes included the development of standards for data explainability and AI transparency in the education sector, as well as the establishment of regulatory "sandboxes" to test AI applications.

These initiatives are important because the use of AI in education could significantly change learning, teaching and working. However, many use cases are currently still in the testing phase. Widespread implementation will require, among other things, the necessary technical infrastructure, appropriate skills for teachers and pedagogical adaptation of curricula. Deloitte highlights that data protection and ethics are central issues that will shape the future educational landscape.

To meet the requirements of the EU AI Act, educational institutions must ensure compliance, integrate AI into their curricula and conduct pilot projects to test AI applications in practice. At the same time, promoting AI skills among teachers and learners is crucial to driving digitalization in the education sector.