
Moral deliberation on AI at Vrije Universiteit Amsterdam

2 December 2025
Over two afternoons, VU education staff engaged in moral deliberations on Artificial Intelligence (AI): one session focused on AI tutor bots, the other on using AI for grading. There were no pitches about tools and no lists of do’s and don’ts, but a deliberate slowing down of the conversation around the moral questions AI raises.

“What do we actually mean when we say: ‘we need to do something with this’?” asks Arjen Heijstek, process manager for Teaching and Learning Support, who co-organised the moral deliberations. “What does AI mean for equality, professional ethics and our academic values? AI raises technical and didactic questions, but above all moral ones.” 

In groups of about fifteen education staff members – from different faculties, services and roles – VU Amsterdam colleagues explored these questions together. 

“Many participants expected a kind of committee that would simply say ‘yes’ or ‘no’,” says discussion leader and education specialist Lucho Rubio Repáraz from the VU Centre for Teaching & Learning (CTL). “But a moral deliberation works differently. You examine together what matters to us and what good education requires.”

Moral deliberation 1: AI tutor bots

The first session opened with the question: should we deploy an AI tutor bot? Such a digital assistant can help students 24/7 with questions about content or their studies. But the conversation quickly shifted to broader issues: who is responsible when we allow AI into our education? And where does our responsibility begin and end? 

Lecturer, policymaker, or the university? 
Each participant brought their personal perspective on what responsibility means, and the interpretations varied widely. For one person, it meant caring for students and preventing them from becoming dependent on AI. For another, it was about safeguarding quality: how do we ensure that feedback, assignments, and assessments remain meaningful in the AI era? And who ultimately carries that responsibility: the lecturer, the policymaker, or the university as a whole? 

Experimentation vs. dependence on Big Tech 
For some, responsibility meant daring to experiment with AI. “We need to prepare students for a future in which AI plays a role, so we must experiment with it and learn to work with it,” one participant said. Others were more critical, pointing to risks such as biased algorithms, growing inequality and the possibility that education becomes dependent on Big Tech. “Progress isn’t automatically good,” said one participant. “If we simply go along with a hype without thinking it through, the consequences can be significant.” 

Moral deliberation 2: AI and assessment

The second session began with a familiar dilemma: a lecturer needs to review 200,000 words of essays in ten days. Are they allowed to use AI to ease the workload, and do they even want to? 

More time for human contact 
For many lecturers, giving feedback is part of their professional ethics: “I see it as craftsmanship and my responsibility as a supervisor,” one lecturer said. They want to offer feedback that is personal, substantive, and responsible. At the same time, workloads are high. “And if AI helps me work more efficiently, I gain time for human contact.” 

Student perspective
Students brought yet another angle: “If the feedback is substantively correct, I don’t mind who gives it. But the lecturer must stand behind it as the final responsible party.” Some students currently receive no feedback at all due to time constraints. “In that light, AI might not be the biggest risk; no feedback is.”

The discussion broadened: what does feedback actually mean for the role of the lecturer? What counts as good feedback? What is the role of writing now that AI co-writes? How do we prepare students for a future we ourselves don’t yet fully understand? And are we overestimating the impact of AI?

Opposing views, or not quite? 
What initially looked like opposing perspectives often turned out to be rooted in shared concerns. “Participants thought at the start that they stood completely opposite one another,” says Lucho Rubio. “Until they heard what keeps the other awake at night. Keep having these conversations, even beyond the topic of AI. Not to reach full agreement, but to jointly explore different perspectives and keep reflecting on what good education is.”

Plan a moral deliberation

Are you considering a moral deliberation on AI or another complex topic within your team or programme? The VU Centre for Teaching and Learning is happy to support you with preparation and facilitation. Email: ctl@vu.nl.
