To keep the challenges manageable and the scope clear, the taskforce focuses primarily on education; it does not cover, for example, AI in business operations or research. The taskforce is supported by a VU working group and a UvA working group on AI in education.
Scientific integrity
Universities educate people to become critical thinkers who will function in society at large, a society in which AI is here to stay. Universities attach great importance to integrity and the ethical aspects of scientific research. This means, among other things, that universities pay systematic attention to the correct way of doing research in the training programmes for students and researchers.
One of the pillars of scientific integrity is transparency, and transparency is the key word for the use of (generative) AI in education. Below is a brief explanation of the taskforce's premise for each target group.
Premises VU-UvA taskforce AI in education
- Transparent to the outside world
The premise is that VU Amsterdam and the UvA communicate clear policies and positions regarding generative AI in education, both internally and externally. The policy and positions evolve with developments and are communicated and published regularly and in a timely manner, in a place that is easy to find for everyone involved in education (and research) at the institutions, as well as externally.
- Transparent policy for the organization
From within the organization, there is a need for a clear policy. The policy is not static, because developments in AI are extremely rapid. The taskforce advises both institutions on adjusting and sharpening the policy by assessing the room for maneuver in light of the most recent developments. Based on the policy, the taskforce advises the institutions on a set of usage principles.
- Transparency about expectations to students, and transparency from students about their use of generative AI
Institutions expect transparency from students about the application of generative AI in their own learning and work. The principles of scientific integrity also apply to students: they are expected to take responsibility for their own learning and work and to be transparent about it. To this end, it is important that students learn about the possibilities and limitations of using generative AI in work, science and society. In addition, students learn to be transparent about when and how they use generative AI. Rules and guidelines are designed to encourage this transparency.
- Transparency of software suppliers
Academic institutions such as the VU Amsterdam and the UvA should continue to emphasize the importance of transparency and exert pressure to get software vendors to offer more transparency. The starting point is a set of criteria, delivered by the taskforce, that software should meet to ensure scientific integrity. In addition, the carbon footprint of building and using the models should be made explicit to users.
- Composition of the taskforce
The taskforce consists of eight experts from different fields.
Participating from VU Amsterdam:
Dr. Peter Bloem (BETA) - Artificial Intelligence
Dr. Ilja Cornelisz (FGB) - Educational Sciences
Prof. Felienne Hermans (BETA) - Computer Science
Prof. Dr. Piek Vossen (FGW) - Computational Lexicology
Participating from the UvA:
Prof. Dr. Peter van Baalen (FEB) - Innovation Models
Prof. Natali Helberger (University Professor of Law and Digital Technology) - Artificial Intelligence
Dr. Marjolein Lanzing (FGW) - Philosophy of Technology
Dr. Jelle Zuidema (FNWI) - Natural Language Processing