Aligning generalisation between humans and machines

18 September 2025
To ensure AI and humans collaborate effectively, it is important to carefully consider how AI generalises, according to a group of researchers, including Filip Ilievski and Frank van Harmelen of Vrije Universiteit Amsterdam.

New AI technologies can be very useful to humans, for example in assisting scientific research. However, AI can also be used to undermine democracies, for example through deepfake videos. It is therefore crucial to align AI so that it does what we want it to do. Specifically, that means aligning the ways in which humans and AI generalise their skills and knowledge to new situations, argue Ilievski and co-authors in a new study in Nature Machine Intelligence.

Generalisation
A key difference between humans and machines is how they generalise (see also this visualisation by the authors). People, for example, have common sense and can generalise from just a few examples. AI, by contrast, is particularly adept at generalising from large amounts of data. According to the researchers, effective collaboration therefore requires a clear understanding of each side's strengths and weaknesses in generalisation.

The authors explore what generalisation means, how it is achieved, and how it can be evaluated. Their conclusion: no single approach covers everything. Deep learning models, based on artificial neural networks, offer scale and accuracy. Symbolic methods, based on human-readable symbolic representations, offer explainability. Instance-based methods, in which an AI model generalises based on the similarity between new data and previous examples, offer robustness and incremental learning. A promising path forward is therefore to marry the strengths of these approaches through hybrid, or neurosymbolic, AI.
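As a loose illustration (not from the article itself), the instance-based idea can be sketched as a nearest-neighbour classifier: a new case is labelled by its similarity to previously stored examples. The data and labels below are invented for the sketch.

```python
from collections import Counter
import math

def knn_predict(examples, query, k=3):
    """Classify `query` by the majority label among its k nearest
    stored examples (Euclidean distance): generalisation by
    similarity to previous cases."""
    by_distance = sorted(examples, key=lambda ex: math.dist(ex[0], query))
    top_labels = [label for _, label in by_distance[:k]]
    return Counter(top_labels).most_common(1)[0][0]

# Toy 2-D examples: (features, label)
examples = [((0.0, 0.0), "benign"), ((0.1, 0.2), "benign"),
            ((1.0, 1.0), "malignant"), ((0.9, 1.1), "malignant")]
print(knn_predict(examples, (0.2, 0.1)))  # prints "benign"
```

Because predictions are traceable to concrete stored cases, such methods lend themselves to the kind of incremental updating and inspection the authors highlight.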

The scientists state that collaborative and explainable mechanisms are essential for effective AI-human teaming. When misalignment occurs—for example, when AI predicts a type 1 tumor and the doctor diagnoses a type 3 tumor—it is crucial to develop effective error-correction mechanisms. AI models must not only provide the correct answers but also generalise in ways that humans can understand, verify, and correct.

Seminar
The article grew out of a Dagstuhl seminar, "Generalisation by Humans and Machines", attended by some 30 leading researchers from around the world representing various areas of AI and cognitive science. The seminar was organised by Ilievski and Van Harmelen together with Sascha Saralajew (NEC Labs Europe) and Barbara Hammer (University of Bielefeld).
