
Report on the FBMS AI-event Thursday 13 June 2024

On Thursday 13 June 2024 an interactive meeting took place with alumni, researchers and societal partners of the Faculty of Behavioural and Movement Sciences (FBMS). The topic of this meeting was the impact of Artificial Intelligence (AI) on the domain of behaviour and health, both in research practice and in the field, with particular attention to healthcare, education and parenting contexts.

The topic of Artificial Intelligence and its influence on education, research and healthcare is introduced by Maurits van Tulder (dean of FBMS) and Ilja Cornelisz (vice dean valorisation & impact FBMS). Both emphasize that the rapid rise of AI naturally raises questions regarding its inclusiveness and equitability, also when it comes to the terminology used in discussions on AI's diverse impact, challenges and opportunities. This meeting is meant as a starting point for a broader and in-depth dialogue that our faculty wishes to have with our alumni, partners and researchers. We feel that the issues raised at the thematic tables and in the panel discussion can only be addressed collaboratively, because different kinds of expertise and perspectives are needed to benefit from AI and to cope with its challenges.

Panel discussion
Barbara Braams, associate professor of Clinical, Neuro-, and Developmental Psychology, introduces the panellists: Theo Bakker (Clinical Developmental Psychology), Aline Honingh (Clinical Child and Family Studies), Lianne Bakkum (Developmental Pedagogy and Educational Sciences), Sina David (Biomechanics), Erik van Zwol (AmsterdamAI), and Dirk Pelt (Biological Psychology). Together with Dick Lunenborg (Bartimeus), they also lead the thematic tables after the panel discussion. Each introduces their field of expertise and how they encounter AI in their professional life.

The panellists discuss their hopes and fears regarding the use of AI in healthcare. They see opportunities, for instance for people with disabilities: AI tools can make life easier, such as an app that simplifies difficult texts for people with an intellectual disability, or AI-generated audio description, in which narration explains the visual components of a video for people with visual impairments. Healthcare professionals could also be aided by AI, as it potentially saves them time on administrative tasks and frees up time to tend to patients. AI is already used in research and offers many possibilities: predictive models developed within healthcare settings could eventually lead to individual predictions, which of course also comes with its own challenges.

Although the panellists see many opportunities, they do not deny the potential risks of using AI in healthcare. First, there is a problem of transparency, since AI is largely in the hands of big tech companies. What data is being used, how safe that data is in their hands, and what algorithms are behind these products is often unknown to the public. We expect current users of (generative) AI to be able to assess whether an outcome is probable and valid. Is it possible and desirable to expect all AI users to do so, for example users with intellectual disabilities?

Someone in the audience notes that one wonders how safe one's data is in the hands of these parties, especially when it comes to sensitive medical information. Several panellists share their experiences with using medical data and how they keep this data safe. Despite these efforts, however, the algorithms used by generative AI can be hidden or still under development. This leads to issues of trust and to questions about who actually owns the data and how it is being handled. Following a question from the audience, the humanization of AI is discussed. AI can produce texts that seem to have been written by a human, and we contribute to the humanization of AI when we respond to it with our human skills and behavioural traits, such as texting it politely; this becomes clear when, asked who is polite to ChatGPT, more than half of the audience raise their hands.

The panel discussion ends with hopes for the use of AI. The panellists agree that it should make life easier and that we can all benefit from AI, but that it is important not to lose people along the way. We need to teach people how to use AI while still relying on their natural intelligence and inner compass.

Thematic tables
Subsequently, participants split into five roundtable discussions to delve deeper into the impact of AI in a specific context. Topics were informed by FBMS's partners and by research within our own faculty. Participants at each table were a mix of alumni, researchers and people working in healthcare or education with an interest in AI or already working with it. A brief overview of the discussions at each table is given below.

AI in education: quality and equity of opportunity
Higher education is a great equalizer and can be an engine of emancipation for students from different backgrounds, but access to it is not always equally distributed. The same applies to a generative AI (GAI) tool such as ChatGPT. Students start from different situations, and this matters: unequal access to a paid version of GAI, or to the understanding of how to use such a tool wisely, can create inequality.

If these things are safeguarded, GAI can be used to address the unique situation of a student: for translations, for learning a language such as Dutch or English, for checking written texts for style and language errors, or for tailor-made lesson programmes that match students' knowledge levels.

There are also concerns. Do students still learn when using GAI? What is still the student's original work and what is done by the tool? Teachers need to revise their teaching and want to be able to experiment with new forms of education supported by GAI. However, it is important that the systems they use allow for this and that teachers are adequately supported in creating these kinds of solutions.

Finally, the table discusses the struggles that come with this new field. Is this aspect of learning – the effort required to acquire and develop knowledge – still sufficiently present if we rely too much on AI? The participants conclude that the struggles with GAI in education should remain under discussion, and that this conversation is only a beginning.

AI in research: ethics and privacy
During this discussion, participants highlight their concerns regarding the use of AI in healthcare. Although patients are usually willing to share data, they are also hesitant to share information about specific conditions due to privacy concerns. There are also concerns about the effectiveness of data anonymization, as unique patterns (such as walking patterns) could still be used to identify subjects.

The risk of misuse of patient data by health insurance companies and the question of how data will be handled in the future are also discussed. Participants report a lack of knowledge about long-term storage of personal data. The table discusses researchers' concerns regarding secure data storage and archiving, which does not always seem to happen adequately. Participants suggest this might be because IT and research support do not always adequately understand what is needed in these areas.

Lastly, participants note that AI lacks emotional intelligence and can be strongly biased; in particular, low-income populations may be excluded because they are hardly represented in datasets and are considered hard to reach. The table discusses the ethical dilemmas of research predictions based on AI, the need for improvements regarding privacy and ethical concerns, and the challenge of finding solutions to these issues.

AI in healthcare: support and 'explainable AI'
This table discusses the use of AI to predict outcomes for individual clients and patients. Several questions arise: how can healthcare professionals make good use of predictions from machine learning models? Will interpreting these predictions become part of their job? When will machine learning models become medical devices? The participants agree that AI can be used to improve healthcare, not only to improve care itself but also to make organizational processes more efficient. For example, AI is already being used in radiology to improve diagnostic processes. The participants agree that AI is most valuable when used for prevention.

AI and inclusivity: opportunities and risks for people with disabilities
Several participants at this table work with people with disabilities, with particular interest in visual, hearing and intellectual disabilities. First, the possibilities of AI in these healthcare settings are discussed. Given the high workload of care professionals, AI tools or robots could take over specific parts of care: a conversational robot can easily cope with repetitive questions and answers from a person with dementia, and at times might ease the burden on a busy care professional. This example leads to all sorts of moral and ethical questions. Participants wonder what the implications are of having some sort of relationship with an AI tool. Is it misleading to create human-like care robots? How does this differ from spending time with an online friend? People with a disability have the same need for social connection as anyone else, but is communication with an AI tool a replacement for human interaction? These questions are relevant to everyone, not only to people with a disability.

Several topics are addressed at this table: AI use by people with an intellectual disability, and whether and how AI could provide more inclusiveness. Can we trust a generative AI tool, given that it produces new output, often based on an unknown dataset? How can we prevent a system from producing something (potentially) harmful? All agree that regulations need to be developed to avoid such situations and that the output of AI tools for people with disabilities should be monitored closely.

AmsterdamAI: 'responsible AI' and health
The discussion starts with an open question: what could we do with data if privacy were of no concern? This gets the participants talking about the potential of using medical data for the prevention, diagnosis and treatment of patients.

The discussion then moves from the sensitivity of personal data, privacy and bias in models to more practical matters, such as how the participants could use AI in their daily work, for example to personalize treatment plans or to monitor a patient's improvement.

The participants leave with ideas to discuss with their supervisors at work about the possibilities of AI, and with a wish to be involved in AI developments in the field of healthcare. The discussion also raises questions on the moral and ethical implications of using AI. The table agrees that it remains important to keep a personal and human connection with each other. Go to the website: AmsterdamAI.

In conclusion
AI presents many challenges and opportunities across the fields of education, research and healthcare. This event marked the beginning of an in-depth dialogue that our faculty aims to pursue with alumni, partners and researchers. We cannot navigate these challenges alone; FBMS acknowledges that it is essential to collaborate and to consider different perspectives in order to benefit from AI's opportunities while addressing its challenges. We invite all participants to engage actively in this dialogue and to reach out with ideas, questions and potential collaborations.
