
How to make AI more ethical

“Train people, not just the system.”
We place a great deal of trust in AI – but is that trust justified? AI can also operate unethically and even lead to racism, says Emma Beauxis-Aussalet, assistant professor of Ethical Computing at VU Amsterdam. So how do we achieve ethical AI? “Feeding the system more data is not the solution.”

According to Beauxis-Aussalet, we need to take a critical look at AI. The acronym stands for artificial intelligence: algorithms and methods that perform complex tasks that used to require human intelligence. Think of chatbots, facial recognition on your smartphone, or generative AI such as ChatGPT. Beauxis-Aussalet emphasises the artificial aspect of AI: it imitates the intelligence of living beings, but does not possess any real understanding itself. 

Artificial intelligence versus real intelligence

“Aeroplanes and birds both fly, but aeroplanes do not actually understand how to fly. The same goes for artificial intelligence. Until recently, you could talk to ChatGPT about ‘sheep eggs’, while any human knows that’s nonsense. AI systems don’t have that knowledge; they estimate the correct response based on statistics – the likelihood that an answer is correct.”

The risks of AI

The rise of artificial intelligence also comes with risks. These lie in how people use it, says the ethical computing specialist. "We rely too heavily on AI systems. We overestimate them, seeing them as all-knowing oracles, when in fact their outcomes need to be critically examined. We don't yet fully understand that those outcomes come with a certain degree of uncertainty." AI's statistical reliability can also be low: if an AI system flags someone for fraud, for instance, the probability that the accused is in fact innocent can still be very high, Beauxis-Aussalet explains.
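That last point is the classic base-rate effect, and a short calculation makes it concrete. The numbers below are hypothetical, not from the interview: assume fraud is rare (1% of cases) and the detector is fairly accurate (it catches 95% of fraud and wrongly flags only 5% of honest applicants). Even then, most flagged people turn out to be innocent:

```python
# Hypothetical numbers, for illustration only.
prevalence = 0.01          # fraction of cases that are actually fraud
true_positive_rate = 0.95  # fraud cases the system correctly flags
false_positive_rate = 0.05 # honest applicants it wrongly flags

flagged_fraud = prevalence * true_positive_rate             # 0.0095
flagged_innocent = (1 - prevalence) * false_positive_rate   # 0.0495

# Bayes' rule: among all flagged cases, what fraction is innocent?
p_innocent_given_flagged = flagged_innocent / (flagged_fraud + flagged_innocent)
print(f"{p_innocent_given_flagged:.0%} of flagged applicants are innocent")
# prints "84% of flagged applicants are innocent"
```

Because genuine fraud is rare, the small error rate on the large honest majority swamps the correct detections: a "95% accurate" system can still be wrong about most of the people it accuses.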

Unethical systems that discriminate

What’s more, AI systems can even act unethically, as in the childcare benefits scandal in the Netherlands. In that instance, the Belastingdienst (the Dutch Tax and Customs Administration) used an automated risk selection system to flag which benefits applications needed extra checks, says the VU Amsterdam professor. “Dual nationality” was one of the selection criteria, for example – unbeknownst to the tax officials themselves. Applicants with a second nationality were therefore more likely to be singled out by the system – a bias that led to discrimination. The fact that this approach was unethical was not recognised until it was too late, according to Beauxis-Aussalet.
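One standard way auditors catch this kind of bias is to compare how often the system selects people from different groups, a so-called demographic parity check. The sketch below uses made-up flag data, not figures from the actual scandal; a large gap in flag rates between groups is a warning sign worth investigating:

```python
# Hypothetical audit sketch with made-up data (1 = selected for extra checks).
def flag_rate(flags):
    """Fraction of applicants in a group that the system flagged."""
    return sum(flags) / len(flags)

flags_single_nationality = [0, 0, 1, 0, 0, 0, 0, 0, 0, 1]  # 20% flagged
flags_dual_nationality   = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]  # 70% flagged

gap = flag_rate(flags_dual_nationality) - flag_rate(flags_single_nationality)
print(f"flag-rate gap between groups: {gap:.0%}")
# prints "flag-rate gap between groups: 50%"
```

A gap this large does not by itself prove discrimination, but it is exactly the kind of signal that, had it been monitored, could have exposed the biased selection criterion before the damage was done.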

Is big data the solution to the risks posed by AI?

What is the solution to AI systems operating unethically? “Of course, you can apply filters to a chatbot so that it no longer uses discriminatory language. Or ensure that racist stereotypes do not appear in AI-generated images,” says Beauxis-Aussalet. “But the problem is not a lack of data. Feeding the system more data does not automatically lead to improvement. What we need is better quality data: data that’s factually accurate, representative, up-to-date and obtained in an ethical manner.”

Teaching AI ethical behaviour

“A second way to teach AI systems to behave ethically is to train people. Everyone should learn about technology from an early age. Teach children to think like scientists. AI comes with error margins. Give people insight into the statistical uncertainty of an AI system: what kind of errors the results may contain, and how many. In short: how reliable it is. Just as ingredients are listed on food packaging.”
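Such an "ingredients label" could be as simple as publishing a system's measured error rates on held-out test data. The sketch below assumes a hypothetical classifier whose counts of correct and incorrect decisions are already known; all numbers are invented for illustration:

```python
# Hypothetical confusion-matrix counts from a held-out test set.
true_positives, false_positives = 80, 30
false_negatives, true_negatives = 20, 870

# A minimal "ingredients label": the error rates a user would want to see.
label = {
    "false positive rate": false_positives / (false_positives + true_negatives),
    "false negative rate": false_negatives / (false_negatives + true_positives),
    "precision":           true_positives / (true_positives + false_positives),
}
for name, value in label.items():
    print(f"{name}: {value:.1%}")
# prints:
# false positive rate: 3.3%
# false negative rate: 20.0%
# precision: 72.7%
```

Exposed this way, the numbers tell a user what kind of errors to expect and how often, which is precisely the statistical literacy Beauxis-Aussalet argues for.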

Does more “error assessment” (and greater statistical literacy among users) lead to AI systems that can make ethical decisions independently? “That remains risky,” says Beauxis-Aussalet. “If you let an AI model make an ethical judgement, you need to have considered every aspect of the situation. But you can never be sure whether you have. You might also behave unethically without intending to – for example, if you fail to consider certain important elements of the situation.”

Humans are crucial to ethical AI

In short: ethics remains a human responsibility, because human judgement is always at the root of it. “When it comes to ethical decisions involving artificial intelligence, humans are crucial,” says the VU Amsterdam expert in ethical computing.

On the one hand, it’s a message of optimism: people have a great deal of influence over one of the biggest and most exciting technological developments of our era. But whether they will use that influence wisely? “That remains to be seen.”


Copyright © 2025 - Vrije Universiteit Amsterdam