Text: Marjolein de Jong | 3 March 2026
De Hingh was originally trained as an archaeologist. Whereas she once studied agricultural technology from prehistoric times, she now researches legislation on algorithms and AI. Yet, she says, the same questions keep returning: does technology inevitably impose itself on society? And who sets the rules of the game?
How objective do you think our current technology is?
'Actually, not at all. It starts in the tech sector itself, where 80 per cent of computer specialists are male. Women and people of colour are heavily underrepresented in development teams, and that really needs to change. Otherwise, innovations, design and applications of AI will remain inherently non-inclusive. Big tech is not called a 'broligarchy' (after tech-bro and oligarchy, ed.) for nothing.
We see the negative effects of this over-representation of men everywhere. Think, for instance, of facial recognition technology as used by the police, at Schiphol Airport, or for surveillance during online exams. We know from research that this technology works far less reliably for black women than for white men, because the systems are trained mainly on the latter group. As a result, someone may not be recognised or, conversely, be wrongly identified and apprehended.
This bias also applies to generative technology, such as ChatGPT. It is trained on existing source material, and then the rule is: garbage in, garbage out. If you ask an LLM (Large Language Model, ed.) to explain electricity to a girl, you get an explanation with an example about dolls; ask the same question for a boy, and a racing car serves as the example. Similarly, a doctor is often automatically assumed to be a 'he' and a nurse a 'she'. Technology is a mirror of our society, and generative AI confirms the prejudices and stereotypes that exist in it.'
When does such confirmation of stereotypes become legally problematic?
'That depends on the context and the consequences, but it becomes legally relevant when inequality is created or fundamental rights are affected. Consider, for example, the recruitment and selection algorithms that companies use to scan CVs. Bias easily creeps in there. Technology may deem women less suitable for certain positions, for instance because their CVs differ from the material the algorithm was trained on: the CVs of male employees. The result is a reinforcement of existing inequality.'
Who is responsible if an algorithm discriminates?
'Since 2024, we have a new law: the European AI Act. With it, the EU tries to limit certain risks and harmful effects of artificial intelligence in advance. The greater the risks of an AI application, the stricter the rules. For AI systems that potentially threaten fundamental rights or discriminate, both the developers of those systems and the organisations and companies using them must comply with all kinds of conditions and obligations.
They are thus responsible under the AI Act for protecting citizens and can, in principle, face sanctions if they fail to comply. So you can never say: the discriminatory algorithm is to blame. But whether the AI Act actually offers sufficient protection remains to be seen in practice.'
One currently much-discussed example of AI that discriminates is deepfake technology. What does the AI Act say about this?
'Deepfake technology, and nudify tools in particular, are currently the subject of much debate. In practice, deepfake technology is used almost exclusively to create pornographic material in which one person's head is placed on another person's naked body. Nudify tools turn photos of a clothed person into nude photos. Women are the main victims. These tools can also be used to create sexual images of minors, which is unquestionably a criminal offence.
Deepfakes fall under the AI Act because they are AI-generated or manipulated images. But strangely enough, they are classified there as limited-risk, which imposes far less onerous obligations on creators and users, even though research now shows that the harm to victims of deepnudes can be severe. This was not considered when the law was drafted. And the undressing apps were probably not even known when the AI Act came into force.'
Can legislation keep up with fast-moving technology at all?
'That is indeed tricky. Technology develops quickly, sometimes faster than the law, and legislating takes time. New laws have to be created carefully and democratically. In fact, law is almost by definition behind the times, because laws are often made for situations we knew at the time.
In Dutch criminal law, it has for some time been punishable to make and distribute sexual images (a nude photo or video of an adult) without the person's consent. But deepfakes suddenly involved the victim's face on someone else's naked body. Does that still count as making and distributing sexual images of the victim? The court must assess whether such an existing provision applies here as well. Indeed, a few years ago, courts ruled that putting deepfake videos online also fell under that criminal provision.
In short, the law can correct quite a lot, including when it comes to gender inequality and harm caused by technology, but it begins with the choices we make early on.'
Want to hear more about this? Visit the symposium 'Tech is not neutral' at the Auditorium of VU Amsterdam on Thursday, 19 March. The keynote will be delivered by Sandjai Bhulai, professor of Business Analytics. Anne de Hingh is one of the speakers, along with Marilieke Engbers and Theo Bakker.