Recognising rioters using AI: convenient or unethical?

2 December 2024
Recognizing violence in surveillance footage of large crowds? It can be done, with the help of AI. But we need to think carefully about whether we want to deploy such a model, and in what way, concludes Emmeke Veltmeijer. 'It is more relevant than ever to develop ethically sound models and to stay in conversation about this.'

AI researcher Emmeke Veltmeijer looked at how to automatically analyze large groups of people using artificial intelligence. This can come in handy, for example, for security personnel and crowd managers, who want to recognize riots based on large amounts of data.

Riots and noise
'I did this by automatically dividing crowds into smaller subgroups,' says Veltmeijer. 'You can analyze those subgroups using images, video footage, audio and posts on social media.' She developed AI models on the computer and applied them to the real world. 'With these, I could automatically recognize fighting groups of people in video footage, and trace, based on social media messages, in which parts of Amsterdam riots were taking place. I could also work out, based only on group-level noise, whether the subgroups in a crowd were expressing themselves positively or negatively.'
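The article does not describe how the crowd is split into subgroups; as a minimal illustrative sketch (not Veltmeijer's actual method), one common approach is to cluster detected positions of people, treating two people as belonging to the same subgroup when a chain of short pairwise distances connects them. All names and the distance threshold below are assumptions for illustration:

```python
from collections import deque

def subgroups(positions, radius=2.0):
    """Single-linkage clustering of 2D positions via BFS: two people
    are in the same subgroup when a chain of pairwise distances of at
    most `radius` (e.g. metres) connects them."""
    unvisited = set(range(len(positions)))
    groups = []
    while unvisited:
        seed = unvisited.pop()
        group, queue = [seed], deque([seed])
        while queue:
            i = queue.popleft()
            xi, yi = positions[i]
            # Collect unvisited neighbours within the radius of person i.
            near = [j for j in unvisited
                    if (positions[j][0] - xi) ** 2
                     + (positions[j][1] - yi) ** 2 <= radius ** 2]
            for j in near:
                unvisited.remove(j)
                group.append(j)
                queue.append(j)
        groups.append(sorted(group))
    return groups

# Two pairs standing close together and one isolated person
# yield three subgroups.
crowd = [(0, 0), (1, 0), (10, 10), (11, 10), (30, 0)]
print(sorted(map(tuple, subgroups(crowd))))  # [(0, 1), (2, 3), (4,)]
```

Each resulting subgroup can then be analysed separately, for instance on its video region or its share of the audio signal.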

Veltmeijer focused on applicable systems that could make the work of security guards and other people easier. One example is a system that alerts security guards if a group is behaving violently. Another possible application is analyzing the sound of rival supporters at a sports game. A third application is analyzing social media messages about a particular region to assist security agencies in detecting riots or other unexpected events.
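An alerting system like the one described above could, for example, flag a subgroup only when its per-frame violence score stays high for several consecutive frames, so that a single misclassified frame does not trigger a guard. This is a hypothetical sketch of that design choice, not the system from the thesis; the threshold and frame count are made-up parameters:

```python
def sustained_alerts(score_history, threshold=0.8, min_frames=3):
    """score_history maps a subgroup id to its per-frame violence
    scores in [0, 1]. A subgroup is flagged only when its score stays
    at or above `threshold` for `min_frames` consecutive frames,
    suppressing one-frame classifier glitches."""
    flagged = []
    for group_id, scores in score_history.items():
        run = 0
        for score in scores:
            run = run + 1 if score >= threshold else 0
            if run >= min_frames:
                flagged.append(group_id)
                break
    return flagged

history = {
    "north_stand": [0.9, 0.9, 0.9],       # sustained: flagged
    "south_stand": [0.9, 0.2, 0.9, 0.9],  # interrupted: not flagged
    "concourse":   [0.5, 0.5, 0.5],       # below threshold
}
print(sustained_alerts(history))  # ['north_stand']
```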

Ethics
Whether it is desirable that such detailed information can be retrieved from a large crowd is a second question. Veltmeijer therefore also examined the impact of the European AI Act on automatic group analysis as in her study. 'Artificial intelligence is playing an increasing role in our society. It is more relevant than ever to develop ethically sound models and stay in conversation about it.'

An ethical analysis revealed that practical and ethical concerns remain even when the AI Act is applied. Suggestions for improvement include collecting data without gathering personal data, considering the context in which data is collected, and the individual responsibility of scientists.

The results of her research are therefore not only relevant for people involved in crowd management, but for everyone who is part of a large crowd at some point. Veltmeijer: 'It is important to know what your rights are, and in what way AI systems can contribute to your physical safety, without violating your privacy.' 

Veltmeijer's PhD defence takes place on 12 December.
