How can governments use AI responsibly?

The digitalisation of government is fundamentally transforming the way public organisations operate. Federica Fusi, associate professor of Public Administration at Vrije Universiteit Amsterdam, researches how governments use digital technologies – and the challenges that come with them. She shares her insights on the impact of AI on governance and policy, the role of the social sciences in AI research, and the risks of data-driven decision-making.

Digitalisation is shaping public administration
Fusi studies how digitalisation is changing the role of civil servants. “Governments are increasingly using AI systems to support decision-making,” she explains. “That influences how civil servants take decisions: alongside their professional judgement, they now also rely on algorithmic input.” This shift can have both positive and negative consequences, as the adoption of other digital tools has demonstrated.

On the one hand, digital tools can increase efficiency and transparency at work; on the other, their use can lead to additional workload and reduced autonomy for civil servants. A good example is the rise of open data portals. “Twenty years ago, it was fairly uncommon for governments to make their data publicly available,” says Fusi. “Now, civil servants are expected to publish, document and make datasets accessible to the public. This improves transparency, but also brings extra work and responsibility.”

Data-driven decision-making carries risks
One of the biggest concerns about the use of AI in governance and policymaking is the quality of the underlying data. Fusi emphasises that AI systems only work well if the input data is complete and accurate. “Many countries still lack a solid data infrastructure,” she explains. “If the underlying data is incomplete or biased, the AI model will also make distorted decisions.” She points out notorious examples, such as the childcare benefits scandal in the Netherlands, where AI systems disproportionately targeted certain population groups based on flawed or discriminatory datasets.

Context – or a lack thereof – also plays a crucial role. “Data is never neutral,” Fusi states. “It tells a story shaped by how it was collected and interpreted. Without a proper understanding of the social and political factors behind the data, AI can lead to decisions that are neither fair nor effective.”

Social sciences are crucial to AI research
According to Fusi, social scientists can make a key contribution to AI research. “Many AI proposals come from computer scientists and engineers, but they might lack substantial knowledge of the policy context in which their technology is deployed,” she says. “Social scientists are trained to take into account social complexity: how civil servants make decisions, how data is socially constructed, and how human behaviour influences policy.”

An interdisciplinary perspective, Fusi says, is crucial. “AI projects require people from different disciplines at the table,” she says. “But interdisciplinary collaboration can be challenging. Some colleagues and I were discussing how many people to survey to understand preferences for a data dashboard. The UX design scholar thought that interviewing 20 to 25 people was enough; as a social scientist, I was thinking in hundreds; and the computational engineer, in hundreds of thousands. Interdisciplinary collaboration forces us to rethink and challenge our research practices. Yet without proper collaboration between disciplines, AI systems can make fundamentally wrong assumptions.”

“While engineers and computer scientists have stronger expertise in data and models, social scientists can interpret outcomes on the basis of social theories. This could help in spotting spurious correlations, such as using past healthcare spending as a proxy for health needs. We know that low-income individuals in the US spend less on healthcare because they cannot afford it – not because they need it less. Similarly, we spent a lot of time refining air quality models and data collection practices, whereas the communities we interviewed were more concerned about the location of air quality sensors. They wanted to know air quality in sensitive locations – e.g., next to schools – rather than in industrial corridors. Interdisciplinary collaborations can encourage stronger links not only within academia but also with members of the public, who benefit from (or are targeted by) AI models.”
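The proxy problem Fusi describes can be illustrated with a small simulation. This sketch uses entirely synthetic data and hypothetical numbers (the spending rates, population size, and 50/50 income split are assumptions, not figures from any real study): if low-income individuals spend less on healthcare at the same level of need, then ranking people by past spending systematically under-selects them compared with ranking by true need.

```python
import random

random.seed(0)

# Synthetic population: each person has a true health "need" (0-10).
# Hypothetical assumption: low-income individuals spend less per unit
# of need because they cannot afford care, not because they need less.
people = []
for _ in range(10_000):
    need = random.uniform(0, 10)
    low_income = random.random() < 0.5
    spend_rate = 50 if low_income else 100  # spending per unit of need
    people.append({
        "need": need,
        "low_income": low_income,
        "spending": need * spend_rate,
    })

def share_low_income(rank_key, k=1000):
    """Share of low-income people among the top-k selected by rank_key."""
    top = sorted(people, key=lambda p: p[rank_key], reverse=True)[:k]
    return sum(p["low_income"] for p in top) / k

# Targeting by the proxy (spending) vs. by true need:
print("top 1000 by spending, low-income share:", share_low_income("spending"))
print("top 1000 by need,     low-income share:", share_low_income("need"))
```

Under these assumptions, selecting by spending picks almost no low-income people (their spending never reaches the high earners' range), while selecting by true need picks roughly half, matching their share of the population.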

Balancing technology with human judgement
Fusi argues for a careful and well-considered use of AI within governments. “AI is not always the best solution,” she says. “Some decisions require human judgement and experience. Instead of blindly rushing into AI-driven automation, we need to ask: what tasks is AI suited for, and where is human insight indispensable?”

She stresses the importance of transparency and oversight. “There must be enough checks in place to prevent AI systems from making decisions without human intervention. Governments should view AI as a tool – not as a replacement for human governance.” A lack of transparency inhibits evaluation, as the ‘toeslagenschandaal’ (childcare benefits scandal) in the Netherlands showed: bureaucrats could not understand the indicators used to build applicants’ risk profiles, which made it impossible to distinguish fraud from minor administrative errors. More broadly, transparency should apply at every stage of AI development to enable oversight by both organisational members and the public – from open access to the data used to train and evaluate the model to the open release of the model code. Too often, government agencies refuse access to AI-related information: it took investigative journalists over three years to access information about an algorithm implemented by the Swedish social security agency. (source)

A responsible digital government
The digitalisation of government and the use of AI offer opportunities, but they also come with risks attached. Fusi highlights the importance of interdisciplinary research and a critical perspective on the data that feeds AI systems. “By involving social scientists in AI developments, we gain a better understanding of the influence of technologies on our society – and how we can use these technologies responsibly.”

Her research at VU Amsterdam focuses on finding that balance: how can AI contribute to more efficient governments without losing sight of the human element? It’s a question that’s becoming ever more urgent as digitalisation and AI continue to play a growing role in public administration.

