That question was at the heart of the latest KINTalk, “Can AI be ethical in practice—not just on paper?”, hosted by the KIN Center for Digital Innovation at Vrije Universiteit Amsterdam. Students, researchers, and professionals from inside and outside VU came together to explore how Tim Vermeulen of grid operator Alliander is applying AI across the Dutch power grid, and what it takes to translate “AI ethics” from principles into day-to-day decisions. With rising electrification and many customers needing capacity at the same time, the grid faces peak pressure, while the organisation simultaneously modernises its digital backbone, rolls out sensors, and strengthens its cyber resilience.
Grid under pressure: physical expansion meets digital acceleration
A key theme in Vermeulen’s story was the compound challenge of the energy transition. Alliander must expand and maintain the physical grid—laying cables, upgrading capacity—while simultaneously adding a growing digital and administrative layer: planning, prioritisation, monitoring, and coordination at unprecedented scale.
Digitising the grid creates enormous potential. Vermeulen showed how smarter planning, better forecasting, and faster decision-making can ease pressure on the system. At the same time, he emphasised the vulnerability that comes with this transformation. As critical infrastructure, the grid is an attractive target for cyber attacks, making robustness and security non-negotiable design principles.
AI in practice: from copilots to forecasting and field work
Rather than speaking about “AI” as a single entity, Vermeulen mapped out a portfolio of applications already embedded across the organisation:
- Everyday productivity support (e.g., Copilot), paired with the responsibility to equip employees and foster responsible usage habits.
- Complex calculations for grid design, usage optimisation, and long-term planning decisions.
- Field applications that support technicians and operational teams in carrying out work safely and efficiently.
- Forecasting use cases, such as weather-based models that anticipate peaks in solar generation (sketched below).
AI at Alliander is not one model making one decision. It is a growing stack of tools, some relatively simple, others tightly integrated into operational processes.
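To make the forecasting bullet concrete, here is a minimal sketch in Python of the weather-to-solar pattern: fit a simple relation between forecast irradiance and measured feed-in, then flag hours where tomorrow's prediction exceeds a feeder limit. All numbers, the linear fit, and the limit are hypothetical simplifications, not Alliander's actual models.

```python
import numpy as np

# Hypothetical history: forecast irradiance (W/m^2) vs. measured solar
# feed-in on a local feeder (kW). Real models use far richer features
# (cloud cover, temperature, installed PV capacity, calendar effects).
irradiance_hist = np.array([120, 300, 450, 600, 780, 850])
feed_in_hist = np.array([40, 110, 170, 235, 300, 330])

# Fit feed_in ~ a * irradiance + b with ordinary least squares.
a, b = np.polyfit(irradiance_hist, feed_in_hist, deg=1)

# Tomorrow's hourly irradiance forecast from a weather service (invented).
irradiance_tomorrow = np.array([0, 50, 200, 500, 820, 760, 400, 90])
predicted_kw = np.clip(a * irradiance_tomorrow + b, 0, None)

FEEDER_LIMIT_KW = 280  # assumed thermal limit of the feeder
for hour, kw in enumerate(predicted_kw):
    flag = "  <-- expected peak above limit" if kw > FEEDER_LIMIT_KW else ""
    print(f"hour {hour:2d}: ~{kw:5.0f} kW{flag}")
```

Even a toy like this shows the operational value: knowing in advance which hours will stress a feeder is what allows smarter planning instead of reactive firefighting.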
Control measures in an “AI stack” of agents
As AI systems become more interconnected, governance becomes more complex. Vermeulen discussed how advanced setups may involve agents that support other agents—for example by advising on which data to use or what can be accessed under specific conditions.
This requires clear control measures: defined access boundaries, careful permission structures, and mechanisms to ensure accountability. Especially when systems evolve from standalone models into a broader AI ecosystem, oversight cannot be an afterthought.
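What such control measures might look like in code is easiest to see in a sketch. The following is a minimal, hypothetical illustration (the names AgentIdentity, DataGateway, and the scope strings are all invented, not Alliander's architecture): a supporting agent can only hand over data if the requesting agent holds an explicit scope, and every request, granted or denied, lands in an audit trail.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    name: str
    scopes: frozenset  # e.g. {"grid:read", "customer:read"}

@dataclass
class DataGateway:
    audit_log: list = field(default_factory=list)

    def fetch(self, requester: AgentIdentity, dataset: str, required_scope: str):
        # Check the boundary and record the decision before anything moves.
        allowed = required_scope in requester.scopes
        self.audit_log.append((requester.name, dataset, required_scope, allowed))
        if not allowed:
            raise PermissionError(f"{requester.name} lacks scope {required_scope!r}")
        return f"<contents of {dataset}>"  # stand-in for a real data handle

planner = AgentIdentity("planning-agent", frozenset({"grid:read"}))
gateway = DataGateway()

print(gateway.fetch(planner, "substation_loads", "grid:read"))     # permitted
try:
    gateway.fetch(planner, "customer_contracts", "customer:read")  # denied
except PermissionError as err:
    print("blocked:", err)
print("audit trail:", gateway.audit_log)
```

The design point is that the boundary check and the audit record live outside the individual agents; that separation is one way to keep accountability intact as standalone models grow into an ecosystem.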
When the model is “right” but still not acceptable
Some of the most compelling moments came from cases Vermeulen shared where the technically correct answer was not the organisationally or socially acceptable one.
In one example concerning investment prioritisation within integral grid planning, a model indicated that wealthier neighbourhoods should be upgraded first. The logic was data-driven: higher purchasing power correlates with earlier adoption of EVs and electrification. The prediction may be accurate, yet it clashes with public values and societal goals around fairness and equal access.
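The tension is visible in miniature in a hedged sketch (all neighbourhood names, scores, and the equity weight below are invented): ranking purely by predicted demand growth puts the affluent area first, while blending in an explicit equity term reorders the list. Which weight is right is exactly the kind of value judgement a model cannot settle on its own.

```python
# Hypothetical prioritisation dilemma. Income correlates with early EV and
# heat-pump adoption, so a purely demand-driven ranking favours wealthy areas.
neighbourhoods = [
    # (name, predicted demand growth 0-1, current grid adequacy 0-1)
    ("Affluent-North", 0.90, 0.80),
    ("Mixed-Centre", 0.60, 0.40),
    ("Lower-South", 0.45, 0.15),
]

by_demand = sorted(neighbourhoods, key=lambda n: -n[1])

EQUITY_WEIGHT = 0.8  # a value judgement, not a technical constant

def blended_score(n):
    _, demand, adequacy = n
    return demand + EQUITY_WEIGHT * (1 - adequacy)  # boost poorly served areas

by_blended = sorted(neighbourhoods, key=lambda n: -blended_score(n))

print("demand only :", [n[0] for n in by_demand])   # Affluent-North first
print("with equity :", [n[0] for n in by_blended])  # Lower-South first
```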
Another example concerned optimisation bias in route planning. If an algorithm learns that it is “easier” to lay cables in certain areas, it may repeatedly recommend those areas, creating a systematic skew over time unless actively corrected.
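That feedback loop is easy to reproduce in a toy simulation (everything here, including the 2% "easiness" effect, is invented for illustration): a greedy planner keeps choosing the area whose estimated cost is lowest, and each completed job nudges that estimate further down, so without intervention all work piles into one area.

```python
true_cost = {"A": 100.0, "B": 104.0, "C": 108.0}

def simulate(rounds=200, corrected=False):
    est = dict(true_cost)                    # the planner's cost estimates
    jobs = {area: 0 for area in est}         # completed jobs per area
    for r in range(rounds):
        if corrected and r % 4 == 0:
            area = min(jobs, key=jobs.get)   # periodically serve the most-neglected area
        else:
            area = min(est, key=est.get)     # pure cost-greedy choice
        jobs[area] += 1
        est[area] *= 0.98                    # success reinforces the "easy" label
    return jobs

print("uncorrected:", simulate())            # all work lands in one area
print("corrected  :", simulate(corrected=True))
```

The periodic forced choice is only one crude correction; the broader point is that the skew does not fix itself.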
Through these cases, Vermeulen illustrated a broader shift: decisions that previously involved human deliberation by default may now arrive as AI-generated recommendations that do not automatically trigger ethical reflection. Making that reflection explicit becomes part of responsible deployment.
Building an Ethics Board: from shared morals to structured oversight
Faced with decisions that were technically sound but ethically complex, Alliander created a structured forum for discussion. The Ethics Advisory Board, Vermeulen explained, did not start as a policy idea but as a response to a concrete case.
A meter cabinet recognition model had classified certain homes into an “old cabinet” category. While technically functioning as intended, the classification unintentionally led to longer waiting times for residents in older housing stock. It was an early signal that ethical issues can arise even when models meet formal performance criteria.
In response, Alliander established a formal Ethics Advisory Board. Bringing together internal stakeholders (including compliance and related roles) and external perspectives, the board provides structured reflection and advice rather than taking over decision-making. It meets four times per year and, after two years, has discussed dozens of cases.
Translating mission statements into values you can operationalise
Vermeulen repeatedly returned to one challenge: translating high-level values into concrete decision criteria. A mission such as “equal access to reliable, affordable and sustainable energy” provides direction, but it cannot be directly coded into an algorithm. He described how values must be translated into operational principles and decision criteria, supported by tools such as an ethical dilemma canvas. This helps teams identify trade-offs early, make them discussable, and document their reasoning.
Even seemingly harmless tools can raise ethical questions. Meeting recording and summarisation features, for instance, may enable sensitive inferences about colleagues or internal dynamics. The goal, as Vermeulen emphasised, is not to reject innovation but to keep asking: what is possible, what is intended, and what is appropriate?
Bridging research, teaching and practice
The KINTalk series provides a platform to connect academic insights on digital innovation with real-world practice. This session made clear that responsible AI in critical infrastructure is not only a technical challenge; it is an organisational capability. It requires governance structures, value translation, transparency, and continuous dialogue across disciplines. Participants left with a sharper understanding of how AI ethics becomes operational: not through slogans, but through concrete cases, structured oversight, and a willingness to surface uncomfortable trade-offs before systems scale.