AI ethics is widely debated, but far less attention is paid to what it takes to apply ethical principles in everyday practice. How do organizations embed fairness, accountability, and transparency into operational AI systems—particularly when those systems underpin critical infrastructure?
Drawing on real-world examples from Liander’s operations, Tim Vermeulen will examine how algorithmic bias emerges, how it evolves over time, and what it means once AI systems are deployed at scale.
The KINTalk will explore topics such as:
- Efficiency vs. fairness: With over 60 AI models currently in production, including models that assess 1-to-3 phase connections without the need for a physical home visit, how can fairness and explainability be ensured?
- Strategic asset management: When AI informs where cables are laid, how do we avoid reinforcing existing inequalities, such as consistently favoring areas with higher EV adoption?
- The human element: Where should boundaries be set when AI is used to monitor, support, or evaluate employees?
To address these challenges, Liander has established a dedicated Ethics Board. Tim will share insights into how this governance structure functions in practice and what is required to make ethical oversight effective beyond policy documents.
This KINTalk is particularly relevant for professionals working in AI, governance, compliance, or infrastructure.
Program
17:15 - 17:20 Introduction to the KINTalk and the KIN Center for Digital Innovation
17:20 - 18:20 Presentation by Tim Vermeulen
18:20 - 18:30 Audience questions
18:30 - 19:00 Networking and drinks
Location
This KINTalk will be held at Vrije Universiteit Amsterdam, Main Building, HG-Agora 3, 1081 HV Amsterdam.
Register for the KINTalk by February 9 to secure your spot.