Researchers Jan Popma and Wes Damen from Vrije Universiteit Amsterdam (VU) argue that while we should embrace AI, we must also regulate it carefully to ensure job satisfaction, privacy, and safety in the workplace.
AI in practice: A double-edged sword
In 2024, the European AI Act came into force, aimed at steering AI development and deployment in the right direction. Systems that pose high risks to workers - such as AI used to assess employee performance - must meet strict requirements. Developers and providers are expected to conduct risk assessments and take measures to mitigate harm. Employers, however, are merely required to follow supplier guidelines.
This may seem logical, but according to Damen, it often leaves workers unprotected. ‘The question is whether this approach provides sufficient safeguards for employees. Will supplier guidelines truly tackle the real risks and safety concerns, or will there be attempts to avoid liability? If we fail to properly link the AI Act to existing occupational health and safety legislation, there’s a real risk that workers will be exposed to the downsides of poorly regulated workplace technology,’ he warns.
Works councils: essential for protecting workers
While the AI Act places heavy demands on developers, it offers little concrete guidance for employers. ‘Employers cannot afford to take a back seat,’ says Popma. ‘Works councils (in Dutch: ondernemingsraden, or ORs) must be involved in the decision-making process around AI implementation. Their right to advise is a powerful tool to assess the impact on workers and to ensure responsible AI use.’
However, the legislation lacks clear criteria for evaluating AI’s impact on employees. Popma and Damen are therefore calling for concrete guidelines tailored to works councils. ‘It’s crucial that ORs know what to look out for when AI is introduced in the workplace,’ Damen stresses. ‘This requires close cooperation between employees, employers, and supervisory bodies such as the Dutch Data Protection Authority.’
AI and employee wellbeing: where are the guidelines?
In addition to the AI Act, employers must comply with occupational health and safety regulations that require them to provide a safe and healthy work environment. But according to Popma and Damen, specific guidance on the impact of AI on employee wellbeing is still lacking.
‘AI can increase stress or lead to burnout if it is poorly integrated into work processes,’ Popma explains. He suggests using existing assessment tools - such as the WEBA methodology, which measures work pressure and wellbeing based on factors like autonomy and task variety - to identify potential risks early on. ‘We need to proactively assess the health implications of AI at work,’ he says.
The future of AI in the workplace
Although AI presents enormous opportunities, Popma and Damen warn that without clear rules, it may come at the expense of job satisfaction and workers’ rights. ‘We need to stay in control of how AI is used in the workplace,’ says Popma. ‘The technology is advanced - but as a society, we must be even smarter. It’s essential that workers and works councils make their voices heard and play an active role in shaping an AI-positive work environment.’
With AI legislation still evolving, businesses and employees alike have an important role to play in developing responsible usage guidelines. ‘When a new AI system is introduced, it’s vital to ask the right questions: How does it work? What data is being used? What risks might it pose?’ Damen adds. ‘It’s also important to remember that AI systems are not static - they evolve. That means we need regular evaluations, and adjustments where necessary, to ensure AI remains safe, ethical, and in service of the people who use it.’
The article 'Employee Participation in the Introduction of Algorithmic Management' by Popma and Damen will be published in April in the journal Recht en Arbeid.