The Digital Services Act (DSA) aims to make the online environment safer and more transparent. Sarah Eskens, an expert in digital legislation at VU Amsterdam, explains what the law entails and why it's so important.
What does the DSA regulate?
The DSA imposes rules on online platforms (such as marketplaces, social networks and video platforms) to create a safer and more transparent online environment. 'Its goal is not for the EU to dictate what people can or cannot say,' explains Eskens. 'But the law requires platforms to act fairly and consistently when moderating content.' This means, for example, that users who repeatedly share illegal content can be temporarily suspended, and that large platforms must follow clear procedures when removing posts.
For “very large online platforms” (like Facebook, Amazon and TikTok), additional rules apply. 'They must annually assess whether their services pose risks, for instance to elections or to the mental health of young people,' says Eskens. 'If such risks exist, they must take measures.'
The DSA does not require platforms to remove all harmful or misleading content. 'The law only includes a few rules regarding illegal content,' Eskens emphasises. 'And not everything that is harmful or misleading is illegal. Moreover, the DSA does not impose an immediate obligation to remove illegal content. If a platform is notified of illegal content and fails to remove it, it may be held liable under national law, not under the DSA itself. The DSA functions more as an incentive than a strict obligation.'
Protection against harmful content and AI moderation
An important difference from earlier legislation, such as the General Data Protection Regulation (GDPR), is that the DSA focuses not on privacy but on content moderation. 'The law does not determine which content is banned, but forces platforms to be transparent about how they moderate it. This is particularly relevant now that AI systems are increasingly being used to automatically detect and remove content.'
'If platforms use AI for moderation, they must be transparent about how it works,' Eskens explains. 'Users and other third parties must be able to verify whether AI is being used fairly. Furthermore, the DSA requires platforms to consider freedom of expression in content moderation. AI should not lead to the disproportionate removal of content posted by certain groups. Platforms must therefore actively prevent their systems from being discriminatory.'