By Sophia Kramer
Edited by Sina Olfermann, final editing by Lara Lamie
Abstract
This blog provides an overview of the current major developments in AI regulation, including an analysis of the EU’s recently adopted AI Act, as well as developments within one of the world's most dominant players in the realm of AI: China. In a global race for innovation, countries struggle to balance regulation with innovation. This blog explores the types of regulatory approaches to AI, their strengths and weaknesses, and how they relate to one another. The relevance of this blog is underlined by the understanding that analyzing different legal frameworks helps us assess and improve our own.
Key words: Artificial Intelligence (AI), European Union (EU), AI Act, China, AI regulation, AI labelling
Introduction
AI development has accelerated rapidly in recent years, especially in the United States (US), China, and the EU. Although the US is a major driver of AI innovation, this blog focuses on the EU and China, as recent regulatory developments in the US provide less material for comparative legal analysis at present. Growing awareness of the risks associated with this new technology has led to a demand for regulation. In response, legislative authorities have begun drafting and implementing their own regulatory mechanisms. Each legislative system embraces a different approach to regulating AI. The EU has adopted a risk-based approach, forming an overarching standard for AI systems within the EU. In contrast, China pursues a vertical approach in which different administrative bodies regulate AI in varying contexts. Before addressing these regulations, this blog first takes a closer look at the risks of unregulated AI, particularly the consequences for society and fundamental rights.
Dangers of unregulated AI
Despite demands for regulation, many legal and tech scholars are in favour of keeping AI unregulated, or at the very least less regulated. They defend the importance of innovation and claim it is too soon for the regulation of such a new and rapidly developing force. The US is often used as an example of a state that has decided to prioritize innovation over regulation. It is, however, undeniable that the emergence of AI has brought certain risks along with it.
One well-known concern is the environmental sustainability of AI. The proliferation of AI is closely tied to the debate surrounding climate change. With governments and international organisations like the EU seemingly failing to meet global climate targets, we are arguably already in the midst of an environmental crisis. As demand for AI rises, the use of large language models (LLMs) is only adding fuel to the fire. A study conducted by researchers at the University of Massachusetts estimated that training a single large AI model can result in the emission of nearly 300,000 kg of CO₂.[1] AI's water and energy consumption is similarly concerning.[2]
Furthermore, AI raises various ethical concerns. Especially in fundamental sectors such as law enforcement, healthcare, and the labour market, it is essential that AI models do not operate on the basis of discriminatory biases. AI works by sorting information collected through large-scale data input to find patterns.[3] It then uses these patterns to make predictions and generate new content. One concern is the risk employees face when their performance is evaluated by AI systems. Such systems are typically deployed in the workplace because of their advantages: they are cost-effective, can reduce human error, and are highly efficient. At the same time, one of AI's most noticeable strengths in this regard also has the potential to be its biggest weakness. AI is often presented as an objective instrument with the potential to eliminate human bias and maintain fairness in the workplace. When the training data is biased, however, that supposed objectivity is undermined.[4] The same problem arises in recruitment, where discriminatory biases are also prevalent.[5]
Finally, there is the connection between democracy and AI. It has been argued that AI poses risks to the democratic integrity of governments. A 2024 UNESCO report[6] underlines the essential role of transparency in a democratic society. AI can contribute to the spread of misinformation and hate speech, and AI algorithms can be exploited to influence elections and erode public trust. Such issues existed long before the emergence of AI, albeit on a much smaller scale. With the inexorable surge in AI development, however, the reach and consumption of artificially generated content can have a far greater effect on socio-political decision-making. The use of AI in political debates and electoral processes therefore continues to threaten the core democratic values that many states subscribe to.
EU regulation
Having identified the risks of unregulated AI, the question arises as to what current legislation aims to protect and to what extent it succeeds in doing so. With the rapid development of AI over the past few years, various regulatory bodies have taken legislative measures. In June 2024, for instance, the EU adopted its AI Act. As the Act has yet to fully come into effect,[7] many tech experts are waiting to see how effective it will be in practice, while others are already sceptical, arguing that it will only set the EU back in its competition with the US and China.[8]
The regulation takes a risk-based approach, distinguishing four risk levels: unacceptable risk, high risk, limited risk, and minimal/no risk. This categorisation determines which of its provisions apply. Article 5 of the regulation, for example, imposes a total ban on AI models that fall under the unacceptable-risk category. This includes, inter alia, social scoring systems, real-time remote biometric identification systems in public spaces, and harmful manipulation and exploitation through the use of AI.
AI systems are labelled 'high-risk' when they can negatively impact the health, safety, or fundamental rights of persons. The difference with unacceptable-risk AI is that these systems, if properly assessed and controlled, have the potential to positively impact society.[9] The aforementioned recruitment and performance evaluation systems fall under this category. Because of the risks associated with such systems, providers must meet certain requirements before releasing them to the public: for example, they must establish a quality management system and draw up EU declarations of conformity. Deployers (professional users of an AI system) must comply with their own respective obligations.
The limited-risk category contains AI systems that must meet the transparency requirements set out in Article 50 of the AI Act. The most notable obligation is that providers must disclose to users that the content they consume is AI-generated. Examples of systems that fall under this provision are generative AI models and AI chatbots, such as ChatGPT.[10]
Finally, the minimal-risk category consists of AI-enabled video games, grammar assistants, spam filters, and the like.[11] This category is not subject to binding obligations, only voluntary codes of conduct.
What sets the AI Act apart from other AI standards is that it forms an all-encompassing legal framework. This horizontal approach makes the Act applicable to all forms and applications of AI, irrespective of sector.
Chinese regulation
A less discussed topic is China's AI governance. China, alongside the US, is at the forefront of AI development and has recently introduced its "AI Plus Plan". The plan is intended to promote the integration of AI across fields and sectors such as science and technology, consumer services, and public welfare,[12] with the end goal of a fully intelligent economy and society by 2035.[13]
China has also seen its share of regulatory developments. In September 2025, it began requiring that all AI-generated content carry a "made by AI" label.[14] In this way, China is cracking down on generative AI content, protecting users from exploitation and misinformation. Like the AI Act, this policy puts transparency at the forefront. It builds on earlier regulations such as the 'Provisions on the Administration of Deep Synthesis of Internet-Based Information Services', which form China's primary instrument for deepfake regulation. Compared to the EU's transparency obligations, these provisions are much stricter, effectively tackling the harmful effects of deepfakes.[15]
In contrast to the EU, China's AI regulation follows a vertical approach, meaning that AI is regulated piecemeal, per sector and type of AI. These administrative regulations are created by several different state agencies, such as the Cyberspace Administration of China (CAC) and the Ministry of Science and Technology (MOST). China thus lacks a comprehensive legal framework for AI and has effectively withdrawn plans for such a framework from its 2025 agenda, choosing to focus on its current sectoral regulation instead.[16]
Legislative comparison
When comparing the two systems, it is crucial to keep in mind that the approaches pursue very different goals. Neither should therefore hastily be labelled "the better approach". This blog focuses instead on the balance between risk prevention and the promotion of innovation.
China’s vertical approach can lead to regulatory fragmentation. When several departments regulate different aspects of AI, the legal status of AI systems becomes unclear; it is this lack of a general standard that fragments AI law. When the different frameworks conflict, companies based in China can suffer the consequences.[17] They often incur unnecessary costs due to legal uncertainty or may even withdraw from the market altogether.[18] A positive aspect of Chinese regulation is its adaptivity and room for innovation, which suits the rapidly evolving nature of AI.

This is precisely what the AI Act, with its horizontal approach, lacks. The EU legislative process is long and burdensome and may not be able to keep up with the rapid development of AI. Furthermore, a comprehensive standard can become too rigid, failing to capture the nuances between different sectors and types of AI. The AI Act’s main purpose, moreover, is to protect fundamental rights, whereas China is more focused on innovation and state control. For this reason, critics argue that the EU leaves little room for innovation.[19]
In theory, an ideal approach would be a hybrid one, incorporating both vertical and horizontal elements. This could take the form of a foundational legal framework for unacceptable-risk AI, elaborated further through sectoral regulation. Such sectoral regulation would be context-specific, allowing the nuances of each field to be addressed. Whether this would be effective in practice remains to be seen.
For AI that crosses borders, international cooperation is needed – however unlikely it may be within the current competitive climate of AI development. The current big players in AI are working hard to surpass each other in AI innovation. Nevertheless, we can remain hopeful that with China’s “AI+ International Cooperation Initiative”, the prospect of international cooperation can become a reality.
Conclusion
It has become apparent that the proliferation of AI carries many risks, from its various ethical drawbacks to its adverse effects on the environment. Regulation is therefore vital for an artificially intelligent future, but it must be done carefully, ensuring that innovation is not hindered in the process. Currently, it is still difficult to say with certainty whether a horizontal (risk-based) or a vertical approach is the best course of action. In any case, the measures taken by the EU and China respectively are steps in the right direction. The EU’s risk-based approach offers legal certainty and effectively protects the rights of legal and natural persons alike. China’s adaptive sectoral approach paves the way for innovation whilst keeping stringent control over AI systems that might not align with the government's principles.
Ideally, future regulatory efforts will lead to a world in which the big players in AI come together to create an overarching international standard, which countries can then expand on nationally. Until then, every country should feel urged to analyse different AI regulations. This will allow countries and international organisations to learn from one another and eventually set a global standard for AI regulation.
Sophia Kramer (2005, she/her) is a Dutch-Mexican honours student at Vrije Universiteit Amsterdam, pursuing a bachelor’s in law (LL.B.). Her interests include but are not limited to corporate law, property law, and international business law. She strives to make legal matters more accessible by offering in-depth analyses on the latest legal developments.
Bibliography
Alex de Vries-Gao, ‘Artificial intelligence: Supply chain constraints and energy implications’ (2025) 9(6) Joule <https://www.cell.com/joule/abstract/S2542-4351(25)00142-4> accessed 29 December 2025.
Barbara Li, ‘China releases ‘AI Plus’ plan, rolls out AI labeling law’ (iapp, 5 September 2025) <https://iapp.org/news/a/china-releases-ai-plus-plan-rolls-out-ai-labeling-law> accessed 27 December 2025.
Ben Hu and Adam Au, ‘China resets the path to comprehensive AI governance’ (East Asia Forum, 25 December 2025) <https://eastasiaforum.org/2025/12/25/china-resets-the-path-to-comprehensive-ai-governance/> accessed 28 December 2025.
Cole Stryker and EA Kavlakoglu, ‘What is artificial intelligence (AI)?’ (IBM) <https://www.ibm.com/think/topics/artificial-intelligence> accessed 14 January 2026.
Colorado State University Global, ‘How Does AI Work?’ (2025) <https://csuglobal.edu/blog/how-does-ai-actually-work> accessed 15 January 2026.
Cyberspace Administration of China, ‘Notice on issuing the “Measures for Identifying Artificial Intelligence-Generated and Synthetic Content”’ (14 March 2025) <https://www.cac.gov.cn/2025-03/14/c_1743654684782215.htm>
Daniel Innerarity, ‘Artificial intelligence and democracy’ (2024) UNESCO <https://unesdoc.unesco.org/ark:/48223/pf0000389736> accessed 26 December 2025.
Ed Sander, ‘China is leaving the west behind in regulating deepfakes’ (2022) <https://www.chinatalk.nl/europe-is-falling-behind-china-in-regulating-deepfakes/> accessed 15 January 2026.
Emma Strubell, Ananya Ganesh and Andrew McCallum, ‘Energy and Policy Considerations for Deep Learning in NLP’ (2019) <https://aclanthology.org/P19-1355.pdf> accessed 28 December 2025.
European Parliament, ‘EU AI Act: first regulation on artificial intelligence’ (2023) <https://www.europarl.europa.eu/thinktank/en/document/EPRS_BRI(2024)762323> accessed 25 December 2025.
European Parliamentary Research Service, ‘Addressing AI risks in the workplace’ (2024) <https://www.europarl.europa.eu/RegData/etudes/BRIE/2024/762323/EPRS_BRI(2024)762323_EN.pdf> accessed 26 December 2025.
Filippo Lancieri, Laura Edelson and Stefan Bechtold, ‘AI Regulation: The Politics of Fragmentation and Regulatory Capture’ (2025) <https://blogs.law.ox.ac.uk/oblb/blog-post/2025/06/ai-regulation-politics-fragmentation-and-regulatory-capture> accessed 15 January 2026.
Irina A. Filipova, ‘Legal Regulation of Artificial Intelligence: Experience of China’ (2024) 2(1) Journal of Digital Technologies and Law <https://doi.org/10.21202/jdtl.2024.4> accessed 28 December 2025.
Ministry of Economic Affairs, ‘AI Act Guide’ (2025) <https://www.government.nl/documents/publications/2025/09/04/ai-act-guide> accessed 25 December 2025.
Ministry of Economic Affairs, ‘Europese set regels over AI treedt in werking’ (2024) <https://www.rijksoverheid.nl/actueel/nieuws/2024/08/02/europese-set-regels-over-ai-treedt-in-werking> accessed 20 January 2026.
Pascale Davies, ‘EU AI Act reaction: Tech experts say the world’s first AI law is ‘historic’ but ‘bittersweet’’ (2024) <https://www.euronews.com/next/2024/03/16/eu-ai-act-reaction-tech-experts-say-the-worlds-first-ai-law-is-historic-but-bittersweet> accessed 20 January 2026.
Software Improvement Group, ‘A comprehensive EU AI Act Summary [Aug 2025 update]’ (2025) <https://www.softwareimprovementgroup.com/blog/eu-ai-act-summary/> accessed 15 January 2026.
[1] Emma Strubell, Ananya Ganesh and Andrew McCallum, ‘Energy and Policy Considerations for Deep Learning in NLP’ (2019) <https://aclanthology.org/P19-1355.pdf> accessed 28 December 2025.
[2] Alex de Vries-Gao, ‘Artificial intelligence: Supply chain constraints and energy implications’ (2025) 9(6) Joule <https://www.cell.com/joule/abstract/S2542-4351(25)00142-4> accessed 29 December 2025.
[3] Colorado State University Global, ‘How Does AI Work?’ (2025) https://csuglobal.edu/blog/how-does-ai-actually-work accessed 15 January 2026.
[4] European Parliamentary Research Service, ‘Addressing AI risks in the workplace’ (2024) https://www.europarl.europa.eu/thinktank/en/document/EPRS_BRI(2024)762323 accessed 26 December 2025.
[5] Cole Stryker and EA Kavlakoglu, ‘What is artificial intelligence (AI)?’ https://www.ibm.com/think/topics/artificial-intelligence accessed 14 January 2026.
[6] Daniel Innerarity, ‘Artificial intelligence and democracy’ (2024) UNESCO <https://unesdoc.unesco.org/ark:/48223/pf0000389736> accessed 26 December 2025.
[7] Ministry of Economic Affairs, ‘Europese set regels over AI treedt in werking’ (2024) https://www.rijksoverheid.nl/actueel/nieuws/2024/08/02/europese-set-regels-over-ai-treedt-in-werking accessed 20 January 2026.
[8] Pascale Davies, ‘EU AI Act reaction: Tech experts say the world’s first AI law is ‘historic’ but ‘bittersweet’’ (2024) https://www.euronews.com/next/2024/03/16/eu-ai-act-reaction-tech-experts-say-the-worlds-first-ai-law-is-historic-but-bittersweet accessed 20 January 2026.
[9] Ministry of Economic Affairs, ‘AI Act Guide’ (2025) <https://www.government.nl/documents/publications/2025/09/04/ai-act-guide> accessed 25 December 2025.
[10] European Parliament, ‘EU AI Act: first regulation on artificial intelligence’ (2023) <https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence> accessed 25 December 2025.
[11] Software Improvement Group, ‘A comprehensive EU AI Act Summary [Aug 2025 update]’(2025) https://www.softwareimprovementgroup.com/blog/eu-ai-act-summary/#:~:text=The%20EU%20AI%20Act%20risk,regulatory%20implications%20on%20each%20system accessed 15 January 2026.
[12] Barbara Li, ‘China releases ‘AI Plus’ plan, rolls out AI labeling law’ (iapp, 5 September 2025) <https://iapp.org/news/a/china-releases-ai-plus-plan-rolls-out-ai-labeling-law> accessed 27 December 2025.
[13] Ibid.
[14] Cyberspace Administration of China, ‘Notice on issuing the “Measures for Identifying Artificial Intelligence-Generated and Synthetic Content”’ (14 March 2025) <https://www.cac.gov.cn/2025-03/14/c_1743654684782215.htm>
[15] Ed Sander, ‘China is leaving the west behind in regulating deepfakes’ (2022) https://www.chinatalk.nl/europe-is-falling-behind-china-in-regulating-deepfakes/ accessed 15 January 2026.
[16] Ben Hu and Adam Au, ‘China resets the path to comprehensive AI governance’ (East Asia Forum, 25 December 2025) <https://eastasiaforum.org/2025/12/25/china-resets-the-path-to-comprehensive-ai-governance/> accessed 28 December 2025.
[17] Ibid.
[18] Filippo Lancieri, Laura Edelson and Stefan Bechtold, ‘AI Regulation: The Politics of Fragmentation and Regulatory Capture’ (2025) https://blogs.law.ox.ac.uk/oblb/blog-post/2025/06/ai-regulation-politics-fragmentation-and-regulatory-capture accessed 15 January 2026.
[19] Irina A. Filipova, ‘Legal Regulation of Artificial Intelligence: Experience of China’ (2024) 2(1) Journal of Digital Technologies and Law <https://doi.org/10.21202/jdtl.2024.4> accessed 28 December 2025.