As artificial intelligence (AI) continues to transform various sectors—from healthcare and finance to transportation and beyond—the need for effective regulation has become increasingly urgent. The European Union (EU) is at the forefront of this regulatory landscape with the introduction of Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (EU AI Act), published in the Official Journal of the EU on 12 July 2024.
This landmark legislation seeks to establish a comprehensive framework to ensure that AI technologies are developed and used responsibly, balancing innovation with the protection of fundamental rights and societal values. The EU AI Act shall apply from 2 August 2026.
However, some parts of it are already effective: from 2 February 2025, Chapters I (General Provisions) and II (Prohibited AI Practices); and from 2 August 2025, Chapter III, Section 4 (Notifying Authorities and Notified Bodies), Chapter V (General-Purpose AI Models), Chapter VII (Governance), Article 78 (Confidentiality) and Chapter XII (Penalties), with the exception of Article 101 (Fines for Providers of General-Purpose AI Models).
Article 6(1) – Classification Rules for High-Risk AI Systems – and the corresponding obligations in the EU AI Act shall apply from 2 August 2027.
Overview of the EU AI Act
Introduced by the European Commission in April 2021, the EU AI Act aims to create a harmonized legal framework for AI across the EU member states. It is part of the broader digital strategy of the EU, which includes the Digital Services Act and the Digital Markets Act. The AI Act emphasizes ethical AI development and aims to mitigate risks associated with AI technologies while promoting innovation across the region.

Key Objectives
The EU AI Act seeks to safeguard citizens’ rights, ensuring that AI systems are designed and employed in ways that respect human dignity, privacy, and non-discrimination. In addition, by establishing clear rules and standards, the EU aims to build public trust in AI technologies. Transparency, accountability, and reliability are central to this goal.
The EU AI Act also aims to foster innovation and competitiveness. While mitigating risks, it seeks to create an environment that encourages responsible innovation. By providing a clear regulatory landscape, the Act intends to support businesses in developing and deploying AI technologies.
Risk-Based Classification
One of the distinguishing features of the EU AI Act is its risk-based approach to regulation. AI systems are categorized into four risk tiers:
- Unacceptable Risk: AI systems that pose significant threats to safety, fundamental rights, or societal values will be prohibited. This includes applications such as social scoring systems and certain types of biometric surveillance.
- High Risk: AI systems classified as high-risk are subject to stringent requirements, including risk assessments, data governance, and transparency obligations. This category includes AI used in critical infrastructures, education, employment, law enforcement, and healthcare.
- Limited Risk: AI systems that pose a limited risk must comply with specific transparency obligations. For example, users must be informed when they are interacting with an AI system, such as chatbots.
- Minimal Risk: AI systems that present minimal risks will not be subject to specific legal requirements under the EU AI Act. However, the EU encourages voluntary compliance with ethical guidelines in this category.
Key Provisions
The EU AI Act mandates that high-risk AI systems be transparent about their capabilities and limitations, fostering informed decision-making among users. High-risk AI systems must be designed to ensure human oversight, allowing individuals to intervene in automated processes where necessary. The Act also stipulates strict data quality requirements to ensure the reliability and accuracy of AI systems, encompassing aspects such as bias mitigation and system validation.
With respect to compliance and enforcement, national authorities will be responsible for overseeing compliance with the EU AI Act, conducting assessments, and applying penalties for violations. The already established and functioning European AI Board facilitates cooperation among member states. The AI Board includes representatives from each EU Member State and is supported by the AI Office within the European Commission, which serves as the Board’s Secretariat. It is chaired by one of the EU Member States. The Board plays a crucial role in the governance framework set out by the EU AI Act, ensuring its effective implementation across the Union. The European Data Protection Supervisor and the EEA-EFTA countries participate in the Board’s meetings as observers. At its most recent meeting, the sixth, held on 4 December 2025, the AI Board discussed the Digital Simplification Package and reviewed current priorities for implementing the EU AI Act.
Implications for Businesses
Businesses operating in the AI space will need to adapt to the provisions of the EU AI Act. Companies may face increased compliance costs associated with the requirements for high-risk systems, particularly in areas such as documentation, testing, and monitoring. At the same time, new market opportunities may arise: by establishing clear guidelines, the EU AI Act can help legitimize AI applications, potentially boosting market demand and fostering growth in responsible AI innovation. Worth noting, the EU AI Act is poised to set a global standard for AI governance, influencing regulatory approaches in other jurisdictions and establishing the EU as a leader in ethical AI development.
Conclusion
The EU AI Act represents a significant step toward regulating artificial intelligence in a manner that prioritizes safety, rights, and innovation. As the landscape of AI technology evolves, the Act will likely require continuous adaptation and updates to address emerging challenges. By fostering a balanced approach between regulation and innovation, the EU aims to lead the world in creating an ethical and trustworthy AI ecosystem, ultimately benefiting individuals and society as a whole. The implementation of the EU AI Act will be closely watched, potentially serving as a blueprint for other regions contemplating similar regulations.