The European Union has made history with the introduction of the AI Act, which focuses on regulating high-risk areas of AI technology usage. EU Commissioner Thierry Breton has hailed this legislation as a groundbreaking step towards introducing a risk-based approach to overseeing AI technology.
The act targets high-risk uses such as governments' deployment of AI for biometric surveillance, and requires systems like ChatGPT to meet transparency obligations before they reach the market. Following a December 2023 political agreement, the text has been refined in preparation for legislative approval.
The recent agreement marks the conclusion of negotiations, with permanent representatives of all EU member states voting on Feb. 2. This sets the stage for the act to progress through the legislative process, with a vote by a key committee of EU lawmakers on Feb. 13 and an anticipated plenary vote in the European Parliament in March or April.
The AI Act’s approach centers on the principle that developers bear greater responsibility for riskier AI applications, particularly in critical areas such as job recruitment and educational admissions. Margrethe Vestager, Executive Vice President of the European Commission for a Europe Fit for the Digital Age, has emphasized this focus on high-risk cases as a way to ensure AI technologies align with EU values and standards.
Full implementation of the AI Act is expected in 2026, with certain provisions taking effect earlier to allow a gradual transition to the new regulatory framework. Beyond establishing this regulatory foundation, the European Commission is supporting the EU’s AI ecosystem by creating an AI Office responsible for monitoring compliance with the act, with a particular focus on high-impact foundation models that pose systemic risks.
The AI Act will be the world’s first comprehensive AI law, aiming to regulate the use of artificial intelligence in the EU to ensure better conditions for its deployment, protect individuals, and promote trust in AI systems. It classifies AI systems into four risk tiers (unacceptable, high, limited, and minimal) and will be enforced by national market surveillance authorities, supported by a European AI Office within the EU Commission.
Separately, the EU has proposed categorizing cryptocurrencies as financial instruments and imposing stricter rules on non-EU crypto firms to ensure fair competition and harmonize standards for entities operating within the bloc. The measures would restrict non-EU crypto firms from serving customers in the EU and align with existing EU financial laws, which require foreign firms to establish branches or subsidiaries inside the union.
In line with this initiative, the European Securities and Markets Authority (ESMA) has issued additional guidelines for non-EU-based crypto firms, emphasizing regulatory clarity and investor protection. The move forms part of a broader EU effort to protect investors and foster the growth of crypto services within the bloc.