
A Comprehensive Overview of the EU’s AI Act: New Era in AI Governance – A Bold Step or a Regulatory Minefield?

Abstract

This article examines the EU’s AI Act, which came into force on 01.08.2024. As the world’s first comprehensive legal framework for AI regulation, the Act represents a significant milestone in technology governance. Adopting a risk-based approach, it aims to balance innovation with the protection of fundamental rights. This analysis explores the Act’s key provisions, including its classification framework, obligations for AI providers and deployers, and its distinct emphasis on general-purpose AI models. Written in February 2025, this overview provides timely insights into the Act’s implementation challenges and its potential to shape global AI regulatory standards.

Keywords: AI Act, EU law, AI systems, GPAI, Risk-based approach.

Introduction

As we enter 2025, the shift from the Fourth to the Fifth Industrial Revolution[i] is marked by a transition from automation to collaboration between humans and machines. This era is defined by rapid advancements in Artificial Intelligence (‘AI’) and related technologies, transforming sectors such as healthcare, finance, transportation, and entertainment. However, these advancements also bring significant challenges and risks, necessitating regulatory oversight. The widespread application of AI in autonomous vehicles, robotic surgeries, and AI-generated content requires careful governance to prevent unchecked development and ensure responsible use.

In response to these challenges, the European Union (‘EU’) introduced its first legal framework for AI regulation in July 2024. Regulation (EU) 2024/1689, known as the Artificial Intelligence Act (‘AI Act/Act’)[ii], aims to establish a regulatory structure that balances AI’s benefits with its associated risks. The Act encourages the development of human-centric and trustworthy AI while ensuring high standards of protection for health, safety, and fundamental rights, as outlined in the EU Charter[iii]. Employing a risk-based approach, the Act classifies AI systems into different risk levels – unacceptable, high, specific transparency (limited), and minimal – each subject to distinct regulatory requirements[iv]. This comprehensive framework seeks to mitigate potential negative impacts while fostering AI innovation across Europe.

The EU has historically led the way in technology regulation, as demonstrated by the evolution of the 1995 Data Protection Directive into the globally influential General Data Protection Regulation (‘GDPR’). This raises the question of whether the AI Act could similarly establish a new benchmark for modern technology regulation. To assess this possibility, it is essential to examine the Act’s key provisions, historical background, fundamental definitions, and scope of application. Additionally, analysing its impact on various stakeholders will highlight its significance in shaping AI governance.

Road to the Enactment of the AI Act

AI has immense potential to enhance human life, often appearing as a near-magical solution to every challenge. However, as with any powerful technology, it also carries significant risks. Effective regulation is therefore necessary to ensure its safe and ethical use while upholding fundamental rights.

In April 2019, policymakers at the European Commission (‘Commission’) pledged to adopt a human-centric approach to AI regulation, ensuring that AI development aligns with EU values and benefits the European public[v]. Initially, the Commission pursued a soft law approach by publishing two non-binding guidelines: Ethics Guidelines for Trustworthy AI[vi] and Policy and Investment Recommendations for Trustworthy AI[vii]. The following year, it released a White Paper on AI, committing to promoting AI adoption while addressing associated risks[viii].

In 2021, the Commission shifted towards legislative regulation with the publication of Fostering a European Approach to AI[ix], which called for harmonized rules governing AI development, distribution, and use. Building on extensive public consultations conducted in 2020, it also published an Impact Assessment on AI Regulation[x], incorporating stakeholder feedback. This assessment identified several challenges in AI system development and use, including opacity (difficulty in understanding system operations), complexity, continuous adaptation, unpredictability, autonomous behaviour, and reliance on data quality.

Following further deliberations and the adoption of a common position by the EU Council, the AI Act was officially published in July 2024 and came into force on 01.08.2024.

Terminology & Definitions

A. 3 of the AI Act defines key terms essential for understanding and enforcing its provisions[xi]. An ‘AI system’ is defined as:

“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”[xii]

This definition excludes simpler traditional software systems or programming applications. It emphasizes a degree of autonomy and adaptiveness in a machine-based system, thereby excluding systems where complete control remains with humans and the machine’s output is entirely predictable.

The Act also defines a general-purpose AI model (‘GPAI’) as:

“an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market.”[xiii]

The Act treats GPAI models as a distinct category, separate from AI systems, because of their broad capabilities and potential systemic risks.

A ‘provider’ is defined as:

“a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.”[xiv]

A ‘deployer’, on the other hand, refers to:

“a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal, non-professional activity.” [xv]

The Act also defines ‘placing on the market’ as:

“first making available of an AI system or a general-purpose AI model on the Union market”[xvi]

Meanwhile, ‘making available on the market’ refers to:

“supply of an AI system or a general-purpose AI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge.”[xvii]

Additionally, the term ‘intended purpose’ is described as:

“the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation.”[xviii]

Clearly defining an AI system’s intended purpose before placing it on the market is crucial for risk assessment. Another important term is ‘reasonably foreseeable misuse’, defined as:

“the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems.”[xix]

Recognizing the evolving nature of AI, the Act also introduces the concept of ‘substantial modification’, described as:

“a change to an AI system after its placing on the market or putting into service which is not foreseen or planned in the initial conformity assessment carried out by the provider and as a result of which the compliance of the AI system with the requirements set out in Chapter III, Section 2 is affected or results in a modification to the intended purpose for which the AI system has been assessed.”[xx]

The Act strikes a balance in its definitions – neither too narrow nor overly broad – leaving ample room for interpretation. This is a welcome measure for fostering AI innovation while ensuring adaptability to future developments. Given the dynamic nature of AI technology, overly rigid definitions could impede AI development and create enforcement challenges.

Understanding the Scheme of the Act

The Act primarily applies to providers and deployers of AI systems and GPAI models that are either established in the EU, operate within the EU, or have AI outputs used within the Union. It applies to both private organisations and public authorities. However, it excludes AI systems used for military, defence, or national security purposes, as well as systems developed and used solely for scientific research and development. Generally, the regulation does not cover AI systems before they are placed on the market or put into service, though sandboxing rules may apply. This framework allows the EU to regulate AI comprehensively while providing exceptions for specific sectors and development stages, striking a balance between innovation and safety in AI deployment.

The Act aligns with existing EU laws on data protection and privacy and does not apply to individuals using AI for personal, non-professional purposes. Additionally, open-source AI systems are exempt unless classified as high-risk or falling under specific provisions[xxi]. The regulation consists of 13 chapters, with key provisions summarised below:

  • Ch. II, A. 5: This provision prohibits certain uses of AI, including systems that manipulate decision-making or exploit individuals’ vulnerabilities, as well as those that evaluate or categorize people based on social behaviour or personal characteristics. It also bans AI systems that collect facial images from online sources or CCTV, infer emotions in workplaces or educational settings, or classify individuals using biometric data. However, exceptions exist for law enforcement activities, such as locating missing persons or preventing terrorism, provided they respect individual rights. The Act does not cover AI used for scientific research, systems not yet on the market, or personal use by individuals[xxii].

  • Ch. III, A. 6: This article sets out the classification of high-risk AI systems. An AI system is deemed high-risk if it functions as a safety component of a product, or is itself a product, covered by the EU legislation listed in Annex I and required under that legislation to undergo a third-party conformity assessment before being marketed or deployed. Certain AI systems listed in Annex III are also classified as high-risk unless they do not pose a significant risk of harm to individuals’ health, safety, or fundamental rights. A provider asserting that such a system is not high-risk must document its assessment before market entry. Further classification guidelines will be issued by the Commission[xxiii].

  • Ch. III, As. 8-17: These provisions specify the requirements for high-risk AI systems. Providers must implement a comprehensive risk management system throughout the AI’s lifecycle. They are required to ensure effective data governance by using training, validation, and testing datasets that are relevant, adequately representative, and as error-free and complete as possible for their intended purposes. Additionally, providers must prepare technical documentation to demonstrate compliance and facilitate regulatory assessments. AI systems must be designed to automatically record relevant events (logs) throughout their lifetime to help identify situations that may present risks or substantial modifications. Providers must also supply usage instructions to deployers, enable human oversight, and ensure that their AI systems meet appropriate standards of accuracy, robustness, and cybersecurity. Finally, they must establish a quality management system to maintain ongoing compliance[xxiv].

  • Ch. IV, A. 50: This provision mandates that companies notify users when they are interacting with an AI system, except where this is obvious from the context or the system is used for law enforcement purposes, such as crime detection. AI systems that generate synthetic content, including deepfakes, must clearly label it as artificially generated. Companies must also notify users if AI is used for emotion recognition or biometric categorization, again with exceptions for law enforcement purposes. Additionally, if AI generates or manipulates published content, companies must disclose this unless the use is authorized by law or the content is evidently artistic or satirical[xxv].

  • Ch. V, A. 51: This article establishes the criteria for classifying a GPAI model as posing systemic risk. A GPAI model is considered to have systemic risk if it demonstrates high-impact capabilities, as determined through technical tools and benchmarks, or if the Commission deems it to have similar capabilities or impact. Additionally, if a GPAI model requires significant computational resources for training, it is presumed to have high-impact capabilities. The Commission retains the authority to modify these regulations in response to technological advancements[xxvi].

  • Ch. V, A. 53: Companies developing GPAI models must maintain comprehensive records of their development and testing processes. They are required to share this information with other companies that intend to use their AI while safeguarding intellectual property. However, open-source AI models freely available to the public are exempt unless they pose systemic risks. Developers must collaborate with the Commission and national authorities and can demonstrate compliance by adhering to approved codes of practice until a standardized framework is established. The Commission has the authority to update these regulations as technology evolves.[xxvii]

  • Ch. V, A. 55: Providers of GPAI models deemed to have systemic risk must comply with specific regulations. They must evaluate their models using standardized protocols, identify and mitigate systemic risks, and report significant incidents to the AI Office and relevant national authorities. Additionally, they must ensure the security of their AI models and underlying infrastructure. Compliance can be demonstrated through approved codes of practice until a standard is established. If providers do not follow an approved code or standard, they must find alternative means to prove compliance. All information obtained during this process must remain confidential[xxviii].

Classification of Risk Categories

The Act establishes a unified regulatory framework across all Member States, employing a forward-looking AI definition and a risk-based approach. It classifies AI systems into four risk levels:

  • Unacceptable Risk: This category includes AI applications that are particularly harmful and violate fundamental EU values and rights. AI systems that exploit individuals’ vulnerabilities, manipulate behaviour using subliminal techniques, or enable social scoring for public or private purposes are prohibited. Additionally, predictive policing based solely on profiling, untargeted scraping of facial images from the internet or CCTV for database creation, emotion recognition in workplaces and educational institutions (except for medical or safety reasons), biometric categorization to infer sensitive personal attributes (except for law enforcement purposes), and real-time remote biometric identification in public spaces by law enforcement (with limited exceptions) are also banned.

  • High Risk: This category includes AI systems that could significantly impact safety or fundamental rights. It covers AI applications in critical sectors such as healthcare, employment, law enforcement, and the operation of robots, drones, or medical devices. AI systems classified as high-risk must undergo a conformity assessment before being placed on the market. The rationale behind this is to ensure additional scrutiny for AI components integrated into products already subject to stringent safety regulations. Examples include AI-powered diagnostic tools and AI-driven credit evaluation systems.

  • Specific Transparency Risk: AI systems that pose a risk of manipulation must meet transparency requirements. Users must be informed whenever they interact with AI, particularly in applications like chatbots and deepfake technologies, to prevent misuse.

  • Minimal Risk: Most AI systems fall into this category and can be developed and used under existing legislation without additional obligations. However, providers may voluntarily adhere to codes of conduct to ensure trustworthy AI practices.

The Act also addresses systemic risks associated with GPAI models, particularly large generative AI models. These versatile models serve as foundational technologies for numerous AI applications across the EU and globally. The Act recognizes their potential risks, including cybersecurity vulnerabilities and bias amplification.

Obligations for the Providers of AI Systems

Providers of High-Risk AI Systems

The Act establishes a clear methodology for identifying high-risk AI systems, ensuring regulatory clarity for businesses. The classification follows a dual approach based on the AI system’s intended purpose and alignment with existing EU product safety laws, deeming an AI system high-risk in two scenarios:

  • When it serves as a safety component in products covered by existing EU legislation (Annex I) or is itself such a product, like AI-based medical software.

  • When it is designed for specific high-risk applications listed in Annex III, including education, employment, law enforcement, and migration.

This classification system considers both the function performed by the AI system and its specific purpose and mode of use, providing a structured framework for regulatory compliance.

Providers of high-risk AI systems must conduct a conformity assessment before introducing their systems to the EU market or putting them into service. This assessment must demonstrate compliance with the mandatory requirements for high-risk AI, including data quality, documentation, transparency and traceability, human oversight, accuracy, and cybersecurity. Any substantial modification to the system or its purpose requires reassessment.

AI systems that serve as safety components in products covered by various EU sectoral laws are automatically classified as high-risk when those laws require a third-party conformity assessment. Additionally, high-risk biometric AI systems generally require third-party conformity assessments involving a notified body.

Providers must implement quality and risk management systems to maintain ongoing compliance and minimize risks for users and affected individuals. Public authorities or their representatives deploying high-risk AI systems must register them in a public EU database, except for law enforcement and migration-related systems, which are recorded in a restricted-access section available only to supervisory authorities.

To ensure compliance throughout the AI system’s lifecycle, market surveillance authorities will conduct regular audits and post-market monitoring. Providers must also report serious incidents and breaches of fundamental rights obligations to the relevant authorities. In exceptional cases, authorities may grant exemptions for specific high-risk AI systems. The Act also grants national authorities access to relevant information for investigating compliance in case of a legal breach.

Providers of GPAI Systems

The Act establishes a comprehensive framework for GPAI models, emphasizing transparency and risk management. Standard GPAI providers must share essential information with downstream system providers integrating their models to ensure safety and compliance. They are also required to implement copyright compliance policies and publish summaries of the content used for model training. Providers of GPAI models posing systemic risks must additionally conduct risk assessments and mitigation, perform state-of-the-art model evaluations, promptly report serious incidents, and maintain robust cybersecurity measures. At present, GPAI models trained using a cumulative amount of computation exceeding 10^25 FLOPs[xxix] are presumed to pose systemic risk.
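
To make the threshold concrete, the sketch below illustrates, in plain Python, how the compute-based presumption could be checked. It is purely illustrative: the function name and inputs are hypothetical, it is not an official compliance tool, and the threshold itself may be revised by the Commission.

# Illustrative sketch only: A. 51(2) presumes systemic risk where the cumulative
# training compute of a GPAI model exceeds 10^25 floating-point operations.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # subject to revision by the Commission

def presumed_systemic_risk(cumulative_training_flops: float) -> bool:
    """Return True if training compute triggers the systemic-risk presumption."""
    return cumulative_training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(2e25))  # True: above the 10^25 FLOP threshold
print(presumed_systemic_risk(5e24))  # False: below the threshold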

Providers of Generative AI Systems

The Act introduces rules to maintain transparency in Generative AI content, addressing risks related to misinformation, manipulation, and deception. Providers must mark AI-generated outputs in a machine-readable format to ensure their detectability as manipulated or artificially generated content. They are also required to deploy technical solutions that are effective, interoperable, reliable, and robust while considering content limitations, implementation costs, and prevailing technical standards.
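
As a purely illustrative sketch of what machine-readable marking might look like in practice (the Act does not prescribe any particular technique; the field names and structure below are assumptions rather than a mandated format), a provider could attach provenance metadata to generated output along these lines:

import json
from datetime import datetime, timezone

def label_generated_content(content: str, generator: str) -> str:
    # Wrap AI output with a hypothetical machine-readable provenance record.
    record = {
        "content": content,
        "provenance": {
            "ai_generated": True,
            "generator": generator,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(record)

print(label_generated_content("An artificially generated caption.", "example-model"))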

Deployers of Generative AI also have specific disclosure obligations under the Act. They must clearly disclose when AI-generated or manipulated image, audio, or video content constitutes deepfakes. Similarly, when AI-generated text informs the public on matters of public interest, deployers must indicate its artificial nature. However, these disclosure requirements do not apply if the AI-generated content undergoes human review or editorial control or if a natural or legal person assumes editorial responsibility for the published content. This approach ensures transparency in AI-generated content.

Present Landscape

The Act will become applicable two years after it enters into force, i.e., on 02.08.2026. However, certain provisions will take effect earlier:

  • Prohibited AI systems must be discontinued within 6 months.

  • GPAI-related provisions and associated penalties will apply after 12 months.

  • High-risk AI system requirements will take effect in 24 months, with an extended implementation period of 36 months for AI systems already covered under existing EU product legislation.

Companies that fail to comply with the Act face fines of up to €35 million or 7% of their global annual turnover, whichever is higher. However, in the absence of detailed technical guidelines, technology companies may struggle to demonstrate compliance. As a result, some companies may opt not to deploy their AI systems in the EU[xxx].

To address this challenge, ETH Zurich, Bulgaria’s Institute for Computer Science, AI and Technology (INSAIT), and the Swiss startup LatticeFlow AI have developed a tool to evaluate Generative AI system compliance with the Act. The tool, a large language model (‘LLM’) checker, assigns AI models a score from 0 to 1 based on factors such as cybersecurity, privacy, data governance, and environmental well-being. Initial testing indicates that many models fail to meet safety standards, particularly regarding cybersecurity and discrimination prevention[xxxi].
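
By way of a rough, hypothetical illustration of how such a 0-to-1 score might be aggregated from per-category results (the checker’s actual categories, weights, and methodology are not reproduced here, so every detail below is an assumption):

# Hypothetical aggregation of per-category scores into one 0-to-1 figure,
# loosely echoing the idea of the LLM checker described above.
def aggregate_compliance_score(category_scores: dict[str, float]) -> float:
    # Simple unweighted average; each score is assumed to lie in [0, 1].
    if not category_scores:
        raise ValueError("at least one category score is required")
    return sum(category_scores.values()) / len(category_scores)

example = {
    "cybersecurity": 0.62,
    "privacy_and_data_governance": 0.78,
    "fairness": 0.55,
    "environmental_wellbeing": 0.70,
}
print(round(aggregate_compliance_score(example), 2))  # 0.66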

Given the benefits of AI and the pace of technological change, a law that discourages companies from supplying AI systems in the EU may undermine the Act’s core purpose – preventing foreseeable AI-related harms while encouraging innovation and technological development. To facilitate compliance, the Commission is working on harmonising the AI Act’s underlying standards. However, concerns have been raised regarding the disproportionate involvement of large tech companies in drafting these standards.

A recent report identified that only 9% of the 143-member Joint Technical Committee on AI (JTC21), established by the European standardisation bodies, represents civil society, while 80% of its members come from corporate entities, including major tech giants like IBM, Microsoft and Google[xxxii]. This raises concerns that fundamental rights and environmental safeguards could be shaped primarily by AI system providers rather than independent regulatory bodies.

Conclusion

The Act, while groundbreaking in its comprehensive approach to AI regulation, faces several critical implementation challenges and potential shortcomings. Its alignment with existing sectoral regulations remains incomplete, potentially creating unnecessary bureaucratic burdens and regulatory overlaps. The substantial compliance costs – estimated at €400,000 for implementing quality management systems alone – could particularly burden small and medium-sized enterprises, potentially stifling innovation. Additionally, the 10^25 FLOPs threshold for classifying systemic-risk models appears arbitrarily high, potentially allowing powerful AI systems to escape proper oversight[xxxiii]. The Act’s approach to generative AI and chatbots may also be insufficient to fully address their specific risks. While it mandates transparency and watermarking for AI-generated content, concerns remain about the standardization process, particularly regarding stakeholder representation and fundamental rights impact assessments.

The international dimension presents further challenges, particularly in harmonizing AI governance between the EU and other nations. Critical areas such as dual-use AI applications and common terminology require further attention and may need to be addressed as the need arises. Additionally, the AI Act may require complementary legislation to address emerging concerns, including the environmental impact of AI training, workers’ rights, and competition in generative AI markets. Notably, the Act does not provide a direct mechanism for compensating individuals who suffer harm due to AI-related incidents.

Nevertheless, in response to rapid developments in AI technology, the AI Act is a welcome measure. It strikes a balance – being broad enough to encourage further AI development while also ensuring protection against the imminent risks posed by the technology. Given the dynamic and evolving nature of AI, an overly rigid legislative framework could quickly become obsolete. By maintaining flexibility, the Act allows room for future adaptation and refinement. Furthermore, rather than operating in isolation, the Act is designed to complement pre-existing EU legislation by harmonising with it. After all, we would not want it to become a ‘law of the horse’[xxxiv].

Only time will tell whether it will achieve the same global benchmark status as the GDPR. But it certainly has the potential.




End Notes

[i] The Fourth Industrial Revolution focused on the automation of technologies and represents the convergence of digital, physical, and biological technologies, characterized by breakthroughs in artificial intelligence, robotics, the Internet of Things (IoT), and biotechnology. The Fifth Industrial Revolution marks a transformative shift from that automation-centric focus towards a more balanced approach that emphasizes human-machine collaboration.

[ii] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). http://data.europa.eu/eli/reg/2024/1689/oj

[iii] Article 1: Subject matter. In Regulation (EU) 2024/1689 of the European Parliament and of the Council on Artificial Intelligence (Artificial Intelligence Act). https://artificialintelligenceact.eu/article/1/

[iv] European Commission. (2021, April 21). Questions and answers: Fostering a European approach to artificial intelligence [Press release]. European Commission. https://ec.europa.eu/commission/presscorner/detail/en/qanda_21_1683 

[v] European Commission. (2019, April 8). Artificial intelligence for Europe [Report]. European Commission. https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=58496 

[vi] European Commission. (2019). Ethics guidelines for trustworthy AI [Guidelines]. European Commission. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.

[vii] High-Level Expert Group on Artificial Intelligence. (2019). Policy and investment recommendations for trustworthy AI [Report]. European Parliament. https://www.europarl.europa.eu/cmsdata/196378/AI%20HLEG_Policy%20and%20Investment%20Recommendations.pdf

[viii] European Commission. (2020). White paper on artificial intelligence: A European approach to excellence and trust [White paper]. European Commission. https://ec.europa.eu/info/sites/default/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf

[ix] European Commission. (2021). Communication: Fostering a European approach to artificial intelligence [Communication]. European Commission. https://digital-strategy.ec.europa.eu/en/library/communication-fostering-european-approach-artificial-intelligence.

[x] European Commission. (2021). Impact assessment of the regulation on artificial intelligence [Report]. European Commission. https://digital-strategy.ec.europa.eu/en/library/impact-assessment-regulation-artificial-intelligence.

[xi] Article 3: Definitions. In Regulation (EU) 2024/1689 of the European Parliament and of the Council on Artificial Intelligence (Artificial Intelligence Act). https://artificialintelligenceact.eu/article/3/

[xii]  Article 3 (1), Ibid.

[xiii] Article 3 (66), Ibid.

[xiv] Article 3 (3), Ibid.

[xv] Article 3 (4), Ibid.

[xvi] Article 3 (9), Ibid.

[xvii] Article 3 (10), Ibid.

[xviii] Article 3 (12), Ibid.

[xix] Article 3 (13), Ibid.

[xx] Article 3 (23), Ibid.

[xxi] Article 2: Scope. In Regulation (EU) 2024/1689 of the European Parliament and of the Council on Artificial Intelligence (Artificial Intelligence Act). https://artificialintelligenceact.eu/article/2/

[xxii] Article 5: Prohibited AI practices. In Regulation (EU) 2024/1689 of the European Parliament and of the Council on Artificial Intelligence (Artificial Intelligence Act). https://artificialintelligenceact.eu/article/5/

[xxiii] Article 6: Classification rules for high-risk AI systems. In Regulation (EU) 2024/1689 of the European Parliament and of the Council on Artificial Intelligence (Artificial Intelligence Act). https://artificialintelligenceact.eu/article/6/

[xxiv] Chapter III: High-risk AI systems. In Regulation (EU) 2024/1689 of the European Parliament and of the Council on Artificial Intelligence (Artificial Intelligence Act). https://artificialintelligenceact.eu/chapter/3/

[xxv] Article 50: Transparency obligations for providers and deployers of certain AI systems. In Regulation (EU) 2024/1689 of the European Parliament and of the Council on Artificial Intelligence (Artificial Intelligence Act). https://artificialintelligenceact.eu/article/50/

[xxvi] Article 51: Classification of general-purpose AI models with systemic risk. In Regulation (EU) 2024/1689 of the European Parliament and of the Council on Artificial Intelligence (Artificial Intelligence Act). https://artificialintelligenceact.eu/article/51/

[xxvii] Article 53: Obligations for providers of general-purpose AI models. In Regulation (EU) 2024/1689 of the European Parliament and of the Council on Artificial Intelligence (Artificial Intelligence Act). https://artificialintelligenceact.eu/article/53/

[xxviii] Article 55: Obligations for providers of general-purpose AI models with systemic risk. In Regulation (EU) 2024/1689 of the European Parliament and of the Council on Artificial Intelligence (Artificial Intelligence Act). https://artificialintelligenceact.eu/article/55/

[xxix] Floating-point operations (FLOPs) serve as a proxy for model capabilities: the figure measures the cumulative amount of computation used to train a model. Based on technological advancement, the exact FLOP threshold can be updated upwards or downwards by the Commission.

[xxx] Euronews. (2024, October 16). Are AI companies complying with the EU AI Act? A new LLM checker can find out. Euronews. https://www.euronews.com/next/2024/10/16/are-ai-companies-complying-with-the-eu-ai-act-a-new-llm-checker-can-find-out 

[xxxi] Ibid.

[xxxii] Euronews. (2025, January 9). Big tech too influential over AI standards, warns report. Euronews. https://www.euronews.com/next/2025/01/09/big-tech-too-influential-over-ai-standards-warns-report 

[xxxiii] See P. Hacker, Comments on the Final Trilogue Version of the AI Act (2024).

[xxxiv] See Frank H. Easterbrook, "Cyberspace and the Law of the Horse," 1996 University of Chicago Legal Forum 207.






Authored by Shivangi Bhardwaj, Advocate at Metalegal Advocates. The views expressed are personal and do not constitute legal opinions.

