IDSA COMMENT

The EU Artificial Intelligence Act: AI in the Balance

Ms Meghna Pradhan, Research Associate--LAWS Project, MP-IDSA
September 05, 2024

    There is rising alarm over the scale at which Artificial Intelligence (AI) technology now permeates every aspect of modern life. Critical sectors like the military, healthcare and finance, which once relied on human judgement, increasingly depend on AI systems for decision-making. As these systems grow more autonomous, a growing litany of concerns has emerged regarding the accountability, transparency and fairness of their decisions. The unpredictable nature of AI systems has raised issues such as reinforced biases, ‘black-boxed’ decisions, algorithmic dehumanisation and infringement of privacy, among others, pushing the regulation of AI from consideration to necessity.

    In cognizance of this, many nations have attempted to create a code of conduct for AI. The Artificial Intelligence Act (henceforth ‘AI Act’) adopted by the European Union (EU) in May 2024 is, therefore, an important milestone in the AI regulatory landscape. The recently enacted law is the first of its kind, with legally binding provisions to ensure the ethical and legal use of AI products that are marketed and/or utilised within the EU, irrespective of whether the provider or user is physically based in the EU. The law sets a new regulatory standard on the global stage, seeking to balance innovation, governance and fundamental rights.

    The Regulatory Framework for AI

    The AI Act was first proposed in April 2021 and underwent multiple rounds of discussion before being approved by an overwhelming majority on 21 May 2024. The Act will be implemented in a phased manner over 24 months from its entry into force on 1 August 2024.1

    The AI Act defines an ‘AI system’, in a product-based context, as:

    …a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.2

    The legislation approaches AI regulation primarily with consumer safety in mind. It follows a graded, ‘risk-based’ categorisation of products that use AI. Risk, here, refers to a ‘combination of the probability of an occurrence of harm and the severity of that harm’.3 Essentially, this means that different AI products face different levels of scrutiny based on the ‘risk’ they pose. There are primarily three levels of risk, viz.

    • Unacceptable Risk

    These include the development or use of AI systems that manipulate people into making harmful choices they otherwise would not make.4 They also cover the use of AI for untargeted scraping of facial images, certain predictive policing practices and social scoring. Such AI systems are presumed to pose a significant threat to humans and are thus banned by the EU AI Act.

    • High Risk

    These include the use of AI in healthcare, education, electoral processes, job screening and critical infrastructure management. Such AI tools are subject to specific regulatory requirements.

    • Limited/Minimal Risk

    These include certain general-purpose AI tools that pose low risk, such as AI for music recommendations. These face few to no regulatory requirements and can generally operate under voluntary codes of conduct adopted by their respective companies.

    The scope of the AI Act covers every AI system that is placed on the market, put into service, or used within the EU. Much like the General Data Protection Regulation (GDPR), the EU data privacy law adopted in 2016, the new law’s applicability is not limited to companies that operate within the EU. This means that any provider (any entity that develops, or commissions the development of, an AI system with the intention of placing it on the market) or deployer (any entity that uses an AI system under its authority, barring personal use) that has placed an AI system on the market or put it into use within the EU is covered by the law, even if based in a non-EU state. It should be noted that military and research uses are excluded from the law’s purview.5

    The AI Act also prescribes graded punitive measures for violations. Non-compliance can incur fines of up to EUR 15,000,000 or 3 per cent of worldwide annual turnover, whichever is higher; for a large firm with, say, a EUR 1 billion turnover, the turnover-based ceiling of EUR 30 million would therefore apply. Interestingly, for small and medium-sized companies and start-ups, the AI Act specifies a different set of rules, with lower fines. This is likely intended to ensure compliance without stifling innovation, which is largely being driven within the start-up ecosystem.

    Implications

    The new AI Act has been shaped by the idea that AI innovation and use must be fostered without humans suffering collateral damage. The regulation introduces a degree of oversight over the development and deployment of AI, thereby ensuring some accountability and transparency for EU citizens.

    However, certain issues may limit the applicability of the law. The definition of AI takes a fairly reductionist view of both the current capabilities and the future potential of the technology. The very use of the term ‘intelligence’ in AI is a tacit acknowledgement that such systems may exhibit complex behaviours and/or capabilities that were not part of the initial input. The definition assumes that objectives, whether explicit or implicit, are stable or predictable based on the inputs a system receives. In practice, however, AI systems might ‘learn’ or infer objectives in ways that lead to unintended consequences. Thus, presuming that an AI will only act on what it infers from ‘input’ is a reductionist understanding of the technology.

    While the product-based approach is tried and tested in the EU, AI is not a conventional product but a continuously growing, changing one. This issue was felt acutely during the European Parliament’s discussions as well, especially when generative AI applications such as ChatGPT emerged. The use cases of such applications range from benign, low-risk recommendations to the generation of misinformation. User tests have also revealed that, with specific prompts, generative AI can turn from a benign answering machine into a generator of hate speech within a matter of minutes, despite the controls attached by its programmers. According to Pieter Arntz, a senior researcher at the cybersecurity firm Malwarebytes,

    Many of the guidelines are based on old-fashioned product safety regulations which are hard to translate into regulations for something that’s evolving. A screwdriver does not turn into a chainsaw overnight, whereas a friendly AI-driven chatbot turned into a bad-tempered racist in just a few hours.6

    There are also concerns that regulatory costs could stifle competition and innovation in the field of AI. A study by the Computer & Communications Industry Association (CCIA Europe) points to the potential for restricted access to data, lopsided partnerships between large and small companies, leveraging behaviour and predatory takeovers by big companies, all of which may stifle smaller companies and start-ups, particularly in the field of Generative AI.7

    The exceptions provided for research, national security and military uses of AI also need reconsideration. While the regulatory framework for research need not be stringent, some oversight or a general code of conduct for ethical use is needed, lest cases of misappropriation, illegal acquisition or unethical application of data (like the Cambridge Analytica scandal) arise.

    There are also concerns that the blanket exception for national security may amount to the EU giving free rein to countries to implement a surveillance state, particularly against marginalised communities. Under the EU AI Act, these technologies are categorised as posing ‘unacceptable risk’ only within the context of workplace and education settings. This leaves an easy loophole for law enforcement and migration authorities to justify the use of AI for social scoring, biometric surveillance and facial scraping, contrary to the Act’s professed aim of securing fundamental human rights. While exclusion on the grounds of national security, a sovereign right, is justified, there could have been a provision encouraging states to frame general, non-binding guidelines for the ethical and human-sensitive use of AI in the military.

    Conclusion

    The proliferation of AI in our lives has come with its own set of boons and banes. There is little doubt that critical decision-making sectors have benefitted from AI in terms of efficiency. Yet, eliminating the human factor in decision-making, even as AI systems train on human-generated data, has led to fears of biases that may threaten human lives and dignity. There are also questions of accountability, since a machine has no actual will or emotions guiding its decisions.

    The new AI Act, therefore, fills a major regulatory gap for AI usage today. It sets robust compliance standards for AI and ensures that accountability can be assigned where it is due. The technology-agnostic approach to defining AI also future-proofs the law irrespective of how AI may grow. The EU may also be attempting to recreate the GDPR’s ‘Brussels effect’, a term often used to denote the EU’s ability to influence global regulatory standards and policies through its own regulations, so that newer regulatory frameworks are put in place to ensure compliance with EU laws, thereby taming the relative Wild West of AI to some extent.

    Yet, it should be acknowledged that AI remains a continuously evolving technology. It still has room to grow, and unpredictability remains a constant not just in its functions but also in its developmental trajectory. Although EU lawmakers have indicated that the law will be amended as the AI landscape evolves, one does wonder which will outpace the other first: AI systems, or the laws attempting to regulate them.

    Views expressed are of the author and do not necessarily reflect the views of the Manohar Parrikar IDSA or of the Government of India.
