EU AI ACT OVERVIEW

Last week we looked at the newly passed EU Artificial Intelligence Act, which aims to establish a unified legal framework governing the creation, distribution, and use of AI goods and services within the EU. It’s designed to accomplish several key goals:

  • guarantee the safety and compliance of AI systems introduced to the EU market while upholding existing EU regulations
  • provide clear legal guidelines to encourage investment and advancements in AI technology
  • strengthen governance and ensure robust enforcement of EU laws regarding fundamental rights and safety standards for AI systems
  • foster the growth of a unified market for lawful, safe, and reliable AI applications, while preventing fragmentation within the market

The Act also has a strong focus on addressing the risks associated with the development and deployment of AI. This time, we’ll look at these risks in more detail and at how they will affect the use of AI in the United Kingdom. We’ll also explore the synergy between the EU AI Act and the GDPR.

RISK-BASED APPROACH

The EU Commission devised the AI regulation using a risk-based methodology, categorising AI systems into a pyramid with four risk levels: unacceptable risk, high risk, limited risk, and low or minimal risk. Each level maps to a regulatory consequence, as follows (and as sketched in the short example after this list):

  • Unacceptable risk – prohibited practices
  • High risk – high levels of regulation
  • Limited risk – transparency
  • Low and minimal risk – no obligations
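To make the tiering concrete, here is a minimal sketch in Python (our own illustration, not anything defined in the Act itself) mapping each tier to its headline regulatory consequence:

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"  # prohibited practices
        HIGH = "high"                  # high levels of regulation
        LIMITED = "limited"            # transparency obligations
        MINIMAL = "minimal"            # no obligations

    # Headline consequence for each tier, per the pyramid above.
    OBLIGATIONS = {
        RiskTier.UNACCEPTABLE: "Prohibited - may not be placed on the EU market",
        RiskTier.HIGH: "Registration, conformity assessment, ongoing monitoring",
        RiskTier.LIMITED: "Transparency duties, e.g. disclose that users face an AI",
        RiskTier.MINIMAL: "No mandatory obligations; voluntary codes of conduct",
    }

    print(OBLIGATIONS[RiskTier.LIMITED])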

AI RISK – UNACCEPTABLE

The AI Act explicitly prohibits harmful AI practices that pose a clear threat to people’s safety, livelihoods, and rights because of the “unacceptable risk” they present. As a result, it will be illegal to place on the market, put into service, or use in the European Union:

  • AI systems employing harmful manipulative “subliminal techniques.”
  • AI systems that exploit the vulnerabilities of specific groups, such as those arising from physical or mental disability.
  • AI systems utilised by public authorities, or on their behalf, for social scoring purposes.

More specific examples of banned AI systems include real-time biometric face scanning by police in publicly accessible spaces (with narrow exceptions for serious crimes such as terrorism), emotion recognition systems in the workplace or in schools, and the untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases. These practices are prohibited because they violate fundamental rights, including the right to privacy.

AI RISK – HIGH

AI systems categorised as high risk because of their potential negative impact on safety or fundamental rights are divided into two categories. The first covers AI systems used in products governed by the EU’s product safety legislation, which spans toys, aviation equipment, automobiles, medical devices, and lifts. The second covers AI systems falling within eight specific areas, which will need to be registered in an EU database:

  • biometric identification and categorisation of natural persons
  • management and operation of critical infrastructure
  • education and vocational training
  • employment, worker management, and access to self-employment
  • access to and enjoyment of essential private services, public services, and benefits
  • law enforcement
  • migration, asylum, and border control management
  • administration of justice and democratic processes

AI RISK – LIMITED

AI systems categorised as limited risk face fewer regulatory constraints than high-risk systems, but they must still adhere to specific transparency obligations to maintain accountability and trustworthiness. Developers and operators of limited-risk AI systems must provide clear explanations of how the system operates, what data it uses, and how it makes decisions. For example, when interacting with a chatbot, individuals must be informed that they are engaging with an AI system, so that they can decide whether to proceed or request human assistance instead. This transparency is crucial for building and maintaining public trust in AI systems and ensuring their ethical and responsible use.
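To illustrate, a chatbot provider might meet this disclosure duty with something as simple as the following hypothetical sketch; the Act does not prescribe any particular wording or implementation:

    def start_chat_session(user_name: str) -> None:
        # Limited-risk transparency duty: tell the user up front that they
        # are talking to an AI, and offer a route to a human instead.
        print("You are chatting with an automated AI assistant, not a human.")
        print("Type 'human' at any time to be transferred to a human agent.")
        print(f"Hello {user_name}, how can I help you today?")

    start_chat_session("Alex")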

AI RISK – LOW

These applications are already extensively used and represent the majority of AI systems we engage with daily. Examples range from spam filters and AI-enhanced video games to inventory-management systems. As mentioned above, low or minimal-risk AI systems carry no regulatory obligations.

OBLIGATIONS

The EU AI Act mandates specific obligations on providers. For instance, providers of high-risk AI systems must register their systems in an EU-wide database administered by the EU Commission before placing them on the market or putting them into service. Furthermore, high-risk AI systems must adhere to a set of requirements covering risk management, testing, technical robustness, training data and data governance, transparency, human oversight, and cybersecurity. Providers based outside the EU must appoint an authorised representative within the EU to ensure conformity assessment, establish post-market monitoring systems, and take corrective actions as necessary, among other responsibilities.

PENALTIES

Under the Act, administrative fines are determined on a sliding scale, contingent upon the gravity of the violation. The maximum fine can amount to €30 million or 6% of the total worldwide annual turnover, whichever is greater, for the most severe breaches. EU member states are tasked with establishing more specific regulations concerning penalties. This includes determining whether and to what extent administrative fines may be imposed on public authorities.

  • Use of unacceptable AI systems – violations can result in hefty fines of up to €30 million or 6% of the company’s worldwide annual turnover, whichever is higher. Crucially, the ceiling is the greater of the flat amount and the turnover percentage, so larger organisations face proportionally larger maximum fines (see the worked example after this list).
  • High-risk AI systems – providers of high-risk systems are granted a 24-month transition period from the adoption of the Act to achieve full compliance. Failure to meet the requirements within this timeframe may result in penalties of up to €20 million or 4% of the annual worldwide turnover, whichever is higher.
  • Limited risk AI systems – AI systems that interact with humans or generate deep fakes must ensure that individuals are notified when they are engaging with an AI. These transparency regulations also come into effect 24 months after the adoption of the Act. Violations may incur fines of up to €20 million or 4% of worldwide annual turnover, whichever is higher.
  • Low-risk AI systems – Providers of non-high-risk AI are encouraged to voluntarily adopt codes of conduct aimed at promoting trustworthy practices, such as mitigating bias risks. Unlike high-risk AI systems, there are no specific timelines or penalties associated with compliance for non-high-risk AI providers.
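Because each cap reads “whichever is higher”, the effective maximum fine is simply the larger of the flat amount and the turnover percentage. Here is a quick worked sketch of that calculation, using the figures quoted above:

    def max_fine(flat_cap_eur: float, turnover_pct: float, annual_turnover_eur: float) -> float:
        """Return the fine ceiling: the greater of the flat cap and the turnover share."""
        return max(flat_cap_eur, turnover_pct * annual_turnover_eur)

    # Unacceptable-risk breach: €30 million or 6% of worldwide annual turnover.
    # For a company turning over €1 billion, 6% (€60 million) exceeds the flat cap.
    print(max_fine(30_000_000, 0.06, 1_000_000_000))  # 60000000.0

    # For a company turning over €100 million, the €30 million flat cap applies.
    print(max_fine(30_000_000, 0.06, 100_000_000))    # 30000000.0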

HOW WILL THE EU AI ACT IMPACT THE UK?

The EU AI Act applies to UK entities deploying AI systems within the EU, offering them on the EU market, or engaging in activities governed by the Act. These UK organisations must ensure compliance with the AI Act to avoid financial penalties and reputational harm. However, the Act’s implications may reach further, potentially influencing UK organisations operating solely within the UK market.

Similar to the GDPR, the AI Act is likely to establish a global standard in the field. This could lead to two outcomes: first, UK companies embracing and complying with the AI Act may distinguish themselves in the UK market, appealing to customers who prioritise ethical and accountable AI solutions. Second, as a “gold standard”, the AI Act may prompt the UK’s domestic regulations to evolve to align with its principles. Moreover, the EU AI Act promotes voluntary compliance, even for companies initially beyond its scope. Consequently, UK companies providing AI services in the EU or using AI technologies within the region are likely to feel the Act’s impact. Given that many UK businesses have extensive market reach beyond the UK, the EU AI Act holds particular relevance for them.

WHAT ABOUT GENERATIVE AI?

Businesses must consider the implications of the EU AI Act for companies currently developing apps that use generative AI large language models (LLMs) and similar technologies, such as ChatGPT, Google Bard, Anthropic’s Claude, and Microsoft’s Copilot. While these companies may not be directly targeted by the legislation, they should be mindful of its potential impact on their operations. Taking steps to comply with the AI Act, even where not mandated, can help them stay proactive in a dynamic AI landscape.

The Act aims to establish standards in AI safety, ethics, and responsible use, while emphasising transparency and accountability, and it sets out obligations concerning the use of generative AI. By aligning with its principles and requirements, companies building apps on generative AI can demonstrate a commitment to ethical AI practices, foster trust among users and stakeholders, mitigate regulatory risk, and position themselves as leaders in the responsible deployment of AI. The Act states specifically:

“Users of an AI system that generates or manipulates text, audio or visual content that would falsely appear to be authentic or truthful and which features depictions of people appearing to say or do things they did not say or do, without their consent (‘deep fake’), shall disclose in an appropriate, timely, clear and visible manner that the content has been artificially generated or manipulated, as well as, whenever possible, the name of the natural or legal person that generated or manipulated it. Disclosure shall mean labelling the content in a way that informs that the content is inauthentic and that is clearly visible for the recipient of that content. To label the content, users shall take into account the generally acknowledged state of the art and relevant harmonised standards and specifications.”
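As a purely illustrative sketch of that provision (hypothetical; the Act leaves the exact labelling mechanism to the state of the art and harmonised standards), a generator of synthetic content might attach a visible disclosure like this:

    def label_generated_content(content: str, generator_name: str) -> str:
        # Prepend a clearly visible notice that the content is AI-generated,
        # naming the person or entity that generated it, per the quoted text.
        notice = (f"[AI-GENERATED CONTENT – artificially generated or "
                  f"manipulated by {generator_name}]")
        return f"{notice}\n{content}"

    print(label_generated_content("...synthetic interview...", "Example Media Ltd"))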

GDPR COLLABORATION

Last week we saw that the EU AI Act and the GDPR differ significantly in their scope of application. Nevertheless, a synergy does exist between the two. The EU AI Act complements the General Data Protection Regulation (GDPR) by safeguarding fundamental rights and aligning with principles such as transparency and fairness. Because AI often involves the processing of personal data, compliance with both regulations is necessary, potentially requiring data protection impact assessments (DPIAs). While high-risk AI systems face stricter regulation under the EU AI Act, GDPR requirements for personal data processing remain pertinent. Provisions in the Act on AI testing also align with GDPR principles, emphasising accountability and the need for organisations to demonstrate compliance and maintain records. Both regulations impose substantial fines for non-compliance, scaled to the severity of the violation. Together, they aim to ensure responsible and lawful AI development while protecting data privacy.

Navigating this complex compliance landscape requires organisations to continuously meet their obligations under the GDPR. For instance, where an AI system relies solely on automated processing and produces legal or similarly significant effects, you must demonstrate an appropriate lawful basis, provide individuals with safeguards such as the right to human intervention, and process data transparently.

GOVERN IT BETTER

Data protection, privacy, and regulatory compliance are the name of the game for any business that wants to stay in the mainstream and be profitable. That’s where Zhero can lend a big helping hand. Our Govern IT Better offering will help your business meet the privacy and security requirements of your market, your customers, and the government. You’ll also be staying on the right side of the law when it comes to the EU AI Act and the GDPR. Speak to one of our super knowledgeable humans today and see how we can reduce your risk by bringing Govern IT Better into your business.
