
AI – DANGERS AND COMPLIANCE

In our post last week, AI Dangers and Compliance, we first examined some of the potential dangers of this fledgling technology, succinctly and somewhat dramatically framed by the Godfather of AI himself, Geoffrey Hinton:

“These things could get more intelligent than us and could decide to take over, and we need to worry now about how we prevent that happening.”

While some say that a takeover by AI is an outlandish notion, many cyber experts in the know recognise job loss, deepfakes, privacy violations and much more as some of the dangers of automation and AI. As such, we also saw that AI compliance has recently taken centre stage in the international cybersecurity arena. Put simply, AI compliance is a process aimed at ensuring that AI-powered systems adhere to all relevant laws and regulations. Taking this one step further is the new European Union Artificial Intelligence Act.

WHAT IS THE EU AI ACT?

The AI Act is a first-of-its-kind proposed European regulation on AI. Under the Act, AI applications are classified into three risk categories:

  • Applications and systems that pose an unacceptable risk, such as government-operated social scoring akin to practices in China, are prohibited.
  • High-risk applications, like CV-scanning tools used for job applicant ranking, are subjected to stringent legal obligations.
  • Applications not categorised as high-risk or explicitly banned are predominantly left unregulated.

Nearly three years after the first proposal, the EU AI Act was approved by the European Parliament on 13 March this year, with 523 votes for and 46 votes against. Italian MEP Brando Benifei, a co-rapporteur of the Act, said:

“Today is again a historic day on our long path towards regulation of AI. The law will become a law of the European Union – the first regulation in the world that is putting a clear path towards a safe and human-centric development of AI.”

Despite its adoption, the EU’s new rules for AI development and use still have important steps to go through before they take full effect.

WHY IS THE EU AI ACT IMPORTANT?

The AI Act aims to instil trust among Europeans in the capabilities of AI. While many AI systems present minimal or no risk and can actively contribute to addressing societal issues, specific AI systems carry risks that necessitate mitigation to prevent adverse consequences. Similar to the EU’s General Data Protection Regulation (GDPR) of 2018, the EU AI Act has the potential to establish a global benchmark, shaping the extent to which AI influences lives positively rather than negatively, regardless of where you are.

WHAT DOES THE EU AI ACT PROPOSE?

According to the European Commission, the legislation will:

  • address risks specifically created by AI applications
  • prohibit AI practices that pose unacceptable risks
  • determine a list of high-risk applications
  • set clear requirements for AI systems intended for high-risk applications
  • define specific obligations for deployers and providers of high-risk AI applications
  • require conformity assessment before an AI system is put into service or placed on the market
  • implement enforcement mechanisms after an AI system is placed on the market
  • establish a governance structure at both European and national levels

FOUR LEVELS OF RISK

As you can see, the EU AI Act homes in on risk, and in particular on unacceptable and high-risk systems. These are the four levels of risk defined by the regulatory framework:

  • Unacceptable risk
  • High risk
  • Limited risk
  • Minimal risk

UNACCEPTABLE RISK

The use of applications incorporating subliminal techniques, exploitative systems, or social scoring systems by public authorities is strictly forbidden. Any real-time remote biometric identification systems utilised by law enforcement in publicly accessible spaces are also prohibited.

HIGH RISK

High-risk AI systems include applications in transport, education, employment, welfare, and more. Before bringing a high-risk AI system to market or into service within the EU, companies are required to undergo a comprehensive “conformity assessment” and adhere to a detailed set of requirements to guarantee the system’s safety. As a practical step, the regulation mandates the European Commission to establish and maintain a publicly accessible database in which providers are obliged to furnish information about their high-risk AI systems, ensuring transparency for all stakeholders.
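
To make the registration requirement more concrete, here is a minimal, purely illustrative Python sketch of the kind of record a provider might assemble before submitting details of a high-risk system to the Commission’s database. The field names, the HighRiskSystemRecord class, and the registration_payload function are hypothetical assumptions for illustration; the Act does not prescribe this schema.

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class HighRiskSystemRecord:
    """Illustrative registration record for a high-risk AI system.

    The field names are hypothetical; the EU AI Act does not prescribe this schema.
    """
    provider_name: str
    system_name: str
    intended_purpose: str            # e.g. "Ranking job applicants from submitted CVs"
    risk_category: str               # "high" under the Act's risk taxonomy
    conformity_assessment_done: bool
    eu_declaration_of_conformity: bool


def registration_payload(record: HighRiskSystemRecord) -> str:
    """Serialise the record for submission to a (hypothetical) public database."""
    if not record.conformity_assessment_done:
        raise ValueError("Complete the conformity assessment before registering the system.")
    return json.dumps(asdict(record), indent=2)


if __name__ == "__main__":
    record = HighRiskSystemRecord(
        provider_name="ExampleCorp",
        system_name="CV-Ranker",
        intended_purpose="Ranking job applicants from submitted CVs",
        risk_category="high",
        conformity_assessment_done=True,
        eu_declaration_of_conformity=True,
    )
    print(registration_payload(record))
```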

LIMITED RISK

Limited risk refers to the potential risks stemming from a lack of transparency in the use of AI. The AI Act introduces specific transparency obligations to ensure that humans are adequately informed when necessary and that trust is fostered. For example, when people interact with AI systems such as chatbots, they should be informed that they are conversing with a machine so that they can make an informed decision about whether to proceed or disengage. Providers are also mandated to ensure that AI-generated content is identifiable. AI-generated text intended to inform the public on matters of public interest must be clearly labelled as artificially generated, a requirement that extends to audio and video content, including deepfakes.
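
As a rough illustration of what these transparency obligations could look like in practice, the sketch below wraps a chatbot reply with a machine-disclosure notice and labels the output as AI-generated. The generate_reply function is a placeholder assumption standing in for a real model or service, not an actual API.

```python
# Minimal sketch of the transparency measures suggested by the "limited risk" tier:
# telling users they are talking to a machine and labelling AI-generated output.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."
AI_CONTENT_LABEL = "[AI-generated content]"


def generate_reply(user_message: str) -> str:
    # Placeholder for a real model or service call.
    return f"Here is some information about: {user_message}"


def chatbot_response(user_message: str, first_turn: bool = False) -> str:
    """Return a reply that discloses the AI nature of the system and labels its output."""
    labelled = f"{AI_CONTENT_LABEL} {generate_reply(user_message)}"
    if first_turn:
        # Disclose up front so the user can decide whether to proceed or disengage.
        return f"{AI_DISCLOSURE}\n{labelled}"
    return labelled


if __name__ == "__main__":
    print(chatbot_response("the EU AI Act", first_turn=True))
```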

MINIMAL RISK

The AI Act allows the free use of minimal-risk AI. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.

WHAT HAPPENS NEXT?

Following clearance by the EU Parliament, the AI Act text will undergo translation into multiple languages, with any errors being rectified. The next step is the establishment of the European AI Office, tasked with governing the Act. Its immediate priorities include setting up advisory bodies, formulating benchmarks for evaluating capabilities, and drafting codes of practice. A board will guide the Act’s implementation and offer opinions when challenges arise. Although the exact date on which the office will begin operating remains uncertain, a website announcing its creation went live in January.

Stakeholders are encouraged to engage proactively rather than wait for deadlines to approach. They can participate by joining standardisation bodies and by reaching out to the European Commission, which is now responsible for developing guidelines and formulating delegated and implementing acts, the so-called secondary legislation. Kai Zenner, an AI expert from the European Parliament, said:

“The AI Act can still improve and can be made more specific, and so on.”

DOES THE EU AI ACT APPLY TO THE UK?

According to City A.M., the EU AI Act is anticipated to exert a considerable influence on UK businesses, as adherence to its regulations will be imperative for those seeking to do business internationally, as it will be for their counterparts in the United States and Asia. Any UK business selling AI systems in the European market or deploying AI systems within the bloc will be subject to its provisions. It is deemed “crucial” for businesses to institute and uphold robust AI governance programmes to ensure compliance with the Act. Forrester principal analyst Enza Iannopollo elaborated:

“Over time, at least some of the work UK firms undertake to be compliant with the EU AI Act will become part of their overall AI governance strategy, regardless of UK specific requirements – or lack thereof.”

GDPR AND AI COMPLIANCE

Since its enactment on 25 May 2018, the General Data Protection Regulation (GDPR) has empowered individuals with greater control over their personal data and set forth guidelines for organisations regarding the collection, processing, and storage of data. As AI systems frequently depend on processing extensive volumes of data, including personal data, to improve their performance, it is essential that the following GDPR principles, rights, and provisions are adhered to when designing and implementing AI systems:

  • lawful basis for data processing
  • data minimisation and purpose limitation
  • anonymisation and pseudonymisation
  • accuracy and storage limitation
  • right to information regarding automated decision-making
  • privacy by design and privacy by default
  • data protection impact assessments (DPIAs)
  • security and accountability
  • cross-border data transfers
  • rights of individuals

AI is evolving rapidly, making it all the more important for businesses to grasp the implications of AI usage for data processing and to ensure compliance with the GDPR.
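
For illustration only, the sketch below shows one common way of applying data minimisation and pseudonymisation before personal data reaches an AI pipeline: dropping fields the task does not need and replacing direct identifiers with keyed hashes. The field names and the PSEUDONYMISATION_KEY constant are assumptions, and the sketch is not by itself a complete GDPR compliance measure.

```python
import hashlib
import hmac

# Secret key for keyed pseudonymisation; in practice this would be stored
# securely (e.g. in a secrets manager), never hard-coded.
PSEUDONYMISATION_KEY = b"replace-with-a-securely-stored-key"

# Only the fields the AI task actually needs (data minimisation).
ALLOWED_FIELDS = {"age_band", "region", "purchase_category"}


def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a keyed hash (pseudonymisation, not anonymisation)."""
    return hmac.new(PSEUDONYMISATION_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]


def prepare_record(raw: dict) -> dict:
    """Minimise and pseudonymise a raw record before it enters an AI pipeline."""
    prepared = {k: v for k, v in raw.items() if k in ALLOWED_FIELDS}
    if "customer_email" in raw:
        # Keep a stable, pseudonymous reference instead of the real identifier.
        prepared["customer_ref"] = pseudonymise(raw["customer_email"])
    return prepared


if __name__ == "__main__":
    raw_record = {
        "customer_email": "jane.doe@example.com",
        "full_name": "Jane Doe",
        "age_band": "30-39",
        "region": "UK",
        "purchase_category": "electronics",
    }
    print(prepare_record(raw_record))
```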

HOW DO THE EU AI ACT AND GDPR DIFFER?

The EU AI Act and GDPR differ significantly in their scope of application. The Act casts a wide net, encompassing providers, users, and various participants along the AI value chain, including importers and distributors, operating within or targeting the EU market, regardless of their physical location. In contrast, the GDPR primarily pertains to controllers and processors handling personal data within the EU, or offering goods/services to EU residents or monitoring their behaviour. Consequently, while AI systems that do not process personal data or that target non-EU individuals may fall within the purview of the AI Act, they may not necessarily be subject to the GDPR.

The GDPR is also grounded in the fundamental right to privacy, empowering data subjects to assert their rights against entities processing their personal data, while the EU AI Act concentrates on AI as a product. Despite aiming for a ‘human-centric approach’, the Act predominantly employs a product regulation framework. Consequently, individuals indirectly benefit from protection against flawed AI systems under the EU AI Act, without being assigned an explicit role. In other words, while the AI Act addresses stopping unlawful AI systems from utilising personal data, the exercise of data subjects’ rights relating to their personal data is governed by the GDPR.

GOVERN IT BETTER

Data and privacy protection and regulatory compliance are the name of the game for any business that wants to stay in the mainstream and be profitable. That’s where Zhero can lend a big helping hand. Our Govern IT Better offering will help your business meet the privacy and security requirements of your market, your customers, and the government. You’ll also be staying on the right side of the law when it comes to the EU AI Act and the GDPR. Speak to one of our super knowledgeable humans today and see how we can reduce your risk by bringing Govern IT Better into your business.
