Earlier this year the EU Commission published its proposal to regulate the use of Artificial Intelligence, which would be the first ever legal framework targeting this area of technology.

AI is defined as “software that is developed with one or more specified techniques and approaches… that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with”.

While the Commission recognises the advantages of AI, stating that the fast-evolving technology “can bring a wide array of economic and societal benefits” through “improving prediction, optimising operations and resource allocation, and personalizing service delivery”, it also states that the use of AI can bring about new risks or negative consequences for individuals’ fundamental rights, and is therefore in need of regulation.

The AI Regulation proposes a “proportionate risk-based approach” which, in a similar vein to the existing GDPR framework, will have an extra-territorial scope that applies to providers wishing to place AI systems into service in the EU, irrespective of where such a provider is established. The AI Regulation’s risk-based approach is categorised into three tiers of AI practice: (1) harmful prohibited practices; (2) “high-risk” AI systems; and (3) relatively “low-risk” AI systems, such as chatbots or deep-fakes. Only minimum transparency obligations are proposed for the latter type of AI system: when individuals are interacting with an AI system, or their emotions or characteristics are recognised through automated means, they must be informed of that circumstance. The Commission have proposed such an obligation to allow individuals “to make informed choices or step back from a given situation”.

The AI practices proposed to be prohibited are those that pose an unacceptable risk, contravening Union values and violating fundamental rights. These are practices that “have a significant potential to manipulate persons through subliminal techniques beyond their consciousness or exploit vulnerabilities of specific vulnerable groups… in order to materially distort their behaviour” in a manner likely to cause “psychological or physical harm”. Systems that allow social scoring, facilitate discrimination, or enable the use of real-time remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, apart from in exceptional circumstances listed in Title II of the AI Regulation, have been regarded as unacceptable.

The AI Regulation’s main focus is its approach to “high-risk” AI. The Commission have proposed a series of requirements that high-risk AI systems must comply with, mainly directed at the providers of AI. These include the application of risk management systems; technical documentation that demonstrates the requirements are being complied with; data governance; record-keeping; transparency to enable users to interpret the system’s output; human oversight; and accuracy, robustness and cybersecurity.

It will be necessary for providers to register details of the AI system on an EU database; monitor performance; report serious incidents and breaches; and correct, withdraw or recall AI systems that do not conform with the Regulation. The types of AI systems regarded as “high-risk” are those on whose outcomes citizens depend, and which have the potential to negatively impact fundamental rights if abused or left unregulated. Systems considered “high-risk” include those related to educational and vocational training; safety components of products, e.g. robot-assisted surgery; employment opportunities, e.g. CV-sorting software for recruitment purposes; essential private and public services; law enforcement; migration and border control management; and the administration of justice. The introduction of the AI Regulation will therefore have a significant impact on organisations on a global scale that are involved in, or are considering, the use of AI systems within these sectors. Affected organisations will need to consider the Regulation’s implications for their current operational structures and future strategies.

The AI Regulation sets out a range of penalties, listed in Article 71 of the Regulation, which depend on the nature of the breach, and follows a similar proportionate framework to the GDPR. Non-compliance with (1) the prohibited AI practices or (2) the data governance and management requirements for AI systems will attract an administrative fine of up to either €30m or 6% of the undertaking’s total worldwide annual turnover, whichever figure is higher. Penalties of up to €20m or 4% of the undertaking’s annual turnover will be administered for infringements of requirements other than those laid down in Articles 5 and 10. Thirdly, the supply of incorrect, incomplete or misleading information to national authorities in response to an information request will attract a fine of up to €10m or 2% of annual turnover, provided the offender is a company. Fines will be decided on a case-by-case basis, taking into account the gravity and duration of the infringement, as well as the size and market share of the operator committing the infringement.
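To make the “whichever is higher” mechanics concrete, the three penalty tiers can be sketched as a simple calculation. This is an illustrative sketch only, based on the tier amounts summarised above; the tier names are invented labels, and it is not legal advice or an official computation.

```python
# Illustrative sketch of the Article 71 fine caps as summarised above.
# Tier labels are invented for illustration; amounts/percentages follow
# the article's summary of the proposal. Not legal advice.

FINE_TIERS = {
    "prohibited_or_data_governance": (30_000_000, 0.06),  # Art. 5 / Art. 10
    "other_requirements": (20_000_000, 0.04),
    "misleading_information": (10_000_000, 0.02),
}

def max_fine(tier: str, worldwide_annual_turnover: float) -> float:
    """Maximum possible fine: the higher of the fixed cap and the
    stated percentage of total worldwide annual turnover."""
    fixed_cap, pct = FINE_TIERS[tier]
    return max(fixed_cap, pct * worldwide_annual_turnover)

# An undertaking with €1bn turnover breaching a prohibition faces up to
# max(€30m, 6% of €1bn) = €60m; a smaller firm with €100m turnover
# breaching other requirements is capped by the fixed €20m amount.
print(max_fine("prohibited_or_data_governance", 1_000_000_000))
print(max_fine("other_requirements", 100_000_000))
```

The point of the `max` is that large undertakings cannot shelter behind the fixed cap: once turnover is high enough, the percentage figure governs.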

Member States will be required to appoint national supervisory authorities to ensure the application and implementation of the AI Regulation, in a similar fashion to the way data protection authorities operate under the GDPR.

At Union level, it is intended that a European Artificial Intelligence Board will be established, composed of representatives from both the Member States and the Commission.

While the AI Regulation still needs to proceed through the EU legislative process for further consideration before it comes into force, companies that are using, or are planning to use, AI will need to consider how the proposed requirements, and regulation of this extent, will affect their practices.