News

A brief analysis of the new draft AI regulation compared to the Maltese Law

The draft AI regulation has been drafted with the aim of addressing the use of a family of technologies in a manner that ensures such technologies are not used to the detriment of society. The ultimate objective of the regulation, namely the protection of stakeholders, particularly individual end-users and society as a whole, is very much in line with the goal of technology regulation as implemented in Maltese law through the Malta Digital Innovation Authority (MDIA), albeit with some cardinal differences, as I will remark hereunder.

The tools used to achieve this goal also share much with the Maltese approach. Unlike the Maltese model, however, which is voluntary, the draft regulation puts in place a mandatory regime for the AI systems it captures: it classifies the types of AI which will be banned outright, as well as those which will be required to follow a pre-set list of obligations both before being introduced to the European market and throughout their lifetime.

The approach adopted by Malta is that the technology due diligence processes offered by the MDIA, although voluntary, can be mandated by another lead authority which regulates a specific industry or sector, such as financial services, health, or electronic communications. In this manner, domain-specific risks are addressed in the relevant law and, if need be, assessed by the authority regulating that activity. This also allows domain-specific control objectives, to be assessed in conformity checks and monitoring, to be identified by those regulating the domain – a process which requires expertise that a technology-centric authority would not have. Such an approach also allows for the regulation of the use of any technology, not specifically AI-based systems.

The Draft Regulation introduces three categories of “High-Risk AI Systems” and subjects Providers and Users, as well as importers and distributors, of such AI Systems to specific obligations. High-Risk AI Systems include:

  1. AI Systems intended to be used as a product or as a component of products covered by a set of pre-existing EU Directives on, for example, machinery, safety of toys, lifts, radio equipment, and medical devices. Concerning these AI Systems, the Draft Regulation largely refers to the provisions and conformity assessments under these specific Directives.
  2. AI Systems intended to be used as a product or as a component of products covered by pre-existing EU Regulations on aviation, motor vehicle, and railway safety.
  3. AI Systems explicitly listed by the Draft Regulation, that are intended to be used to:
    • Perform biometric identification and categorization of natural persons.
    • Work as safety components used in the management and operation of critical infrastructure (e.g., for road traffic and the supply of water, gas, or electricity) or to dispatch or establish priority in the dispatching of emergency first response services, e.g., firefighters and medical aid.
    • Determine access to educational and vocational training institutions as well as for recruitment (e.g., advertising job vacancies, screening or filtering applications, and evaluating candidates), make decisions on promotions, allocate tasks, and monitor work performance.
    • Evaluate the creditworthiness or establish the credit score of persons or evaluate their eligibility for public assistance benefits and services by public authorities or on their behalf.
    • Make predictions intended to be used as evidence or information to prevent, investigate, detect, or prosecute a criminal offense or adopt measures impacting the personal freedom of an individual; work with polygraphs or other tools to detect the emotional state of a person; or predict the occurrence of crimes or social unrest in order to allocate patrols and surveillance.
    • Process and examine asylum and visa applications to enter the EU or verify the authenticity of travel documents.
    • Assist judges in court by researching and interpreting facts and the law & applying the law to a concrete set of facts.

The list is not exhaustive. When the EU Commission identifies other AI Systems generating a high level of risk of harm, those AI Systems may be added to the list via delegated acts.

In line with the Maltese approach, systems and solutions which require technological assurances will be required to:

  • Carry out conformity checks in order to ensure that the underlying technology is sound and safe; and
  • Carry out continued monitoring of the use and outcome of the technology.

The draft regulation covers this through the mandated conformity assessment: the Provider must perform a conformity assessment of the AI System to demonstrate its conformity with the requirements of the Draft Regulation. Similar to the Maltese model, where there is a substantial modification of the AI System, the Provider must undergo a new conformity assessment.

High-Risk AI Systems intended to be used for remote biometric identification and public infrastructure networks are subject to a third-party conformity assessment. With regard to other High-Risk AI Systems, the Provider may opt to carry out a self-assessment and issue an EU declaration of conformity. The Provider must continuously update the declaration as appropriate.

The regulation places a focus on high-risk applications and those of a critical nature, very much in line with the recent widening of the MDIA's scope from addressing technological assurances for DLT-based systems to critical systems. Similarly, the regulation highlights the need to address start-ups and to set up sandbox environments to test technology – needs identified by the MDIA as priorities, which are being addressed through the launch of a technology-driven sandbox aimed primarily at start-ups in the coming months.

High-Risk AI Systems under the draft regulation must satisfy the following requirements:

Technical parameters and transparency

(1)    Risk management system: Providers must establish, implement, document, and maintain a risk management system, including specific steps such as the identification of foreseeable risks of the AI System and analysis of data gathered from a post-market monitoring system. The risk management system must ensure that risks are eliminated or reduced as far as possible by the AI System’s design and development and adequately mitigate risks that cannot be eliminated.

(2)    High quality data sets: The Draft Regulation requires High-Risk AI Systems to be trained, validated, and tested with high-quality data sets that are relevant, representative, free of errors, and complete. This requirement must be ensured by appropriate data governance and data management.

(3)    Technical documentation and record keeping: The design of High-Risk AI Systems must enable tracing back and verifying their outputs. For that purpose, the Provider is obliged to retain technical documentation that reflects the conformity of the AI System with the requirements of the Draft Regulation.

(4)    Quality management system: Further, the Provider is required to put a quality management system in place that, among other obligations, includes a written strategy for regulatory compliance, systematic actions for the design of the AI System, technical standards, and reporting mechanisms.

(5)    Transparency and information for Users: Users must be able to understand and control how a High-Risk AI System produces its output. This must be ensured by accompanying documentation and instructions for use.

(6)    Human oversight: High-Risk AI Systems must be designed in such a way that they can be effectively overseen by competent natural persons. This requirement is aimed at preventing or minimizing the risks to health, safety, and fundamental rights that can emerge when a High-Risk AI System is used.

(7)    Robustness, accuracy, and cybersecurity: High-Risk AI Systems must be resistant to errors as well as attempts to alter their performance by malicious third parties and meet a high level of accuracy.

(8)    Authorized representative: Providers established outside the EU must appoint an authorized representative (a natural or legal person established in the EU with a written mandate from the Provider) that has the necessary documentation permanently available.

The draft regulation also introduces the concepts of certification and registration, like the Maltese laws, albeit it mandates certification which will have an EU dimension and will rely on the existing process for CE marking in the EU. It also mandates a centralised EU register. This implies that, unlike the Maltese certification regime, which was not automatically recognised and endorsed outside of our shores, under the proposed EU model the conformity and certification is imbued with a principle of EU equivalence as well as passportability.

Under the draft regulation the Provider must indicate the AI System’s conformity with the regulations by visibly affixing a CE marking so the AI System can operate freely within the EU. Before placing it on the market or putting it into service, the Provider must also register the AI System in the newly set up, publicly accessible EU database of High-Risk AI Systems. 

Like the Maltese law, the draft regulation also caters for post-market monitoring obligations. Providers must implement a proportionate post-market monitoring system to collect, document, and analyse data provided by Users or others on the performance of the AI System. This is coupled with reporting obligations: Providers must notify the relevant national competent authorities about any serious incident or malfunctioning of the AI System, as well as any serious breaches of obligations. Unlike our Maltese regime, however, the draft regulation, aside from covering the Provider, also applies to the following:

(a) Users’ obligations for High-Risk AI Systems: Users must use High-Risk AI Systems in accordance with the instructions indicated by the Provider, monitor the operation for evident anomalies, and keep records of the input data.

(b) Importers’ obligations for High-Risk AI Systems: Importers must, among other obligations, ensure that the conformity assessment procedure has been carried out and technical documentation has been drawn up by the Provider before placing a High-Risk AI System on the market.

(c) Distributors’ obligations for High-Risk AI Systems: Distributors must, among other obligations, verify that the High-Risk AI System bears the required CE conformity marking and is accompanied by the required documentation and instructions for use.

(d) Users, importers, distributors, and third parties becoming Providers: Any party will be considered a “Provider” and subject to the relevant obligations if it (i) places on the market or puts into service a High-Risk AI System under its name or trademark, (ii) modifies the intended purpose of the High-Risk AI System already placed on the market or put into service, or (iii) makes substantial modifications to the High-Risk AI System. In any of these cases, the original Provider will no longer be considered a Provider under the Draft Regulation.

The draft regulation distinguishes between the ‘national supervisory authority’, meaning the public authority to which a Member State assigns responsibility for the overall implementation and application of the Regulation (see the requirements under Chapter 4), for coordinating the activities of other national competent authorities, and for acting as the single contact point for the Commission and the European Artificial Intelligence Board, and the ‘national competent authority’, meaning the public body to which a Member State assigns responsibility to carry out certain activities related to the implementation and application of the Regulation, as well as market surveillance authorities and national accreditation bodies.

Malta stands to benefit here, having already set up the MDIA with the goal of regulating technology. Given the horizontal, cross-cutting nature of technology, having an authority entrusted with the regulation of technology no matter the (vertical) operations domain is foundational.

The draft regulation also allows subcontracting of functions of the respective notified bodies and this could further increase the prevalence and use of MDIA’s system auditors.

It is also worth noting that the draft regulation puts emphasis on standardisation, and that standards should play a key role in providing technical solutions to providers to ensure compliance with the draft regulation. Compliance with harmonised standards, as defined in Regulation (EU) No 1025/2012 of the European Parliament and of the Council, should be a means for providers to demonstrate conformity with the requirements of the Regulation. However, the Commission could adopt common technical specifications in areas where no harmonised standards exist or where they are insufficient. In the interim, the draft regulation lists its own harmonised requirements.

Human oversight is also a welcome addition in the draft regulation. The Maltese law (ITAS) follows the same tenets, as we have the role of the technology administrator, which is also licensed by the MDIA. The draft regulation also speaks of traceability and transparency, particularly under Articles 12 & 13, which are also reflected in our local Forensic Node model.

The sandbox model as envisaged in Article 53 is also a welcome insertion. Locally, the MDIA has embarked on a similar sandbox model which will be launched soon.

The Draft Regulation, similar to the Maltese model, also provides for administrative sanctions and fines. The draft regulation is tougher, however, as it provides for substantial fines in cases of non-compliance, as follows:

  • Developing and placing a blacklisted AI System on the market or putting it into service (up to EUR 30 million or 6% of the total worldwide annual turnover of the preceding financial year, whichever is higher).
  • Failing to fulfill the obligations of cooperation with the national competent authorities, including their investigations (up to EUR 20 million or 4% of the total worldwide annual turnover of the preceding financial year, whichever is higher).
  • Supplying incorrect, incomplete, or false information to notified entities (up to EUR 10 million or 2% of the total worldwide annual turnover of the preceding financial year, whichever is higher).
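Each tier above caps the fine at the higher of a fixed amount and a percentage of worldwide annual turnover. As a minimal illustration of that "whichever is higher" mechanic (the tier names and the helper function below are hypothetical labels for this sketch; only the amounts and percentages come from the draft regulation):

```python
# Illustrative sketch of the "whichever is higher" penalty caps in the
# draft regulation. Tier keys and this helper are hypothetical; the
# fixed amounts (EUR) and turnover percentages come from the text above.
PENALTY_TIERS = {
    "prohibited_ai_on_market": (30_000_000, 0.06),      # blacklisted AI System
    "non_cooperation": (20_000_000, 0.04),              # failing to cooperate
    "incorrect_information": (10_000_000, 0.02),        # false info to notified bodies
}

def max_fine(tier: str, worldwide_annual_turnover: float) -> float:
    """Return the maximum possible fine for a tier: the fixed cap or the
    turnover-based cap, whichever is higher."""
    fixed_cap, turnover_pct = PENALTY_TIERS[tier]
    return max(fixed_cap, turnover_pct * worldwide_annual_turnover)

# A firm with EUR 1 billion turnover: 6% (EUR 60 million) exceeds the
# EUR 30 million fixed cap, so the turnover-based figure applies.
print(max_fine("prohibited_ai_on_market", 1_000_000_000))
```

For smaller undertakings the fixed amount dominates: at EUR 100 million turnover, 2% is only EUR 2 million, so the EUR 10 million cap applies for the third tier.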

Article by Dr Ian Gauci

This article is not intended to impart legal advice and readers are asked to seek verification of statements made before acting on them.