As Malta aspires to become the Ultimate AI launchpad, the Parliamentary Secretariat for Financial Services, Digital Economy and Innovation has issued a consultation paper setting out its vision for how Government wishes to develop an Ethical AI Framework, with the aim of creating a regulatory and innovation ecosystem that promotes the design and operation of trustworthy AI.
The purpose of the Malta Ethical AI Framework is to establish an ethical code, a set of guiding principles and trustworthy AI governance together with control practices that can serve as the foundation for the design and implementation of safe AI.
In developing the Malta Ethical AI Framework, the Government has the following four objectives:
- Build on a human-centric approach;
- Respect all applicable laws and regulations, human rights and democratic values;
- Maximise the benefits of AI systems while preventing and minimising their risks;
- Align with emerging international standards and norms around AI ethics.
Malta plans to build an AI ecosystem that accelerates the realisation of AI's benefits while minimising its risks. This will require Malta to strike a fine balance between opposing factors, including:
- Creating an AI ecosystem that promotes innovation and risk mitigation;
- Maintaining a dual role as a disruptor and protector; and
- Developing a regulatory framework that balances prescribed rules with agility.
In developing the Malta Ethical AI Framework, the Government has established four Ethical AI Principles for trustworthy AI, aligned with the EU Ethics Guidelines for Trustworthy AI.
- Human autonomy — humans interacting with AI systems must be able to retain full and effective self-determination;
- Prevent harm — AI systems should not cause harm at any stage of their lifecycle to humans, the natural environment or other living beings;
- Fairness — the development, deployment, use and operation of AI systems must be fair; and
- Explicability — end-users and other members of the public should be able to understand and challenge the operation of AI systems, as required for the particular use case.
The achievement of the above objectives is already encoded, in part, in existing legal and regulatory requirements. These objectives should therefore be considered both in relation to mandatory compliance under applicable laws and regulations and in relation to the enhanced ethical expectations of stakeholders.
To achieve the Ethical AI Guidelines and Trustworthy AI Requirements, AI practitioners will be required to leverage existing control practices while also developing new control practices that address the unique trust conditions of AI, including:
- governance control practices over AI;
- internal governance processes and mechanisms (where developing AI), including design considerations and aligning systems with relevant standards (e.g. ISO and IEEE) or widely adopted protocols for data management and governance;
- end-user and third-party processes and mechanisms; and
- the required impact and risk assessments, catering also for data protection obligations, rights and data protection impact assessments.
The plan is to lead to a trustworthy AI certification regime and a healthy ecosystem in which developers work in a safe and ethical environment and users’ rights and interests are adequately safeguarded.
The public consultation can be found here: https://malta.ai/wp-content/uploads/2019/08/Malta_Towards_Ethical_and_Trustworthy_AI.pdf
Feedback is urged in particular on the following considerations:
- Would an AI system designed and operated using the Ethical AI Principles outlined in this document be ethically aligned, transparent and socially responsible? If not, what is missing?
- What other governance and control practices do you feel are necessary to achieve Ethical & Trustworthy AI?
- Are there any other ethical considerations related to AI that should be addressed by the Ethical AI Framework?
Feedback can be sent by email to email@example.com by 6th September 2019.
For more information, please contact Dr Ian Gauci at firstname.lastname@example.org.
This article is not intended to impart legal advice and readers are asked to seek verification of statements made before acting on them.