Further regulation for AI? European Commission releases ethics guidelines

Kaveh Cope-Lahooti

On 8 April, the European Commission’s High-Level Expert Group on AI released its Ethics Guidelines for Trustworthy AI, setting out the ethical and moral principles underpinning ‘trustworthy AI’: that such systems should be lawful (including in the data they use), ethical (compliant with core principles and values) and robust (meeting central security requirements).

The introduction of the guidelines follows a public consultation process that ran between December and February. In particular, the guidelines build on other frameworks for autonomous and machine learning solutions, such as those launched by the European Commission, the Institute of Electrical and Electronics Engineers (IEEE), the Institute for Ethical AI and Machine Learning, several European data protection authorities and private consultancies such as PwC, Deloitte and Gemserv. As such, they aim to form a framework for ensuring AI systems are designed and deployed in an accountable fashion, which will allow organisations to test and develop approaches to ethical AI, and which can potentially be translated into hard regulation.

Risk Management

The Ethics Guidelines centre on several concepts that aim to introduce the necessary checks and assurance into AI systems. The guidelines focus on the need to address the risks associated with AI systems and the impacts they can have on individuals or communities – as part of what the High-Level Expert Group terms the ‘prevention of harm’ principle. Such effects range from individuals being invasively targeted by advertising or being denied credit or loans on the basis of inaccurate data, to pedestrians being harmed by self-driving cars.

Organisations can ease this process through an early data mapping exercise covering the identification and sourcing of data, the selection of algorithmic models, the training and testing of the system, its use and deployment, and subsequent monitoring. Through this, any issues with data accuracy and quality can be tracked and identified from the start, before they create problems in the form of skewed or biased decisions later in the operation of the system. Moreover, training and testing should include key checks for harm to individuals or groups of individuals, and the potential for bias should be considered – including bias inherent in either the functions of the system or the data sets used.
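By way of illustration, the short Python sketch below shows one way such a bias check could be run on a training or test set. It is a minimal sketch rather than a method prescribed by the guidelines, and the column names (‘group’, ‘outcome’), the sample data and the 5% parity threshold are assumptions made for the example.

```python
import pandas as pd

def check_outcome_parity(df: pd.DataFrame, group_col: str = "group",
                         outcome_col: str = "outcome",
                         max_gap: float = 0.05) -> pd.Series:
    """Compare positive-outcome rates across groups in a data set.

    Flags the data for review if the gap between the best- and worst-treated
    group exceeds max_gap, prompting scrutiny before the system is signed off.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    gap = rates.max() - rates.min()
    if gap > max_gap:
        print(f"Warning: outcome rates differ by {gap:.1%} across groups - review for bias.")
    return rates

# Hypothetical loan decisions recorded during testing
decisions = pd.DataFrame({
    "group":   ["A", "A", "B", "B", "B", "A"],
    "outcome": [1, 1, 0, 1, 0, 1],   # 1 = credit granted
})
print(check_outcome_parity(decisions))
```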

Departments wishing to deploy or use AI systems should work with their development teams to ensure all of these tests can be performed prior to use or sign-off. However, it should not end there – systems should be subject to rigorous ongoing testing and retesting, including to see whether the system’s outcomes or outputs are meeting its objectives – for example, whether user satisfaction is actually being achieved by a system that provides automated customer service responses.
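As a rough sketch of what such ongoing retesting might look like for the customer-service example above – the satisfaction scores, the 3.5 target and the monthly review cadence are all hypothetical assumptions, not figures from the guidelines:

```python
from statistics import mean

# Hypothetical post-interaction satisfaction scores (1-5) collected each month
monthly_scores = {
    "2019-01": [4, 5, 3, 4, 4],
    "2019-02": [3, 2, 4, 3, 3],
}
TARGET = 3.5  # objective agreed at sign-off

for month, scores in monthly_scores.items():
    avg = mean(scores)
    status = "OK" if avg >= TARGET else "below target - escalate for review"
    print(f"{month}: average satisfaction {avg:.2f} ({status})")
```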

Agency

Additionally, central to the High-Level Expert Group’s approach to the regulation of AI is the concept of ‘human agency’ – the idea that individuals, including data subjects, should be autonomous and able to determine how organisations control their data and how decisions affect them. The core concept of ‘agency’ builds upon individual rights under the Council of Europe ‘Convention 108’ and the GDPR – including rights to access, correct, amend and restrict the processing of their data, and even not to be subject to automated decisions that will have legal or similarly significant effects on them – unless this is necessary for a contract, permitted by law, or based on explicit consent.

As such, organisations will have to build into AI systems the ability for individuals to intervene in the processes, analysis and decisions made by those systems – including to adjust their preferences, choose which data they disclose and amend when they are tracked. However, organisations should also limit the harmful effects of AI – where ‘similarly significant’ effects are interpreted to mean negative impacts on human dignity, effects that target vulnerable groups, and so on. In particular, both this and adherence to the concept of human agency can be achieved by keeping a ‘human in the loop’, which refers to the governance or design capability for human intervention during the system’s operation, including to monitor or overrule AI decisions, so that such systems act as a complement to, not a replacement for, human decision-making.
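A minimal sketch of how a ‘human in the loop’ might be implemented in practice is shown below: decisions the model is unsure about, or that would have legal or similarly significant effects, are routed to a human reviewer rather than applied automatically. The confidence threshold and the way a ‘significant’ decision is flagged are assumptions made for the example rather than requirements set out in the guidelines.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    score: float        # model output, e.g. probability of default
    confidence: float   # model's confidence in its own output
    significant: bool   # would it have legal or similarly significant effects?

def route_decision(d: Decision, confidence_threshold: float = 0.9) -> str:
    """Send low-confidence or significant decisions to a human reviewer."""
    if d.significant or d.confidence < confidence_threshold:
        return f"{d.subject_id}: queued for human review"
    return f"{d.subject_id}: applied automatically and logged for audit"

print(route_decision(Decision("applicant-17", score=0.82, confidence=0.95, significant=True)))
print(route_decision(Decision("applicant-18", score=0.12, confidence=0.97, significant=False)))
```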

Transparency

Another increasingly mooted aspect of artificial intelligence systems is the issue of transparency, which obliges organisations to introduce personnel and mechanisms not just to interpret algorithms but also to respond to potential requests for, and challenges to, their results. Transparency also involves, for example, designing public-facing interfaces that allow customers to see how their data is used, including when collecting consent and/or personal data from individuals. Transparency is also closely connected to the element of ‘explainability’, which, in Convention 108’s most recent iteration, means that data subjects should be entitled to know the “reasoning underlying the processing of data, including the consequences of the reasoning [of Artificial Intelligence systems], which led to any resulting conclusions”. This goes further than the provisions of the GDPR in that, being a more expansive right, it extends to understanding why an actual outcome was reached, including after a decision was made, rather than simply focusing on the basis for decisions.

Faced with the difficulty of making AI systems explainable, there are two other ways organisations can perform the necessary due diligence. Firstly, documenting the decisions that a system makes, along with its functions and the selection of data and data sources, can demonstrate accountability even when full transparency cannot be achieved due to the intrinsic nature of the algorithm or machine-learning process. Secondly, publicly committing to a Code of Ethics setting out how data will be sourced and used, and the values and aims of the systems, can also help with public engagement and reception.
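To illustrate the first of these, documentation can be as lightweight as appending an audit record for every decision the system makes. The sketch below is one possible shape for such a record – the fields chosen (timestamp, model version, data sources, inputs, output) and the JSON Lines format are assumptions for the example, not a format mandated by the guidelines.

```python
import json
from datetime import datetime, timezone

def log_decision(model_version: str, data_sources: list, inputs: dict,
                 output: dict, path: str = "decision_log.jsonl") -> None:
    """Append one decision record to an audit log (JSON Lines file).

    Even where the model itself is opaque, the record shows which data and
    which model version led to each outcome, supporting later challenges.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "data_sources": data_sources,
        "inputs": inputs,
        "output": output,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical credit decision
log_decision("credit-model-1.3", ["application_form", "bureau_feed"],
             {"income": 32000, "requested_amount": 5000}, {"decision": "refer"})
```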

Participation

Lastly, the guidelines discuss the participation of different stakeholders in the design of AI systems – including through both internal and external governance frameworks. It should be remembered that individual departments will often be the information owners responsible for the operational decisions governing AI systems, and so should liaise with developers, system providers and other relevant third parties to ensure their requirements are met. At a more strategic level, organisations should involve executive sponsors or management in approving AI systems that are likely to have an impact on the workforce or involve significant disruptive effects on operations. Moreover, where AI systems are likely to raise risks – legal, social, reputational or financial – these sponsors will need to consider and approve the ethical and goal-orientated trade-offs made for systems during their development.

To support this, organisations can appoint ethics panels, committees or boards to lead ethics dialogue within the organisation, seeking approaches that are both aspirational and value-based. Within these groups, for example, the High-Level Expert Group emphasises that designers and technologists should engage in cross-disciplinary exchanges with ethicists, lawyers and sociologists to understand the impact of AI solutions. However, whichever structure is established, the group or panel has to have ‘teeth’ to be able to accomplish effective oversight and management. This is a particularly contemporary issue, given the recent failure of Google’s ethics board, which shut down following a backlash over both its lack of effectiveness and the composition and background of some of its members. As such, the group should be consulted during deployment of the system, particularly over its goals and potential effects, and regularly informed of the outcomes of monitoring of the deployed solution throughout the lifecycle of the system.

Conclusion

The High-Level Expert Group’s guidelines bring a very detailed discussion of the evolving regulatory norms governing artificial intelligence systems. Focusing specifically on the prevention of harm and on individual rights, the framework aims to incorporate checks into the deployment of systems to ensure they are more ethically grounded and focused on individuals. Organisations wishing to remain accountable should take advantage of the reputational and compliance benefits of overtly demonstrating that they use data in accountable and fair ways, and that they are committed to delivering operations and services in line with the principles espoused by the guidelines – as the EU is likely to incorporate them into hard law regulating AI systems in the near future.
