The Council of Europe Guidelines on AI: Strengthening Rights over Automated Decisions

Kaveh Cope-Lahooti

To celebrate Data Protection Day, on 28 January 2019 the Council of Europe released guidelines on data protection measures in relation to artificial intelligence. The guidelines contain recommendations that serve to codify much of the emerging best practice on ensuring artificial intelligence systems are compatible with human and data subject rights, building upon the existing regulation of the sector provided in the GDPR and Convention 108.

Convention 108 (the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data), which is an international law framework applicable to the (predominantly European) Council of Europe members, was last updated in 2018. Building on the Convention, the guidelines further specify certain core elements that should be included when data is processed in AI systems, mainly focused on ensuring data accuracy, non-bias and a ‘human rights by design’ approach. The guidelines take the latter to mean that all products and services should be “designed in a manner that ensures the right of individuals not to be subject to a decision significantly affecting them…without having their views taken into consideration”.

In practice, this will require organisations to conduct a wider risk assessment of their impacts in advance, and to build in governance methods to generate and consider relevant stakeholder input. One means of ensuring this is to involve, at an early stage in the design process, representatives from the design/development teams, HR, Data Protection and Risk departments and potentially executives and Board members, in addition to seeking the advice of NGOs and other industry bodies already regulating data ethics. Organisations can rely on both external and internal data ethics committees, both to give their opinion on the potential social or ethical impact of AI systems and to act as a tool for ongoing monitoring of the deployment of those systems.

Most notably, the Guidelines also highlight the right for individuals to obtain information on the reasoning underlying AI data processing operations applied to them. Indeed, this refers to Convention 108’s most recent iteration, which outlines that, in the context of automated decision-making systems, data subjects should be entitled to know the “reasoning underlying the processing of data, including the consequences of such a reasoning, which led to any resulting conclusions”. This goes further than the provisions of the GDPR, which entitle data subjects to “meaningful information about the logic involved” in such decisions, as well as the “significance and the envisaged consequences of such processing”.

Rather than simply covering an explanation of system functionality, or which processes are performed (i.e. whether any profiling, ranking or data matching occurs) and perhaps which features (e.g. categories of data) are involved in the design of an algorithm, the Convention 108 right is more expansive, extending to understanding why an actual outcome was reached, including after a decision was made. This would require a company to assess and track the way algorithms are trained, and perhaps even re-run decisions with modified or different criteria, in order to be able to diagnose what “led to any resulting conclusions”.
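By way of illustration, the following is a minimal sketch, in Python, of what re-running a decision with modified criteria might look like. The scoring model, feature weights and threshold here are hypothetical assumptions, not drawn from the Guidelines or from any real system; the probe simply perturbs one input at a time and re-runs the decision to see which factors were decisive for the outcome.

```python
# Minimal sketch: re-run a decision with modified inputs to diagnose
# which factors "led to any resulting conclusions".
# The model, weights and threshold are hypothetical illustrations.

def credit_score(applicant: dict) -> float:
    """Hypothetical scoring model: a weighted sum of input features."""
    weights = {"income": 0.5, "years_employed": 0.3, "existing_debt": -0.4}
    return sum(weights[k] * applicant[k] for k in weights)

def explain_decision(applicant: dict, threshold: float = 10.0) -> dict:
    """Perturb each feature in turn and re-run the decision to find
    which inputs were decisive (a simple counterfactual probe)."""
    baseline = credit_score(applicant) >= threshold
    decisive = {}
    for feature, value in applicant.items():
        probe = dict(applicant)
        probe[feature] = value * 1.1  # modify one criterion at a time
        if (credit_score(probe) >= threshold) != baseline:
            decisive[feature] = value  # changing this input flips the outcome
    return {"approved": baseline, "decisive_features": decisive}

print(explain_decision({"income": 18.0, "years_employed": 4.0, "existing_debt": 2.0}))
```

Even a simple probe of this kind presupposes that the inputs and the model version in force at decision time have been logged, which is why the tracking described above matters in practice.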

Not only do the Guidelines refer to this right to information or explanation, but they also allude to the fact that AI systems should allow “meaningful control” by individuals over data processing and its related effects on them and on society. The thinking behind this is that, where provided with the information to do so, data subjects will be able to exercise their other corollary rights under Convention 108 or the GDPR, including the right not to be affected by solely automated decisions or to challenge their reasoning. As such, organisations should put into place mechanisms for challenging and reviewing automated decisions to ensure a fair and equitable outcome. These will have to integrate either elements of human decision-making or, at least, human intervention in such decisions, on top of organisations’ obligations to notify data subjects of their rights. This will serve to provide sufficient assurance of objectivity of process under the GDPR, and will also help streamline any requests for challenging or querying decisions.
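As a very rough sketch, a challenge-and-review mechanism of this kind might record each automated decision together with its reasoning and route any contested decision to a human reviewer. The names below (DecisionRecord, review_queue and so on) are illustrative assumptions, not terms taken from the Guidelines or the GDPR.

```python
# Minimal sketch of a challenge-and-review mechanism with human
# intervention, assuming a simple in-memory queue. All names here
# are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class DecisionRecord:
    subject_id: str
    outcome: str                # e.g. "rejected"
    reasoning: str              # logged so the decision can later be explained
    challenged: bool = False
    human_reviewed: bool = False
    final_outcome: str = ""

review_queue: list = []

def challenge(record: DecisionRecord) -> None:
    """The data subject contests an automated decision; route it to a human."""
    record.challenged = True
    review_queue.append(record)

def human_review(record: DecisionRecord, reviewer_outcome: str) -> None:
    """A human reviewer confirms or overturns the automated outcome."""
    record.human_reviewed = True
    record.final_outcome = reviewer_outcome

# Example: an automated rejection is challenged and overturned on review.
rec = DecisionRecord("subject-42", "rejected", "score 9.4 below threshold 10")
challenge(rec)
human_review(review_queue.pop(0), "approved")
print(rec)
```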

The Guidelines are another ‘soft law’ mechanism, and organisations should view them as a further elaboration of how they can practically comply with data protection and human rights norms in their deployment of AI systems, although not yet a further regulatory step following the GDPR. From this perspective, the Guidelines, read with the amended Convention, serve to clarify much of the existing practice on designing transparency into AI processes, which can only make organisations more objective and accountable.
