The Privacy Reckoning: Why US Businesses Should Brace for Impact

From late 2024 through 2025, a tidal wave of state privacy laws is set to come crashing down on businesses across the U.S., ushering in a new era of consumer data rights. These regulations—such as the Montana Consumer Data Privacy Act (MCDPA), the Delaware Personal Data Privacy Act (DPDPA), and the Iowa Consumer Data Protection Act (ICDPA)—introduce comprehensive state-level privacy legislation that demands immediate action from organizations handling personal data.

Businesses that fail to comply could face a regulatory nightmare, with hefty fines, lawsuits, and irreversible reputational damage. The stakes have never been higher. Companies that rely on targeted advertising, large-scale data collection, or sensitive personal information—such as social media platforms, financial firms and e-commerce giants—are in the crosshairs of these new laws.

This article explores the critical themes in these new privacy laws, from consumer empowerment and opt-out rights to jurisdictional complexities and the heightened need for transparency.


Empowering Consumers: Opt-Out Rights and Consent Requirements

A prominent theme in these new privacy laws is the enhancement of consumer rights, particularly concerning the sale and sharing of personal data. For instance, the Montana MCDPA grants consumers the right to opt out of the sale of their personal data and of its sharing for targeted advertising. Similarly, the Delaware DPDPA provides consumers with rights to access, correct, delete, and transfer their personal data, as well as the right to opt out of the sale of their data and targeted advertising. The Iowa ICDPA also offers consumers the right to opt out of targeted advertising and the sale of their personal data.

These provisions necessitate that organizations reassess their data handling practices. Companies engaged in large-scale data sharing, such as those employing cookie-based advertising, must implement mechanisms to honor consumer opt-out requests and obtain explicit consent before processing sensitive information.


Jurisdictional Thresholds: Understanding Applicability Across States

An essential aspect of these privacy laws is determining when they apply to businesses, which often depends on specific thresholds related to consumer data. For example, the Delaware DPDPA applies to entities that conduct business in Delaware or target products or services to Delaware residents and, in a calendar year, control or process the personal data of at least 35,000 consumers (excluding data processed solely for payment transactions) or at least 10,000 consumers while deriving more than 20% of annual gross revenue from the sale of personal data. In contrast, the Iowa ICDPA applies to businesses that control or process personal data of at least 100,000 Iowa consumers or derive over 50% of revenue from selling the personal data of at least 25,000 Iowa consumers.

These varying thresholds mean that businesses must carefully assess their operations in each state to determine applicability. A company processing data from 40,000 consumers would fall under Delaware’s law but not Iowa’s, highlighting the importance of understanding each law’s specific criteria.
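As a rough illustration of how these thresholds interact, here is a minimal sketch encoding the Delaware and Iowa criteria described above as a simple applicability check. The function and field names are hypothetical, and the figures should always be verified against the statutory text (the Delaware count, for instance, excludes data processed solely for payment transactions).

```python
from dataclasses import dataclass

@dataclass
class StateFootprint:
    """Hypothetical summary of a company's data footprint in one state."""
    consumers_processed: int              # consumers whose personal data is controlled or processed
    consumers_whose_data_is_sold: int     # consumers whose personal data is sold
    revenue_share_from_data_sales: float  # fraction of annual gross revenue from selling personal data

def delaware_dpdpa_applies(f: StateFootprint) -> bool:
    # DPDPA: at least 35,000 consumers, or at least 10,000 consumers plus
    # more than 20% of gross revenue from the sale of personal data
    return f.consumers_processed >= 35_000 or (
        f.consumers_processed >= 10_000 and f.revenue_share_from_data_sales > 0.20
    )

def iowa_icdpa_applies(f: StateFootprint) -> bool:
    # ICDPA: at least 100,000 consumers, or over 50% of revenue from selling
    # the personal data of at least 25,000 consumers
    return f.consumers_processed >= 100_000 or (
        f.consumers_whose_data_is_sold >= 25_000 and f.revenue_share_from_data_sales > 0.50
    )

# Example from the text: 40,000 consumers and no data sales falls within
# Delaware's law but not Iowa's.
example = StateFootprint(consumers_processed=40_000,
                         consumers_whose_data_is_sold=0,
                         revenue_share_from_data_sales=0.0)
print(delaware_dpdpa_applies(example), iowa_icdpa_applies(example))  # True False
```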


Navigating ‘Do Not Sell’ Provisions: Compliance Strategies for Organizations

The new privacy laws impose stringent requirements on the handling of personal data, particularly concerning consumers’ rights to opt out of the sale or sharing of their information (‘Do Not Sell’ rights). To comply, organizations should implement clear and accessible opt-out mechanisms, such as user-friendly web forms or preference centers, allowing consumers to easily exercise their rights. Businesses must also update their privacy policies to inform consumers about their rights and the processes in place to honor opt-out requests. Furthermore, organizations should establish procedures to respond promptly to opt-out requests and maintain records of these interactions to demonstrate compliance in case of audits or legal inquiries.
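As one possible way of operationalising that record-keeping, the sketch below assumes a hypothetical in-memory preference center that logs each opt-out request and when it was honored; a real deployment would persist these records and propagate the choice to downstream advertising and data-sharing systems.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class OptOutRecord:
    consumer_id: str
    request_type: str                     # e.g. "sale" or "targeted_advertising"
    received_at: datetime
    honoured_at: Optional[datetime] = None

class PreferenceCenter:
    """Hypothetical in-memory store; kept only to illustrate the audit trail."""
    def __init__(self) -> None:
        self._records: list[OptOutRecord] = []

    def record_opt_out(self, consumer_id: str, request_type: str) -> OptOutRecord:
        rec = OptOutRecord(consumer_id, request_type, received_at=datetime.now(timezone.utc))
        self._records.append(rec)
        return rec

    def mark_honoured(self, rec: OptOutRecord) -> None:
        rec.honoured_at = datetime.now(timezone.utc)

    def audit_trail(self, consumer_id: str) -> list[OptOutRecord]:
        # Records of when a request was received and honoured support audits or inquiries.
        return [r for r in self._records if r.consumer_id == consumer_id]

center = PreferenceCenter()
request = center.record_opt_out("consumer-123", "sale")
center.mark_honoured(request)
print(center.audit_trail("consumer-123"))
```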

Under the Montana MCDPA, businesses are also prohibited from processing sensitive data without obtaining the consumer’s consent. Sensitive data encompasses personal information revealing racial or ethnic origin, religious beliefs, health diagnoses, sexual orientation, citizenship or immigration status, genetic or biometric data, and precise geolocation data. Similarly, the Delaware DPDPA restricts the processing of sensitive data without consumer consent, and the Iowa ICDPA requires businesses to provide an opt-out mechanism for the processing of sensitive data.


Transparency and Accountability: Clear Privacy Notices and Data Mapping

Transparency is a cornerstone of the new privacy regulations. The Montana MCDPA mandates that controllers provide consumers with a reasonably accessible, clear, and meaningful privacy notice. This notice must include the categories of personal data processed, the purposes for processing, and the categories of personal data shared with third parties. Similarly, the Delaware DPDPA and Iowa ICDPA require businesses to maintain privacy notices that inform consumers about their data practices, including the types of personal data collected, the purposes for collection, and the categories of third parties with whom data is shared.

To comply, organizations should conduct thorough assessments of their data flows, mapping out how personal data is collected, used, and shared. This process not only ensures compliance but also enhances accountability and fosters consumer trust.
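One lightweight way to begin such a mapping exercise is to keep a structured data inventory whose fields mirror what the notices described above must disclose. The sketch below is illustrative only; the entry fields and example rows are assumptions rather than anything prescribed by the statutes.

```python
from dataclasses import dataclass

@dataclass
class DataFlowEntry:
    """One row of a hypothetical data inventory mapping collection, use, and sharing."""
    data_category: str        # e.g. "email address", "browsing history"
    source: str               # where the data is collected from
    purpose: str              # why it is processed
    third_parties: list[str]  # categories of recipients it is shared with
    retention: str            # how long it is kept

inventory = [
    DataFlowEntry("email address", "account sign-up form", "order confirmation",
                  ["email service provider"], "duration of account + 1 year"),
    DataFlowEntry("browsing history", "site cookies", "targeted advertising",
                  ["ad networks"], "13 months"),
]

# The privacy notice can then be generated from, and kept consistent with, the inventory.
for entry in inventory:
    print(f"{entry.data_category}: {entry.purpose} -> {', '.join(entry.third_parties)}")
```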


Conclusion: Proactive Steps for Compliance in a Dynamic Privacy Landscape

The evolving landscape of U.S. state privacy laws underscores the importance of a proactive, risk-based approach to data management. Organizations must assess their data flows, implement mechanisms for consumer consent and opt-out requests, and establish robust safeguards for sensitive information. Understanding the jurisdictional thresholds of each law is crucial to determine applicability and ensure compliance. By embracing transparency and accountability, businesses can navigate these regulations effectively, mitigating risks and building trust with consumers.

Kaveh Cope-Lahooti

Online Advertising, Big Tech & the Privacy Battleground

Kaveh Cope-Lahooti

Big Tech and Online Advertising

For many years, the big tech companies (principally Facebook, Amazon, Apple and Google) have dominated the online advertising sphere by selling access to, or carrying out, targeted advertising on their platforms. Nevertheless, this business model has caused privacy concerns, not least for the individuals whose behaviour served as the basis for ‘micro-targeting’ with marketing, including the political advertisements made infamous in the Cambridge Analytica scandal.

However, the GDPR, and other privacy legislation following suit in the US, is beginning to threaten their business models. With respect to online advertising, the GDPR introduces a new standard of consent – a “clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement” – which is carried into the ePrivacy Directive (often referred to as ‘the cookie directive’) via its Article 2(f) definition of consent. This stance has been interpreted in guidance and case law, such as the Court of Justice of the European Union’s 2019 Planet49 decision, to require active consent to a variety of online tracking technologies, including cookies, mobile advertising IDs, and location-based monitoring. As a result, since the GDPR, consent is specifically required for each type of cookie through a banner or consent management tool, with information about the purposes of cookies, third parties and cookie duration to be provided to the user prior to their consent.
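To make the per-purpose information requirement concrete, the hedged sketch below shows the kind of record a consent banner or management tool might store for each cookie category. The field names are illustrative and not tied to any real consent management platform's API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CookieConsent:
    category: str             # e.g. "analytics", "advertising"
    purposes: list[str]       # disclosed to the user before consent
    third_parties: list[str]  # disclosed to the user before consent
    max_duration_days: int    # disclosed to the user before consent
    granted: bool
    timestamp: datetime

def record_choice(category: str, purposes: list[str], third_parties: list[str],
                  max_duration_days: int, granted: bool) -> CookieConsent:
    # Non-essential cookies stay off until the user actively opts in.
    return CookieConsent(category, purposes, third_parties, max_duration_days,
                         granted, datetime.now(timezone.utc))

choice = record_choice("advertising", ["ad personalisation"], ["ad network X"],
                       max_duration_days=390, granted=False)
assert not choice.granted  # no advertising cookies are set without an affirmative act
```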

In particular, this new standard affects the entire online advertising ecosystem, which relies on user consent for the deployment of cookies and other tracking technologies regulated by the ePrivacy Directive, and it has led to a decrease in the engagement and success rates of tracking individuals online. Most notably, online advertising has been so singularly affected because of the cross-platform, ‘third party’ nature of many advertising cookies, which users interact with across multiple websites and which therefore require a chain of consent mechanisms (or ‘signals’) to be collected and exchanged between the relevant parties. On top of this, parties engaging in online advertising remain liable to ensure consent was validly collected further up the chain. If regulators were to enforce this strict interpretation regularly, it would most probably make collecting cookie-based consent for advertising unworkable.

Response of the big tech players

However, rather than curtailing the business models of large technology corporations, the GDPR has caused them to develop their own proprietary methods of collecting user data. This week, Apple is releasing an iOS update across its devices which will require user consent before application data linked to users’ iOS device IDs (‘IDFAs’) is shared with third party app developers. Apple itself will still be able to keep this data by default on iOS devices, giving the technology company an edge over its rivals. Apple’s move on the one hand allows it to claim to be defending user privacy, whilst on the other hand it will deprive app developers (including Facebook, Twitter and many smaller companies) of the user data they use to improve and market their software products, strengthening Apple’s hand. It will also force many app providers to pursue an alternative (non-advertising based) business model and charge customers for the use of their products, which will in turn drive more revenue to Apple.

To fill the lacuna left by IDFA-based advertising, Apple has offered app developers its SKAdNetwork, which aims to provide statistical information on impressions and conversion data without revealing any user or device-level information. Whilst this aggregated information can be useful for improving products, it lacks the user-specific behavioural information needed to create segmented profiles and provide individuals with targeted advertising.

Google, itself previously subject to large fines for failing to provide adequate user consent and transparency when monitoring users online [citation needed], has decided to phase out its third-party cookies before 2022. However, cognisant of the fact that advertising during the pandemic has driven its parent company Alphabet to its highest ever quarterly profit [citation needed], Google is not done yet. Mirroring the approach of Apple, it has recently announced testing of its own privacy-preserving mechanism for interest-based ad selection – one that Google itself will control – and the news is shaking up the adtech ecosystem. Google calls this its Federated Learning of Cohorts (FLoC), which will operate within the Chrome environment and forms part of Google’s Privacy Sandbox.

The FLoC project aims to shift online behavioural advertising’s focus from individual users to ‘interest’- or ‘context’-based targeting. Advertising will be targeted at ‘cohorts’ – groups of users with similar interests – rather than allowing individual identification and profiling. Whilst the FLoC mechanism will still capture data on individual users, other parties in the advertising ecosystem will only see information about a user at a cohort level. Additionally, website users will also be able to opt out of participating in, or forming part of, a FLoC cohort within Chrome.

Conclusion

Whilst Google’s FLoC and Apple’s SKAdNetwork may appear to be more privacy-friendly alternatives to the cookie- or mobile-ID-based third-party tracking technologies we have become used to, they will still involve tracking information on users within certain environments (e.g. Chrome or iOS apps) – it is just that this information won’t be shared with the whole advertising ecosystem. Additionally, such technologies run the risk of concentrating the data that individual players, such as Google and Apple, can hold about individuals. At the same time, they entrench the dominant position that these tech companies have in the software and advertising marketplace, at the expense of their rivals. It is likely that this is only the beginning of the big tech companies’ battle over privacy.

Privacy and security challenges with the Irish smart metering roll-out


Kaveh Cope-Lahooti & Abhay Soorya

Background

Smart meters, which involve energy suppliers deploying devices that allow both customers and providers to monitor consumption and usage trends, are a core component of the move towards ‘Smart Homes’ and ‘Smart Grids’ as part of the growth of the Internet of Things.

In Ireland, smart metering is a key contemporary topic, with the National Smart Metering Programme commencing deployment into domestic residences this year. Recently, ESB Networks announced that, starting in September 2019, it would roll out 20,000 meters in selected locations in Ireland, with a further 250,000 in place by the end of 2020 and a further 500,000 by 2024 [1][2]. The roll-out will be in phases: Phase 1 will provide smart meters with basic Credit services and half-hourly interval data, Phase 2 will add further meters and enable Smart PAYG and specifics such as switching, and Phase 3 will include provisioning real-time consumption and usage data to consumers via their home device.

Smart meters facilitate increased data collection – in particular, it will now be possible for both the user and the energy company to monitor usage data at more regular intervals, down to the hour, quarter of an hour, and more – a significant increase from the current estimated readings every two months and physical reads every quarter. Among other benefits, smart meters will enable electricity to be priced in accordance with demand – so that energy is most expensive during peak times – which in theory would reduce spikes in usage and result in a lower need for peak capacity, improving the efficiency and maintenance of the energy supply [3].
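As a worked illustration of demand-based pricing, the sketch below computes a Time of Use charge from half-hourly interval readings. The tariff figures and band boundaries are invented for illustration and do not reflect any actual Irish tariff.

```python
# Hypothetical Time of Use tariff (EUR per kWh); real tariffs will differ.
RATES = {"day": 0.20, "night": 0.10, "peak": 0.35}

def band(hour: int) -> str:
    """Map the hour of a half-hourly reading to a tariff band (illustrative boundaries)."""
    if 17 <= hour < 19:
        return "peak"
    if hour >= 23 or hour < 8:
        return "night"
    return "day"

def bill(readings: list[tuple[int, float]]) -> float:
    """readings: (hour_of_day, kWh consumed in that half-hour interval)."""
    return round(sum(kwh * RATES[band(hour)] for hour, kwh in readings), 2)

# The same 2 kWh costs more at 18:00 (peak) than at 03:00 (night).
print(bill([(18, 2.0)]), bill([(3, 2.0)]))  # 0.7 0.2
```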

Using smart meters will also allow customers to keep track of costs and, by combining information on a building or location basis, allow operators to plan the supply of electricity more effectively. However, the collection of information, and the sharing and transfer between devices and networks of this data, raises both privacy and security concerns.

Data Protection Issues

As with all mass data collection, smart meters raise concerns around data minimisation and privacy intrusion. The NSMP is required under Irish law (Statutory Instrument 426 of 2014) to meet privacy standards applicable under the General Data Protection Regulation (Regulation 2016/679) (GDPR). Firstly, there is the issue of the legality of the data collection in the first place. In July, the Spanish Supreme Court ruled that information collected on energy usage, in addition to the corollary meter serial number to which the information is attributed, constituted personal data. This approach has been mirrored by the Information Commissioner’s Office (ICO) in the UK, which considers consumption information collected by meters, when linked with the meter serial numbers/MPANs, to be personal information, and the Irish Data Protection Commission has taken a similar line.[4] The application of the GDPR to smart metering data is also foreseen by Article 23 of the Electricity Directive (Directive 2019/944).

As such, information collected through smart meters is subject to the provisions of the GDPR in full – and therefore all parties having access to the relevant data, including energy suppliers, smart metering systems operators and network operators (all of which will act as data controllers), need to consider compliance with the core principles of the GDPR. Within this, data must be collected with a suitable legal basis, used only for specific purposes and retention periods, not collected in excess of what is needed, and kept secure – and the usage should be made clear to customers, as provided by Article 20 of the Electricity Directive.

Although consumption data can arguably be used for monitoring usage at a statistical level, calculating bills and providing feedback to customers for the performance of the supplier’s energy contract (and therefore not require consent), use of information for other purposes, such as improving grid efficiency, identifying energy theft, debt management, etc. will most probably require a Privacy Impact Assessment or legitimate interests assessment before being undertaken. In particular, organisations (such as energy suppliers) should only use household-level data where necessary, and for data sharing or data analytics, the use of aggregated data relating to multiple households or regions (or the sampling of certain households) should be preferred.

Moreover, excessive information collection, and data sharing with third parties, should at the very least be notified to customers and potentially risk-assessed against the reasonable expectations of data subjects, including limiting the amount of personal data collected by default, as part of the concept of ‘Privacy by Design’. As the frequency of smart meter readings will be the main component of data minimisation, this could include a limitation on processing of data more granular than day/night/peak for Time of Use billing and Energy Use Statements, or collection by suppliers on a monthly basis, as suggested by the CER [5]. Customers should also have a general opt-out from sharing consumption information with the energy supplier and third parties.
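One way a supplier might apply this kind of minimisation before sharing data for analytics is to aggregate half-hourly readings into coarse bands across many households, and to refuse to release figures derived from too small a group. The sketch below is illustrative; the band boundaries and the minimum group size are assumptions, not requirements drawn from the NSMP or the CER.

```python
from collections import defaultdict

def coarse_band(hour: int) -> str:
    # Illustrative day/night/peak boundaries; real Time of Use bands will differ.
    if 17 <= hour < 19:
        return "peak"
    if hour >= 23 or hour < 8:
        return "night"
    return "day"

def aggregate_for_analytics(readings_by_household: dict[str, list[tuple[int, float]]],
                            min_households: int = 10) -> dict[str, float]:
    """Total consumption per coarse band across many households, refusing to release
    figures derived from too small a group (a simple minimisation safeguard)."""
    if len(readings_by_household) < min_households:
        raise ValueError("aggregate covers too few households to be released")
    totals: dict[str, float] = defaultdict(float)
    for readings in readings_by_household.values():
        for hour, kwh in readings:
            totals[coarse_band(hour)] += kwh
    return dict(totals)
```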

Within this, there is also a danger that usage data can be used to develop detailed consumption profiles of users. Consumers may not want their energy company to build an understanding of their domestic habits, which could reveal, through attribution or inference via data mining techniques, more detailed lifestyle information – such as whether and what hours they work, how they interact with and use home appliances (such as watching television, doing the laundry, entertaining guests, etc.), when they go on holiday, and even religious practices. This is particularly the case if this information is used in conjunction with other data available to energy suppliers or parties they contract with – even basic identifiers such as age, household size, location or other demographic data can allow them to build up user profiles.

Additionally, there is the issue that this information, once collected, could be used unfairly. Customer profiles could be used to enable targeted pricing, including where such decisions are made through automated profiling. In this situation, the GDPR particularly restricts processing, requiring transparency around the criteria used to present a product’s price. Moreover, sharing such data with third parties, who could offer their products or services based on user profiles, including through targeted advertising and direct marketing, is an activity that would clearly be prohibited under the GDPR without express user consent.

Data Security Issues

From a data security perspective, there are certain unique features in the design of the Irish smart meter network. Compared to the UK SEC (Smart Energy Code), which mandates end-to-end functional and technical specifications, data and security models, and various processes for parties interacting with the smart infrastructure, the CRU’s High-Level Design defines a technology-agnostic abstraction for the network to build atop. Within this, HAN (Home Area Network) and WAN (Wide Area Network) technologies are procured by ESB Networks. Additionally, it is the DSO’s (Distribution System Operator) responsibility to make energy data available to the market: Gas Networks Ireland, Eirgrid, as well as others relying on an exchange of market messages.

Moreover, in contrast to the design of smart metering devices in the UK, in the Irish scheme there is very little functionality on the meter. This ‘thin design’ means no complex calculations are performed at the edge; the function of the device is merely to record time-bound consumption and transmit this data to the DSO. Both the electricity meter and the gas meter record consumption every 30 minutes, with the gas meter waking up every half hour. Gas Meters and IHDs communicate with the DSO through a securely established communications link with the Electricity Meter.

The DSO shares information collected through the smart meters with a few parties. To assist settlement and network optimisation, the relevant parties for both utilities, Eirgrid and GNI (Gas Networks Ireland), are provided with this data. Gas or electricity suppliers receive a daily snapshot; it is their responsibility to perform the necessary calculations (Pay As You Go balance, historical cost and consumption, tariff bands and Time of Use rates) and provision this information to consumers through non-AMI channels if necessary. This includes periodic ‘smart bills’, downloadable files online, or phone applications. It can be inferred that none of these data items are produced in real time.

NSMP Security

As a critical infrastructure system, the security of the smart meter network is required to conform to the EU NIS Directive (Network and Information Systems Directive). ESB Networks has also published a set of principles for the network’s security, a sample of which is provided below:

Key Principle – Application

Confidentiality & Privacy
  • Encryption of data in transit and storage
  • Access controls on all infrastructural components
  • Deletion of data which is no longer required
  • Compliance with Data Protection Law

Integrity
  • Comprehensive and timely review of audit logs
  • Detection of unauthorised modification of data

Availability
  • Automated failover to standby backup infrastructure
  • Detection of DoS attacks or other events
  • Automated action to remove the impediment

Authentication & Identification
  • Use of usernames with strong passwords
  • Digital certificates and signing processes
  • Multi-factor authentication

Authorisation
  • Defining specific functions (view, modify, create, delete)

Non-Repudiation
  • Message based auditing and accounting

Auditing & Accounting
  • Recording which user initiated an action
  • Logging successful and unsuccessful attempts

However, one of the potential problems with this set of security principles is a lack of sufficient specificity. For example:

  • For HAN (Home Area Network) Communications, the Core Design states: the HAN ‘will be an open standard wireless communications protocol that enables transfer of data between the smart utility meters and specific securely paired devices in the home’, without further clarification.
  • Meter to Display communications are not standardized in terms of technology.
  • Security requirements regarding pairing between the CAD (Consumer Access Devices) and further consumer devices are not specified in the Core Design.

Furthermore, certain communication links within the network may also be proprietary, become deprecated, or have newly discovered flaws; it is unclear whether there is a process or governance for dealing with such issues as they arise. Mandated data items to be displayed on the IHD (In-Home Display) and exchanged include instantaneous demand and cumulative and historic consumption; such information is considered personal data and may require a standardized protection scheme. Equally, response mechanisms for some scenarios are unstated; it is unclear, for example, what happens if an insecure CAD is joined to the HAN and floods the network with malformed messages. More generally, these issues point to the proprietary nature of the implementation.

Several signing processes and encryption schemes exist, and some standardization may be necessary to establish full protection. For example, reuse of Initialization Vectors, use of insecure symmetric keys, or use of the wrong cipher suite or AES mode (a form of encryption) can cause encrypted data to be exposed. The split between security at the Application Layer and at the lower layers is also unclear in terms of ownership; for example, it is unclear whether authentication – one of the security principles – is end to end or point to point, and what other communication links and devices are used for multi-factor authentication – another of the principles mentioned above. Other unstated technical specifics include storage locations for all personal data and management of cryptographic keys throughout the infrastructure.
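To illustrate the discipline the first point implies, the sketch below uses an authenticated AES-GCM mode with a fresh random nonce for every message, via Python's cryptography package. It is a generic example of avoiding Initialization Vector reuse, not a description of the NSMP's actual key management or cipher choices, and the meter identifiers are invented.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # symmetric key; in practice managed by a key hierarchy
aesgcm = AESGCM(key)

def encrypt_reading(plaintext: bytes, meter_id: bytes) -> tuple[bytes, bytes]:
    # A fresh 96-bit nonce per message: reusing a nonce with the same key breaks GCM's guarantees.
    nonce = os.urandom(12)
    ciphertext = aesgcm.encrypt(nonce, plaintext, meter_id)  # meter_id bound as associated data
    return nonce, ciphertext

def decrypt_reading(nonce: bytes, ciphertext: bytes, meter_id: bytes) -> bytes:
    # GCM authenticates both the ciphertext and the associated data, so tampering is detected.
    return aesgcm.decrypt(nonce, ciphertext, meter_id)

nonce, ct = encrypt_reading(b"2021-04-26T18:00 0.42kWh", b"meter-001")
assert decrypt_reading(nonce, ct, b"meter-001") == b"2021-04-26T18:00 0.42kWh"
```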

The relative level of security enforced on various use cases and functions is also left unstated. For example, the requirement to define “specific functions (view, modify, create, delete)” is not elaborated further from the perspective of access control: which parties along the infrastructure can invoke each function? The level of security enforced in join mechanisms is also unclear in the public NSMP specifications and Core Design documents.

By comparison, join mechanisms for the UK roll-out mandate two levels of security. One arises from the ZigBee architecture (in the form of link and network keys) and the other builds atop it to ensure further security (in the form of SMIP-specific end-user and remote party credentials). Within Ireland, the company implementing the roll-out, ESB Networks, has stated that Application Layer encryption will be used in addition to link-level encryption whenever metering protocol sessions are established between devices; however, more thorough and prescriptive security constructs may need careful consideration in the changing regulatory and data security landscape.

Conclusion

The NSMP will undoubtedly bring significant efficiency benefits in allowing customers, suppliers, network operators and other network players to make better-informed decisions with respect to energy usage, metering functionality and pricing. The Commission for Energy Regulation is currently working on assessing and addressing data protection and security issues, ranging from the possibility of detailed user profiles being built to gaps in the specification of security requirements for communications between the device and the network. However, further challenges may only become apparent on a case-by-case basis as consumers’ usage of the upgraded smart meters develops over time.


[1] Commission for Energy Regulation. (2017, September 21). Update on the Smart Meter Upgrade. Retrieved from: https://www.cru.ie/wp-content/uploads/2016/11/CER17279-NSMP-Info-Note.pdf

[2] Gorey, C. (2019, July 3). ESB reveals first Irish towns to receive smart meters in late 2019. Retrieved from: https://www.siliconrepublic.com/machines/esb-smart-meters-locations-2019

[3] European Data Protection Supervisor. (2012, Jun 8). Opinion of the European Data Protection Supervisor on the Commission Recommendation on preparations for the roll-out of smart metering system. Retrieved from: https://edps.europa.eu/sites/edp/files/publication/12-06-08_smart_metering_en.pdf

[4] Commission for Energy Regulation. (2015, July 29). CER National Smart Metering Programme Information Paper on Data Access & Privacy. Retrieved from: https://www.cru.ie/wp-content/uploads/2015/07/CER15139-Data-Access.pdf

[5] Ibid.

Legitimate Interests: Balancing your business operations with individual rights

Kaveh Cope-Lahooti

Introduction

Even before the GDPR, legitimate interests were one of the most frequent bases that organisations relied upon to justify processing personal data. However, the GDPR has placed increased obligations and scrutiny on this practice. Particularly in industries where business models are increasingly based around the use of personal information, understanding where organisations can legally rely on legitimate interests, and where the rights of individuals will be considered “overriding”, is key to compliance.

Background

Legitimate interests have been used as a flexible basis to justify data-driven operations since the Data Protection Directive in 1995. This rose to the fore in the Google Spain case (Case C-131/12 Google Spain SL, Google Inc. v Agencia Espanola de Proteccion de Datos (AEPD), Mario Costeja Gonzalez, judgment of 13 May 2014), where the Court of Justice of the European Union (CJEU) considered the balance between the legitimate interests of internet search engine providers, and of internet users in receiving and having access to information in search results, on one side, and the data subject’s (here Mr Gonzalez’s) right to privacy on the other. Weighing these competing rights, the court considered both the centrality of the data processing to the commercial activity of a search engine and the sensitivity of the information and the public profile of the data subject.

Changes under the GDPR

Under the GDPR, the legal test for legitimate interests means the onus is now on the controller to demonstrate that the interests or the fundamental rights and freedoms of the data subject do not “override” its interests – where formerly, such processing merely needed not to be “unwarranted” – a much higher hurdle. On top of this new balancing test, the legitimate interests relied upon must now be published in a Privacy Notice, and individuals are able to request specific information on the legitimate interests assessment conducted, which increases the obligations and scrutiny on the controller to ensure a proper risk analysis is conducted.

On the plus side for companies, examples of legitimate interests are now specifically provided under the GDPR, and are being elaborated on in supervisory authority guidance – including, for example, the prevention and detection of fraud, network security and employee monitoring. However, the weighting given to the priority of either the data subject’s or the controller’s interests – and subsequently, the protective safeguards that must be put in place, such as increased notice to data subjects or a reduction in the scope of processing – for certain ‘legitimate’ activities such as employee monitoring will differ vastly across EU member states.

Conclusion

A year on from the GDPR’s entry into force, the circumstances in which legitimate interests can be relied upon are still evolving, and as an area in which case law and best practice are likely to play a huge part, all organisations should keep themselves updated.

Further regulation for AI? European Commission releases ethics guidelines

Kaveh Cope-Lahooti

On 8 April, the European Commission’s High Level Expert Group on AI released its Ethics Guidelines for Trustworthy AI, outlining various ethical and moral principles for ‘trustworthy AI’, including that such systems be lawful (including the data used therein), ethical (in terms of compliance with core principles and values) and robust (including central security requirements).

The introduction of the guidelines follows a public consultation process between December and February. In particular, the guidelines build on top of other frameworks for autonomous and machine learning solutions, such as those launched by the European Commission, the Institute of Electrical and Electronics Engineers (IEEE), the Institute for Ethical AI and Machine Learning, several European data protection authorities and private consultancies such as PwC, Deloitte and Gemserv. As such, they aim to form a framework for ensuring AI systems are designed and deployed in an accountable fashion, which will allow organisations to test and develop approaches to ethical AI, and which can potentially be translated into hard regulation.

Risk Management

The Ethics Guidelines centre on several concepts that aim to introduce the necessary checks and assurance into AI systems. The guidelines focus on the need to address risks associated with AI systems, and the impacts they can have on individuals or communities – as part of what the High Level Expert Group deems the ‘prevention of harm’ principle. Such effects can range from individuals being invasively targeted by advertising, to being denied credit or loans on the basis of inaccurate data, to pedestrians being harmed by self-driving cars.

Organisations can ease this process through an early data mapping exercise, focusing on the identification and sourcing of data, the selection of algorithmic models, the training and testing of those models, their use and deployment, and then ongoing monitoring. Through this, any issues with data accuracy and quality should be tracked and identified from the start, before they create problems, in the form of skewed or biased decisions, later in the operation of systems. Moreover, training and testing should include key checks for the identification of harm to individuals or groups of individuals, and the potential for bias should be considered – including bias inherent in the functions or data sets used.
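One simple bias check that could form part of such testing is comparing favourable-outcome rates across groups on a held-out test set (a demographic parity gap). The sketch below is a minimal illustration under that assumption, not a complete fairness audit; the group labels, records and any alerting threshold are hypothetical.

```python
from collections import defaultdict

def positive_rate_by_group(records: list[dict]) -> dict[str, float]:
    """records: [{"group": ..., "outcome": 0 or 1}, ...] from a held-out test set."""
    counts: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for r in records:
        counts[r["group"]] += 1
        positives[r["group"]] += r["outcome"]
    return {g: positives[g] / counts[g] for g in counts}

def disparity(rates: dict[str, float]) -> float:
    """Largest gap in favourable-outcome rates between any two groups."""
    return max(rates.values()) - min(rates.values())

test_outcomes = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1}, {"group": "A", "outcome": 0},
    {"group": "B", "outcome": 1}, {"group": "B", "outcome": 0}, {"group": "B", "outcome": 0},
]
rates = positive_rate_by_group(test_outcomes)
print(rates, disparity(rates))  # flag the system for review if the gap exceeds an agreed threshold
```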

Departments wishing to deploy or use AI systems should work with their development teams to ensure all of these tests can be performed prior to use or sign-off. However, it should not end there – systems should be subject to rigorous ongoing testing and retesting, including to see whether the system’s outcomes or outputs are meeting its objectives – such as, for example, whether user satisfaction is actually being achieved by a system that provides automated customer service responses.

Agency

Additionally, central to the High Level Expert Group’s approach to the regulation of AI is the concept of ‘human agency’ – the idea that individuals, including data subjects, should be autonomous and be able to determine how organisations control their data and how decisions affect them. The core concept of ‘agency’ builds upon individual rights under the Council of Europe ‘Convention 108’ and the GDPR – including rights to access, correct, amend and restrict the processing of their data, and even not to be subject to automated decisions that will have legal or similarly significant effects on them – unless necessary for a contract, permitted by law, or based on explicit consent. As such, organisations will have to build into AI systems the ability for individuals to intervene in processes, analysis and decisions made by AI systems – including to adjust their preferences, choose which data they disclose, and amend when they are tracked. However, they should also limit the harmful effects of AI – where ‘similarly significant’ effects are interpreted to mean negative impacts on human dignity, effects that target vulnerable groups, etc. In particular, both this and adherence to the concept of human agency can be achieved by keeping a ‘human in the loop’, which refers to the governance or design capability for human intervention during the system’s operation, including to monitor or overrule AI decisions, so that such systems act as a complement, not a replacement, to human decision-making.

Transparency

Another increasingly mooted aspect of artificial intelligence systems is the issue of transparency, which obliges organisations to introduce personnel and mechanisms not just to interpret algorithms but also to respond to potential requests and challenges to their results. Transparency also involves, for example, designing public-facing interfaces that allow customers to see how their data is used, including when collecting consent and/or personal data from individuals. Transparency is also largely connected to the element of ‘explainability’, which, in Convention 108’s most recent iteration, means that data subjects should be entitled to know the “reasoning underlying the processing of data, including the consequences of the reasoning [of Artificial Intelligence systems], which led to any resulting conclusions”. This goes further than the provisions of the GDPR in that, being a more expansive right, it extends to understanding why an actual outcome was reached, including after a decision was made, rather than simply focusing on the basis for decisions made.

Faced with the difficulty of making AI systems explainable, there are two other ways organisations can perform the necessary due diligence. Firstly, documenting all decisions that a system makes, as well as its functions and the selection of data and data sources, can contribute to accountability – even when full transparency cannot be achieved due to the intrinsic nature of the algorithm or machine-learning process. Secondly, publicly committing to a Code of Ethics covering how data will be sourced and used, and the values and aims of the systems, can also help with public engagement and reception.
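One practical form that first kind of documentation can take is an append-only decision log capturing the model version, inputs, output and data sources for each decision. The sketch below is illustrative; the file format, field names and example values are assumptions rather than anything mandated by the guidelines.

```python
import json
from datetime import datetime, timezone

def log_decision(log_file: str, model_version: str, inputs: dict,
                 output, data_sources: list[str]) -> None:
    """Append one decision record so the basis for an outcome can be revisited later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,              # the features actually supplied to the system
        "output": output,              # the decision or score returned
        "data_sources": data_sources,  # where those features came from
    }
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_decision("decisions.jsonl", "credit-scorer-1.3",
             {"income": 42000, "tenure_months": 18}, "refer_to_human_review",
             ["application form", "internal account history"])
```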

Participation

Lastly, the guidelines discuss the participation of different stakeholders in the design of AI systems – including both internal and external governance frameworks. It should be remembered that individual departments will often be the information owners responsible for the operational decisions governing AI systems, and so should liaise with developers, system providers and other relevant third parties to ensure their requirements are met. At a more strategic level, organisations should involve executive sponsors or management in approving AI systems that are likely to have an impact on the workforce or involve significant disruptive effects on operations. Moreover, where AI systems are likely to raise risks – legal, social, reputational or financial – those sponsors will need to approve and consider ethics and goal-orientated trade-offs for systems during their development.

To support this, organisations can appoint ethics panels, committees or boards to lead ethics dialogue within their organisations, seeking approaches that are both aspirational and value-based. Within these groups, for example, the High Level Expert Group emphasises that designers and technologists should engage in cross-disciplinary exchanges with ethicists, lawyers and sociologists to understand the impact of AI solutions. However, whichever structure is established, the group or panel has to have ‘teeth’ to be able to accomplish effective oversight and management. This is a particularly contemporary issue, given the recent failure of Google’s ethics board, which shut down following a backlash over both its lack of effectiveness and the composition and background of some of its members. As such, the group should be consulted during deployment of the system, particularly over its goals and potential effects, and regularly informed of the outcomes of monitoring of the solution deployed throughout the lifecycle of the system.

Conclusion

The High Level Expert Group’s guidelines bring a very detailed discussion of evolving regulatory norms governing artificial intelligence systems. Focusing specifically on the prevention of harm and the protection of individual rights, the framework aims to incorporate checks into the deployment of systems to ensure they are more ethically grounded and focused on individuals. Organisations wishing to remain accountable should take advantage of the reputational and compliance benefits of overtly demonstrating that they use data in accountable and fair ways, and that they are committed to delivering operations and services in line with the principles espoused by the guidelines – as the EU is likely to incorporate them into hard law regulating AI systems in the near future.