For a long time, the big tech companies (principally Facebook, Amazon, Apple and Google) have dominated the online advertising sphere, whether by selling access to their platforms or by delivering targeted advertisements on them. This business model has, however, caused privacy concerns, not least for the individuals whose behaviour is tracked and analysed – behaviour which serves as the basis for ‘micro-targeting’ them with marketing, including the political advertisements made infamous in the Cambridge Analytica case.
However, the GDPR, and other privacy legislation following suit in the US, is beginning to threaten their business models. With respect to online advertising, the GDPR introduces a new standard of consent into the ePrivacy Directive (often referred to as ‘the cookie directive’): namely, a “clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement” (GDPR, Recital 32), which now governs the consent referred to in Article 2(f) of the ePrivacy Directive. This standard has been interpreted in guidance and case law, such as the Court of Justice of the European Union’s 2019 Planet49 decision, to require active consent to a variety of online tracking technologies, including cookies, mobile advertising IDs and location-based monitoring. As a result, since the GDPR took effect, consent must be collected for each type of cookie through a banner or consent management tool, with information about the purposes of the cookies, the third parties involved and the cookies’ duration provided to the user before they consent.
This new standard affects the entire online advertising ecosystem, which relies on user consent for the deployment of cookies and other tracking technologies regulated by the ePrivacy Directive, and it has led to a decrease in the engagement and success rates of tracking individuals online. Online advertising has been so significantly affected because of the cross-site, ‘third-party’ nature of many advertising cookies, which users encounter across multiple websites and which therefore require a chain of consent mechanisms (or ‘signals’) to be collected and exchanged between the relevant parties. On top of this, parties engaging in online advertising remain liable to ensure that consent was validly collected further up the chain. If regulators were to enforce this strict interpretation regularly, it would most probably make collecting cookie-based consent for advertising unworkable.
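To make the mechanics concrete, below is a minimal sketch (illustrative only – the field names are assumptions, not the IAB Transparency and Consent Framework or any particular consent management product) of the kind of per-purpose consent record a banner might capture and pass down the chain:

```python
from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ConsentRecord:
    """One user's choice for one cookie purpose, as a consent banner might record it.

    Field names are illustrative assumptions, not any standard consent schema."""
    purpose: str                  # e.g. "analytics", "advertising"
    third_parties: list[str]      # vendors with whom the cookie data is shared
    cookie_lifetime_days: int     # disclosed duration of the cookie
    given: bool = False           # default is no consent (no pre-ticked boxes)
    timestamp: datetime | None = None

    def record_choice(self, accepted: bool) -> None:
        # Only an explicit, affirmative act by the user counts as consent.
        self.given = accepted
        self.timestamp = datetime.utcnow()

    def is_valid(self, max_age: timedelta = timedelta(days=365)) -> bool:
        # Parties further down the chain can check the signal exists and is not stale.
        return self.given and self.timestamp is not None and \
            datetime.utcnow() - self.timestamp < max_age
```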
Response of the big tech players
However, rather than curbing the business models of large technology corporations, the GDPR has prompted them to develop their own proprietary methods of collecting user data. This week, Apple is releasing an iOS update across its devices which will require user consent before application data linked to users’ iOS device identifiers (‘IDFAs’) is shared with third-party app developers. Apple itself will still be able to keep this data by default on iOS devices, giving the company an edge over its rivals. Apple’s move allows it to claim to be defending user privacy, while depriving app developers (including Facebook, Twitter and many smaller companies) of the user data they use to improve and market their software products, thereby strengthening Apple’s hand. It will also force many app providers to pursue an alternative (non-advertising-based) business model and charge customers for the use of their products, which will in turn drive more revenue to Apple through its App Store commissions.
To fill the gap left by IDFA-based advertising, Apple has offered app developers its SKAdNetwork, which aims to provide statistical information on impressions and conversions without revealing any user- or device-level information. Whilst this aggregated information can be useful for improving products, it lacks the user-specific behavioural information needed to create segmented profiles and serve individuals with targeted advertising.
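To illustrate the difference (a simplified sketch of the general idea of aggregated attribution, not Apple's actual SKAdNetwork postback format), campaign-level conversion counts with a small-count threshold convey advertising performance without the per-user joins that behavioural profiling relies on:

```python
from collections import Counter

MIN_COUNT = 5  # assumed privacy threshold: suppress very small buckets

def user_level_report(events):
    """events: (user_id, campaign_id, converted) tuples - the old, identifier-based view."""
    return {user: (campaign, converted) for user, campaign, converted in events}

def aggregated_report(events):
    """Campaign-level conversion counts only; no user identifiers survive."""
    counts = Counter(campaign for _, campaign, converted in events if converted)
    return {campaign: n for campaign, n in counts.items() if n >= MIN_COUNT}
```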
Google, itself previously subject to large fines for failing to collect adequate user consent and provide transparency when monitoring users online [citation needed], has decided to phase out third-party cookies in its Chrome browser by 2022. However, cognisant of the fact that advertising during the pandemic drove its parent company Alphabet to its highest ever quarterly profit [citation needed], Google is not done yet. Mirroring Apple’s approach, it has recently announced testing of its own privacy-preserving mechanism for interest-based ad selection – news that is shaking up the adtech ecosystem – and one that Google itself will control. Google calls this its Federated Learning of Cohorts (FLoC), which will operate within the Chrome environment and forms part of Google’s Privacy Sandbox.
The FLoC project aims to shift online behavioural advertising’s focus from individual users to ‘interest’- or ‘context’-based targeting. Advertising will be aimed at ‘cohorts’ – groups of users with similar interests – rather than allowing individual identification and profiling. Whilst the FLoC mechanism will still capture data on individual users, other parties in the advertising ecosystem will only see information about a user at the cohort level. Additionally, website users will be able to opt out of forming part of a FLoC cohort within Chrome.
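A toy sketch of how cohort assignment can work is below (purely illustrative: Google's origin trial derived cohort IDs from a SimHash-style locality-sensitive hash of browsing history, and the parameters here are invented for brevity):

```python
import hashlib

COHORT_BITS = 8  # tiny output space so that many users share each cohort ID

def _domain_hash(domain: str) -> int:
    # Stable per-domain hash, of which only the low COHORT_BITS bits are used.
    return int.from_bytes(hashlib.sha256(domain.encode()).digest()[:4], "big")

def cohort_id(visited_domains: set) -> int:
    """SimHash-style hash of a browsing history: similar histories tend to collide,
    and only the cohort ID (never the history itself) would be exposed to sites."""
    counts = [0] * COHORT_BITS
    for domain in visited_domains:
        h = _domain_hash(domain)
        for bit in range(COHORT_BITS):
            counts[bit] += 1 if (h >> bit) & 1 else -1
    return sum(1 << bit for bit in range(COHORT_BITS) if counts[bit] > 0)

print(cohort_id({"news.example", "cycling.example", "recipes.example"}))
print(cohort_id({"news.example", "cycling.example", "weather.example"}))  # likely similar
```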
Conclusion
Whilst Google’s FLoC and Apple’s SKAdNetwork may appear to be more privacy-friendly alternatives to the cookie- or mobile-ID-based third-party tracking technologies we have become used to, they will still involve tracking information on users within certain environments (e.g. Chrome or iOS apps) – it is simply that this information won’t be shared with the rest of the advertising ecosystem. Additionally, such technologies run the risk of concentrating the data that individual players, such as Google and Apple, hold about individuals. At the same time, they entrench the dominant position in the software and advertising marketplace that these tech companies have, at the expense of their rivals. This is likely to be only the beginning of the big tech companies’ battle over privacy.
Algorithms now dictate every aspect of our daily lives: from marking exam papers and influencing what we buy, to determining the news that pops up in our apps and social media feeds. But often no one knows how they do it, so no one is accountable.
We are at a critical juncture in society as machines seamlessly take the place of humans across industry, government and civil society. The digital and technological revolution is unstoppable, and, as we have seen with the machine-learning algorithms used to mark the exam papers of this year’s graduating students, it comes with many ethical dilemmas and must be subject to scrutiny.
Most recently, the Office of Qualifications and Examinations Regulation (Ofqual) was forced to backtrack after it used an algorithm to predict A-level and GCSE students’ results, which led to grades being generally lowered compared with previous years and teacher-predicted grades. The department has since decided to use students’ predicted grades, as opposed to the algorithm, as the method of determining results.
Budgetary challenges and the increasingly digitised forms of commerce and business operations brought about by the coronavirus pandemic have required both the public and private sectors to automate and innovate quickly. March and April saw what Microsoft’s CEO Satya Nadella described as “two years’ worth of digital transformation in two months”. However, the speed of digitisation has in many cases outpaced organisations’ ability to trial, test and explain the logic and effects of such systems.
The Ofqual algorithm aimed to meet these goals using a machine-learning system that determined what students’ grades would be based on previous results for that school or area, rather than teachers’ predicted grades (which are often used for university offers but tend to be overinflated). Indeed, the use of AI and machine learning to make informed decisions with speed, accuracy and efficiency is seen as a key competitive advantage, whether between countries or companies. However, using a technology to make decisions that affect individuals’ lives so significantly – like determining their exam results – while lacking the transparency needed to explain the outcomes to students and parents, was always going to cause reputational challenges.
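A highly simplified sketch of this general approach is below (an illustration only; Ofqual's actual model also standardised grades nationally, used prior attainment and treated small cohorts differently):

```python
def assign_grades(ranked_students, historical_distribution):
    """Fit this year's cohort to the school's historical grade distribution.

    ranked_students: student names in rank order, strongest first (e.g. teacher ranking).
    historical_distribution: mapping of grade -> proportion awarded in past years."""
    n = len(ranked_students)
    grades = []
    for grade, proportion in historical_distribution.items():
        grades.extend([grade] * round(proportion * n))
    lowest = list(historical_distribution)[-1]
    grades = (grades + [lowest] * n)[:n]  # pad or trim to exactly n grades
    return dict(zip(ranked_students, grades))

print(assign_grades(
    ["Aoife", "Ben", "Cara", "Dan", "Ena", "Fred"],
    {"A": 0.2, "B": 0.3, "C": 0.5},  # assumed past proportions for this school
))
```

In a sketch like this, an individual student's own work plays no direct role beyond their rank position, which is precisely why transparency about the inputs and their weighting mattered so much.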
In essence, the core arguments against the scoring algorithm were not so much about inequities in the scoring itself, but about the lack of clarity as to how grades were awarded. For example, students were confused as to what extent their own academic history had been considered in their algorithm-generated grade. They were also unsure how they could challenge the outcome or ask for a re-mark. Schools, meanwhile, were left wondering how adjustments based on previous students’ attainment at a UK-wide level could drag their own results down.
As such, the key challenge for organisations is to drive efficiencies using automated technologies in a more effective manner; one which has the buy-in of participants. Developers of AI systems will need to be able to provide a transparent set of scoring criteria to the public. This should cover the scope of the data used in the algorithm, and details on the factors or variables that influenced its decisions. For example, in relation to students’ grades, this extra information could include providing details on the weighting given to a particular student’s previous results and performance, teacher predictions, and the average student scores at a school and national level.
On top of this, AI systems that automate complex human decision-making – arguably what exam scoring and grade prediction involve – are required by the UK’s Information Commissioner’s Office (ICO) and other supervisory authorities to be subject to further scrutiny. For example, particularly when such systems are used in the public sphere, where accountability is much more strongly demanded, organisations and institutions need to be able to demonstrate that the system has been thoroughly interrogated and tested, or that they have consulted stakeholders such as teachers’ bodies through an independent review of the costs and benefits of such systems.
In the last week, the ICO has released guidance on AI and data protection, which will help organisations prepare for the roll-out of its AI Auditing Framework, the enforcement of which has been delayed due to coronavirus. Organisations keen on digitising using AI should use this financial year as an opportunity to continue automating their operations, whilst also preparing to meet the public and regulatory scrutiny that will undoubtedly come from deploying complex technologies.
In a summer dominated by the Schrems II decision invalidating the EU-US Privacy Shield, the news story that continues to dominate business headlines is Microsoft’s pursuit of TikTok’s US, Canada, Australia and New Zealand operations. While privacy may feel like a subject that has been done to death over the last few years, this is a little different. The interest generated in the potential purchase of the app, and the US President’s high-profile intervention in the negotiations, further demonstrate the importance to governments of keeping their citizens’ data firmly within their own borders.
TikTok, a popular mobile app used for video sharing, also collects and stores basic user details, contact information, location information, IP addresses, behavioural habits and viewing history, and can even monitor users’ keystrokes. It is this data collection that concerns the US government, which argues that it wants to stop the data being collected by a foreign private entity (TikTok/ByteDance), domiciled abroad, that may be able (or required by Chinese law) to pass it to the Chinese authorities, thereby allowing them to ‘spy’ on American citizens.
‘Data sovereignty’ is now a core consideration in assessing opportunities in global markets, and increasingly affects free trade and geopolitics. It refers to the principle that information flowing through, or having a connection to, a certain state is governed solely by that state’s laws and governance structures. Linked to this is the concept of ‘data localisation’, under which jurisdictions such as Russia, China and, more recently, India have passed laws requiring organisations (typically those such as Microsoft, Facebook and others) to retain their citizens’ data on servers located within those jurisdictions. The GDPR can arguably be considered a ‘data sovereignty’ rule in that it extends the law’s application to any company (mostly online service providers) targeting EU citizens, regardless of where it is established, and sets stringent ‘equivalence’ obligations before data can be sent, or in some cases even accessed, overseas.
However, in this case, the US administration has gone further, effectively forcing TikTok to divest its operations in several Western states by threatening to ban it completely in the US for allegedly having the ability to share users’ data with the Chinese authorities, in a manner similar to the measures taken against Huawei. This policy therefore affects not just the storage of, and laws applicable to, data collected by TikTok, but its entire operations within those jurisdictions. Most notably, the US, Canada, Australia and New Zealand make up four of the ‘Five Eyes’ surveillance data-sharing nations, so at a time when geopolitical tensions are strained, these states are acting collectively to keep citizens’ data within their borders.
For large multinational e-commerce, technology and platform operators, the variety of data sovereignty measures that states can enact therefore presents a significant hazard when targeting overseas markets, and will require at the least localised data centres, and at the most complex organisational structures or local entities, to navigate. The free movement of data can no longer be taken for granted.
Smart meters, which involve energy suppliers deploying devices that allow both customers and providers to monitor consumption and usage trends, are a core component of the move towards ‘Smart Homes’ and ‘Smart Grids’ as part of the growth of the Internet of Things.
In Ireland, smart metering is a key contemporary topic, with the National Smart Metering Programme (NSMP) commencing deployment into domestic residences this year. Recently, ESB Networks announced that, starting in September 2019, it would roll out 20,000 meters in selected locations in Ireland, with a further 250,000 in place by the end of 2020 and a further 500,000 by 2024 [1][2]. The roll-out will be in phases: Phase 1 will provide smart meters with basic credit services and half-hourly interval data, Phase 2 will add further meters and enable smart pay-as-you-go (PAYG) and services such as switching, and Phase 3 will include provisioning real-time consumption and usage data to consumers via their home device.
Smart meters facilitate increased data collection – in particular, it will now be possible for both the user and the energy company to monitor usage data at more regular intervals, down to the hour, the quarter of an hour, and beyond – a significant increase from the current estimated readings every two months and physical reads every quarter. Among other benefits, smart meters will enable electricity to be priced in accordance with demand – so that energy is most expensive during peak times – which in theory would reduce spikes in usage and result in a lower need for peak capacity, improving the efficiency and maintenance of the energy supply [3].
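As a simple illustration (a sketch with invented tariff bands and rates, not actual Irish Time of Use tariffs), demand-based pricing can be computed directly from half-hourly interval readings:

```python
from datetime import datetime

TARIFF = {"night": 0.12, "day": 0.20, "peak": 0.32}  # assumed euro/kWh rates

def band(ts: datetime) -> str:
    """Map a half-hourly reading timestamp to a tariff band (assumed band hours)."""
    if ts.hour >= 23 or ts.hour < 8:
        return "night"
    if 17 <= ts.hour < 19:
        return "peak"
    return "day"

def bill(readings) -> float:
    """readings: iterable of (timestamp, kWh consumed in that half hour)."""
    return sum(kwh * TARIFF[band(ts)] for ts, kwh in readings)

example = [
    (datetime(2020, 1, 6, 7, 30), 0.4),   # night
    (datetime(2020, 1, 6, 12, 0), 0.6),   # day
    (datetime(2020, 1, 6, 17, 30), 1.1),  # peak
]
print(round(bill(example), 2))  # 0.4*0.12 + 0.6*0.20 + 1.1*0.32 = 0.52
```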
Using smart meters will also allow customers to keep track of costs and, by combining information on a building or location basis, allow operators to plan the supply of electricity more effectively. However, the collection of information, and the sharing and transfer between devices and networks of this data, raises both privacy and security concerns.
Data Protection Issues
As with all mass data collection, smart meters raise concerns around data minimisation and privacy intrusion. The NSMP is required under Irish law (Statutory Instrument 426 of 2014) to meet the privacy standards applicable under the General Data Protection Regulation (Regulation 2016/679) (GDPR). Firstly, there is the issue of whether the data collected is personal data, and therefore whether its collection is lawful. In July, the Spanish Supreme Court ruled that information collected on energy usage, together with the corresponding meter serial number to which the information is attributed, constituted personal data. This approach has been mirrored by the Information Commissioner’s Office (ICO) in the UK, which considers consumption information collected by meters, when linked with meter serial numbers/MPANs, to be personal information, and the Irish Data Protection Commission has taken a similar line.[4] The application of the GDPR to smart metering data is also foreseen by Article 23 of the Electricity Directive (Directive 2019/944).
As such, information collected through smart meters is subject to the provisions of the GDPR in full – and therefore all parties having access to the relevant data, including energy suppliers, smart metering system operators and network operators (all of which will act as data controllers), need to ensure compliance with the core principles of the GDPR. Within this, data must be collected on a suitable legal basis, used only for specific purposes and retention periods, not collected in excess of what is needed, and kept secure – and the usage should be made clear to customers, as provided by Article 20 of the Electricity Directive.
Although consumption data can arguably be used for monitoring usage at a statistical level, calculating bills and providing feedback to customers as part of the supplier’s performance of the energy contract (and therefore without requiring consent), the use of the information for other purposes, such as improving grid efficiency, identifying energy theft or debt management, will most probably require a Privacy Impact Assessment or legitimate interests assessment before being undertaken. In particular, organisations (such as energy suppliers) should only use household-level data where necessary; for data sharing or data analytics, the use of aggregated data relating to multiple households or regions (or the sampling of certain households) should be preferred.
Moreover, the collection of excessive information, and data sharing with third parties, should at the very least be notified to customers and potentially risk-assessed against the reasonable expectations of data subjects, including limits on the amount of personal data collected by default, as part of the concept of ‘Privacy by Design’. As the frequency of smart meter readings will be the main component of data minimisation, this could include a limitation on processing data more granular than day/night/peak bands for Time of Use billing and Energy Use Statements, or collection by suppliers on a monthly basis, as suggested by the CER [5]. Customers should also have a general opt-out from sharing consumption information with the energy supplier and third parties.
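To illustrate what preferring aggregated data might look like in practice (a sketch with an assumed suppression threshold, not a prescribed CER or supplier scheme), household readings can be rolled up to regional totals, with small groups withheld to reduce re-identification risk:

```python
from collections import defaultdict

MIN_HOUSEHOLDS = 10  # assumed threshold below which a region is suppressed

def regional_aggregates(household_readings):
    """household_readings: iterable of (region, household_id, kWh for the period).

    Returns per-region household counts and totals, omitting regions with too
    few households for an individual home's usage to be safely hidden."""
    totals = defaultdict(float)
    households = defaultdict(set)
    for region, household_id, kwh in household_readings:
        totals[region] += kwh
        households[region].add(household_id)
    return {
        region: {"households": len(households[region]), "total_kwh": round(total, 2)}
        for region, total in totals.items()
        if len(households[region]) >= MIN_HOUSEHOLDS
    }
```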
Within this, there is also a danger that usage data can be used to develop detailed consumption profiles of users. Consumers may not want their energy company to build an understanding of their domestic habits, which could reveal, through attribution or inference via data mining techniques, more detailed lifestyle information – such as whether and what hours they work, how they interact with and use home appliances (watching television, doing the laundry, entertaining guests, etc.), when they go on holiday, and even their religious practices. This is particularly the case if this information is used in combination with other data available to energy suppliers or the parties they contract with – even basic identifiers such as age, household size, location or other demographic data can allow them to build up user profiles.
Additionally, there is the issue that this information, once collected, could be used unfairly. Customer profiles could be used to enable targeted pricing, and where such decisions are made through automated profiling, the GDPR particularly restricts the processing, requiring transparency around the criteria used to present a product’s price. Moreover, sharing such data with third parties, who could offer their own products or services based on user profiles, including through targeted advertising and direct marketing, is an activity that would clearly be prohibited under the GDPR without express user consent.
Data Security Issues
From a data security perspective, there are certain unique features in the design of the Irish smart meter network. Compared to the UK SEC (Smart Energy Code), which mandates end-to-end functional and technical specifications, data and security models, and various processes for parties interacting with the smart infrastructure, the CRU’s High-Level Design defines a technology-agnostic abstraction for the network to build upon. Within this, HAN (Home Area Network) and WAN (Wide Area Network) technologies are procured by ESB Networks. Additionally, it is the responsibility of the DSO (Distribution System Operator) to make energy data available to the market: Gas Networks Ireland, EirGrid and others rely on an exchange of market messages.
Moreover, in contrast to the design of smart metering devices in the UK, in the Irish scheme there is very little functionality on the meter itself. This ‘thin design’ means no complex calculations are performed at the edge; the function of the device is merely to record time-bound consumption and transmit this data to the DSO. Both the electricity meter and the gas meter record consumption every 30 minutes, with the gas meter waking up every half hour. Gas meters and IHDs (In-Home Displays) communicate with the DSO through a securely established communications link with the electricity meter.
The DSO shares information collected through the smart meters with a small number of parties. To assist settlement and network optimisation, the relevant parties for both utilities – EirGrid and Gas Networks Ireland (GNI) – are provided with this data. Gas and electricity suppliers receive a daily snapshot; it is their responsibility to perform the necessary calculations (Pay As You Go balance, historical cost and consumption, tariff bands and Time of Use rates) and provision this information to consumers through non-AMI (Advanced Metering Infrastructure) channels where necessary. This includes periodic ‘smart bills’, downloadable files online, or phone applications. It can be inferred that none of these data items are produced in real time.
NSMP Security
As a critical infrastructure system, the smart meter network is required to conform to the EU NIS Directive (the Directive on the security of network and information systems). ESB Networks has also published a set of principles for the network’s security, a sample of which is provided below:
Key Principle – Application
Confidentiality & Privacy; Integrity: encryption of data in transit and storage; access controls on all infrastructural components; deletion of data which is no longer required; compliance with data protection law; comprehensive and timely review of audit logs; detection of unauthorised modification of data.
Availability: automated failover to standby backup infrastructure; detection of DoS attacks or other events; automated action to remove the impediment.
Authentication & Identification; Authorisation; Non-Repudiation: use of usernames with strong passwords; digital certificates and signing processes; multi-factor authentication; defining specific functions (view, modify, create, delete); message-based auditing and accounting.
Auditing & Accounting: recording which user initiated an action; logging successful and unsuccessful attempts.
However, one of the potential problems with this set of security principles is a lack of sufficient specificity. For example:
For HAN (Home Area Network) Communications, the Core Design states: the HAN ‘will be an open standard wireless communications protocol that enables transfer of data between the smart utility meters and specific securely paired devices in the home’, without further clarification.
Meter-to-display communications are not standardised in terms of technology.
Security requirements regarding pairing between the CAD (Consumer Access Devices) and further consumer devices are not specified in the Core Design.
Furthermore, certain communication links within the network may be proprietary, become deprecated, or have newly discovered flaws; it is unclear whether there is a process or governance structure for dealing with such issues as they arise. Mandated data items to be displayed on the IHD and exchanged include instantaneous demand and cumulative and historic consumption; such information is considered personal data and may require a standardised protection scheme. Equally, response mechanisms for some scenarios are unstated; it is unclear, for example, what happens if an insecure CAD is joined to the HAN and floods the network with malformed messages. More generally, these issues point to the proprietary nature of the implementation.
Several signing processes and encryption schemes exist, and some standardisation may be necessary to establish full protection. For example, reuse of initialisation vectors (IVs), use of insecure symmetric keys, or use of the wrong cipher suite or AES mode of operation can cause encrypted data to be exposed. The split between security at the application layer and security at the lower layers is also unclear in terms of ownership; for example, it is unclear whether authentication – one of the security principles – is end-to-end or point-to-point, and which other communication links and devices are used for multi-factor authentication – another of the aforementioned principles. Other unstated technical specifics include the storage locations for all personal data and the management of cryptographic keys throughout the infrastructure.
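As a brief illustration of why such details matter (a generic sketch using a standard Python cryptography library, not the NSMP's actual scheme, keys or message format), authenticated encryption such as AES-GCM is only safe if every message is encrypted under a fresh, never-repeated nonce/IV for a given key:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

def encrypt_reading(plaintext: bytes, associated_data: bytes) -> bytes:
    # Fresh 96-bit nonce per message: reusing a nonce under the same key lets an
    # attacker relate plaintexts and undermines the integrity guarantee.
    nonce = os.urandom(12)
    return nonce + aesgcm.encrypt(nonce, plaintext, associated_data)

def decrypt_reading(blob: bytes, associated_data: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, associated_data)

blob = encrypt_reading(b"2020-01-06T17:30 1.1kWh", b"meter-serial-0001")
print(decrypt_reading(blob, b"meter-serial-0001"))
```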
The relative level of security enforced for various use cases and functions is also left unstated. For example, the requirement to define “specific functions (view, modify, create, delete)” is not elaborated further from an access control perspective: which parties along the infrastructure can invoke each function? The level of security enforced in join mechanisms is also unclear in the public NSMP specifications and Core Design documents.
By comparison, join mechanisms for the UK roll-out mandate two levels of security: one arises from the ZigBee architecture (in the form of link and network keys) and the other builds atop it to provide further security (in the form of SMIP-specific end-user and remote party credentials). In Ireland, the company implementing the roll-out, ESB Networks, has stated that application-layer encryption will be used in addition to link-level encryption whenever metering protocol sessions are established between devices; however, more thorough and prescriptive security constructs may need careful consideration in the changing regulatory and data security landscape.
Conclusion
The NSMP will undoubtedly bring significant efficiency benefits in allowing customers, suppliers, network operators and other network players to make better-informed decisions with respect to energy usage, metering functionality and pricing. The Commission for Energy Regulation is currently working on assessing and addressing data protection and security issues, ranging from the possibility of detailed user profiles being built to gaps in the specification of security requirements for communications between devices and the network. However, further challenges may only become apparent on a case-by-case basis as consumers’ use of the upgraded smart meters develops over time.
The Information Commissioner’s Office (ICO) announced, on 9 July, its intention to fine Marriott International, a hotel chain, over £99m, following an investigation into a major data breach reported last year. The announcement came in the same week that the ICO issued a similar notice of intention to fine British Airways (BA) over £183m in relation to a data breach; these are the ICO’s first investigations under the GDPR to produce proposed monetary penalties.
Large fines and ‘delayed reactions’
In particular, the high value of the fines (in comparison to those resulting from the ICO’s investigations over the previous few months) reflects the fact that data breaches that occurred after the GDPR entered into force (in May 2018) are now being discovered or reported, meaning that GDPR-level fines can be applied by regulators. In Marriott’s case, this involved the November 2018 revelation of a hack into guest records collected by Starwood, a hotel group Marriott acquired in 2016, which exposed 339 million guest records. In BA’s case, the breach related to a security vulnerability on BA’s booking website through which 500,000 customers’ details were exfiltrated by cyber attackers between June and September 2018. In both cases, large datasets including customers’ personal information and card details were affected, which were clearly targeted by financially motivated criminals.
These breaches bring into sharp relief the fact that large stores of data, particularly those involving financial information, are likely to be particular targets for hackers. Both companies were slow to identify or detect the breaches – with BA, for example, the loss of information through bookings was discovered two weeks after it occurred. In Marriott’s case, the breach was only uncovered in late 2018, despite Starwood having been acquired in 2016 and the leak of data having begun as early as 2014.
In addition to underlining the need to maintain internal data security, Marriott’s data breach is a stark reminder to organisations that examining data protection governance should remain a core part of due diligence when acquiring a new organisation. In particular, acquirers should check for evidence of data mapping activities (including insecure locations where data could be stored or collected from), records of previous security incidents or data breaches and the action taken, and any third parties to whom data is provided, as sources of potential reputational and compliance risk to the buyer. The absence of any such governance or self-assessment should represent an immediate red flag.
Conclusion
As both cases were high-profile data breaches that the ICO could not possibly ignore (both Marriott and BA took steps to notify their customers and investors of the loss of information), they do not tell us much about its upcoming enforcement strategy. However, they do signal that the ICO will not be reticent to apply penalties up to the 2% of global turnover permitted under the GDPR for security lapses, and 4% for wider governance failures. Nevertheless, both organisations still have the right to make representations, which may reduce the fines, and the ICO’s final monetary penalty notices should reveal how the penalties were calculated.