For many years, the big tech companies (principally Facebook, Amazon, Apple and Google) have dominated the online advertising sphere, whether by selling access to their platforms or by carrying out targeted advertising on them. This business model has, however, caused privacy concerns, not least for the individuals whose behaviour is tracked and profiled – behaviour which served as the basis for ‘micro-targeting’ with marketing, including the political advertisements made infamous in the Cambridge Analytica case.
However, the GDPR, and other privacy legislation following suit in the US, is beginning to threaten their business models. With respect to online advertising, the GDPR introduces a new standard of consent into the ePrivacy Directive (often referred to as ‘the cookie directive’), namely a “clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement” (the definition in Article 4(11) GDPR, applied to the ePrivacy Directive via its Article 2(f)). This standard has been interpreted in guidance and caselaw, such as the Court of Justice of the European Union’s 2019 Planet49 decision, to require active consent to a variety of online tracking technologies, including cookies, mobile advertising IDs and location-based monitoring. As a result, since the GDPR, consent is specifically required for each type of cookie through a banner or consent management tool, with information about the purposes of the cookies, the third parties involved and the cookies’ duration to be provided to the user prior to their consent.
This new standard affects the entire online advertising ecosystem, which relies on user consent for the deployment of cookies and other tracking technologies regulated by the ePrivacy Directive, and it has led to a decrease in engagement and in the success rates of tracking individuals online. Online advertising has been so singularly affected because of the cross-platform, ‘third party’ nature of many advertising cookies: users interact with them across multiple websites, requiring a chain of consent mechanisms (or ‘signals’) to be collected and exchanged between the relevant parties. On top of this, parties engaging in online advertising remain liable to ensure that consent was validly collected further up the chain. If regulators were to enforce this strict interpretation regularly, it would most probably make collecting cookie-based consent for advertising unworkable.
Response of the big tech players
However, rather than stemming the business models of large technology corporations, the GDPR has caused them to develop their own proprietary methods of collecting user data. This week, Apple is releasing an iOS update across its devices which will require user consent before application data linked to users’ iOS device identifiers (‘IDFAs’) is shared with third-party app developers. Apple itself will still be able to collect this data by default on iOS devices, giving the company an edge over its rivals. The move allows Apple, on the one hand, to claim to be defending user privacy, whilst on the other hand depriving app developers (including Facebook, Twitter and many smaller companies) of the user data with which they improve and market their software products. It will also force many app providers to pursue an alternative (non-advertising-based) business model and charge customers for the use of their products, which will in turn drive more revenue to Apple.
To fill the lacuna that IDFA-based advertising leaves, Apple has offered app developers its SKAdNetwork, which aims to provide statistical information on impressions and conversions without revealing any user- or device-level information. Whilst this aggregated information can be useful for improving products, it lacks the user-specific behavioural information needed to create segmented profiles and serve individuals with targeted advertising.
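The aggregate-only idea can be illustrated with a toy sketch (the campaign IDs, threshold and field names below are invented for illustration and are not the actual SKAdNetwork API): install and conversion counts are reported per campaign, with no user or device identifier present at any point, and small campaigns suppressed so no individual stands out.

```python
from collections import Counter

# Hypothetical postbacks: (campaign_id, converted) -- note that no
# user or device identifier appears anywhere in the record.
postbacks = [
    (101, True), (101, False), (101, True),
    (202, True), (202, False),
]

MIN_COUNT = 2  # suppress campaigns with too few installs to report safely


def campaign_stats(rows, min_count=MIN_COUNT):
    """Return {campaign: (installs, conversions)}, dropping small groups."""
    installs, conversions = Counter(), Counter()
    for campaign, converted in rows:
        installs[campaign] += 1
        conversions[campaign] += int(converted)
    return {c: (installs[c], conversions[c])
            for c in installs if installs[c] >= min_count}


print(campaign_stats(postbacks))  # aggregate figures only
```

This is enough for campaign-level measurement, but, as the paragraph above notes, nothing here supports building a behavioural profile of any one user.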
Google, itself previously subject to large fines for failing to collect adequate user consent and provide transparency when monitoring users online [citation needed], has decided to phase out third-party cookies before 2022. However, cognisant of the fact that advertising during the pandemic has driven its parent company Alphabet to its highest ever quarterly profit [citation needed], Google is not done yet. Mirroring Apple’s approach, it has recently announced testing of its own privacy-preserving mechanism for interest-based ad selection – news that is shaking up the adtech ecosystem – and a mechanism that Google itself will control. Google calls this its Federated Learning of Cohorts (FLoC), which will operate within the Chrome environment and forms part of Google’s Privacy Sandbox.
The FLoC project aims to move online behavioural advertising’s focus from individual users to ‘interest’- or ‘context’-based targeting. Advertising will be aimed at ‘cohorts’ – groups of users with similar interests – rather than allowing individual identification and profiling. Whilst the FLoC mechanism will still capture data on individual users, members of the advertising ecosystem will only see information about a user at cohort level. Additionally, website users will be able to opt out of participating in or forming part of a FLoC cohort within Chrome.
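The published FLoC proposal derived cohort IDs with a locality-sensitive hash (SimHash) of browsing history, so that users with similar histories tend to land on the same small cohort number. A minimal, hypothetical sketch of that idea (the domain names and parameters are invented; this is not Chrome’s actual implementation):

```python
import hashlib
import random


def simhash(domains, bits=16, seed=42):
    """Locality-sensitive hash of a set of visited domains.

    Each domain deterministically contributes a +1/-1 vote per output
    bit; the sign of each running total sets that bit. Similar inputs
    therefore tend to produce nearby (often identical) hashes.
    """
    totals = [0.0] * bits
    for d in domains:
        h = int(hashlib.sha256(d.encode()).hexdigest(), 16)
        feature_rng = random.Random(h ^ seed)  # per-domain vote pattern
        for i in range(bits):
            totals[i] += 1.0 if feature_rng.random() < 0.5 else -1.0
    return sum((1 << i) for i, t in enumerate(totals) if t > 0)


# Only this small integer -- not the browsing history itself -- would
# ever be exposed to advertisers.
a = simhash(["news.example", "sport.example", "cooking.example"])
b = simhash(["news.example", "sport.example", "recipes.example"])
print(a, b)
```

The privacy argument is that many users share each cohort ID, so the ID alone cannot single anyone out – though, as the conclusion below notes, the browser vendor still sees the underlying history.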
Conclusion
Whilst Google’s FLoC and Apple’s SKAdNetwork may appear to be more privacy-friendly alternatives to the cookie- or mobile-ID-based third-party tracking technologies we have become used to, they will still involve tracking information on users within certain environments (e.g. Chrome or iOS apps) – it is just that this information won’t be shared with the whole advertising ecosystem. Additionally, such technologies run the risk of concentrating the data that single players, such as Google and Apple, can access about individuals. At the same time, they entrench the dominant position these tech companies hold in the software and advertising marketplace, at the expense of their rivals. It is likely that this is only the beginning of the big tech companies’ battle over privacy.
Smart meters, which involve energy suppliers deploying devices that allow both customers and providers to monitor consumption and usage trends, are a core component of the move towards ‘Smart Homes’ and ‘Smart Grids’ as part of the growth of the Internet of Things.
In Ireland, smart metering is a key contemporary topic, with the National Smart Metering Programme (NSMP) commencing deployment into domestic residences this year. Recently, ESB Networks announced that, starting in September 2019, it would roll out 20,000 meters in selected locations in Ireland, with a further 250,000 in place by the end of 2020 and a further 500,000 by 2024 [1][2]. The roll-out will proceed in phases: Phase 1 will provide smart meters with basic Credit services and half-hourly interval data, Phase 2 will add further meters and enable Smart PAYG and specifics such as switching, and Phase 3 will include provisioning real-time consumption and usage data to consumers via their home device.
Smart meters facilitate increased data collection – in particular, it will now be possible for both the user and the energy company to monitor usage data at more regular intervals, down to the hour, the quarter-hour and beyond – a significant increase from the current estimated readings every two months and physical reads every quarter. Among other benefits, smart meters will enable electricity to be priced in accordance with demand – so that energy is most expensive during peak times – which in theory should reduce spikes in usage and result in a lower need for peak capacity, increasing the efficiency and maintainability of the energy supply [3].
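Demand-based pricing over interval data can be sketched briefly (the rates and band boundaries below are invented for illustration, not actual Irish tariffs): each half-hourly reading is priced according to the time band it falls into.

```python
from datetime import datetime, timedelta

# Illustrative Time-of-Use rates in EUR per kWh -- hypothetical values.
RATES = {"night": 0.12, "day": 0.20, "peak": 0.32}


def band(ts):
    """Map a timestamp to a day/night/peak band (assumed boundaries)."""
    h = ts.hour
    if h >= 23 or h < 8:
        return "night"
    if 17 <= h < 19:
        return "peak"
    return "day"


def bill(readings):
    """readings: list of (timestamp, kWh) half-hourly interval data."""
    total = 0.0
    for ts, kwh in readings:
        total += kwh * RATES[band(ts)]
    return round(total, 2)


# One day of flat 0.25 kWh consumption per half-hour interval.
start = datetime(2020, 1, 6)
readings = [(start + timedelta(minutes=30 * i), 0.25) for i in range(48)]
print(bill(readings))
```

Note that this calculation needs only the band totals, not the full half-hourly trace – which is exactly the data-minimisation point made in the following sections.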
Using smart meters will also allow customers to keep track of costs and, by combining information on a building or location basis, allow operators to plan the supply of electricity more effectively. However, the collection of information, and the sharing and transfer between devices and networks of this data, raises both privacy and security concerns.
Data Protection Issues
As with all mass data collection, smart meters raise concerns around data minimisation and privacy intrusion. The NSMP is required under Irish law (Statutory Instrument 426 of 2014) to meet the privacy standards applicable under the General Data Protection Regulation (Regulation 2016/679) (GDPR). Firstly, there is the issue of the legality of the data collection in the first place. In July, the Spanish Supreme Court ruled that information collected on energy usage, together with the corollary meter serial number to which the information is attributed, constituted personal data. This approach has been mirrored by the Information Commissioner’s Office (ICO) in the UK, which considers consumption information collected by meters, when linked with meter serial numbers/MPANs, to be personal information, and the Irish Data Protection Commission has taken a similar line [4]. The application of the GDPR to smart metering data is also foreseen by Article 23 of the Electricity Directive (Directive 2019/944).
As such, information collected through smart meters is subject to the provisions of the GDPR in full – and therefore all parties with access to the relevant data, including energy suppliers, smart metering systems operators and network operators (all of which will act as data controllers), need to consider compliance with the core principles of the GDPR. In particular, data must be collected on a suitable legal basis, used only for specific purposes and retention periods, not collected in excess of what is needed, and kept secure – and the usage should be made clear to customers, as provided by Article 20 of the Electricity Directive.
Although consumption data can arguably be used for monitoring usage at a statistical level, calculating bills and providing feedback to customers in performance of the supplier’s energy contract (and therefore without requiring consent), use of the information for other purposes – such as improving grid efficiency, identifying energy theft or debt management – will most probably require a Privacy Impact Assessment or legitimate interest assessment before being undertaken. In particular, organisations (such as energy suppliers) should only use household-level data where necessary; for data sharing or data analytics, the use of aggregated data relating to multiple households or regions (or the sampling of certain households) should be preferred.
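The preference for aggregated data can be made concrete with a small sketch (the regions, readings and suppression threshold are invented for illustration): household identifiers are dropped entirely, and regions with too few households are suppressed rather than reported.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records: (region, household_id, kWh for the period).
readings = [
    ("Dublin-4", "h1", 12.1), ("Dublin-4", "h2", 9.8),
    ("Cork-N", "h3", 14.0), ("Cork-N", "h4", 11.5),
]

K_ANON_MIN = 2  # suppress regions with too few households to aggregate safely


def regional_averages(rows, k=K_ANON_MIN):
    """Aggregate household readings to region level, dropping IDs."""
    by_region = defaultdict(list)
    for region, _household_id, kwh in rows:  # household ID never retained
        by_region[region].append(kwh)
    return {r: round(mean(v), 2) for r, v in by_region.items() if len(v) >= k}


print(regional_averages(readings))
```

Only the regional figures leave this function, so downstream analytics never see household-level consumption – the pattern the paragraph above recommends for data sharing.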
Moreover, excessive data collection, and data sharing with third parties, should at the very least be notified to customers, and potentially risk-assessed against the reasonable expectations of data subjects, including limiting the amount of personal data collected by default as part of the concept of ‘Privacy by Design’. As the frequency of smart meter readings will be the main component of data minimisation, this could include limiting processing to data no more granular than day/night/peak bands for Time of Use billing and Energy Use Statements, or collection by suppliers on a monthly basis, as suggested by the CER [5]. Customers should also have a general opt-out from sharing consumption information with the energy supplier and third parties.
Within this, there is also a danger that usage data can be mined to develop detailed consumption profiles. Consumers may not want their energy company to build an understanding of their domestic habits, which could reveal, through attribution or inference via data mining techniques, detailed lifestyle information – such as whether and what hours they work, how they interact with and use home appliances (watching television, doing the laundry, entertaining guests, etc.), when they go on holiday, and even religious practices. This is particularly the case if this information is used in combination with other data available to energy suppliers or the parties they contract with – even basic identifiers such as age, household size, location or other demographic data can allow them to build up user profiles.
Additionally, there is the issue that this information, once collected, could be used unfairly. Customer profiles could be used to enable targeted pricing, and where such decisions are made by automated profiling, the GDPR particularly restricts the processing, requiring transparency around the criteria used to present a product’s price. Moreover, sharing such data with third parties, who could offer their products or services based on user profiles, including through targeted advertising and direct marketing, is an activity that would clearly be prohibited under the GDPR without express user consent.
Data Security Issues
From a data security perspective, there are certain unique features in the design of the Irish smart meter network. Compared to the UK’s SEC (Smart Energy Code), which mandates end-to-end functional and technical specifications, data and security models, and various processes for parties interacting with the smart infrastructure, the CRU’s High-Level Design defines a technology-agnostic abstraction for the network to build atop. Within this, HAN (Home Area Network) and WAN (Wide Area Network) technologies are procured by ESB Networks. Additionally, it is the responsibility of the DSO (Distribution System Operator) to make energy data available to the market: to Gas Networks Ireland, EirGrid and others relying on an exchange of market messages.
Moreover, in contrast to the design of smart metering devices in the UK, the Irish scheme places very little functionality on the meter. This ‘thin design’ means no complex calculations are performed at the edge; the function of the device is merely to record time-bound consumption and transmit this data to the DSO. Both the electricity meter and the gas meter record consumption every 30 minutes, with the gas meter waking up each half hour to do so. Gas meters and IHDs communicate with the DSO through a securely established communications link with the electricity meter.
The DSO shares information collected through the smart meters with a limited set of parties. To assist settlement and network optimisation, the relevant parties for both utilities, EirGrid and GNI (Gas Networks Ireland), are provided with this data. Gas and electricity suppliers receive a daily snapshot; it is their responsibility to perform the necessary calculations (Pay As You Go balance, historical cost and consumption, tariff bands and Time of Use rates) and provision this information to consumers through non-AMI channels where necessary – including periodic ‘smart bills’, downloadable files online, or phone applications. It can be inferred that none of these data items are produced in real time.
NSMP Security
As a critical infrastructure system, the security of the smart meter network is required to conform to the EU NIS Directive (Network and Information Systems Directive). ESB Networks has also published a set of principles for the network’s security, a sample of which is provided below:
Key Principle – Application
Confidentiality & Privacy – encryption of data in transit and storage; access controls on all infrastructural components; deletion of data which is no longer required; compliance with data protection law.
Integrity – comprehensive and timely review of audit logs; detection of unauthorised modification of data.
Availability – automated failover to standby backup infrastructure; detection of DoS attacks or other events; automated action to remove the impediment.
Authentication & Identification – use of usernames with strong passwords; digital certificates and signing processes; multi-factor authentication.
Authorisation – defining specific functions (view, modify, create, delete).
Non-Repudiation – message-based auditing and accounting.
Auditing & Accounting – recording which user initiated an action; logging successful and unsuccessful attempts.
However, one of the potential problems with this set of security principles is a lack of sufficient specificity. For example:
For HAN (Home Area Network) communications, the Core Design states only that the HAN ‘will be an open standard wireless communications protocol that enables transfer of data between the smart utility meters and specific securely paired devices in the home’, without further clarification.
Meter to Display communications are not standardized in terms of technology.
Security requirements for pairing between CADs (Consumer Access Devices) and further consumer devices are not specified in the Core Design.
Furthermore, certain communication links within the network may be proprietary, become deprecated, or have newly discovered flaws; it is unclear whether there is a process or governance structure for dealing with such issues as they arise. Mandated data items to be displayed on the In-Home Display (IHD) and exchanged include instantaneous demand and cumulative and historic consumption; such information is considered personal data and may require a standardised protection scheme. Equally, response mechanisms for some scenarios are unstated; it is unclear, for example, what happens if an insecure CAD is joined to the HAN and floods the network with malformed messages. More generally, these issues point to the proprietary nature of the implementation.
Several signing processes and encryption schemes exist, and some standardisation may be necessary to establish full protection. For example, reuse of Initialization Vectors, use of insecure symmetric keys, or use of the wrong cipher suite or AES mode (a form of encryption) can cause encrypted data to be exposed. The split between security at the Application Layer and security at the lower layers is also unclear in terms of ownership; for example, it is unclear whether authentication – one of the security principles – is end-to-end or point-to-point, and which other communication links and devices are used for multi-factor authentication – another of the principles above. Other unstated technical specifics include the storage locations for all personal data and the management of cryptographic keys throughout the infrastructure.
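The Initialization Vector risk mentioned above can be demonstrated with a toy stream cipher (SHA-256 is used here as a keystream generator purely for illustration – this is not a real cipher, and the meter readings are invented): encrypting two messages under the same key and nonce lets an eavesdropper XOR the two ciphertexts together, cancelling the keystream entirely.

```python
import hashlib
import os


def keystream(key, nonce, length):
    """Toy CTR-style keystream using SHA-256 as a PRF (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def encrypt(key, nonce, plaintext):
    """XOR the plaintext with the keystream (decryption is identical)."""
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))


key = os.urandom(32)
nonce = os.urandom(12)
m1 = b"meter 0001: 0.42 kWh"
m2 = b"meter 0002: 1.93 kWh"
c1 = encrypt(key, nonce, m1)
c2 = encrypt(key, nonce, m2)  # BUG: same nonce reused for a second message
# The keystream cancels: XOR of the ciphertexts equals XOR of the
# plaintexts, exposing every position where the messages agree
# (here the shared "meter 000" prefix) without knowing the key.
leak = bytes(a ^ b for a, b in zip(c1, c2))
print(leak)
```

This is why prescriptive requirements around nonce/IV management matter just as much as the choice of cipher itself.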
The relative level of security enforced on various use cases and functions is also left unstated. For example, the requirement to define “specific functions (view, modify, create, delete)” is not elaborated from the perspective of access control: which parties along the infrastructure can invoke each function? The level of security enforced in join mechanisms is also unclear in the public NSMP specifications and Core Design documents.
By comparison, join mechanisms for the UK roll-out mandate two levels of security: one arises from the ZigBee architecture (in the form of link and network keys) and the other builds atop it to provide further protection (in the form of SMIP-specific end-user and remote-party credentials). Within Ireland, ESB Networks, the company implementing the roll-out, has stated that Application Layer encryption will be used in addition to link-level encryption whenever metering protocol sessions are established between devices; however, more thorough and prescriptive security constructs may need careful consideration in the changing regulatory and data security landscape.
Conclusion
The NSMP will undoubtedly bring significant efficiency benefits, allowing customers, suppliers, network operators and other market players to make better informed decisions with respect to energy usage, metering functionality and pricing. The Commission for Energy Regulation is currently working on assessing and addressing data protection and security issues, ranging from the possibility of detailed user profiles being built to gaps in the specifications for securing communications between devices and the network. However, further challenges may only become apparent on a case-by-case basis as consumers’ usage of the upgraded smart meters develops over time.
[1] Commission for Energy Regulation. (2017, September 21). Update on the Smart Meter Upgrade. Retrieved from: https://www.cru.ie/wp-content/uploads/2016/11/CER17279-NSMP-Info-Note.pdf
[2] Gorey, C. (2019, July 3). ESB reveals first Irish towns to receive smart meters in later 2019. Retrieved from: https://www.siliconrepublic.com/machines/esb-smart-meters-locations-2019
[3] European Data Protection Supervisor. (2012, Jun 8). Opinion of the European Data Protection Supervisor on the Commission Recommendation on preparations for the roll-out of smart metering system. Retrieved from: https://edps.europa.eu/sites/edp/files/publication/12-06-08_smart_metering_en.pdf
[4] Commission for Energy Regulation. (2015, July 29). CER National Smart Metering Programme Information Paper on Data Access & Privacy. Retrieved from: https://www.cru.ie/wp-content/uploads/2015/07/CER15139-Data-Access.pdf
Since even before the GDPR, legitimate interests have been one of the bases organisations most frequently rely upon to justify processing personal data. However, the GDPR has placed increased obligations and scrutiny on this practice. Particularly in industries where business models are increasingly based around the use of personal information, understanding where organisations can legally rely on legitimate interests – and where the rights of individuals will be considered ‘overriding’ – is key to compliance.
Background
Legitimate interests have been used as a flexible basis to justify data-driven operations since the Data Protection Directive of 1995. The issue rose to the fore when, in the Google Spain case (Case C-131/12 Google Spain SL, Google Inc. v Agencia Espanola de Proteccion de Datos (AEPD), Mario Costeja Gonzalez, judgment of 13 May 2014), the Court of Justice of the European Union (CJEU) considered the balance between the legitimate interests of internet search engine providers, and of internet users in receiving and having access to information in search results, on one side, and the data subject’s (here Mr Gonzalez’s) right to privacy on the other. Weighing these competing interests, the court considered both the centrality of the data processing to the commercial activity of a search engine and the sensitivity of the information and the public profile of the data subject.
Changes under the GDPR
Under the GDPR, the legal test for legitimate interests places the onus on the controller to demonstrate that the interests or the fundamental rights and freedoms of the data subject do not “override” its interests – where formerly the processing merely had to avoid causing “unwarranted” prejudice to the data subject – a much higher hurdle. On top of this new balancing test, the legitimate interests relied upon must now be published in a Privacy Notice, and individuals are able to request specific information on the legitimate interest assessment conducted, which increases the obligations and scrutiny on the controller to ensure a proper risk analysis is carried out.
On the plus side for companies, examples of legitimate interests are now specifically provided under the GDPR, and are being elaborated on in supervisory authority guidance – including, for example, the prevention and detection of fraud, network security and employee monitoring. However, the weighting given to the data subject’s versus the controller’s interests – and consequently the protective safeguards that must be put in place, such as increased notice to data subjects or a reduction in the scope of processing – will differ vastly across EU member states for certain ‘legitimate’ activities such as employee monitoring.
Conclusion
A year on from the GDPR’s entry into force, the circumstances in which legitimate interests can be relied upon are still evolving, and as this is an area in which caselaw and best practice are likely to play a huge part, all organisations should keep themselves updated.
On 8 April, the European Commission’s High-Level Expert Group on AI released its Ethics Guidelines for Trustworthy AI, outlining various ethical and moral principles for ‘trustworthy AI’, including that such systems be lawful (including the data used therein), ethical (in terms of compliance with core principles and values) and robust (including meeting central security requirements).
The introduction of the guidelines follows a public consultation process between December and February. In particular, the guidelines build on other frameworks for autonomous and machine learning solutions, such as those launched by the European Commission, the Institute of Electrical and Electronics Engineers (IEEE), the Institute for Ethical AI and Machine Learning, several European data protection authorities, and private consultancies such as PwC, Deloitte and Gemserv. As such, they aim to form a framework for ensuring AI systems are designed and deployed in an accountable fashion, which will allow organisations to test and develop approaches to ethical AI, and which can potentially be translated into hard regulation.
Risk Management
The Ethics Guidelines centre on several concepts that aim to introduce the necessary checks and assurance into AI systems. The guidelines focus on the need to address the risks associated with AI systems and the impacts they can have on individuals or communities – part of what the High-Level Expert Group deems the ‘prevention of harm’ principle. Such effects range from individuals being invasively targeted by advertising, to being denied credit or loans on the basis of inaccurate data, to pedestrians being harmed by self-driving cars.
Organisations can ease this process through an early data mapping exercise, covering the identification and sourcing of data, the selection of algorithmic models, training and testing, deployment and use, and then ongoing monitoring. Through this, any issues with data accuracy and quality should be identified and tracked from the start, before they create problems – in the form of skewed or biased decisions – later in the operation of systems. Moreover, training and testing should include key checks for harm to individuals or groups of individuals, and the potential for bias should be considered – including bias inherent in either the functions of the model or the data sets used.
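A bias check of the kind described can be as simple as comparing outcome rates across groups on a held-out test set (a ‘demographic parity’-style metric; the group labels, outcomes and threshold idea below are purely illustrative):

```python
from collections import defaultdict


def approval_rates(decisions):
    """decisions: list of (group, approved) pairs from a model's test run."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}


def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())


# Hypothetical test-set outcomes for a credit model across two groups.
decisions = [("A", True)] * 80 + [("A", False)] * 20 \
          + [("B", True)] * 60 + [("B", False)] * 40
print(f"demographic parity gap: {parity_gap(decisions):.2f}")
```

A gap above a pre-agreed threshold would be flagged for investigation before sign-off, and the same check re-run as part of the ongoing retesting described below.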
Departments wishing to deploy or use AI systems should work with their development teams to ensure all of these tests can be performed prior to use or sign-off. However, it should not end there – systems should be subject to rigorous ongoing testing and retesting, including to see whether the system’s outcomes or outputs are meeting its objectives – such as, for example, whether user satisfaction is actually being achieved by a system that provided automated customer service responses.
Agency
Additionally, central to the High-Level Expert Group’s approach to the regulation of AI is the concept of ‘human agency’ – the idea that individuals, including data subjects, should be autonomous and able to determine how organisations control their data and how decisions affect them. The core concept of ‘agency’ builds upon individual rights under the Council of Europe’s ‘Convention 108’ and the GDPR – including the rights to access, correct, amend and restrict the processing of one’s data, and even not to be subject to automated decisions that will have legal or similarly significant effects – unless necessary for a contract, permitted by law, or based on explicit consent. As such, organisations will have to build into AI systems the ability for individuals to intervene in the processes, analyses and decisions made by those systems – including to adjust their preferences, choose which data they disclose, and control when they are tracked. Organisations should also limit the harmful effects of AI – where ‘similarly significant’ effects are interpreted to mean negative impacts on human dignity, effects that target vulnerable groups, and the like. Both this and adherence to the concept of human agency can be achieved by keeping a ‘human in the loop’ – a governance or design capability for human intervention during the system’s operation, including to monitor or overrule AI decisions – so that such systems act as a complement, not a replacement, to human decision-making.
Transparency
Another increasingly mooted aspect of artificial intelligence systems is the issue of transparency, which obliges organisations to introduce personnel and mechanisms not just to interpret algorithms but also to respond to potential requests and challenges to their results. Transparency also involves, for example, designing public-facing interfaces to allow customers to see how their data is used, including when collecting consent and/or personal data from individuals. Transparency is also closely connected to the element of ‘explainability’, which, in Convention 108’s most recent iteration, means that data subjects should be entitled to know the “reasoning underlying the processing of data, including the consequences of the reasoning [of Artificial Intelligence systems], which led to any resulting conclusions”. This goes further than the provisions of the GDPR in that, being a more expansive right, it extends to understanding why an actual outcome was reached, including after a decision was made, rather than focusing simply on the basis for decisions.
Faced with the difficulty of making AI systems explainable, there are two other ways organisations can perform the necessary due diligence. Firstly, documenting the decisions that go into a system – such as its functions and the selection of data and data sources – can contribute to accountability even when full transparency cannot be achieved due to the intrinsic nature of the algorithm or machine-learning process. Secondly, publicly committing to a Code of Ethics covering how data will be sourced and used, and the values and aims of the systems, can also help with public engagement and reception.
Participation
Lastly, the guidelines discuss the participation of different stakeholders in the design of AI systems – through both internal and external governance frameworks. It should be remembered that individual departments will often be the information owners responsible for the operational decisions governing AI systems, and so should liaise with developers, system providers and other relevant third parties to ensure their requirements are met. At a more strategic level, organisations should involve executive sponsors or management in approving AI systems that are likely to have an impact on the workforce or involve significant disruptive effects on operations. Moreover, where AI systems are likely to raise risks – legal, social, reputational or financial – management will need to consider and approve the ethics and goal-orientated trade-offs for systems during their development.
To support this, organisations can appoint ethics panels, committees or boards to lead ethics dialogue within their organisations, seeking approaches that are both aspirational and value-based. Within these groups, for example, the High-Level Expert Group emphasises that designers and technologists should engage in cross-disciplinary exchanges with ethicists, lawyers and sociologists to understand the impact of AI solutions. Whichever structure is established, however, the group or panel has to have ‘teeth’ to be able to accomplish effective oversight and management. This is a particularly contemporary issue, given the recent failure of Google’s ethics board, which was shut down following a backlash over both its lack of effectiveness and the composition and background of some of its members. As such, the group should be consulted during deployment of the system, particularly over its goals and potential effects, and regularly informed of the outcomes of monitoring of the deployed solution throughout the lifecycle of the system.
Conclusion
The High-Level Expert Group’s guidelines bring a very detailed discussion of the evolving regulatory norms governing artificial intelligence systems. Focusing specifically on the prevention of harm and on individual rights, the framework aims to incorporate checks into the deployment of systems to ensure they are more ethically and individually focused. Organisations wishing to remain accountable should take advantage of the reputational and compliance benefits of overtly demonstrating that they use data in accountable and fair ways, and that they are committed to delivering operations and services in line with the principles espoused by the guidelines – as the EU is likely to incorporate them into hard law regulating AI systems in the near future.
Much has been written about the new EU initiative currently awaiting approval by the Council. Few issues seem to spark such fierce debates within the internet and technology community as those pertaining to copyright enforcement. This is nothing new. What is new, however, is the extent to which tech giants are able to capitalise on this in order to push their agenda. Propaganda is a strong, loaded term, but it is difficult to find another way to describe what is effectively the targeted lobbying of the general public via corporate communications. In order to understand the current regulatory environment, as well as the extent of the backlash and its motivations, one must first look back almost 20 years, to the Electronic Commerce Directive.
Two decades, i.e. a millennium in internet years
It’s fairly uncontroversial that the pace of technological change vastly outstrips that of the legislative process. 20 years in real time is basically a millennium in computer years. Nevertheless, the e-Commerce Directive has, until recently, remained the basis for determining intermediary liability. It has been incredibly successful in this regard, largely by immunising ‘information society services’ from liability with regard to the content that flows through their networks. Suffice it to say that there is a reason household tech giants exist: policing the content they publish and profit immensely from has, for the most part, fallen not on them but on charities and on those who produce the content (ostensibly not charities). This has made UUC platforms incredibly lucrative. Network effects + free content + ad revenue = billion dollar industry.
Regulatory delay has its costs
The economic adage that there are no solutions, only trade-offs, rings true here: ‘publishers’ (or, more accurately, ‘platforms’, since an online intermediary will rarely, if ever, be deemed a publisher) have done very well off the back of content produced by others, with no real responsibility for it. This has far-reaching implications for business, but also for the wider public: free speech, fake news and access to content are all relevant. However, now that these platforms have matured, having been given the legal room to do so, the tables are finally beginning to turn. As with all change, and perhaps especially change that threatens to disrupt billion-dollar businesses, there has been a lot of protest.
None of the foregoing is to say that tech companies don’t provide anything. Many resources are expended in developing whizzy platforms that can keep up with somewhat fickle and impatient consumption patterns. They provide the necessary infrastructure for the ‘new’ media to flourish. I say ‘new’ in inverted commas because while the technology (i.e. the speed, volume and on-demand nature of it all) is new, ultimately the structures for delivering it are not so new (or at least not that different in principle from things like radio, television and telephony). Instead, the safe harbour regime is what cradled and nurtured these nascent entities to build the infrastructure needed, by way of a sort of de-regulatory economic stimulus.
And thus, after 20 years of decline, it is only now that content industries, like the music industry (and perhaps journalism too?), are finally seeing some growth. Obviously not back to pre-internet figures, but finally an uptick. This is quite remarkable given that they have still been producing content all these years with tightened belts. The point is that the proceeds of such content have gone to the conduits and not to the producers themselves. Why engage in licensing discussions over content if a third-party uploader will provide it to you for free? Expensive and time-consuming.
Enter the resistance
So if you were a giant ‘mere conduit’ that is, in practice if not in law, ‘publishing’ all sorts of content, how best would you resist a change that obliges you to license that content and kick back at least some of the revenue to those who created it? How about publishing, instead of the ads you would normally show users, ‘public information notices’ informing them why a new EU Directive threatens the very nature of the internet and all the memes users hold dear? As someone with extensive research experience, I found such claims laughable. But for a general user, they are taken as truth and stick with incredible efficiency. It’s quite ironic, given that most people online are at once incredibly critical yet so willing to believe sensationalised claims.
Needless to say, the claims of ‘censorship’, ‘the end of the internet’ and a ‘ban on memes’ are grossly overblown. For one, banning memes would be like trying to ban books. You could do it, but only in a dystopian world with infinite resources for a (very high-tech) law enforcement agency, and only in the offline world. In fact, even the book ban is a bad analogy, because books take far more time, effort and intellect to produce than a meme.
Secondly, content filters. Not only do these already exist (see, e.g., YouTube’s Content ID system), but they have not resulted in ‘breaking the internet’ or prevented people from sharing content. Nor are they likely to impose disproportionate costs on the small billion-dollar business that is Alphabet (which could even license its system to other platforms if it wanted to; not that it would, being a giant monopoly anyway). Nor are the recitals to the Directive particularly ambiguous as to whom the ‘filtering’ requirements would apply: only the largest of intermediaries, and not start-ups.
Lastly, copyright exceptions. It’s true that automated systems make poor legal analysts. Luckily, we do not live in a Judge Dredd dystopia: there is always an individual behind a takedown notice and behind systems like Content ID. Counterclaims are possible and will remain so. That is the entire point of having a legal system at all, rather than what has effectively been a technological wild west for the past 20 years. Will this cause overblocking? No; it will only force free speech ‘advocates’ like YouTube to put their money where their mouth is: they can either license the content or overblock. Either way, the onus will be on them, rather than on a third party like an actual content producer, to defend the choice to take down illicit content if they have refused to pay a fair share for its publication. They are, after all, the more technologically sophisticated party, and should be accountable as such.
A final irony
It’s interesting that the content industries have been side-lined by the public for so long, despite the public’s considerable appetite for entertainment. It is hard to understand why the copyright industries are so shunned; my guess is propaganda and bad press. Yes, it has taken some time for enforcement efforts to align with licensing efforts, but much of this is owing to delays in updating the regulatory framework. What ‘copyleft’ protesters and corporations fail to acknowledge, however, is that a lax copyright regime directly helps fund criminals who use the proceeds for activities that no one in their right mind would dream of protesting over. Further, in an age where big business is already perceived by the public as having too much power, why are so many willing to hand these platforms even more power over our news and other media by defending their position? They have had their day, or rather their two decades, and will continue to thrive regardless. Let’s allow the pendulum to swing back in favour of creatives.