Some thoughts on the Copyright Directive



Much has been written about the new EU initiative currently awaiting approval by the Council. Few issues seem to spark such fierce debates within the internet and technology community as those pertaining to copyright enforcement. This is nothing new. What is new, however, is perhaps the extent to which tech giants are able to capitalise on this in order to push their agenda. Propaganda is a strong, loaded term, but it is difficult to find another way to describe what is effectively the targeted lobbying of the general public via corporate communications. In order to understand the current regulatory environment, as well as the extent of the backlash (i.e. its motivations), one must first look back almost 20 years, to the Electronic Commerce Directive.

Two decades, i.e. a millennium in internet years

It's fairly uncontroversial that the pace of technological change vastly outstrips that of the legislative process. 20 years in real time is basically a millennium in internet years. Nevertheless, the e-Commerce Directive has, until recently, remained the basis for determining intermediary liability. It has been incredibly successful in this regard, largely by immunising 'information society services' from liability with regard to the content that flows through their networks. Suffice it to say that there is a reason household tech giants exist: policing the content they publish and profit immensely from has, for the most part, not fallen on them but instead on charities and those that produce the content (ostensibly not charities). This has made user-uploaded content (UUC) platforms incredibly lucrative. Network effects + free content + ad revenue = billion dollar industry.

Regulatory delay has its costs

The economic adage that there are no solutions, only trade-offs rings true here: 'publishers' (or, more accurately, 'platforms', since an online intermediary will rarely, if ever, be deemed a publisher) have done very well off the back of content produced by others, and with no real responsibility for it. This has far-reaching implications for business but also for the wider public. Free speech, fake news and access to content are all relevant. However, now that these platforms have matured, having been given the legal room to do so, the tables are finally beginning to turn. As with all change, and perhaps especially with change that threatens to disrupt billion-dollar businesses, there has been a lot of protest.

None of the foregoing is to say that tech companies don’t provide anything. Many resources are expended in developing whizzy platforms that can keep up with somewhat fickle and impatient consumption patterns. They provide the necessary infrastructure for the ‘new’ media to flourish. I say ‘new’ in inverted commas because while the technology (i.e. the speed, volume and on-demand nature of it all) is new, ultimately the structures for delivering it are not so new (or at least not that different in principle from things like radio, television and telephony). Instead, the safe harbour regime is what cradled and nurtured these nascent entities to build the infrastructure needed, by way of a sort of de-regulatory economic stimulus.

And thus, after 20 years of decline, it is only now that content industries, like the music industry (and perhaps journalism too?), are finally seeing some growth. Obviously not back to pre-internet figures, but finally an up-tick. This is quite remarkable given that they have still been producing content all these years with tightened belts. The point is that the proceeds of such content have gone to the conduits and not to the producers themselves. Why engage in licensing discussions over content if a third-party uploader will provide it to you for free? Expensive and time consuming.

Enter the resistance

So if you were a giant 'mere conduit' that is in practice, if not in law, 'publishing' all sorts of content, how best would you resist a change that obliges you to license content and at least kick back some of that revenue to those who created it? How about publishing 'public information notices', instead of the ads you would normally show users, informing them why a new EU Directive threatens the very nature of the internet and all the memes users hold dear? For someone with extensive research experience, such claims made me chuckle. But for a general user, they are taken as truth and stick with incredible efficiency. It's quite ironic given that most people online are at once incredibly critical yet nevertheless so willing to believe sensationalised claims.

Needless to say, the claims of 'censorship', 'the end of the internet' and a 'ban on memes' are grossly overblown. For one, banning memes would be like trying to ban books. You could do it, but only in a dystopian world with infinite resources for a (very high-tech) law enforcement agency, and only in the offline world. In fact, even the book ban is a bad analogy, because books take a lot more time, effort and intellect to produce than a meme.

Secondly, content filters. Not only do these already exist (see, e.g., YouTube's Content ID system), but they have not resulted in 'breaking the internet' or prevented people from sharing content. Nor are they likely to impose disproportionate costs on the small billion-dollar business that is Alphabet (which could even license its system to other platforms if it wanted to; not that it would, being a giant monopoly anyway). The Recitals to the Directive are not that ambiguous as to whom the 'filtering' requirements would apply either – only the largest of intermediaries, and not start-ups.

Lastly, copyright exceptions. It's true that automated systems make poor legal analysts. Luckily, we do not live in a Judge Dredd dystopia. As such, there is always an individual behind a takedown notice and behind systems like Content ID. Counterclaims are possible and will remain so. That's the entire point of having a legal system at all, rather than what has effectively been a technological wild west for the past 20 years. Will this cause overblocking? No, it will only force free speech 'advocates' like YouTube to put their money where their mouth is: they can either license the content or overblock. Either way the onus will be on them, rather than on a third party, like an actual content producer, to defend the choice to take down the illicit content if they have refused to pay a fair share for its publication. They are, after all, the more technologically sophisticated party, and should be accountable as such.

A final irony

It's interesting that the content industries have been side-lined by the public for so long, despite the public's consumptive appetite for entertainment. It is hard to understand why the copyright industries are so shunned by the general public. My guess is propaganda and bad press. Yes, it has taken some time for enforcement efforts to align with licensing efforts, but much of this is owing to delays in updating the regulatory framework. What 'copyleft' protesters and corporations fail to acknowledge, however, is that a lax copyright regime directly helps fund criminals who use the proceeds for behaviours that no one in their right mind would dream of protesting over. Further, in an age where big business is already perceived by the public as having too much power, why are so many in the public willing to hand them even more power over our news and other media by defending their position? They have had their day, or rather their two decades, and will continue to do well regardless. Let's allow the pendulum to swing back in favour of creatives again.


— Hernán R. Dutschmann

The Singapore PDPC’s record fines signal growing scrutiny of IT security outsourcing arrangements

Kaveh Cope-Lahooti

The Personal Data Protection Commission of Singapore has recently imposed its highest ever fines of SGD 250,000 and SGD 750,000 on two companies operating in the healthcare sector (a healthcare provider, SingHealth, and its IT provider, IHiS, respectively), for their failure to put in place adequate data security measures, a failure which exacerbated the effects of a cyberattack last year. These fines reflect the fact that the data breach was the worst in Singapore's history, affecting some 1.5 million patients (in a jurisdiction with around 5.6 million residents). The breach arose after an attacker gained access to the healthcare database by infecting a user's workstation, obtained login credentials to the database and was able to repeatedly access and copy data.

Drawing on domestic legislation in addition to guidance from supervisory authorities in the EU, Canada and Australia, the PDPC engaged in a detailed analysis of the organisations' security and operational procedures, including the outsourcing arrangements between SingHealth and IHiS. The PDPC's mature approach provides a valuable lesson to organisations wishing to take similar measures to protect the security of personal data under the GDPR.

Security roles and responsibilities

As under the GDPR (where the standard is to take measures 'appropriate' to the risk of the processing and the nature of the personal data), in Singapore both organisations controlling personal data and their outsourced service providers have a duty (and concurrent liability) to take 'reasonable' security measures.

The PDPC considered the data security framework that SingHealth had in place, centring its analysis on the responsibilities of its staff and the outsourcing arrangements. The government-owned IT service provider, IHiS, was responsible for hiring and managing IT personnel for most functions, who were deployed to SingHealth. Effectively, these personnel exercised the functions of identifying and reporting suspicious incidents and of providing information to SingHealth's Board of Directors and Risk Oversight Committee on IT security measures and updates.

The PDPC considered that, because of the outsourcing arrangements, it was not clearly apparent whether SingHealth or IHiS was responsible for the actions of the Group Information Security Officer and the reporting Cluster Information Security Officer (CISO), who worked at SingHealth but were deployed by IHiS. However, the PDPC drew from an earlier decision which considered that where the data processing activities are carried out by an organisation's external vendor, "the organisation has a supervisory or general role for the protection of the personal data, while the data intermediary has a more direct and specific role in the protection of personal data arising from its direct possession of or control over the personal data".

In particular, the PDPC considered that SingHealth had failed to put in place the necessary operational and governance measures to support IHiS' services, such as ensuring that the CISO had a team within SingHealth to provide support during the CISO's absence. Moreover, turning to IHiS, the PDPC specifically considered the provider's responsibility in terms of its practical, rather than organisational, measures – such as its use of anti-virus and anti-malware software, network firewalls and running scripts to monitor the confidentiality and integrity of the SCM database, among other detection methods. It found that these did not meet the requisite standard.

A similar standard could prove useful in outsourcing arrangements under the GDPR, particularly where responsibility for preventing security breaches is to be assigned between two parties that can both be responsible for data security failures. This also brings into play what contractual measures can be introduced to allocate liability between such parties (for example, where the outsourcing organisation may expect its IT provider to take care of all the relevant arrangements).

Liability and contractual measures

The PDPC's analysis largely concerned the arrangements between SingHealth and IHiS, which it held was a 'data intermediary' (effectively a data processor under Singaporean legislation). In this vein, the PDPC suggested that several of the issues in responsibility, particularly among the CISO and GCIO, could have been solved by the parties signing relevant contractual clauses. The PDPC suggested such clauses should include a variety of measures, many largely similar to those under the GDPR, such as:

  1. controls around the use, return, destruction or deletion of the personal data;
  2. a prohibition on sub-contracting;
  3. the right of the data user (controller) to audit and inspect how the data processor handles and stores personal data.

The third provision, whilst not expressly required by the GDPR, is a best practice in many controller-processor contracts. In practice, IHiS (including its cloud systems) was subject to an annual audit that was brought to the attention of SingHealth's senior management. Typically, these provisions serve as a means of ensuring compliance with the above obligations, and also serve to demonstrate that the data controller has done its due diligence on suppliers, which is itself required by the GDPR.

Notably, the PDPC placed less emphasis on the obligations to assist the data user/controller with complying with data subject rights and data protection impact assessments, as such provisions – beyond the rights to notice, access and correction – are not per se in place in Singapore.

However, the PDPC’s recommendations also included novel suggestions, such as:

  1. requirements for the immediate reporting by the data processor of any sign of abnormalities (e.g. the PDPC suggests this could occur where an audit trail shows unusually frequent access, at odd hours, by a staff member to the personal data entrusted to the data processor) or security breaches.

This appears to be a precursor to breach notification; it is a significantly higher obligation than most best practice security provisions in controller-processor agreements in the EU (aside from perhaps those with IT support providers) and will impose obligations on processors to monitor their relevant systems.

The PDPC held that these measures were largely met by the presence of an outsourcing contract committing IHiS to take appropriate data security measures and the presence of relevant policies, including standard sub-contractor agreements. The PDPC's suggestions are a welcome intervention, particularly as controller-processor contracts in the EU are, in practice, beginning to include further provisions, ranging from notification requirements (e.g. for the processor to specify where the data processing can no longer be performed) to obligations limiting the scope of the personal data to be processed by the processor, no doubt heavily influenced by the Privacy Shield onward transfer requirements.

Conclusion

Essentially, the PDPC concluded that, although IHiS did not have adequate security monitoring in place, SingHealth should also have identified that those procedures were not sufficient or that there was insufficient governance in place. The PDPC's decision, whilst notable for the level of the fines and its detail, is not exceptional in its content, with many European supervisory authorities increasingly examining controllers' outsourcing arrangements as a key area of compliance. For example, just last week, the Dutch supervisory authority announced it would be asking 30 organisations in the media, energy and trade sectors what agreements they have in place with third parties processing personal data on their behalf. Against this background, the decision can prove useful guidance in tackling the increasingly ubiquitous contracts with IT providers, as organisations look to offload the risk associated with data security.

EDPS issues first 'Technology Report' on Smart Glasses

Kaveh Cope-Lahooti & Abhay Soorya

The European Data Protection Supervisor, the authority responsible for overseeing the compliance of EU institutions with privacy and data protection norms, recently published an analysis of the deployment of Smart Glasses in its first technology report, in which it brings to light a variety of market and compliance issues with different systems. In particular, the guidelines examine technology and security issues related to the use of such Internet of Things (IoT) devices.

Smart glasses are wearable, IoT-connected (in most cases) computers that allow the user to interact with their environment whilst also serving as a visual display unit (VDU). Whilst not exactly a widespread technology, they have garnered some attention, most notably in the case of Google Glass, which was subject to scrutiny by several national data protection authorities when it was released. At one end of the spectrum, they simply provide the user with wearable audio and video functionalities, whereas at the other, they can immerse users in virtual or augmented reality surroundings. Smart glasses raise several notable legal and security concerns owing to the fact that:

  1. Sensors can be used to track and record a variety of information about a wearer – including location data, photographic and video images and audio recordings;
  2. Like other connected technologies, smart glasses may be linked to other interfaces, either locally or via the Internet, such as through WiFi, Bluetooth and others, which can raise security concerns.

Data Protection Considerations

From a data protection perspective, the EDPS' analysis centres on the scale of personal data that can be collected by wearable devices, and the lack of transparency. The EDPS, for example, refers to the fact that "One of the main concerns regarding smart glasses is their capacity to record video and audio in such a discreet way that the people being recorded are not aware of it". We have seen this raised with other connected devices, for example in the smart homes sphere, where devices recorded and profiled a resident's consumption of their utilities, often with no transparent privacy notices in place. This can be exacerbated by the possibility of incidental recording of members of the public through the glasses.

As such, wearable technology manufacturers need to consider the reasonable expectations of the users and of any unwitting data subjects. To address this, Privacy by Design considerations would see, or perhaps even necessitate, such collection being highlighted, either through a regular, timely notification to the user that the device is recording, or through maintaining limitations, such as those in Google Glass, which keep the standard recording time to one hour.

Another in-built problem is the lack of user control over how the data can be stored and shared by the IoT-connected devices. The EDPS also raises the potential for leveraging different types of personal data collected by the sensors on 'connected glasses' for profiling. One of the aspects of smart glasses is the fact that, by their very nature, they collect a significant amount of personal data of all types, and simultaneously. The use and collection of video images, alongside recordings, can be very intrusive and include compound data – i.e. as well as recordings of people, the devices could be used to scan financial information and sensitive personal data. This could allow organisations to combine such data to create ever more complex maps of users' behaviour and interests.

As a result, the lack of control and specification of this personal information raises several headaches in terms of data protection, most notably because it makes it difficult to give appropriate transparency notices and to apply set retention periods or security measures to ill-defined, and potentially indefinite, categories of personal data.

Data Security Considerations

From a security perspective, there are a few impediments affecting smart glasses.

The first is premised upon the functional requirements of the battery. Enabling a long charge life dictates that processing be restricted to a bare minimum; however, this is antithetical to maximum security. The cornerstones of modern security – integrity, confidentiality, availability – find common technical implementations through hashing, encryption, and replay protection for exchanged messages; all require adequate processing capability to be done well. Smart glasses require small batteries for convenience, and bearing in mind that they must also support faculties such as intelligence, light/sound sensing, user notifications, and so on, to provision their respective use cases, the balancing act becomes challenging. Very often, security takes a back seat here.
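To make the trade-off concrete, here is a minimal sketch (in Python, purely illustrative; the message format, key handling and counter scheme are assumptions rather than a description of any real device) of HMAC-based integrity checking combined with a monotonic counter for replay protection – exactly the kind of per-message work that competes with the battery budget on a constrained wearable.

```python
import hmac
import hashlib
import json

SECRET_KEY = b"device-shared-secret"  # assumption: a pre-provisioned shared key


def protect(counter: int, payload: dict) -> dict:
    """Attach a monotonic counter and an HMAC tag to an outgoing message."""
    body = json.dumps({"ctr": counter, "data": payload}, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "tag": tag}


def verify(message: dict, last_seen_counter: int) -> int:
    """Check integrity (HMAC) and freshness (counter must increase); return the new counter."""
    body = message["body"].encode()
    expected = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["tag"]):
        raise ValueError("integrity check failed")
    counter = json.loads(body)["ctr"]
    if counter <= last_seen_counter:
        raise ValueError("replay detected")
    return counter


# Every message costs a hash computation on both ends - the per-message
# processing that a tiny battery has to fund.
msg = protect(counter=42, payload={"event": "recording_started"})
latest = verify(msg, last_seen_counter=41)
```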

The second is the lack of a mainstream security standard for connected devices. There is no IoT equivalent of a global standard that holds vendors and manufacturers of connected devices accountable in the event of a serious breach. The applicability of common rules is tenuous. Case in point: the GDPR requires automated intelligence to be explainable; however, this is difficult for algorithms that rely on a black-box approach or arbitrary choice-making by design to ensure greatest efficiency. Additionally, such requirements find practical implementation in multitudinous ways, which is unhelpful when trying to ascertain the causality of a breach. Overall accountability in this space is still limited.

Third is a lack of convergence in the IoT landscape. There is value to be captured if an IoT device can connect across multiple protocols and products, dispersed across geographies. For example, assume that you are on holiday abroad and want to see why the smart camera in your home sounded alarm bells. Doing so requires data to traverse multiple protocols: it may originate from a Z-Wave camera, land beyond the home on WiFi, traverse the waters over 4G, and end up in a local Bluetooth transmission between your phone and the camera. Every node along the chain is exploitable by a determined black hat actor. Add in the backdrop of emerging protocols – LPWAN, Sigfox, NB-IoT etc. – alongside the various devices and proprietary implementations, and the picture is an inextricable mess.

As a result, the security of such devices – both physical and in terms of software or data – is currently entering a space without much consensus, transparency, and accountability.

Future Regulation

Like any novel technology, smart glasses were designed with functionality at their heart and privacy and security as incidental considerations, which has led to concern over their data collection and sharing abilities. Whilst the GDPR aims to address several of these concerns, and is increasingly being used as a tool to demand accountability from companies ranging from the online advertising industry to providers of health and fitness apps, we share the opinion of the EDPS that future regulation such as the ePrivacy Regulation, which may impose stricter requirements around consent and data retention on IoT providers, can help introduce more stringent privacy and security standards.

The Council of Europe Guidelines on AI: Strengthening Rights over Automated Decisions

Kaveh Cope-Lahooti

To celebrate Data Protection Day, on 28 January 2019 the Council of Europe released guidelines on data protection measures in relation to artificial intelligence. The guidelines contain recommendations that serve to codify much of the emerging best practice around ensuring artificial intelligence systems are made compatible with human and data subject rights, building upon existing regulation of the sector provided in the GDPR and Convention 108.

Convention 108 (the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data), which is an international law framework applicable to the (predominantly European) Council of Europe members, was last updated in 2018. Building on the Convention, the guidelines further specify certain core elements that should be included when data is processed in AI systems, mainly focused on ensuring data accuracy, non-bias and a 'human rights by design' approach. The guidelines take the latter to mean that all products and services should be "designed in a manner that ensures the right of individuals not to be subject to a decision significantly affecting them…without having their views taken into consideration".

In practice, this will require organisations to conduct a wider risk assessment of their impacts in advance, and to build in governance methods to generate and consider relevant stakeholder input. One means of ensuring this is to involve, at an early stage in the design process, representatives from the design/development teams, HR, Data Protection and Risk departments and potentially executives and Board members, in addition to seeking the advice of NGOs and other industry bodies already regulating data ethics. Organisations can rely on both external and internal data ethics committees, both to give their opinion on the potential social or ethical impact of AI systems and to be involved as a tool for ongoing monitoring of the deployment of AI systems.

Most notably, the Guidelines also highlight the right for individuals to obtain information on the reasoning underlying AI data processing operations applied to them. Indeed, this refers to Convention 108's most recent iteration, which outlines that, in the context of automated decision-making systems, data subjects should be entitled to know the "reasoning underlying the processing of data, including the consequences of such a reasoning, which led to any resulting conclusions". This goes further than the provisions of the GDPR that provide for data subjects to receive "meaningful information about the logic involved" in such decisions, as well as the "significance and the envisaged consequences of such processing".

Rather than simply covering an explanation of system functionality, or which processes are performed (i.e. whether any profiling, ranking or data matching occurs), and perhaps which features (e.g. categories of data) are involved in the design of an algorithm, the Convention 108 right is more expansive, extending to understanding why an actual outcome was reached, including after a decision was made. This would require a company to assess and track the way algorithms are trained, and perhaps even re-run decisions with modified or different criteria, in order to be able to diagnose what "led to any resulting conclusions".
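Purely as an illustration of what "re-running decisions with modified criteria" could look like in practice (the scoring rule, features and threshold below are invented for the example and are not drawn from the Guidelines), one crude diagnostic is to re-run a decision with one input changed at a time and record which changes would have flipped the outcome:

```python
from typing import Callable, Dict, List


def flip_analysis(decide: Callable[[Dict[str, float]], bool],
                  subject: Dict[str, float],
                  alternatives: Dict[str, List[float]]) -> Dict[str, List[float]]:
    """Re-run the decision with one feature modified at a time and report
    which alternative values would have changed the original outcome."""
    original = decide(subject)
    flips: Dict[str, List[float]] = {}
    for feature, values in alternatives.items():
        for value in values:
            variant = dict(subject, **{feature: value})
            if decide(variant) != original:
                flips.setdefault(feature, []).append(value)
    return flips


# Hypothetical scoring rule, purely for illustration.
def credit_decision(x: Dict[str, float]) -> bool:
    score = 0.4 * x["income"] / 1000 + 0.6 * (100 - x["debt_ratio"])
    return score > 50


applicant = {"income": 30000, "debt_ratio": 45}
print(flip_analysis(credit_decision, applicant,
                    {"income": [20000, 40000], "debt_ratio": [30, 60]}))
# -> {'debt_ratio': [30]}: lowering the debt ratio is what would have
#    changed this (hypothetical) refusal into an approval.
```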

Not only do the Guidelines refer to this right to information or explanation, but they also allude to the fact that AI systems should allow "meaningful control" by individuals over data processing and its related effects on them and on society. The thinking behind this is that, where provided with the information to do so, data subjects will be able to exercise their other corollary rights under Convention 108 or the GDPR, including the right either not to be affected by solely automated decisions or to challenge their reasoning. As such, organisations should put into place mechanisms for challenging and reviewing automated decisions to ensure a fair and equitable outcome. These will have to integrate either elements of human decision-making or, at least, human intervention over such decisions, on top of organisations' obligations to notify data subjects of their rights. This will serve to provide sufficient assurance of objectivity of process under the GDPR, and will also help streamline any requests for challenging or querying decisions.

The Guidelines are another 'soft law' mechanism that organisations should view as further elaboration on how they can practically comply with data protection and human rights norms in their deployment of AI systems, rather than as a further regulatory step following the GDPR. From this perspective, the Guidelines, with the amended Convention, appear to serve to clarify much of the existing practice on designing transparency into AI processes, which can only serve to make organisations more objective and accountable.

Children’s Data: Consultation on Code of Conduct opens in Ireland

Kaveh Cope-Lahooti

The Irish Data Protection Commission (DPC) has launched a consultation on the processing of children’s personal data with a view to introducing guidance and a Code of Conduct for organisations. Although it will be some time before any steps are taken, it is important that businesses are aware of the key issues involved and to consider developing and investing in their own technology solutions to meet legal requirements under the Data Protection Act and General Data Protection Regulation (GDPR).

Background

On 19 December 2018, the DPC opened a consultation and invited public comment on the processing of children's personal data.

The GDPR's regulation of children's data is open-ended. There is no definitive list of information that must be provided to children (or their parents) before children's data is processed. EU member states may each set the age at which children themselves may consent to the processing of their data for online services, meaning that for children under that age a parent's consent is required. This varies between 13 and 16 across member states, with Ireland taking 16 as the threshold – a marker of its commitment to protecting children.

The DPC intends to publish a Code of Conduct on processing children's data. This has a basis in the Irish Data Protection Act, which encourages the formation of codes of conduct in specific sectoral areas, such as the protection of children's rights, transparency of information and the manner in which parents' consent is to be obtained. The Code will enable the DPC to carry out mandatory monitoring of compliance with it by the controllers or processors which undertake to apply it.

Key issues raised in the consultation and possible solutions are discussed in the following paragraphs.

Age of Consent

The GDPR’s specific regulation of children’s data is largely based on a similar standard and grounds as the US’ Children’s Online Privacy Protection Rule (COPPA), which, broadly speaking, applies to children’s information (for those under the age of 13) collected, used and disclosed by websites, mobile apps, games and advertising networks (among others) online. The GDPR’s requirement is slightly narrower, however, applying to information processed by ‘information society services’ offered to a child – which must ‘normally’ be paid or intended to be paid services. It also covers where these services are only ‘offered’ – i.e. at an early stage, such as where an account creation is initiated. In these circumstances, the online service provider must make “reasonable efforts” to verify that consent is given by the holder of parental responsibility “taking into consideration available technology”.

As discussed, the fledgling nature of the regulation of children's data means there are no prescribed methods of collecting parents' consent – and this is largely what the consultation asks for input on. Many of the attempts to introduce a means of collecting consent have been based on recommendations and practice under COPPA, including those recommended by the US Federal Trade Commission (FTC). In particular, this is the case for the proof of parents' consent, where there is much discussion of what mechanisms organisations must put into place to collect the relevant consent. Clearly, some information is needed to verify this, for example (a rough sketch of how these checks might fit together follows the list below):

  • Some form of age selection process by the user, where possible built in to the website, which must occur before a registration or a payment is made.
  • Where applicable, the identification of the parent may be required to be confirmed. This could be achieved by charging a nominal fee to a registered credit card in the parent’s name.
  • Parents will also need to confirm GDPR-compliant consent via an affirmative action such as signing a consent form, replying to an email or calling a number to confirm, which should be evidenced and auditable by the organisation, if possible.
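Purely by way of illustration, the checks above could hang together roughly as follows. This is a Python sketch under stated assumptions: the nominal-fee charge is stubbed out, and every function and field name is hypothetical rather than a reference to any real payment or identity API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

DIGITAL_AGE_OF_CONSENT = 16  # Ireland's threshold under the GDPR


@dataclass
class ConsentRecord:
    child_account: str
    parent_email: str
    verified_by: str
    confirmed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())


def charge_nominal_fee(card_token: str) -> bool:
    """Stub for a nominal charge to a card in the parent's name.
    A real integration would call a payment provider here."""
    return bool(card_token)


def register_child(declared_age: int, child_account: str,
                   parent_email: str, card_token: str,
                   parent_confirmed: bool) -> ConsentRecord:
    if declared_age >= DIGITAL_AGE_OF_CONSENT:
        # The young person can consent directly; no parental verification needed.
        return ConsentRecord(child_account, parent_email="", verified_by="self")
    if not charge_nominal_fee(card_token):
        raise PermissionError("parental identity could not be verified")
    if not parent_confirmed:  # e.g. the parent replied to an email or signed a form
        raise PermissionError("no affirmative parental consent recorded")
    # Keep an auditable record so consent can later be evidenced or withdrawn.
    return ConsentRecord(child_account, parent_email, verified_by="card_charge")


record = register_child(12, "child-001", "parent@example.com",
                        card_token="tok_demo", parent_confirmed=True)
print(record)
```

Retaining the resulting record also supports the withdrawal and deletion scenario discussed below, since the organisation can later match a parent's request against the stored verification details.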

In particular, it should be remembered that there are various other methods to prove or collect the required information, but each has its advantages and disadvantages. For example, it has been discussed that collecting a photograph of the parent's ID (such as a passport copy or other ID) would violate data minimisation requirements, even if deleted immediately. Arguably, on top of the potential to violate data minimisation requirements, the collection of this ID for verification purposes would impose a practical and administrative burden on organisations far greater than charging a nominal fee to a credit card would. In particular, the verification of the ID would require sophisticated software or human intervention.

Most notably, it should be remembered that organisations should also put into place a method for the parent's consent to be withdrawn, or the personal data deleted, in accordance with the rights under the GDPR. This would involve saving the parent's details – such as their email address – along with the fact that they had previously been verified as a parent, in order to process such a request in the future.

Notification and Transparency

As the DPC notes, transparency is particularly important in the context of children’s data – such as through notices needing to be directed at children. An age-appropriate notice would have a broader application and, under the GDPR, is required regardless of whether the requirement for parent’s consent for paid online services applies or not. In particular, the DPC asks whether two separate sets of transparency information should be provided that are each tailored according to the relevant audience i.e. one for the parent and one for the child (i.e. aged under 16). Other issues that have been discussed around this include the fact that a child aged 15 will have a different capacity for understanding than a child under 10 years old.

A solution to this would be for the organisation to consider the primary age demographic it expects to access, or be targeted by, its website and to draft an appropriate notice. An age-selection method on a website's homepage (to generate an appropriate notice, as in the sketch below) may be another method, although collecting such information so early may violate data minimisation requirements and may be cumbersome for many viewers, as well as for website owners to implement.
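A minimal sketch of that age-selection approach follows; the age bands and notice wording are invented purely for illustration and would need to reflect the organisation's actual audience and processing.

```python
def notice_for_age(declared_age: int) -> str:
    """Pick a transparency notice written for the declared age band."""
    if declared_age < 10:
        return ("We keep a little bit of information about you so the site "
                "works. Ask a grown-up to read more.")
    if declared_age < 16:
        return ("We collect your username and activity to run the service. "
                "Your parent or guardian can see it or ask us to delete it.")
    return ("Full privacy notice: categories of data, purposes, retention "
            "periods and your rights under the GDPR.")


print(notice_for_age(12))
```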

In relation to notifying parents, as explained previously, parents' consent must meet the GDPR's requirements, including that consent be 'informed'. One solution, such as the FTC recommends in relation to COPPA, could involve providing parents with information on why they are being contacted, which information was collected from the child, and the fact that the parents' consent is required, alongside a link to the organisation's Privacy Notice.

Children’s Rights

The consultation also touches on the issue of children’s rights, including the new rights under the GDPR, and how these are to be exercised. For example, in Canada, the Office of the Privacy Commission has proposed that, where information about children is posted online (by themselves or others), the right to remove it should be ‘as close to absolute as possible’, a sentiment echoed in Article 17(1)(f) of the GDPR. Ireland has taken a similar approach, for example, whereby under the Irish Data Protection Act, there is a stronger ‘right to be forgotten’. This applies to children whose personal data has been processed for information society services, without the need to prove that the processing was no longer necessary or unlawful, or the legitimate interest is unjustified, etc. As such, where the right to erasure does not apply absolutely (i.e. where consent is not relied on) organisations should be prepared to make such an objective assessment, considering (for example) whether the child or the guardian would have been aware of the use of the child’s personal data, and whether it was used in particularly invasive instances of processing.

Profiling

Additionally, whilst there has been some discussion among European supervisory authorities that children's data will be particularly protected under data protection legislation, Ireland has gone further than this to protect children's rights. In particular, the Data Protection Act 2018 has made it an offence to process children's data (children, for this specific section, meaning those aged under 18, not under 13) for direct marketing, profiling or micro-targeting, regardless of consent. This has very wide implications, since profiling could simply be carried out by marketing or retail companies to tailor products and services to their child customers. On the broadest reading, it would also exclude using marketing factors that are likely to specifically target children, such as an online user's interest in toys or their browsing habits.

The consultation considers instances where profiling involves specifically targeting children, particularly as guidance from supervisory authorities in several jurisdictions has held that automated profiling that specifically targets vulnerable groups should be prohibited. The DPC invites comments on how this can be balanced against an organisation's legitimate interests. In practice, many organisations are already attempting to err on the side of caution by excluding factors related to children from their profiling.

Next Steps

The consultation touches on several other issues – such as how online service providers should ensure they comply with the different ages of digital consent in different EU states – for which there are various possible legal, policy or technological solutions. The consultation is open for submissions until 1 March 2019, although there is a long way to go after this before businesses have any certainty over their procedures. After publishing the consultation submissions, the DPC will publish guidance and work with industry towards a Code of Conduct for organisations on measures and methods to be taken to comply with provisions in the Data Protection Act and GDPR relating to children's data. However, these will invariably be open to interpretation, meaning there is scope for businesses to develop and invest in their own technology solutions to meet these legal demands.