Legitimate Interests: Balancing your business operations with individual rights

Kaveh Cope-Lahooti

Introduction

Even before the GDPR, legitimate interests were one of the most frequently relied-upon bases for organisations to justify processing personal data. However, the GDPR has placed increased obligations and scrutiny on this practice. Particularly in industries whose business models are increasingly built around the use of personal information, understanding where organisations can lawfully rely on legitimate interests, and where the rights of individuals will be considered “overriding”, is key to compliance.

Background

Legitimate interests have been used as a flexible basis to justify data-driven operations since the Data Protection Directive in 1995. The basis rose to the fore when, in the Google Spain case (Case C-131/12 Google Spain SL, Google Inc. v Agencia Española de Protección de Datos (AEPD), Mario Costeja González, judgment of 13 May 2014), the Court of Justice of the European Union (CJEU) considered the balance between the legitimate interests of internet search engine providers, and of internet users in receiving and having access to information in search results, on the one hand, and the rights of the data subject (here Mr González) to privacy on the other. Weighing these competing rights, the court considered both the centrality of the data processing to the commercial activity of a search engine and the sensitivity of the information and the public profile of the data subject.

Changes under the GDPR

Under the GDPR, the legal test for legitimate interests places the onus on the controller to demonstrate that the interests or the fundamental rights and freedoms of the data subject do not “override” its interests – where formerly such processing only needed not to be “unwarranted” – a much higher hurdle. On top of this new balancing test, the legitimate interests relied upon must now be published in a privacy notice, and individuals are able to request specific information on the legitimate interests assessment conducted, which increases the obligations and scrutiny on the controller to ensure a proper risk analysis is carried out.

On the plus side for companies, examples of legitimate interests are now specifically provided under the GDPR, and are being elaborated on in supervisory authority guidance – including, for example, the prevention and detection of fraud, network security and employee monitoring. However, the weighting given to the priority of either the data subject’s or the controller’s interests – and consequently the protective safeguards that must be put in place, such as increased notice to data subjects or a reduction in the scope of processing – for certain ‘legitimate’ activities such as employee monitoring will differ vastly across EU member states.

Conclusion

A year on from the GDPR’s entry into force, the circumstances in which legitimate interests can be relied upon are still evolving, and, as an area in which case law and best practice are likely to play a huge part, all organisations should keep themselves updated.

Further regulation for AI? European Commission releases ethics guidelines

Kaveh Cope-Lahooti

On 8 April, the European Commission’s High-Level Expert Group on AI released its Ethics Guidelines for Trustworthy AI, outlining various ethical and moral principles for ‘trustworthy AI’, including that such systems be lawful (including in respect of the data used therein), ethical (in terms of compliance with core principles and values) and robust (including in meeting central security requirements).

The introduction of the guidelines follows a public consultation process between December and February. In particular, the guidelines build on other frameworks for autonomous and machine-learning solutions, such as those launched by the European Commission, the Institute of Electrical and Electronics Engineers (IEEE), the Institute for Ethical AI and Machine Learning, several European data protection authorities, and private consultancies such as PwC, Deloitte and Gemserv. As such, they aim to form a framework for ensuring AI systems are designed and deployed in an accountable fashion, which will allow organisations to test and develop approaches to ethical AI, and which could eventually be translated into hard regulation.

Risk Management

The Ethics Guidelines are built around several concepts that aim to introduce the necessary checks and assurance into AI systems. The guidelines focus on the need to address the risks associated with AI systems, and the impacts they can have on individuals or communities – as part of what the High-Level Expert Group deems the ‘prevention of harm’ principle. Such effects can range from individuals being invasively targeted by advertising or denied credit or loans on the basis of inaccurate data, to pedestrians being harmed by self-driving cars.

Organisations can ease this process through an early data mapping exercise, focusing on the identification and sourcing of data, the selection of algorithmic models, the training and testing of those models, their use and deployment, and then ongoing monitoring. Through this, any issues with data accuracy and quality should be tracked and identified from the start, before they create problems, in the form of skewed or biased decisions, later in the operation of systems. Moreover, training and testing should include key checks for harm to individuals or groups of individuals, and the potential for bias should be considered – including bias inherent in the data sets used or in the functions applied to them.
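By way of illustration only, a minimal sketch of such a check is set out below, assuming a tabular training set with a hypothetical protected-attribute column (age_band) and a binary outcome label (approved); the column names and the tolerance chosen are assumptions for the example, not regulatory thresholds.

```python
import pandas as pd

def outcome_rate_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes per group (e.g. loan approvals per age band)."""
    return df.groupby(group_col)[outcome_col].mean()

def flag_potential_bias(df: pd.DataFrame, group_col: str, outcome_col: str,
                        max_gap: float = 0.2) -> bool:
    """Flag the data set for human review if the gap between the best- and
    worst-treated groups exceeds a tolerance chosen by the organisation."""
    rates = outcome_rate_by_group(df, group_col, outcome_col)
    return (rates.max() - rates.min()) > max_gap

# Hypothetical usage:
# training = pd.read_csv("training_data.csv")
# if flag_potential_bias(training, group_col="age_band", outcome_col="approved"):
#     print("Review data set: outcome rates differ materially across groups.")
```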

Departments wishing to deploy or use AI systems should work with their development teams to ensure all of these tests can be performed prior to use or sign-off. However, it should not end there – systems should be subject to rigorous ongoing testing and retesting, including to see whether the system’s outcomes or outputs are meeting its objectives – such as, for example, whether user satisfaction is actually being achieved by a system that provides automated customer service responses.

Agency

Additionally, central to the High-Level Expert Group’s approach to the regulation of AI is the concept of ‘human agency’ – the idea that individuals, including data subjects, should be autonomous and able to determine how organisations control their data and how decisions affect them. The core concept of ‘agency’ builds upon individual rights under the Council of Europe ‘Convention 108’ and the GDPR – such as the rights to access, correct, amend and restrict the processing of their data, and even not to be subject to automated decisions that will have legal or similarly significant effects on them – unless necessary for a contract, permitted by law, or based on explicit consent. As such, organisations will have to build into AI systems the ability for individuals to intervene in the processes, analysis and decisions made by those systems – including to adjust their preferences, choose which data they disclose and amend when they are tracked. However, organisations should also limit the harmful effects of AI – where ‘similarly significant’ effects are interpreted to mean negative impacts on human dignity, effects that target vulnerable groups, and the like. In particular, both this and adherence to the concept of human agency can be achieved by keeping a ‘human in the loop’, which refers to the governance or design capability for human intervention during the system’s operation, including to monitor or overrule AI decisions, so that such systems act as a complement, not a replacement, to human decision-making.
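A minimal sketch of what a ‘human in the loop’ check might look like in practice is set out below; the field names, confidence threshold and review callback are hypothetical assumptions for illustration, not a prescribed design.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    subject_id: str
    outcome: str              # e.g. "approve" / "decline"
    confidence: float         # model confidence in the outcome
    significant_effect: bool  # e.g. credit refusal, contract termination

def decide(model_output: Decision,
           human_review: Callable[[Decision], Decision],
           confidence_threshold: float = 0.9) -> Decision:
    """Route the decision to a human reviewer whenever it has legal or
    similarly significant effects, or the model is insufficiently confident;
    the reviewer may confirm or overrule the automated outcome."""
    if model_output.significant_effect or model_output.confidence < confidence_threshold:
        return human_review(model_output)
    return model_output
```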

Transparency

Another increasingly mooted aspect of artificial intelligence systems is the issue of transparency, which obliges organisations to introduce personnel and mechanisms not just to interpret algorithms but also to respond to potential requests about, and challenges to, their results. Transparency also involves, for example, designing public-facing interfaces that allow customers to see how their data is used, including when collecting consent and/or personal data from individuals. Transparency is also closely connected to the element of ‘explainability’, which, in Convention 108’s most recent iteration, provides that data subjects should be entitled to know the “reasoning underlying the processing of data, including the consequences of the reasoning [of Artificial Intelligence systems], which led to any resulting conclusions”. This goes further than the provisions of the GDPR in that, being a more expansive right, it extends to understanding why an actual outcome was reached, including after a decision was made, rather than simply focusing on the basis for decisions to be made.

Faced with the difficulty of making AI systems explainable, there are two other ways organisations can perform the necessary due diligence. Firstly, documenting all decisions that a system makes – together with its functions and the selection of data and data sources – can contribute to traceability, even when full transparency cannot be achieved due to the intrinsic nature of the algorithm or machine-learning process. Secondly, publicly committing to a Code of Ethics covering how data will be sourced and used, and the values and aims of the systems, can also help with public engagement and reception.
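As a purely illustrative sketch of the first approach, a decision record of the kind described could be captured as follows; the record fields and the simple file-based storage are assumptions, and a real deployment would choose structures suited to its own systems.

```python
import json
import time
import uuid
from typing import Optional

def log_decision(audit_log_path: str, *, model_version: str, data_sources: list,
                 inputs: dict, outcome: str, reviewer: Optional[str] = None) -> str:
    """Append a record of an automated decision: which data sources and inputs
    were used, which model version produced the outcome, and who (if anyone)
    reviewed it. Returns the record identifier for later reference."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "data_sources": data_sources,
        "inputs": inputs,
        "outcome": outcome,
        "reviewer": reviewer,
    }
    with open(audit_log_path, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(record) + "\n")
    return record["id"]

# Hypothetical usage:
# log_decision("decisions.log", model_version="credit-model-1.2",
#              data_sources=["crm", "bureau_feed"],
#              inputs={"applicant_id": "A-42"}, outcome="decline")
```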

Participation

Lastly, the guidelines discuss the participation of different stakeholders in the design of AI systems, including through both internal and external governance frameworks. It should be remembered that individual departments will often be the information owners responsible for the operational decisions governing AI systems, and so should liaise with developers, system providers and other relevant third parties to ensure their requirements are met. At a more strategic level, organisations should involve executive sponsors or management in approving AI systems that are likely to have an impact on the workforce or involve significantly disruptive effects on operations. Moreover, where AI systems are likely to raise risks – legal, social, reputational or financial – senior management will need to consider and approve the ethical and goal-orientated trade-offs made for systems during their development.

To support this, organisations can appoint ethics panels, committees or boards to lead ethics dialogue within their organisations, seeking approaches that are both aspirational and value-based. Within these groups, for example, the High-Level Expert Group emphasises that designers and technologists should engage in cross-disciplinary exchanges with ethicists, lawyers and sociologists to understand the impact of AI solutions. However, whichever structure is established, the group or panel has to have ‘teeth’ to be able to accomplish effective oversight and management. This is a particularly contemporary issue, given the recent failure of Google’s ethics board, which shut down following a backlash over both its lack of effectiveness and the composition and background of some of its members. As such, the group should be consulted during deployment of the system, particularly over its goals and potential effects, and regularly informed of the outcomes of monitoring of the solution deployed throughout the lifecycle of the system.

Conclusion

The High-Level Expert Group’s guidelines bring a very detailed discussion of evolving regulatory norms governing artificial intelligence systems. Focusing specifically on the prevention of harm and the protection of individual rights, the framework aims to incorporate checks into the deployment of systems to ensure they are more ethically focused and centred on individuals. Organisations wishing to remain accountable should take advantage of the reputational and compliance benefits of overtly demonstrating that they use data in accountable and fair ways, and that they are committed to delivering operations and services in line with the principles espoused by the guidelines – as the EU is likely to incorporate them into hard law regulating AI systems in the near future.

Some thoughts on the Copyright Directive


Photo by Tobias Tullius on Unsplash

Much has been written about the new EU initiative currently awaiting approval by the Council. Few issues seem to spark such fierce debates within the internet and technology community as those pertaining to copyright enforcement. This is nothing new. What is new, however, is perhaps the extent to which tech giants are able to capitalise on this in order to push their agenda. Propaganda is a strong, loaded term, but it is difficult to find another way to describe what is effectively the targeted lobbying of the general public via corporate communications. In order to understand the current regulatory environment, as well as the extent of the backlash (i.e. its motivations), one must first look back almost 20 years, to the Electronic Commerce Directive.

Two decades, i.e. a millennium in internet years

It’s fairly uncontroversial that the pace of technological change vastly outstrips that of the legislative process. 20 years in real time is basically a millennium in computer years. Nevertheless, the e-Commerce Directive has, until recently, remained the basis for determining intermediary liability. It has been incredibly successful in this regard, largely by immunising ‘information society services’ from liability with regard to the content that flows through their networks. Suffice to say that there is a reason that household tech giants exist, and this is because policing the content they publish and profit immensely from has, for the most part, not fallen on them but instead on charities and those that produce the content (ostensibly not charities). This has made user-uploaded content (UUC) platforms incredibly lucrative. Network effects + free content + ad revenue = billion dollar industry.

Regulatory delay has its costs

The economic adage that there are no solutions, only trade-offs, rings true here: ‘publishers’ (or more accurately, ‘platforms’, since an online intermediary will rarely, if ever, be deemed a publisher) have done very well off the back of content produced by others and with no real responsibility for it. This has far-reaching implications for business but also the wider public. Free speech, fake news and access to content are all relevant. However, now that these platforms have matured, having been given the legal room to do so, the tables are finally beginning to turn. As with all change, and perhaps especially with change that threatens to disrupt billion-dollar businesses, there has been a lot of protest.

None of the foregoing is to say that tech companies don’t provide anything. Many resources are expended in developing whizzy platforms that can keep up with somewhat fickle and impatient consumption patterns. They provide the necessary infrastructure for the ‘new’ media to flourish. I say ‘new’ in inverted commas because while the technology (i.e. the speed, volume and on-demand nature of it all) is new, ultimately the structures for delivering it are not so new (or at least not that different in principle from things like radio, television and telephony). Instead, the safe harbour regime is what cradled and nurtured these nascent entities to build the infrastructure needed, by way of a sort of de-regulatory economic stimulus.

And thus, after 20 years of decline, it is only now that content industries, like the music industry (and perhaps journalism too?), are finally seeing some growth. Obviously not back to pre-internet figures, but finally an uptick. This is quite remarkable given they have still been producing content all these years with tightened belts. The point is that the proceeds of such content have gone to the conduits and not the producers themselves. Why engage in licensing discussions over content if a third-party uploader will provide it to you for free? Expensive and time consuming.

Enter the resistance

So if you were a giant ‘mere conduit’ that is in practice if not in law ‘publishing’ all sorts of content, how best would you resist a change that obliges you to licence and at least kick back some of that revenue to those who created it? How about publishing ‘public information notices’ instead of the ads you would normally show users informing them why a new EU Directive threatens the very nature of the internet and all the memes users hold dear? For someone with extensive research experience, such claims made me chuckle. But for a general user, they are taken for truth and stick with incredible efficiency. It’s quite ironic given that most people online are at once incredibly critical yet nevertheless so willing to believe sensationalised claims.

Needless to say, the claims of ‘censorship’, ‘the end of the internet’ and a ‘ban on memes’ are grossly overblown. For one, banning memes would be like trying to ban books. You could do it, but only in a dystopian world with infinite resources for a (very high tech) law enforcement agency, and only in the offline world. In fact, even the book ban is a bad analogy, because books take a lot more time, effort and intellect to produce than a meme.

Secondly, content filters. Not only do these already exist (see, e.g., YouTube’s Content ID system), but they have not resulted in ‘breaking the internet’ or preventing people from sharing content. It is also unlikely to bring disproportionate costs on the small billion-dollar business that is Alphabet (they could even just license this system to other platforms if they wanted to; not that they would, because they’re a giant monopoly anyway). The Recitals to the Directive are not that ambiguous as to who the ‘filtering’ requirements would apply to either – only the largest of intermediaries, and not start-ups.

Lastly, copyright exceptions. It’s true that automated systems make poor legal analysts. Luckily, we do not live in a Judge Dredd dystopia. As such, there is always an individual behind a takedown notice and systems like Content ID. Counterclaims are possible and will remain so. That’s the entire point of having a legal system at all, rather than what has effectively been a technological wild west for the past 20 years. Will this cause overblocking? No, it will only force free speech ‘advocates’ like YouTube to put their money where their mouth is – they can either license the content or overblock. Either way the onus will be on them, rather than a third party like an actual content producer, to defend the choice to take down the illicit content if they have refused to pay a fair share for its publication. They are, after all, the more technologically sophisticated party, and so should be accountable as such.

A final irony

It’s interesting that the content industries have been side-lined by the public for so long, despite their consumptive appetites for entertainment. It is hard to understand why the copyright industries are so shunned by the general public. My guess is propaganda and bad press. Yes, it has taken some time for enforcement efforts to align with licensing efforts, but much of this is owing to delays in updating the regulatory framework. What ‘copyleft’ protesters and corporations fail to acknowledge, however, is that a lax copyright regime directly helps fund criminals who use the proceeds for behaviours that no one in their right mind would dream of protesting over. Further, in an age where big business is already perceived by the public as having too much power, why are so many in the public willing to hand them even more power over our news and other media by defending their position? They have had their day, or rather their two decades and still will regardless. Let’s allow the pendulum to swing back in favour of creatives again.

Further reading is available here.

— Hernán R. Dutschmann

The Singapore PDPC’s record fines signal growing scrutiny of IT security outsourcing arrangements

Kaveh Cope-Lahooti

The Personal Data Protection Commission (PDPC) of Singapore has recently imposed its highest ever fines on two companies operating in the healthcare sector – a healthcare provider, SingHealth, and its IT provider, IHiS – of SGD 250,000 and SGD 750,000 respectively, for their failure to put in place adequate data security measures, which exacerbated the effects of a cyberattack last year. These fines reflect the fact that the data breach was the worst in Singapore’s history, affecting some 1.5 million patients (in a jurisdiction with around 5.6 million residents). The breach arose after an attacker gained access to the healthcare database by infecting a user’s workstation, obtained login credentials to the database and was able to repeatedly access and copy data.

Drawing on domestic legislation in addition to guidance from supervisory authorities in the EU, Canada and Australia, the PDPC engaged in a detailed analysis of the organisation’s security and operational procedures, including those around its outsourcing arrangements between SingHealth and IHiS. The PDPC’s mature approach provides a valuable lesson to organisations wishing to take similar measures to protect the security of personal data under the GDPR.

Security roles and responsibilities

Similarly to the GDPR (where the standard is to take measures ‘appropriate’ to the risk of the processing and the nature of the personal data), in Singapore both organisations controlling personal data and their outsourced service providers have a duty (and concurrent liability) to take ‘reasonable’ security measures.

The PDPC considered the data security framework that SingHealth had in place, centring its analysis on the responsibilities of its staff and the outsourcing arrangements. The government-owned IT service provider, IHiS, was responsible for hiring and managing IT personnel for most functions, who were deployed to SingHealth. Effectively, these personnel exercised the functions of identifying and reporting suspicious incidents and of providing information to SingHealth’s Board of Directors and Risk Oversight Committee on IT security measures and updates.

The PDPC considered that, because of the outsourcing arrangements, it was not clearly apparent whether SingHealth or IHiS was responsible for the actions of the Group Chief Information Officer (GCIO) and the Cluster Information Security Officer (CISO), who worked at SingHealth but were deployed by IHiS. However, the PDPC drew from an earlier decision which considered that where data processing activities are carried out by an organisation’s external vendor, “the organisation has a supervisory or general role for the protection of the personal data, while the data intermediary has a more direct and specific role in the protection of personal data arising from its direct possession of or control over the personal data”.

In particular, the PDPC considered that SingHealth failed to put in place the necessary operational and governance measures to support IHiS’ services – such as, for example, ensuring that the CISO had a team within SingHealth to provide support during the CISO’s absence. Moreover, focusing on IHiS’ responsibility, the PDPC specifically considered the provider’s practical, rather than organisational, measures – such as its use of anti-virus and anti-malware software, network firewalls and scripts run to monitor the confidentiality and integrity of the SCM database, among other detection methods. It found these did not meet the requisite standard.
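The decision does not set out what such monitoring scripts should contain, but a minimal, purely illustrative sketch of a database integrity check – comparing a fingerprint of each table against a stored baseline – might look like the following (the use of SQLite and the idea of per-table baselines are assumptions for the example, not a description of IHiS’ actual scripts):

```python
import hashlib
import sqlite3

def table_fingerprint(conn: sqlite3.Connection, table: str) -> str:
    """Hash the ordered contents of a table so unexpected changes can be detected."""
    digest = hashlib.sha256()
    for row in conn.execute(f"SELECT * FROM {table} ORDER BY 1"):
        digest.update(repr(row).encode("utf-8"))
    return digest.hexdigest()

def check_integrity(conn: sqlite3.Connection, baselines: dict) -> list:
    """Return the tables whose current fingerprint no longer matches the recorded
    baseline - a prompt for investigation, not in itself proof of a breach."""
    return [table for table, expected in baselines.items()
            if table_fingerprint(conn, table) != expected]
```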

A similar standard could prove useful in outsourcing arrangements under the GDPR, particularly where responsibility for preventing security breaches is to be assigned between two parties that can both be responsible for data security failures. This also brings into play what contractual measures can be introduced to apportion liability between such parties (such as, for example, where the outsourcing organisation may expect its IT provider to take care of all the relevant arrangements).

Liability and contractual measures

The PDPC’s analysis largely concerned the arrangements between SingHealth and IHiS, which it held was a ‘data intermediary’ (effectively a data processor under Singaporean legislation). In this vein, the PDPC suggested that several of the issues around responsibility, particularly between the CISO and GCIO, could have been resolved by the parties signing relevant contractual clauses. The PDPC suggested such clauses should include a variety of measures, many largely similar to those under the GDPR, such as:

  1. controls around the use, return, destruction or deletion of the personal data;
  2. a prohibition on sub-contracting; and
  3. the right of the data user (controller) to audit and inspect how the data processor handles and stores personal data.

The third provision, whilst not expressly required by the GDPR, is a best practice in many controller-processor contracts. In practice, IHiS (including its cloud systems) was subject to an annual audit that was brought to the attention of SingHealth’s senior management. Typically, these provisions serve as a means of ensuring compliance with the above obligations, and also serve to demonstrate that the data controller has done its due diligence on suppliers, which is itself required by the GDPR.

Notably, the PDPC placed less emphasis on the obligations to assist the data user/controller with complying with data subject rights and data protection impact assessments, as such provisions – beyond the rights to notice, access and correction – are not per se in place in Singapore.

However, the PDPC’s recommendations also included novel suggestions, such as:

  1. requirements for the immediate reporting by the data processor of any sign of abnormalities (e.g. the PDPC suggests this could occur where an audit trail shows unusually frequent access of the personal data entrusted to the data processor by a staff member at odd hours) or security breaches.

This appears to be a precursor to breach notification, and is a significantly more demanding obligation than most best practice security provisions in controller-processor agreements in the EU (aside from perhaps those with IT support providers), as it will require processors to monitor their relevant systems – a simple illustration of such monitoring is sketched below.
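As a minimal, hypothetical sketch only – the audit-entry format, working hours and daily threshold are assumptions rather than anything prescribed by the PDPC – such audit-trail monitoring could start from something as simple as:

```python
from collections import Counter
from datetime import datetime

# Each audit entry is assumed to look like:
# {"staff_id": "A123", "record_id": "P-0001", "timestamp": "2019-05-01T02:14:00"}

def flag_unusual_access(audit_entries: list,
                        working_hours: range = range(7, 20),
                        daily_threshold: int = 200) -> set:
    """Flag staff members who access personal data at odd hours or with
    unusually high frequency in a single day - candidates for immediate
    reporting to the controller, not automatic findings of wrongdoing."""
    flagged = set()
    daily_counts = Counter()
    for entry in audit_entries:
        ts = datetime.fromisoformat(entry["timestamp"])
        if ts.hour not in working_hours:
            flagged.add(entry["staff_id"])
        daily_counts[(entry["staff_id"], ts.date())] += 1
    flagged.update(staff for (staff, _), count in daily_counts.items()
                   if count > daily_threshold)
    return flagged
```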

The PDPC held that these measures were largely met by the presence of an outsourcing contract committing IHiS to take appropriate data security measures and by the presence of relevant policies, including standard sub-contractor agreements. The PDPC’s suggestions are a welcome intervention, particularly as controller-processor contracts in the EU are, in practice, beginning to include further provisions – from notification requirements (e.g. for the processor to specify where the data processing can no longer be performed) to obligations limiting the scope of the personal data to be processed by the processor – no doubt heavily influenced by the Privacy Shield onward transfer requirements.

Conclusion

Essentially, the PDPC’s conclusion was that although IHiS did not have adequate security monitoring in place, SingHealth should also have identified that those procedures were not sufficient or that there was insufficient governance in place. The PDPC’s decision, whilst notable for the level of the fines and its detail, is not exceptional in its content – many European supervisory authorities are increasingly examining controllers’ outsourcing arrangements as a key area of compliance. For example, just last week, the Dutch supervisory authority announced it would be asking 30 organisations in the media, energy and trade sectors what agreements they have in place with third parties processing personal data on their behalf. Against this background, the decision can prove useful guidance in tackling the nature of increasingly ubiquitous contracts with IT providers, as organisations look to offload the risk associated with data security.

EDPS issues first ‘Technology Report’ on Smart Glasses

Kaveh Cope-Lahooti & Abhay Soorya

The European Data Protection Supervisor (EDPS), the authority responsible for overseeing the compliance of EU institutions with privacy and data protection norms, recently published an analysis of the deployment of smart glasses in its first technology report, in which it brings to light a variety of market and compliance issues with different systems. In particular, the report examines technology and security issues related to the use of such Internet of Things (IoT) devices.

Smart glasses are wearable, IoT-connected (in most cases) computers that allow the user to interact with their environment whilst also serving as a visual display unit (VDU). Whilst not exactly a widespread technology, they have garnered some attention, in particular in the case of Google Glass, which was subject to scrutiny by several national data protection authorities when it was released. At one end of the spectrum, they simply provide the user with wearable audio and video functionalities, whereas at the other, they can immerse users in virtual or augmented reality surroundings. Smart glasses raise several notable legal and security concerns owing to the fact that:

  1. Sensors can be used to track and record a variety of information about a wearer – including location data, photographic and video images, and audio recordings;
  2. Like other connected technologies, smart glasses may be linked to other interfaces, either locally or via the Internet, such as through WiFi, Bluetooth and others, which can raise security concerns.

Data Protection Considerations

From a data protection perspective, the EDPS’ analysis centres on the scale of personal data that can be collected by wearable devices, and the lack of transparency. The EDPS, for example, refers to the fact that “One of the main concerns regarding smart glasses is their capacity to record video and audio in such a discreet way that the people being recorded are not aware of it”. We have seen this raised with other connected devices, for example in the smart homes sphere, where devices recorded and profiled a resident’s consumption of their utilities, often with no transparent privacy notices in place. This can be exacerbated by the possibility of incidental recording of members of the public through the glasses.

As such, wearable technology manufacturers need to consider the reasonable expectations of the users and of any unwitting data subjects. To address this problem, a Privacy by Design approach would see, or perhaps even necessitate, such collection being highlighted either with a regular, timely notification to the user that the device is recording, or through maintaining limitations, such as those in Google Glass, that keep the standard recording time to one hour.
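At the firmware level, such safeguards could be as simple as the following sketch, in which the capture and notification callbacks are hypothetical stand-ins for a manufacturer’s own APIs and the limits are those discussed above:

```python
import time

RECORDING_LIMIT_SECONDS = 60 * 60   # keep any single recording to one hour
NOTIFY_INTERVAL_SECONDS = 5 * 60    # remind the wearer that recording is on

def record_with_safeguards(start_capture, stop_capture, notify_wearer):
    """Run a capture session that periodically reminds the wearer the device
    is recording and stops automatically once the time limit is reached.
    start_capture, stop_capture and notify_wearer are hypothetical callbacks
    supplied by the device firmware."""
    started = time.monotonic()
    start_capture()
    try:
        while time.monotonic() - started < RECORDING_LIMIT_SECONDS:
            notify_wearer("Recording in progress")
            time.sleep(NOTIFY_INTERVAL_SECONDS)
    finally:
        stop_capture()
```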

Another in-built problem is the lack of user control over how the data can be stored and shared by IoT-connected devices. The EDPS also raises the potential for leveraging the different types of personal data collected by the sensors on ‘connected glasses’ for profiling. One of the notable aspects of smart glasses is that, by their very nature, they collect a significant amount of personal data of all types, simultaneously. The use and collection of video images, alongside audio recordings, can be very intrusive and include compound data – i.e. as well as recordings of people, the devices could be used to scan financial information and sensitive personal data. This could allow organisations to combine such data to create ever more complex maps of users’ behaviour and interests.

As a result, the lack of control and specification of this personal information raises several headaches in terms of data protection, most notably due to the fact that it makes it difficult to give appropriate transparency notices and apply set retention periods or security measures to ill-defined, and potentially indefinite, categories of personal data.

Data Security Considerations

From a security perspective, there are a few impediments affecting smart glasses.

The first is premised upon the functional requirements of the battery. Enabling a long charge life dictates that processing be restricted to a bare minimum; however, this is antithetical to maximum security. The cornerstones of modern security – Integrity, Confidentiality, Availability – find common technical implementations through Hashing, Encryption, and Replay Protection for exchanged messages; all require adequate processing capability to be done well. Smart glasses require small batteries for convenience and, bearing in mind that they must support capabilities such as intelligence, light/sound sensing and user notifications to serve their respective use cases, the balancing act becomes challenging. Very often, we observe security taking a back seat here.
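To make the trade-off concrete, a minimal sketch of a lightweight integrity and replay-protection scheme is shown below – an HMAC computed over the message together with a monotonically increasing counter. This is an illustrative pattern, not a description of any particular vendor’s implementation, and a real design would also need to address key management and encryption:

```python
import hashlib
import hmac

def protect(message: bytes, counter: int, key: bytes):
    """Attach a counter and an HMAC so the receiver can verify integrity
    and reject replayed messages, at modest computational cost."""
    tag = hmac.new(key, counter.to_bytes(8, "big") + message, hashlib.sha256).digest()
    return message, counter, tag

def verify(message: bytes, counter: int, tag: bytes,
           key: bytes, last_seen_counter: int) -> bool:
    """Accept the message only if the HMAC matches and the counter has advanced."""
    expected = hmac.new(key, counter.to_bytes(8, "big") + message, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected) and counter > last_seen_counter
```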

The second is the lack of a mainstream security standard for connected devices. There is no IoT equivalent of a global standard that holds vendors and manufacturers of connected devices accountable in the event of a serious breach. The applicability of common rules is tenuous. Case in point: the GDPR requires automated decision-making to be explainable; however, this is difficult for algorithms that are black-box by design, or that make arbitrary choices to ensure the greatest efficiency. Additionally, such requirements find practical implementation in multitudinous ways, which is unhelpful when trying to ascertain the causality of a breach. Overall, accountability in this space is still limited.

Third is a lack of convergence in the IoT landscape. There is value to be capitalised on if an IoT device can connect across multiple protocols and products, dispersed across geographies. For example, assume that you are on holiday abroad and want to see why the smart camera in your home sounded alarm bells. Doing so requires data to traverse multiple protocols; it may originate from a Z-Wave camera, land beyond the home on WiFi, traverse the waters over 4G, and end up with a local transmission over Bluetooth from your phone to the camera. Every node along the chain is exploitable by a determined black hat actor. Add in the backdrop of emerging protocols – LPWAN, Sigfox, NB-IoT, etc. – alongside the various devices and proprietary implementations, and the picture is an inextricable mess.

As a result, the security of such devices – both physical and in terms of software or data – is currently entering a space without much consensus, transparency, and accountability.

Future Regulation

Like any novel technology, smart glasses were designed with functionality at their heart and privacy and security as incidental considerations, which has led to concern over their data collection and sharing abilities. Whilst the GDPR aims to address several of these concerns – and is increasingly being used as a tool to demand accountability from companies, from the online advertising industry to providers of health and fitness apps – we share the opinion of the EDPS that future regulation such as the ePrivacy Regulation, which may impose stricter requirements around consent and data retention on IoT providers, can help introduce more stringent privacy and security standards.