In three months’ time, the General Data Protection Regulation (GDPR) will become applicable to many, if not all, data processing activities with which living individuals can be associated. Businesses operating in Europe have had about two years to prepare for this change. As readers know, even though the GDPR is a lengthy piece of legislation, additional interpretative guidance is very much welcome to aid understanding of the ‘links’ between key concepts arising across the different pieces of the legislative ‘jigsaw’. The influential EU Article 29 Data Protection Working Party (Art. 29 WP) has therefore been working hard these past few months to give context to some of the most important GDPR requirements, e.g. by publishing guidelines on issues such as data protection impact assessments, data protection officers, the right to data portability, automated individual decision-making and profiling, personal data breach notification, consent, and transparency.
For newcomers to the field, excited about working with data (including personal data) to build and develop smart algorithmic systems, getting simple answers to key questions about how to comply with the GDPR is not always easy. [The same is probably often true for avid readers of the GDPR…]
What if one had only 1000 words to explain to businesses wanting to innovate with data relating to people what the GDPR is about? What would the message be?
For the sake of the thought exercise attempted here, we should probably assume that data innovation, in the main, implies the repurposing of data: the data is first collected for a specified purpose and is then processed for a different purpose, one that most likely was not anticipated by the data controller at the initial stage of collection.
One of the first questions to pose in that context is whether a new legal (‘lawful’) basis is needed to comply with EU data protection law for this change of purpose. Under GDPR Article 6, the principle of lawfulness demands that at least one legal basis (chosen from a limited list of options) be identified to justify a personal data processing activity: consent; performance of a contract, or steps necessary to enter into a contract; compliance with a legal obligation; protection of the vital interests of the data subject; performance of a task carried out in the public interest or in the exercise of official authority vested in the controller; or necessity for the purposes of the legitimate interests pursued by the data controller, or by a third party, as long as those interests are not overridden by the interests or fundamental rights and freedoms of the data subjects.
Reading Article 6(4) GDPR and the latest version of the Art. 29 WP guidelines on consent (‘WP259’) in conjunction, it appears that if the initial legal basis relied upon to justify personal data processing is consent, the only way to comply with the principle of lawfulness at the second stage (the data analytics stage) is to seek consent again.
This is what Art. 29 WP writes at p. 12 of WP259: “If a controller processes data based on consent and wishes to process the data for a new purpose, the controller needs to seek a new consent from the data subject for the new processing purpose.”
Nevertheless, Art. 29 WP is mindful of the fact that the law is changing and the GDPR introduces stricter conditions for obtaining informed consent where it is being relied upon by a data controller. It therefore adds (p. 30): “If a controller finds that the consent previously obtained under the old legislation will not meet the standard of GDPR consent, then controllers must assess whether the processing may be based on a different lawful basis, taking into account the conditions set by the GDPR. However, this is a one off situation as controllers are moving from applying the Directive to applying the GDPR. Under the GDPR, it is not possible to swap between one lawful basis and another.”
GDPR Art. 6(4) and Recital 50 seem to confirm that – following the GDPR coming into force – if the initial legal basis to be relied upon to justify processing personal data is consent, the doctrine of (in)compatibility of purposes (to ensure compliance with the so-called principle of ‘purpose limitation’) is not applicable. [Note that there has not always been consensus on the exact effects of the doctrine of (in)compatibility of purposes, see my previous post here, but Recital 50 now clarifies that “[t]he processing of personal data for purposes other than those for which the personal data were initially collected should be allowed only where the processing is compatible with the purposes for which the personal data were initially collected. In such a case, no legal basis separate from that which allowed the collection of the personal data is required.”].
But then, even if one is ready to seek consent again at the data analytics stage, could data subjects really be said to be capable of providing meaningful consent to such secondary practices? Article 6(4) provides that consent can only be given in relation to specific purposes.
Recital 33 GDPR suggests that, for scientific research purposes, the principle of purpose limitation should be relaxed. This is because, “It is often not possible to fully identify the purpose of personal data processing for scientific research purposes at the time of data collection. Therefore, data subjects should be allowed to give their consent to certain areas of scientific research when in keeping with recognised ethical standards for scientific research. Data subjects should have the opportunity to give their consent only to certain areas of research or parts of research projects to the extent allowed by the intended purpose.”
Although the GDPR seems to adopt a broad definition of scientific research, which covers “technological development and demonstration, fundamental research, applied research and privately funded research” (Recital 159), this relaxation by definition only applies to scientific research. Data analytics practices are not necessarily tantamount to scientific research activities. In fact, in most cases they do not involve researchers at all.
This explains why the GDPR uses a different term to describe data analytics: that of ‘general analysis.’ In Recital 29, one reads as follows:
“In order to create incentives to apply pseudonymisation when processing personal data, measures of pseudonymisation should, whilst allowing general analysis, be possible within the same controller when that controller has taken technical and organisational measures necessary to ensure, for the processing concerned, that this Regulation is implemented, and that additional information for attributing the personal data to a specific data subject is kept separately.”
What could Recital 29 mean?
It seems to suggest that, assuming the initial data controller also performs the secondary ‘general analysis’, the new purpose pursued at this later stage should be deemed compatible with the initial purpose at least where a process of pseudonymisation (see Article 4(5) for the GDPR definition of ‘pseudonymisation’ matching the description in Recital 29) is applied to the personal data post-collection. Therefore, could we also surmise – logically – that, assuming consent was not the initial legal basis relied upon to justify the collection of the personal data originally, no new legal basis would be needed to justify its secondary usage?
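To make the mechanics of Article 4(5) more concrete, here is a minimal sketch of what pseudonymisation post-collection could look like in practice. The field names, the keyed-hash approach, and the sample record are all illustrative assumptions, not anything prescribed by the Regulation; the point is simply that the direct identifier is replaced by a pseudonym, while the ‘additional information’ needed to re-attribute the data to a specific data subject is kept separately, under the controller’s control.

```python
import hmac
import hashlib

# Secret key held by the controller: part of the 'additional information'
# (Article 4(5)) that must be kept separately from the shared data set.
SECRET_KEY = b"controller-held-key"  # illustrative only

def pseudonymise(record: dict, identifier_field: str = "email") -> tuple[dict, dict]:
    """Replace a direct identifier with a keyed pseudonym.

    Returns (a) the pseudonymised record, which can be passed on for
    'general analysis', and (b) the separate lookup entry that only the
    controller retains, allowing re-attribution to a data subject.
    """
    identifier = record[identifier_field]
    pseudonym = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]
    shared = {k: v for k, v in record.items() if k != identifier_field}
    shared["subject_id"] = pseudonym
    lookup = {pseudonym: identifier}  # kept separately, never shared
    return shared, lookup

record = {"email": "alice@example.com", "trips": 12, "avg_speed_kmh": 47.3}
shared, lookup = pseudonymise(record)
```

Note that the shared record still contains individual-level data points (one row per data subject); pseudonymisation reduces the risk of re-identification by the recipient but does not, by itself, turn personal data into anonymous data.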
By contrast, what if the secondary ‘general analysis’ of that same personal data was actually to be undertaken by a third party, meaning that the data controller would transfer the data set to a recipient [e.g. a start-up] to carry out the innovatory analytics job? Would the old and new purposes necessarily be incompatible? If the answer is yes, a new legal basis would then be needed to justify the secondary processing at the data analytics stage.
What should a start-up receiving personal data from a data provider, to develop a solution and sell it back to the data provider, really do then?
At a minimum, the start-up should probably check what the legal basis for the repurposing of the data is likely to be, BUT ALSO whether the initial legal basis relied upon by the data provider in collecting/creating the personal data was consent obtained from the data subject, or not.
Taking this analysis one step further, assuming there is an argument [which is not straightforward as explained above] that the processing of personal data for general analysis (secondary analytics) purposes was compatible with the initial purpose justifying the original collection –even if the general analysis is to be undertaken by a third party on behalf of the data controller – that third party should in principle receive the data after a pseudonymisation process has been applied to the personal data.
Start-ups should therefore specifically ask for pseudonymised data from the provider of the data they will be experimenting with, whenever possible.
This makes particular sense in the light of Article 11 GDPR. Alluding to a state of personal data very similar to that of data which has undergone the process of GDPR pseudonymisation, Article 11 expressly states that if “the controller is able to demonstrate that it is not in a position to identify the data subject, the controller shall inform the data subject accordingly, if possible. In such cases, Articles 15 to 20 shall not apply except where the data subject, for the purpose of exercising his or her rights under those articles, provides additional information enabling his or her identification.” [As a reminder, Articles 15 to 20 GDPR cover the data subject’s rights of access, rectification, erasure, restriction of processing, and data portability].
Surprisingly, the right to object is not exempted under Article 11, as it is governed by Article 21, whereas Article 12(2) expressly states: “In the cases referred to in Article 11(2), the controller shall not refuse to act on the request of the data subject for exercising his or her rights under Articles 15 to 22, unless the controller demonstrates that it is not in a position to identify the data subject.” [How do we explain this?]
But that is probably not all that we can infer from the above logic being suggested.
Under GDPR Art. 4(4) ‘profiling’ means “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.”
Reading the Art. 29 WP guidelines on automated individual decision-making and profiling (‘WP251’, recently adopted in final form here), it appears clear that the Art. 29 WP envisages that the secondary data analytics stage, i.e. analysis to identify correlations in personal data sets at a later point in time, is covered by this GDPR definition of profiling. Specifically, it alludes to the fact that analysis to identify correlations would/should fall under the GDPR definition of profiling (p. 7).
As a result, if the data shared retains individual level data points [a fact that is consistent with the process of pseudonymisation being applied to personal data precisely to minimise the risk of harm arising to data subjects consequential to later processing activities], there is an argument that the recipient responsible for the data analytics effort may yet be determined to be engaging in profiling activities whenever it looks for patterns of commonalities. [The way the ultimate purpose of the set of processing activities is described could make the difference. E.g. “I am analysing data generated by driverless cars to identify where the most accidents take place and adapt road signs”].
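To make the driverless-car example concrete, a hypothetical ‘general analysis’ over pseudonymised, individual-level records might look as follows. The data, field names, and locations are invented for illustration; the sketch simply shows that even where the ultimate output is aggregate (an accident hotspot), the processing itself operates on records each relating to one data subject, which is why it may still amount to profiling in the Art. 29 WP’s reading.

```python
from collections import Counter

# Pseudonymised accident records: no direct identifiers,
# but each row still relates to one (pseudonymous) data subject.
records = [
    {"subject_id": "a1f3", "location": "Junction 4"},
    {"subject_id": "b7c2", "location": "Junction 4"},
    {"subject_id": "c9d8", "location": "Ring Road"},
]

# Aggregate question: where do most accidents take place?
accidents_by_location = Counter(r["location"] for r in records)
hotspot, count = accidents_by_location.most_common(1)[0]
# hotspot -> "Junction 4" in this toy data set: an aggregate answer
# produced by processing individual-level personal data points.
```

How the ultimate purpose is framed (“adapt road signs” versus “evaluate drivers”) would not change the shape of this computation, which is precisely why the characterisation of such analysis matters.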
As profiling activities require special care under the GDPR, in particular if such activities are followed by individual decisions taken as a result of profiles created (see GDPR Art.22 and Art. 35, for example), start-ups could find it useful to check with their data providers whether a data protection impact assessment has been undertaken to make sure the future risks for the individual data subjects – those at the very centre of the data analytics ‘story’ – have been taken into account and mitigated at an early stage.
What is the moral of the story? Data providers and start-ups should probably work closely together when doing people-centric data innovation or…. 1000 words is never enough to tell a data protection story!
This article was first published on Peep Beep!, a blog dedicated to information law.