General Data Protection Regulation Top Ten Issues

The ink is barely dry on the draft, but the  EU General Data Protection Regulation (GDPR) looks set to change the regulatory environment for personal information not just in the EU, but around the world. Its aim is to create a legal infrastructure for the use of personal data that is fit for purpose, both today and in the future.

The GDPR was designed to increase legal certainty with regards to information flows both within the EU’s borders and beyond. It also introduces stronger consumer protections, with requirements for greater transparency and accountability about how data is used by businesses, not-for-profits and governments alike.

This is intended to give individuals increased trust in data practices.  Consumer research in the last few years has shown consistently high levels of concern and lack of trust in this area, and this is believed to be a potential brake on the future growth of digital technologies.

However, in order to achieve these goals the GDPR does come with some stings in its tail. It places much greater requirements on businesses to communicate effectively with customers, and obtain much clearer consent for the use of their data.  Organisations also have to provide customer choice mechanisms, and there is a greater emphasis on documenting data processing activity. And then of course there are the fines.

At over 200 pages it is a very wide-ranging instrument.  However, for those who haven't had time to read it yet, here are what we think will be the top 10 issues for most organisations.

1.  A broader definition of Personal Data

As we predicted earlier, the scope of what constitutes 'personal data' has explicitly been broadened to include any information 'relating to' an individual. This specifically includes 'online identifiers', so cookies and the advertising IDs seen in the mobile ecosystem will be caught, along with anything that contributes to identifying an individual or links to such identifying information. This has widespread implications for online tracking in particular.
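To make this concrete, here is a purely hypothetical tracking record with the kind of identifiers in question. Nothing in it is a name or address, yet under the broadened definition it can still be personal data because the identifiers together relate to an identifiable individual (the field names below are illustrative only):

```python
# Hypothetical tracking record: no name, but several online identifiers.
tracking_record = {
    "cookie_id": "a1b2c3d4e5",                                   # online identifier
    "mobile_advertising_id": "AEBE52E7-03EE-455A-B3C4-E57283966239",
    "ip_address": "203.0.113.42",
    "pages_viewed": ["/pricing", "/checkout"],
}

def looks_like_personal_data(record: dict) -> bool:
    """Rough check: does the record contain identifiers that could be
    linked back to an individual?"""
    identifier_keys = {"cookie_id", "mobile_advertising_id", "ip_address",
                       "device_fingerprint", "account_id"}
    return any(key in record for key in identifier_keys)

print(looks_like_personal_data(tracking_record))  # True
```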

2.  A higher bar for consent

Whilst the final text shied away from explicit consent as a requirement, except when special categories of (sensitive) data are concerned, there is still much emphasis on gaining consent through active user mechanisms like tick boxes.

A key part of the test of the validity of consent is whether consumers understand what they are agreeing to, and are given a meaningful choice. There is also a significant shift in the burden of proof.  You will need to be able to provide evidence that you obtained consent from specific data subjects, which is going to require much better record keeping for many organisations.
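The GDPR does not prescribe what that evidence should look like. Purely as a sketch, a consent record along the following lines would capture the details most likely to matter: who consented, to what wording, when, and through what mechanism (all names and fields here are our own illustration, not anything taken from the text):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """Hypothetical evidence of consent from a specific data subject."""
    subject_id: str            # internal reference to the data subject
    purpose: str               # what the data will be used for
    consent_text_version: str  # exact wording shown at the time
    mechanism: str             # how consent was given
    obtained_at: datetime      # when consent was given
    withdrawn_at: Optional[datetime] = None  # set if consent is later withdrawn

record = ConsentRecord(
    subject_id="customer-0042",
    purpose="email marketing",
    consent_text_version="2016-01-v3",
    mechanism="unticked checkbox, actively ticked by the user",
    obtained_at=datetime.now(timezone.utc),
)
```

Keeping records in a structured form like this also makes it straightforward to answer the question "show me the consent you hold for this person" when a regulator or data subject asks.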

3.  Data Protection Officers

Although not a universal requirement, many organisations will be required to appoint a Data Protection Officer (DPO) to oversee data uses and ensure compliance with the law. DPOs will be mandatory in the public sector; for private sector organisations the key test will be whether the organisation is involved in "systematic monitoring of data subjects on a large scale", though it is not yet clear how 'large scale' will be interpreted.

Earlier, more detailed requirements for the skills and experience of the DPO, and guarantees over their employment, have been dropped, but a key issue in the short to medium term will be a lack of the right people to fill such roles.

DPOs can, however, be outsourced, which may create a market for new services, especially to cater for the needs of smaller businesses.  The DPO responsibilities can also be given to someone alongside other work within the organisation, as long as this does not create a conflict of interest.  So training existing staff for the role could be a viable option for many.

4.  Transparency and Accountability

The GDPR scraps the need for controllers to register with their Data Protection Authority (DPA), but replaces this with a requirement to both better inform data subjects about practices and rights, and to keep records that can be made available on request – such as in the event of a data breach or a compliance complaint.  Such records are about demonstrating that the organisation has thought through the impact of its systems and processes, and made informed choices about how to comply with the GDPR.  The Data Protection or Privacy Impact Assessment (PIA) is one example of such documentation.  It is intended that a PIA will show that an organisation has considered the risks associated with its particular personal data practices, and taken reasonable steps to control or mitigate them.
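No single format is mandated for such records. As a rough illustration only, a minimal record of a single processing activity might capture fields along these lines (the field names are ours, not taken from the Regulation):

```python
# Hypothetical record of one processing activity, of the kind that could be
# produced on request alongside a PIA.
processing_activity = {
    "name": "Customer order fulfilment",
    "purpose": "Delivering goods purchased on the website",
    "categories_of_data": ["name", "postal address", "order history"],
    "categories_of_subjects": ["customers"],
    "recipients": ["courier company"],
    "retention_period": "6 years after last order",
    "security_measures": ["encryption at rest", "role-based access control"],
    "pia_reference": "PIA-2016-007",  # link to the associated impact assessment
}
```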

There are also new requirements on the level of detail that organisations must provide to data subjects about their practices, as well as a need to make sure that this information is both accessible and easy to understand. In particular there is a need to explain the logic behind decisions made on the basis of analysing personal data – which may have particular significance in some sectors that have relied on such processes being largely secret. Organisations are also expected to inform subjects about their rights and how to exercise them.

5.  Data Protection by Design and Default

Although references to this have been cut back in comparison with earlier versions of the text, the GDPR requires that systems and processes be designed with the principles of data protection in mind. Particular emphasis is placed on collecting only the data necessary to fulfil specific purposes, discarding it when it is no longer required, and protecting data subject rights.

It also sets up the possibility for the development of certifications and codes of practice that organisations can follow to help meet these requirements.  Keep an eye out for these as they develop.  In particular we expect DPAs to get involved in this area: they will be losing their registration fees and will therefore need new sources of income.  In the UK the Information Commissioner's Office (ICO) has already been developing this idea, so expect it to continue. Trade bodies are also likely to have a role to play here.

6.  The Right to Erasure and Data Portability

These new data subject rights are likely to pose challenges for many organisations. The right to erasure is a clarification of the much-talked-about 'right to be forgotten'.  Although the circumstances in which the right can be exercised have been made clearer, it still has to be balanced against other rights and obligations.

The right to have a copy of your data in a machine-readable form to transfer to another provider may be difficult to deliver at first, but it could also lead to better systems interoperability in the longer term – which is already a growing technology trend.  In particular this provision may facilitate the development of the market for 'personal data stores', an idea that has long been talked about but not yet fully realised, as providers have struggled with sustainable and scalable business models.
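The Regulation does not name a specific format, so as a sketch only, assuming JSON as the machine-readable format, a portability export might look something like this (the structure and function name are our own illustration):

```python
import json

# Hypothetical export of a data subject's records in a machine-readable format.
def export_subject_data(subject_id, profile, orders):
    portable = {
        "subject_id": subject_id,
        "profile": profile,
        "orders": orders,
        "format_version": "1.0",
    }
    return json.dumps(portable, indent=2)

print(export_subject_data(
    "customer-0042",
    {"name": "A. Example", "email": "a.example@example.com"},
    [{"order_id": "1001", "total": "29.99", "currency": "GBP"}],
))
```

The real interoperability gain would come if providers in a sector converged on a shared schema, so an export from one service could be imported by another without manual rework.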

7.  Removal of Subject Access Request Fees

Data subjects have a right to know whether or not an organisation is processing their personal data, what that data is, and the purposes of the processing.  The GDPR removes the ability to charge an upfront fee for providing such information, and there is a risk that requests will increase as a result, pushing up costs.  Current allowable fees don't fully cover the cost of a Subject Access Request (SAR), but they are seen as a deterrent to time-wasters.  If companies are no longer able to charge fees, it is feared this could open the floodgates to many more SARs.

Companies will be allowed to charge for subsequent copies of the same data, which may reduce the risk of this to some extent. However, it may be worth investing in making sure you can respond to such requests as efficiently as possible, which will not be easy in many cases.

8.  Reporting Data Breaches

Data controllers will be required to report data breaches to their DPA, unless the breach is unlikely to present a risk to the rights and freedoms of the individuals concerned. However, this qualification may be difficult to judge, so in many cases it will be safer to notify. The notification must be made within 72 hours of becoming aware of the breach, unless there are exceptional circumstances, which will have to be justified.
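The arithmetic is simple but worth spelling out: the clock starts when the controller becomes aware of the breach, not when the breach occurred. A trivial sketch:

```python
from datetime import datetime, timedelta, timezone

# The 72-hour window runs from the moment of *awareness* of the breach.
NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(became_aware_at: datetime) -> datetime:
    return became_aware_at + NOTIFICATION_WINDOW

became_aware = datetime(2016, 5, 2, 9, 30, tzinfo=timezone.utc)
print(notification_deadline(became_aware))  # 2016-05-05 09:30:00+00:00
```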

Where the risk to individuals is high, the data subjects themselves will also need to be notified, although no specific timescale is set for this.  It is also worth noting that the DPA can instruct an organisation to inform data subjects if it has not already done so, so we can expect further guidance on the circumstances in which that would be the right course.

9.  Fines

The GDPR very deliberately raises the bar in terms of the ability of DPAs to issue fines for breaches of the rules: they can go as high as 4% of global turnover.  By taking worldwide revenues into account, these fines are designed not only to make data protection a board-level issue, but also to sidestep attempts by multinationals to structure their businesses to avoid fines.
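For a sense of scale, here is the arithmetic with entirely made-up figures; the point is that the cap is calculated against worldwide turnover rather than EU revenues:

```python
# Purely illustrative figures for a hypothetical multinational.
worldwide_turnover_eur = 2_500_000_000   # global annual turnover
eu_turnover_eur = 400_000_000            # EU-only turnover

max_fine = 0.04 * worldwide_turnover_eur
print(f"Maximum fine: EUR {max_fine:,.0f}")   # EUR 100,000,000
# Compare: 4% of EU turnover alone would be only EUR 16,000,000,
# which is why the worldwide basis matters to multinationals.
```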

It is also worth noting that fines can be levied without the necessity to demonstrate harm – although the largest ones will likely be reserved for cases where data subjects have directly suffered damages.

10.  Data Processor Responsibilities

Organisations that only process data on the instructions of a client are not directly covered by the current data protection regime.  Their actions are assumed to be governed by agreement with the customer, who, as the data controller, is directly responsible for the actions of the processor. This all changes under the GDPR: processors will have direct legal obligations and responsibilities.

In particular this means that processors can in certain circumstances be held directly liable and be required to pay compensation to a data subject. It will therefore become very important to establish the contractual relationships and liabilities of the different parties in a controller/processor relationship, and the costs of some services by processors may rise to offset additional risks and insurance costs.

 

We hope you find this useful.  In future posts we will look at what you can do to prepare, as well as examining each of these areas in more detail.

In the meantime, if you have any questions and would like to know more about how the GDPR might affect your business, do get in touch and we will be happy to help.

Right to be Forgotten Guidelines Published

The much anticipated guidance on the interpretation of the CJEU right to be forgotten ruling was published (PDF) by the Article 29 Working Party late last week.  This is an important document: it not only clarifies the impact and application of the ruling, it also sets out the common set of rules that EU regulators will use when a citizen appeals against a refusal to remove links from a search engine index.  As such, the document represents the de facto standard by which search engines' decisions to remove links will be measured.

Global Scope

As was widely expected, the guidance confirms the view that any removal of links must be done across all domains owned by the search engine, not just those in the EU.

In its first set of actions, Google retained links to content on the google.com search domain, and publicised this fact, which had the effect of weakening the data protection rights of the individual, especially as the .com domain can be used by people within the EU.

The new guidance also supports the recent decision of a French court to fine Google France €1,000 per day for limiting the effect of a French lawyer's takedown request to the google.fr domain.

Streisand Effect

The guidance also contains measures designed to limit the potential for de-linking decisions to have the effect of promoting the original content that the data subject is attempting to make obscure, the so called ‘Streisand Effect’. To this end, search engines have been told not to use any targeted messaging in search results that would lead a user to conclude that results related to a particular individual have been removed.

Similarly, search engines are told that they 'should not as a general practice' tell webmasters when pages have been de-listed, as has been Google's practice until now. That practice has in turn led publishers to re-promote the stories concerned, often by reference to the request, even when they did not know which individual made it.

However, the search engine may contact the webmaster before making a decision, in order to gain more information to make its determination.  If it does so, it will still have an obligation to safeguard the rights of the data subject.

13 Criteria for Complaints

If a search engine refuses a de-listing request from a data subject, then it must explain its reasons, and the subject then has a right to ask their local DPA to make a judgement on whether that decision was in line with data protection law.

In order to make this process as transparent as possible, a set of 13 common criteria have been drawn up, which will also help the DPAs from different countries make more consistent decisions.

Although these are not directly addressed to search engines, it is clear that if they want to minimise successful appeals by subjects to reverse a de-listing refusal, they will need to use the same criteria in their own decision processes.

Bing Creates Right to be Forgotten Form


Microsoft’s Bing has become the second search engine to put up a form to enable web users to ask for content about them to be removed.

Perhaps Microsoft has learned a little from Google – as they are asking for much more detail in order to make a decision.

In particular they ask requesters about their role in public life, and require them to justify why they think their privacy rights should outweigh the public's right to know.

This will clearly help them better judge whether or not to accept the request – and puts more work in the hands of the requester.  It will almost certainly lead to fewer unjustifiable requests of the sort that Google were publicising when they first launched their solution.

Of course, Bing has a much smaller market share in the EU compared to Google, so fewer takedown requests would be expected anyway.

It will be very interesting to see if the companies start publishing any statistics – which would be very useful information from a transparency point of view.

Google Starts Forgetting

Google has started removing links in search results as part of its compliance with the recent court decision that search engines must respect the data protection rights of EU citizens and residents.

As part of its process it has been contacting publishers to tell them when pages have had links to them removed, but without revealing which search terms are affected, as that would reveal the identity of the requester – itself a possible breach of privacy.

Almost inevitably the removals have included links to prominent media outlets, which has sparked further debate about privacy versus freedom of expression.

An article by BBC business writer Robert Peston was one early subject of a takedown; Peston followed up by asking whether he had any opportunity to appeal.

The Guardian newspaper also found that some links to its articles had been removed, but these were later reported to have been reinstated.

It seems likely that this will go on for some time, as search engines try to figure out how to implement the court ruling in a proportionate way.  However, the issue does raise the question of a right to appeal by publishers.

Meanwhile, the Data Protection Authorities across the EU, including the UK ICO, are also trying to decide how to handle the appeals likely to come from individuals who have had their removal requests rejected.

We expect this issue to run for some time to come – and it will almost certainly influence the ongoing negotiations around the General Data Protection Regulation, where the 'Right to Erasure', as it is currently phrased, is a key element of the package.

No-one ever said data protection was easy.

Extraterritoriality and the Data Protection Regulation

Following the latest negotiations by the Council of Ministers last week, DPR champion Viviane Reding hailed progress towards agreement on several fronts, and the extraterritorial provisions were given special attention.

Given the recent decision by the ECJ that search engines run by US companies must already comply with EU Data Protection law, it is worth looking at what impact the DPR might have had on this decision, had it been in place in its current form.  This will give us some insight into what we might expect from the revision of the rules – or whether anything will change at all, given some commentary suggesting regulatory bodies are behaving as if it were in place already.

The principle behind the Data Protection Regulation (DPR) is that the law should protect people within the EU, regardless of the location of service providers.  So non-European companies, when processing data of EU residents, will be subject to EU Data Protection Law.

Looking at the Google case, the argument was that Google was established in the EU, even if the processing took place in the USA, and therefore EU rules should apply. Under the DPR, it would seem that a company would not even have to have an EU-based business for the rules to apply.  Setting aside the question of how it will be possible to enforce a ruling on a business with no legal presence inside the EU, this is still quite a big shift.

In addition it would seem to rule out the position Google is currently taking, namely that search result changes arising from this ruling will only affect searches on EU domains, while results on the global google.com domain will not be changed.  This, of course, along with the assertion that Google will indicate when content has been removed from a search result, weakens the protection of the individual: if you know that results have been removed from google.co.uk, you can go to google.com to find the missing content.

There has been no commentary so far from Europe about whether this is acceptable under current rules, but with the DPR in place it would seem that search results will have to be taken down, regardless of what domain the search is taking place on.  After all, wherever it is taking place, there is data being processed about an EU resident against their wishes.

Of course, we can expect the US to cry foul against such a position – except for the fact that it takes a very similar view of extraterritoriality when it suits its own interests.

One current example is the huge fines being proposed for the French bank BNP Paribas, which is being sanctioned by the US for a series of financial transactions with Iran and Sudan.  The trades in question are claimed to have been carried out by BNP Paribas' Swiss operations.  They are not illegal under Swiss law, although it is common for Switzerland to abide by such sanctions.

The reason the US feels justified appears to be that several of the trades were conducted in US dollars.  Because of this, BNP appears to be facing US fines running into the tens of billions of dollars.

Although the two issues are different in many ways – they both revolve around the extraterritoriality of laws. Given the increasingly interconnected nature of our global economy, my guess is we will see a lot more of this in the future in all sorts of fields.

Google Forgets and Search Protection Optimisation

In response to the recent decision by the European Court of Justice, Google has now released a ‘Search removal request’ form, available here: https://support.google.com/legal/contact/lr_eudpa?product=websearch

As anticipated, it is focussing on the removal of results associated with a name, rather than removal of content entirely from the index.

Key features include a requirement to upload some form of photo ID as proof of identity of the requester – which Google assures will only be used for prevention of fraud.

In making a request, you are also required to justify it, with specific reference to why the search result is ‘irrelevant, outdated, or otherwise inappropriate’ – the terms set out in the court ruling.

The company points out that it is still working on how to actually implement the take down process – so don’t expect any content to be removed immediately.

Google also indicates that it will be seeking advice on such requests from Data Protection Authorities – presumably because it wants to test its own decision making and the limits of what the regulators will allow it to refuse.

I would suggest that what is likely to happen is a flurry of exchanges between DPAs and Google's lawyers in the initial weeks, perhaps months. Then I would expect Google to build an algorithm that largely automates the decision-making process.

So, rather like now, where we have a Search Engine Optimisation industry that tries to figure out how the search algorithm works in order to promote links to the top of search results, we may see the emergence of the opposite – perhaps a Search Protection Optimisation (SPO) business that works out the best methods of getting the new algorithm to remove links.

In fact, let it be known that I officially assert copyright over that phrase – Search Protection Optimisation.

 


Why the ECJ Google Decision is Smarter Than You Might Think

One of the central issues in the ECJ Google case was whether or not the search engine was acting as a data controller or data processor.  Google’s argument was comprehensively shot down by the court.  Google determines the nature and purpose of its indexing of websites, and therefore it acts as a data controller, subject to data protection law.

Google’s business is built on its well-publicised mission to organise the world’s information.  It revolutionised search through its ever evolving algorithm and its active crawling of the web to discover new links.

As a result there are a number of things that all search engines now do (a rough sketch of these steps follows the list):

  • Crawl the web, looking for content to index.
  • Analyse it to try to understand its meaning (e.g. through keywords) and its value (e.g. by number of links to it).
  • Analyse user queries, to figure out what people want to discover.
  • Match the query to the available content, then organise the results according to value.
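Here is a toy version of the index-and-match steps, stripped of everything that makes real engines valuable (crawling mechanics, ranking signals and the 'intelligence' discussed below); it is only meant to show where the controller-like decisions sit:

```python
from collections import defaultdict

# Pretend these pages were fetched by a crawler.
documents = {
    "page1": "example person name property auction notice",
    "page2": "data protection regulation consent rules",
}

# Build an inverted index: word -> set of pages containing it.
index = defaultdict(set)
for page, text in documents.items():
    for word in text.split():
        index[word].add(page)

def search(query: str) -> set:
    """Return pages matching every word in the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = index[words[0]].copy()
    for word in words[1:]:
        results &= index[word]
    return results

print(search("data protection"))  # {'page2'}
```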

The concept of value is crucial to this activity.  It has resulted in the arms race that is Search Engine Optimisation (SEO).  ‘Getting to the top of Google’ is such a valuable thing for publishers that it has a whole industry behind it.

Search engines have become more 'intelligent' in order to combat attempts to game the system and inflate one publisher's measured value above another's.  This intelligence is at the core of Google's business, which is why it is very secretive about how its algorithm works.  However, it is this active intelligence in the system that results in its taking on the role of a data controller.

It didn’t use to be like that.  Back in the early days of the web, search engines were more passive things.  Publishers had to submit content to them, and more actively explain what it was about.  It would then more or less be added to their index automatically.  Now, although you can tell Google your content exists (such as through a site map), their algorithm decides whether or not to index it, how it will index it, and what value will be assigned to it.  This decision is also regularly re-evaluated.  Like a referee at a football match, although there are ways to influence it, Google’s decision is final.

So one solution to Google’s problem with this ECJ ruling is for the company to remove itself from the role of data controller.  This is not going to happen entirely of course, but it could potentially do so in respect of personal data.

Given its position in the EU market, it could actually do this relatively easily.  It could instruct publishers that it would not index content that it believed to contain personal data, unless specific conditions were met.

One of those conditions might be to tell Google when content contains personal data, and what the retention period of that data would be.  Given this information – which would take the form of tags attached to the content – it could rewrite the algorithm to take account of the publisher's instructions.  In doing so, it becomes a data processor in respect of that data, and frees itself from responsibility.
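To be clear, no such tagging scheme exists; purely as a sketch of the idea, the tag names and rule below are invented for illustration:

```python
from datetime import date

# Hypothetical publisher-supplied tags attached to a piece of content.
content_tags = {
    "contains-personal-data": True,
    "retain-until": date(2017, 6, 30),
    "subject-consent-reference": "pub-consent-123",
}

def may_index(tags: dict, today: date) -> bool:
    """Processor-style rule: only index if the publisher's instructions allow it."""
    if not tags.get("contains-personal-data"):
        return True                      # no personal data declared
    retain_until = tags.get("retain-until")
    if retain_until is None or today > retain_until:
        return False                     # no retention instruction, or it has expired
    return tags.get("subject-consent-reference") is not None

print(may_index(content_tags, date(2016, 1, 15)))  # True
```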

It would of course be a massive change in its role, and a reduction in its market power.  And it would require publishers to change too.  However, it would resolve the problem.

Many commentators have said the ECJ decision is wrong because it puts the responsibility on the search engine, not the publisher.  I think it was absolutely right because search engines are relatively few in number, and they have the power to influence publishers on a large scale that regulators cannot.

In effect the ECJ has said there is a market problem with the publication of personal data. By making it the responsibility of the market's biggest controllers, it can now sit back and watch the market take care of itself.

From the point of view of trying to tip the balance towards greater protection of personal data, it's actually a pretty smart move.

What does the ECJ decision on Google mean for your business?

The decision by Europe’s highest court to require Google to remove links to out of date personal information from its search results has brought privacy into the headlines in a way that even the Snowden revelations fell short of at their height.  This is not really surprising because it is an issue that touches everyone in a much more obvious way.

Depending on who you listen to, the decision itself is a blow to free speech, a slippery slope to mass censorship, or a triumph for consumer privacy controls. It is difficult to say yet which of these will prove correct, if any, but one thing almost everyone agrees on is that this could have far-reaching consequences, and not just for Google.

So are there any broader implications for businesses other than Google and search engines?  I think the answer is yes.

First, it establishes that activity outside the EU can be subject to EU law if there is a local subsidiary that benefits from that activity.  This is big news for any data controller that sets up an EU sales office, even if the product/services being sold are themselves outside the EU.

Secondly it establishes that data aggregation or re-publishing can be seen as a separate processing activity that needs its own legal justification.  This could cause problems for social media services in particular – many of which rely on such activities.

Thirdly, it establishes that an individual has a right to prevent further use of information about them, even if it is already in the public domain.  This right is balanced with other interests, but it is still there.

It also seems that once an individual has made a request for a take down, it is up to the data controller to justify refusal.  The individual does not have to give their own reason.  So unless a company is prepared to spend time and money in making its case, the easiest solution will be to comply with the request.

It is perhaps telling on the last point that it appears from news stories that Google will soon be ready to unveil a tool to enable people to make requests.  Other companies will probably need to review if they need similar mechanisms.

These are some of the more obvious and immediate impacts; I am sure there will be many more.

ECJ Rules that Search Engines are Data Controllers

The European Court of Justice, the highest court in the EU, has made a decision against Google this week that may well prove to be a turning point for data protection rights in Europe, and provide a mechanism for individuals to exercise the Right to be Forgotten which is provided for in the draft Data Protection Regulation.

It has caused quite a stir, with many arguing that it is a blow to freedom of expression.  However, as much as anything, it has also highlighted the cultural differences between the USA and Europe: in Europe the right of free expression is more evenly balanced against the right to privacy.

However, the factor overlooked in most of the stories on this issue is that the ruling presents a fundamentally different view of the role of search engines as cataloguers of the web than the one most people hold – and than the one search engines themselves would like to project.

Google argued that it is not in control of the content of the pages it indexes: as a Data Processor it could not be held responsible for the personal data on those pages, and would therefore have no liability under EU data protection law.

The court, by contrast, ruled that in creating its index and generating a link as a result of a search, Google is re-using the data for a different purpose. It also spelled out that the purpose was in no small part to create a market for the advertising which also appears in the search results.

The change in purpose, together with the fact that Google is in control of how the index is formed, means that it has to be seen as a Data Controller when it displays search results – which in turn means that it is responsible for the protection of the personal data and for upholding the rights of the individual.

This is really the game changer here, and what may change the very nature of search in the future.  Or will it?

There has been an assumption that the court ruling means that Google must remove the page in question from its index. This is what has got people agitated and talking about censorship.  However, I don’t think the ruling suggests this.

Another point that is missed in a lot of commentary is that this all stems from a search based on the person's name.  It is the appearance of the page in the search results against the name that is problematic according to the court.

Google therefore may not need to remove the page itself from its index, only the link between the name and the page.  This would allow the page to continue to appear in other search results that do not make use of the person's name.

It would limit the ability to search for information about people directly, but it wouldn’t restrict the ability to find the same content on a different basis.
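One way to picture that interpretation is a toy sketch like the one below (this is emphatically not how any search engine actually implements it, and the name and pages are invented): the page stays in the index, but results are suppressed only when the query is the de-listed person's name.

```python
# Toy index: query string -> pages returned.
index = {
    "property auction notice 1998": {"news-archive/page-123"},
    "jane doe": {"news-archive/page-123", "jane-doe-official-site"},
}

# De-listing requests map a name query to the specific pages to suppress.
delisted = {
    "jane doe": {"news-archive/page-123"},
}

def search(query: str) -> set:
    results = set(index.get(query.lower(), set()))
    return results - delisted.get(query.lower(), set())

print(search("jane doe"))                      # page suppressed for the name query
print(search("property auction notice 1998"))  # same page still findable otherwise
```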

Of course, we are yet to see if such an interpretation is acceptable, but it would be a lot less radical than a requirement to remove the link to the content entirely.


The Right to be Forgotten

The right to be forgotten is part of the European Union's January 2012 proposal to revise the Data Protection Directive. The proposed Data Protection Regulation would establish that personal data belongs to the data subject (the person whose data is being collected) and not to the data controller or the data processor. A right to be forgotten, or to have the collected data erased, follows naturally from this protection: after all, if one cannot delete the data, in what sense is it really one's own?

The right to be forgotten is discussed in Article 17 of the new EU Data Protection Regulation, “Right to be forgotten and to erasure”. This article states that “The data subject shall have the right to obtain from the controller the erasure of personal data relating to them and the abstention from further dissemination of such data, especially in relation to personal data which are made available by the data subject while he or she was a child.” In other words, at any time, the data subject must be able to contact the data controller or data processor and request that their data be forgotten and erased.

The right applies in specific circumstances: one of the following four criteria must be met. First, the data must be deleted if it is no longer needed. Second, the data must be deleted if "the data subject withdraws consent on which the processing is based…" Third, the data controller must erase the data if the individual objects to the processing of their data for a specific reason, such as direct marketing. Fourth, if the data is not being processed in accordance with the data protection regulation, it must be forgotten.

However, there are some limited exceptions that trump this right to be forgotten: freedom of expression; public interest and public health; "historical, statistical and scientific research purposes"; and compliance with Union or Member State law intended to "meet an objective of public interest", provided these latter provisions respect the essence of the right to be forgotten.

Though it has remained within the proposal through several rounds of MEPs' revisions, the right to be forgotten is a contentious subject still under debate. Some MEPs think it is impossible to enforce and want to get rid of it entirely, while Albrecht has worked to clarify the right and explain more clearly how it would function alongside other rights. Only time will tell what the EU Data Protection Regulation will look like and whether or not the right to be forgotten will be included in it.