General Data Protection Regulation Top Ten Issues

The ink is barely dry on the draft, but the EU General Data Protection Regulation (GDPR) looks set to change the regulatory environment for personal information not just in the EU, but around the world. Its aim is to create a legal infrastructure for the use of personal data that is fit for purpose, both today and in the future.

The GDPR was designed to increase legal certainty with regards to information flows both within the EU’s borders and beyond. It also introduces stronger consumer protections, with requirements for greater transparency and accountability about how data is used by businesses, not-for-profits and governments alike.

This is intended to give individuals increased trust in data practices.  Consumer research in the last few years has shown consistently high levels of concern and lack of trust in this area, and this is believed to be a potential brake on the future growth of digital technologies.

However, in order to achieve these goals the GDPR does come with some stings in its tail. It places much greater requirements on businesses to communicate effectively with customers, and obtain much clearer consent for the use of their data.  Organisations also have to provide customer choice mechanisms, and there is a greater emphasis on documenting data processing activity. And then of course there are the fines.

At over 200 pages it is a very wide-ranging instrument. However, for those who haven't had time to read it yet, here is what we think will be the top ten issues for most organisations.

1.  A broader definition of Personal Data

As we predicted earlier, the scope of what constitutes ‘personal data’ has explicitly been broadened to include any information ‘relating to’ an individual. This specifically includes ‘online identifiers’, so cookies and the advertising IDs seen in the mobile ecosystem will be caught up, along with anything that contributes to identifying an individual, or links to such identifying information. This has some widespread implications for online tracking in particular.

2.  A higher bar for consent

Whilst the final text shied away from explicit consent as a requirement, except when special categories of (sensitive) data are concerned, there is still much emphasis on gaining consent through active user mechanisms like tick boxes.

A key part of the test of the validity of consent is whether consumers understand what they are agreeing to, and are given a meaningful choice. There is also a significant shift in the burden of proof.  You will need to be able to provide evidence that you obtained consent from specific data subjects, which is going to require much better record keeping for many organisations.
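To make that record-keeping burden concrete, here is a minimal sketch (in Python, with invented field names) of the kind of append-only consent log that could evidence who agreed to what, when, via which mechanism, and against which version of the policy wording. It is an illustration of the idea only, not a prescribed implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Evidence that a specific data subject agreed to a specific use."""
    subject_id: str
    purpose: str          # e.g. "email marketing"
    mechanism: str        # e.g. "tick box on signup form"
    policy_version: str   # the wording the subject actually saw
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    withdrawn: bool = False

class ConsentLog:
    """Append-only store of consent events, queryable per subject and purpose."""
    def __init__(self):
        self._records = []

    def record(self, rec: ConsentRecord):
        self._records.append(rec)

    def has_valid_consent(self, subject_id: str, purpose: str) -> bool:
        # The most recent event for this subject/purpose wins.
        matches = [r for r in self._records
                   if r.subject_id == subject_id and r.purpose == purpose]
        return bool(matches) and not matches[-1].withdrawn
```

The key design point is that the log is never edited in place: withdrawal is a new event, so the full history survives as evidence.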

3.  Data Protection Officers

Although not a universal requirement, many organisations will be required to appoint a Data Protection Officer (DPO) to oversee data uses and ensure compliance with the law. They will be mandatory in the public sector, but for private sector organisations the key test will be whether the organisation is involved in “systematic monitoring of data subjects on a large scale”; however, it is not clear at this time how ‘large scale’ will be interpreted.

Earlier, more detailed requirements for the skills and experience of the DPO, and guarantees over their employment, have been dropped, but a key issue in the short to medium term will be a lack of the right people to fill such roles.

DPOs can, however, be outsourced, which may create a market for new services, especially to cater for the needs of smaller businesses. The DPO responsibilities can also be given to someone alongside other work within the organisation, as long as this does not create a conflict of interest, so training existing staff for the role could be a viable option for many.

4.  Transparency and Accountability

The GDPR scraps the need for controllers to register with their Data Protection Authority (DPA), but replaces this with a requirement to both better inform data subjects about practices and rights, and to keep records that can be made available on request – such as in the event of a data breach or a compliance complaint.  Such records are about demonstrating that the organisation has thought through the impact of its systems and processes, and made informed choices about how to comply with the GDPR.  The Data Protection or Privacy Impact Assessment (PIA) is one example of such documentation.  It is intended that a PIA will show that an organisation has considered the risks associated with its particular personal data practices, and taken reasonable steps to control or mitigate them.

There are also new requirements on the level of detail that organisations must provide to data subjects about their practices, as well as a need to make sure that this information is both accessible and easy to understand. In particular there is a need to explain the logic behind decisions made on the basis of analysing personal data – which may have particular significance in some sectors that have relied on such processes being largely secret. Organisations are also expected to inform subjects about their rights and how to exercise them.

5.  Data Protection by Design and Default

Although references to this have been cut back in comparison with earlier versions of the text, the GDPR requires that systems and processes are designed with the principles of data protection in mind. Particular emphasis is placed on collecting only the data necessary to fulfil specific purposes, discarding it when it is no longer required, and protecting data subject rights.
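By way of illustration only – the Regulation does not prescribe any implementation – minimisation and retention rules of this kind can be expressed directly in code. The purposes, field names and retention periods below are entirely hypothetical:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical mapping of processing purposes to the minimum fields needed.
PURPOSE_FIELDS = {
    "order_fulfilment": {"name", "address", "order_id"},
    "newsletter": {"email"},
}

# Hypothetical retention periods per purpose.
RETENTION = {
    "order_fulfilment": timedelta(days=365 * 6),
    "newsletter": timedelta(days=365 * 2),
}

def minimise(record, purpose):
    """Keep only the fields necessary for the stated purpose."""
    allowed = PURPOSE_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

def expired(collected_at, purpose, now=None):
    """True if data collected at `collected_at` should have been discarded."""
    now = now or datetime.now(timezone.utc)
    return now - collected_at > RETENTION[purpose]
```

The point of the sketch is that ‘by design and default’ means these decisions live in the system itself, rather than in a policy document nobody consults.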

It also sets up the possibility for the development of certifications and codes of practice that organisations can follow to help meet these requirements. Keep an eye out for these as they develop. In particular we expect DPAs to get involved in this area, as they will be losing their registration fees and therefore needing new sources of income. In the UK the Information Commissioner's Office (ICO) has already been developing this idea, so expect it to continue. Trade bodies are also likely to have a role to play here.

6.  The Right to Erasure and Data Portability

These new data subject rights are likely to pose challenges for many organisations. The right to erasure is a clarification of the much-talked-about ‘right to be forgotten’. Although the circumstances when the right can be exercised have been made clearer, the balancing against other rights and obligations is still needed.

The right to have a copy of your data in a machine-readable form to transfer to another provider may be difficult at first, but it could also lead to better systems interoperability in the longer term – which is already a growing technology trend. In particular this provision may facilitate the development of the market for ‘personal data stores’, an idea that has long been talked about, but not yet fully realised as providers have struggled with sustainable and scalable business models.
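The Regulation does not specify what a portable copy should look like, but JSON is an obvious candidate for a machine-readable format. A deliberately minimal sketch, with an invented data shape:

```python
import json

def export_subject_data(profile, activity):
    """Serialise everything held on one data subject as a single
    machine-readable JSON document that another provider could import."""
    return json.dumps({"profile": profile, "activity": activity},
                      indent=2, sort_keys=True, default=str)
```

The hard part in practice is not the serialisation but gathering a complete picture of one subject's data from every internal system, which is exactly where the interoperability pressure comes from.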

7.  Removal of Subject Access Request Fees

Data subjects have a right to know whether or not an organisation is processing their personal data, what that data is and the purposes of the processing. The GDPR removes the ability to charge an upfront fee for providing such information, and there is a risk requests will increase as a result, pushing up costs. Current allowable fees don't fully cover the cost of a Subject Access Request (SAR), but are seen as a deterrent to time wasters. If companies are no longer able to charge fees, it is feared this could open the floodgates to many more SARs.

Companies will be allowed to charge for subsequent copies of the same data, which may reduce the risk of this to some extent. However, it may be worth investing in making sure you can respond to such requests as efficiently as possible, which will not be easy in many cases.

8.  Reporting Data Breaches

Data controllers will be required to report data breaches to their DPA, unless the breach is unlikely to represent a risk to the rights and freedoms of the individuals concerned. However, this qualification may be difficult to judge, so in many cases it will be safer to notify. The notification must be made within 72 hours of becoming aware of the breach, unless there are exceptional circumstances, which will have to be justified.
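The 72-hour clock is simple, but easy to lose track of under the pressure of an incident. Even a trivial helper like this sketch makes the deadline explicit in incident-response tooling:

```python
from datetime import datetime, timedelta, timezone

NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(became_aware):
    """Latest moment the DPA notification can be made."""
    return became_aware + NOTIFICATION_WINDOW

def hours_remaining(became_aware, now):
    """Hours left on the clock (negative once the deadline has passed)."""
    return (notification_deadline(became_aware) - now).total_seconds() / 3600
```

Note the clock starts at awareness of the breach, not at the breach itself, so logging the moment of discovery becomes part of the compliance record.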

Where the risks to individuals are high, the data subjects themselves will also need to be notified, although a specific timescale is not specified for this. It is also worth noting that the DPA can instruct an organisation to inform data subjects if it hasn't already, so we can expect to see further guidance on the circumstances when it would be correct to do so.

9.  Fines

The GDPR very deliberately raises the bar in terms of the ability for DPAs to issue fines for breaches of the rules. They can go as high as 4% of global turnover. Not only are these designed to ensure data protection becomes a board-level issue; by taking into account worldwide revenues, they also seek to sidestep attempts by multinationals to engage in fine-avoidance business structures.
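For a sense of scale, the headline cap is straightforward arithmetic. The agreed text pairs the 4% figure with a fixed sum of €20 million, whichever is the higher, which is what makes it bite regardless of company size:

```python
def max_fine(global_turnover_eur):
    """Upper tier of GDPR fines: the greater of EUR 20 million
    and 4% of worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * global_turnover_eur)
```

So a business with €1bn of worldwide turnover faces a ceiling of €40m, while a much smaller firm still faces the full €20m.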

It is also worth noting that fines can be levied without the necessity to demonstrate harm – although the largest ones will likely be reserved for cases where data subjects have directly suffered damages.

10.  Data Processor Responsibilities

Organisations that only process data on instructions from their client are not directly covered by the current data protection regime. Their actions are assumed to be governed by agreement with the customer, who is the data controller and therefore directly responsible for the actions of the processor. However, this all changes under the GDPR: processors now have direct legal obligations and responsibilities.

In particular this means that processors can in certain circumstances be held directly liable and be required to pay compensation to a data subject. It will therefore become very important to establish the contractual relationships and liabilities of the different parties in a controller/processor relationship, and the costs of some services by processors may rise to offset additional risks and insurance costs.


We hope you find this useful. In future posts we will look at each of these areas in more detail, as well as at what you can do to prepare.

In the meantime, if you have any questions and would like to know more about how the GDPR might affect your business, do get in touch and we will be happy to help.

The GDPR Has Landed

After nearly four years, and some of the most intense negotiations and lobbying in EU history, agreement on the text of what some are calling the most significant piece of privacy legislation ever was reached on Tuesday night (Dec 15th 2015).  Tweets from various key players even equated the agreement with the election of a new Pope, with talk of white smoke going up.

This was followed up this morning (Thursday 17th Dec.) with a vote by the European Parliament’s Justice Committee to formally adopt the text – although the support was not unanimous.  There will be a few more hurdles to get over but these are largely expected to be rubber stamping exercises.

It is expected that the Regulation will become EU law in the early part of 2016, with a two-year lead-in period before it becomes enforceable. The final wording of the document has been released, although it will be subject to cleaning up in terms of clause numbering, and of course it will also have to be translated into every language of the EU.

Time to Get to Work

With the text in place, the real work for organisations of making sure they will be in compliance with the new rules can now begin. There will undoubtedly be a lot of analysis of the requirements in the months to come, to work out exactly what it all means, and we of course will bring you as much of this as we can.

One of the things that makes the legislation so significant is its global reach.  Any organisation, regardless of its location in the world, that touches the personal data of EU residents and citizens in any way, is going to be affected by this law.

Although there are parts of it for everyone from privacy fundamentalists to Silicon Valley libertarians to dislike, almost all agree that this represents a new era of strong data protection and privacy for the digital age.

I suspect many privacy people around the world are going to be spending a big chunk of the holiday season working out how to hit the ground running come the New Year. I know I will.

Happy Christmas!

Collateral Damage in the Cloud: The Jurisdictional War over Personal Data

It may already be a little clichéd to talk of data as the new oil, but personal data is undoubtedly a lubricant of frictionless digital economics. The wheels of many free services would stop turning if consumers didn’t keep filling the tank with their Likes, tweets and cat videos.

However, just as both consumers and businesses have got used to the idea of sending all this information into the cloud without concerning themselves about where it actually goes, the business model of global services powered by distributed data is coming under attack. New legal frameworks are threatening to create or strengthen digital borders, stemming the flow of personal data migration.

Though legal restrictions on the global movement of personal data are not entirely new, the effectiveness of existing frameworks has more recently been called into question. As this data has become more important, valuable – and of course voluminous – tensions between different interests and cultural attitudes have increased to the point where ‘balkanisation’ of web services and the underlying infrastructure of the web is a very real possibility.

The EU-U.S. Safe Harbour programme

One of the biggest data trade deals is the EU-U.S. Safe Harbour programme, the most relied-upon legal instrument facilitating the exit of personal data from the EU to U.S. companies. That deal has been under threat ever since Edward Snowden went public over the collection and use of personal information by the U.S. and other allied intelligence agencies.

Alongside stories about lapses in regulation of U.S. companies signed up to Safe Harbour, existing arrangements have been the subject of intense negotiation over the last 2 years. As things currently stand, although no-one really wants it, the EU could pull the plug on Safe Harbour if its demands for change are not met by U.S. authorities.

If the data taps are forced off, what then? Much of the transatlantic movement of data would have to be brought to a halt. Even setting aside the economic consequences, the modifications to services required to ensure that data was prevented from flowing illegally would be significant.

Update: recent reports suggest agreement on a revised Safe Harbour deal is close.

One element of web balkanisation is the idea that companies may be forced to keep personal data within the jurisdictional boundaries of its original point of collection, or risk not being allowed to trade within that country. In Russia, this is already a reality. From 1 September this year, all personal data on Russian citizens must be located in Russian data centres. Although the publicly-stated reason for this is to protect the privacy rights of Russians, there remain suspicions that the primary purpose is to ensure that the State can better monitor its own citizens. Whatever the reason, international companies wanting to do business in Russia are expected to comply.

In the wake of Snowden, similar requirements were proposed in Brazil – although these were dropped at the last minute from an Internet Civil Rights Bill enacted in 2014. However data localisation laws do exist in parts of South East Asia, and India too is reportedly considering the idea.

Back in Europe there are the continuing negotiations over the draft General Data Protection Regulation (GDPR) to consider. This legislative juggernaut is also seeking to extend the jurisdictional boundaries of protection of personal data. Rather than go for a strict localisation approach, the GDPR is about attaching specific rights to the data regardless of where it ends up. Will companies need to develop solutions that tag location origination to personal data to then identify what rights apply?
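To make that question concrete, such ‘tagging’ might look something like the sketch below. The rule table and regime names are entirely hypothetical, and real applicability analysis is a legal exercise rather than a lookup, but it shows the shape of the engineering problem:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaggedDatum:
    """A piece of personal data carrying its collection context with it."""
    value: str
    collected_in: str        # country code of the point of collection
    subject_residence: str   # claimed residence of the data subject

# Hypothetical rule table mapping countries to data protection regimes.
REGIMES = {"BE": ["GDPR"], "DE": ["GDPR"], "RU": ["RU-242-FZ"], "US": []}

def applicable_regimes(datum):
    """Union of regimes triggered by collection point and residence."""
    rules = set(REGIMES.get(datum.collected_in, []))
    rules |= set(REGIMES.get(datum.subject_residence, []))
    return sorted(rules)
```

The interesting consequence is that the metadata has to travel with the data: once a record has crossed a few systems without its tags, there is no way to reconstruct which rights attach to it.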

Ireland at the centre of data sovereignty storms

There is also much talk about the ‘European Cloud’, but it is not really clear what this means. Twitter has recently announced that data of account holders outside of North America will be controlled from Dublin, and therefore subject to Irish data protection laws. Dropbox has also followed suit. However, what this may mean in the future in terms of where the data may need to be physically located, and the legal obligations of the data centres involved, is very much unresolved.

Facebook has claimed Dublin as its EU regulatory home for years, but Belgian authorities have recently been challenging this assertion, demanding that Belgian law should apply to the personal data of Belgian residents.

Set against a general tide of increasing consumer privacy protections, there are conflicting demands for greater access to personal data for law enforcement. The UK government has talked about there being no place criminals can hide online, an apparent call for back doors and keys to encrypted services, with little recognition of how this might also make systems more vulnerable to the increasing volume and sophistication of malicious attacks on digital systems. Many of those attackers are interested in getting their hands on the same data.

One case making its way through the U.S. courts at the moment involves Hotmail account data held in Dublin. U.S. law enforcement wants the information for an ongoing investigation and is arguing that Microsoft Corp., as the owner of the data centre, is obliged to hand it over under U.S. law. The company would also be obliged to keep that hand-over secret, which could be in breach of Irish law. Microsoft is fighting the case but so far it is losing the battle. If it does eventually have to hand the data over, there could be a devastating loss of trust amongst its global customer base, as well as consequences for how Microsoft may have to re-structure its services.

Frontline responsibility for compliance with such laws normally lies with the service providers that are collecting the data, but they will look to their infrastructure vendors for both appropriate solutions and assurance of compliance. But the Microsoft case demonstrates it's not always straightforward, and there are no indications that the stream of contentions and controversies is going to dry up anytime soon.


This article was originally published on The Stack.

DPR Progress: The Final Furlong?

Can it be true?  Three and a half years after first publication, it is starting to look and sound like we may get a finished piece of legislation by the end of the year.

We have this week seen the (almost) final text of the Council, with all changes consolidated into a coherent text.

Plus we have also seen a timetable for the trialogue between the Council, Commission and Parliament, which at least points to a conclusion in December.

We can't be sure that this timetable can be stuck to, as there are a lot of differences between the three texts that need to be negotiated away, but if you will excuse the mixed metaphors, the light at the end of the tunnel seems to indicate we are on the final stretch.

Of course, the hard work for organisations of actually making sure they can comply with the new rules will then begin.

Belgium, Facebook, the One Stop Shop and Forum Shopping

Belgium's data protection body, the CPP, is continuing its investigation of Facebook's privacy policy by arguing that it has the authority (“competence” in legal/Eurospeak) to demand changes to the social media giant's tracking practices in Belgium.

It is an approach that flies in the face of established practice, tests out the limits of the One Stop Shop principle, and also highlights the view from some quarters that tech companies are exploiting differences in regulatory practices in the EU by engaging in “Forum Shopping” to minimise the effective impact of EU laws.

The CPP is part of a grouping of European DPAs, including regulators in Spain, Germany, France and the Netherlands, that has been looking into Facebook’s tracking practices for some time, with a big focus on how its plug-ins installed on other sites allow it to gather data about user behaviour.

It seems very clear from the tone, as well as the amount of work that has been put in, that the CPP is not just unhappy with Facebook's data practices, but is looking to do something to change them. However, under long-established practice Facebook claims that, as all data on EU users is controlled by its EU HQ in Dublin, it is only obliged to follow the rulings of the Irish DPA, seen by many as having a more business-friendly approach than its continental counterparts.

Now the CPP is claiming that this is irrelevant. To make its case it has relied heavily on the (in)famous Google Spain ‘Right to be Forgotten’ decision made last year. This judgement ruled that the location of the processing of the data, or the designation of the data controller, was not relevant to establishing regulatory authority. Instead, as long as a company has a presence in a country, and the activities of the local business are ‘inextricably linked’ to the data in question, there is a requirement for the data processing to be compliant with local law.

Using this argument the CPP built a case for its competence by saying that the activity of the Facebook Belgium office, which largely focussed on lobbying and legal advice to Facebook Inc. about data protection issues, creates an obligation to comply with Belgian data protection law, and therefore gives the CPP enforcement authority with respect to the data of Belgian residents. Facebook has given its own response to this position.

One Stop Shop

All of which demonstrates how difficult the issues are for the idea of the One Stop Shop – which is a central pillar of the General Data Protection Regulation currently being negotiated.

The idea is that a business operating in multiple EU countries should only have to deal with one regulator in one country for all of its EU data protection issues. This only becomes possible with the Regulation because it creates one set of rules across all member states: in theory, complete harmonisation. Multinationals welcome this idea because it should make their lives much simpler.

However, the national regulators are currently run very differently, have widely varying budgets, and divergent views of their roles as enforcers. The fear amongst some is that this results in forum shopping, where companies seek to structure themselves so they deal with a regulator they believe will be least likely to adversely impact their data processing.

The other risk is that it leaves consumers in a weaker position to enforce their rights if they have to deal with foreign regulators in a foreign language and with a foreign legal system.

To counteract this problem, there are proposals for all sorts of rules around co-operation of national regulators, but of course the fear is that not only will this prove horrendously complex and expensive to administer, it could also turn cultural differences into divisions and threaten the harmonisation that the GDPR sought to deliver.


It is hard to argue right now that Facebook's EU HQ is in Ireland because of an easy-going data protection environment; it is much more likely to be there for the lower corporate tax rate on offer. However, with the big fines proposed under the GDPR, the risks associated with such costs might conceivably create a market for data protection forum shopping. Preventing the One Stop Shop from becoming an incentive for this, whilst still reducing administrative burdens to make the EU a better place for personal data oriented businesses, is going to be tricky.

And of course let's not forget that Twitter, in a move that might be interpreted as a show of solidarity, has also recently changed its privacy policy to establish its Dublin operation as its service provider outside of North America, stating explicitly that account information will be handled under Irish law. Let's see what the other EU regulators make of that.


Lessons from London’s Leading Privacy Conference

The annual Data Protection Intensive, organised by the International Association of Privacy Professionals (IAPP), is a two-day conference bringing together leading privacy experts from many different countries and industries. This year's conference, which took place in mid-April, was my first, and I found it very enjoyable and informative.

Privacy and Data Protection have been growing in importance to business in the last few years for a number of reasons. Consumer data is now a key asset for many types of organisations. Its increasing availability, volume and granularity, coupled with the low cost of storage and analysis, has made it a valuable commodity and increasingly a source of competitive differentiation. At the same time Edward Snowden, high-profile hacking and cyber security breaches, and debates about privacy vs. freedom of speech have all played their part in making data protection a mainstream media story, raising both awareness and fear over the potential dangers of its misuse.

The proposals for the new EU General Data Protection Regulation, and what it might mean for compliance programmes, were unsurprisingly very much top of the agenda at the conference this year, as we inch ever closer to an agreed text. Much time has been, and continues to be, given over to analysing the proposals in minute detail. However, to my mind the real take-aways from the conference were all about the big picture.

  • The hockey-stick curve of growth in IAPP membership is a testament to the fact that, contrary to what some would like us to believe, privacy is not dead but thriving, and some may even say it is on the edge of entering a golden era.
  • Even in the biggest and most privacy-mature businesses in attendance there was still a sense of plenty of room to improve and evolve.
  • The dominance of a legalistic approach to privacy management is on the wane, with a move towards more of a business-needs focus.
  • Privacy management activity is still quite low on the corporate agenda, which means budgets are very tight.
  • There is a need for tools and technologies to make privacy management more effective and efficient.
  • Many organisations think of their key privacy issues and solutions as being unique or special to them in some way.

This last point is critical in my view and represents a potential barrier to the one above it: the development of new tools and technologies. Whilst it is often true at the level of detail, it is unlikely to be true of the broader scope of an organisation's activity.

Take for example the use of customer data for marketing. A large proportion of organisations will essentially hold the same data about customers (contact details, purchase histories etc.), and use it in very similar ways (segmenting, targeting, upselling), even if the details of what they have and the way they use it are different.

A lot of companies will also have very similar processes for handling employee data, and a similar set of partner relationships for payroll, recruitment etc. When you look at specific verticals, and certain types of data use within them – like health providers and financial services – it is likely that the similarities become even more pronounced.

I believe that one of the biggest challenges for the privacy profession may be to get past the ‘Not Invented Here’ syndrome when it comes to privacy management.  This means learning to focus on those similarities rather than differences, which is key to opening up new opportunities for shared learning, better benchmarking, and a greater understanding of difficult issues like consumer privacy risks.

It is also when you have recognised similarities that you can start to leverage technology more to handle the standard, routine aspects of any task – which of course frees up human resources to deal with the more difficult, individual issues.

PIAs (Privacy Impact Assessments) are a good example of where technology can standardise and streamline the process of gathering information and enabling privacy teams to make better informed decisions.  By reducing the time and cost involved in managing PIAs, it becomes easier to carry them out more frequently and in smaller projects. This in turn could be one of the most effective ways of both increasing awareness of privacy issues within the organisation as well as encouraging the adoption of more privacy centric systems and processes.
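As a toy example of that standardisation, an initial PIA screening questionnaire can be reduced to a weighted checklist that decides whether a full assessment is triggered. The questions, weights and threshold here are invented purely for illustration; a real questionnaire would be drawn up with legal input:

```python
# Hypothetical screening questions with risk weights.
QUESTIONS = {
    "processes_sensitive_data": 3,
    "large_scale_monitoring": 3,
    "new_technology": 2,
    "data_shared_with_third_parties": 1,
}

def screening_score(answers):
    """Total weight of the questions answered 'yes'."""
    return sum(weight for q, weight in QUESTIONS.items() if answers.get(q))

def needs_full_pia(answers, threshold=4):
    """Trigger a full PIA once the screening score crosses the threshold."""
    return screening_score(answers) >= threshold
```

Automating the screening step is precisely what makes it cheap enough to run on every project, however small, rather than only on the obviously risky ones.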

Online Tracking and the Data Protection Regulation

By far the most pervasive collection of information from people is in the online space. Just about every click, mouse movement, touch or keypress is captured as a stream of data that can be endlessly analysed, and ultimately monetised, by some organisation somewhere.

That raw ‘clickstream’ data becomes a behavioural profile which can then be fed back to the user as personalisation of the experience, reaction to which is further captured, measured and analysed in an almost endless feedback loop.
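Mechanically, the first step from clickstream to profile can be as simple as counting events per identifier. This deliberately minimal sketch shows why a ‘mere’ cookie ID so quickly becomes the key to a behavioural picture of one person:

```python
from collections import Counter, defaultdict

def build_profiles(events):
    """Aggregate raw (cookie_id, url) click events into per-cookie
    behavioural profiles: a count of pages seen per identifier."""
    profiles = defaultdict(Counter)
    for cookie_id, url in events:
        profiles[cookie_id][url] += 1
    return profiles
```

Everything downstream – segmentation, targeting, personalisation – is built on aggregations of exactly this shape, keyed by the same identifier throughout.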

The ultimate goal of much of this is to influence the future of those same users. It may be just finding the right advert to show that will nudge them towards buying something. It could also be about creating or reinforcing views or opinions, which in turn might affect the outcome of elections.  It might even push people towards, or pull them away from, acting on darker instincts. The point is, its purpose is to change the behaviour of people in some way.

To argue that most of it is beneficial, or simply benign, ignores the fact that the vast majority of it goes on with users at best having only a vague notion it is occurring, at worst against their wishes and reasonable attempts to stop it.

So when it comes to looking at the proposals for the new EU Data Protection Regulation (DPR), it seems logical to ask how this largest of data processing activities is handled, because if the Regulation doesn't get this bit right, then it will be a failure. We know this because the very reason it was decided that we needed new rules is that the current ones are not deemed fit for the digital age of ubiquitous data collection and analysis.

You therefore might reasonably assume that the DPR would be pretty clear on this issue, but unfortunately you would be very wrong. In fact there is a fundamental disagreement between the various negotiating bodies as to whether this type of data would even be covered by the regulation.

I called this clickstream data ‘information from people’ earlier because the real issue at stake is whether this gets to be defined as personal data at all.  If not then the Regulation won’t apply and there will be no new effective protections for individuals when it comes to such data.

The good news is that, to get to the bottom of this issue, you really only have to look at one paragraph fairly early in the text: Recital 24.

In the original text from the Commission in 2012 it said:

When using online services, individuals may be associated with online identifiers provided by their devices, applications, tools and protocols, such as Internet Protocol addresses or cookie identifiers. This may leave traces which, combined with unique identifiers and other information received by the servers, may be used to create profiles of the individuals and identify them. It follows that identification numbers, location data, online identifiers or other specific factors as such need not necessarily be considered as personal data in all circumstances.

The text is badly worded, but it seems to be saying that identifiers (like cookies) are not always, on their own, personal data (which is true), but that where those identifiers are unique to a user (as cookies often are) and are associated with, for example, a profile of pages visited, then this would be personal data, and therefore subject to the new rules.  However, it is difficult to say whether this improves protections for individuals, because the language is too muddled to provide any real clarity.

It seems others agree with this, because the revision by the Parliament did a pretty good job of providing clarity, and basically said: ‘[The] Regulation should be applicable to…cookie [type] identifiers…unless those identifiers do not relate to an identified or identifiable natural person.’

This was clearly meant to ensure that clickstream data was defined as personal data, because it would be relatively easy to show that it relates to a person through the unique cookie.  Regardless of anything else, it would mean there would have to be a lawful, and therefore justifiable, reason for collecting it.  Of course that would not in itself stop the data being collected, and nor should it, but the other controls defined by the Regulation would then clearly apply.  Critically, it would require the individual to be made fully aware of the collection, and that then gives them some level of direct control.
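To make concrete why a unique cookie is enough for clickstream data to ‘relate to’ a person, here is a minimal sketch (all names, URLs and data are hypothetical) of how a tracking server might accumulate a browsing profile keyed by nothing more than a cookie identifier, with no name or login involved:

```python
from collections import defaultdict
from datetime import datetime

# Each page view arrives with the unique cookie ID the browser sends back.
# No name, email or login is needed to build a per-individual profile.
profiles: dict[str, list[tuple[str, str]]] = defaultdict(list)

def record_page_view(cookie_id: str, url: str) -> None:
    """Append a timestamped page view to the profile for this cookie."""
    profiles[cookie_id].append((datetime.utcnow().isoformat(), url))

# Simulated clickstream from one browser: one cookie, hence one person.
record_page_view("cookie-abc123", "https://news.example/politics")
record_page_view("cookie-abc123", "https://shop.example/baby-clothes")
record_page_view("cookie-abc123", "https://health.example/symptoms")

# The profile now relates to a single, as yet unnamed, individual.
print(len(profiles["cookie-abc123"]))  # → 3
```

The point of the sketch is that the cookie is unique to one browser, so the accumulated list of pages is a profile of one individual whether or not anyone knows their name, which is precisely the situation the Parliament text set out to capture.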

Then we come to the Council version, which was published recently, though it is still only a draft.  It says: ‘online identifiers… should not be considered as personal data if they do not identify an individual or make an individual identifiable’.

This goes in the opposite direction and provides only a very weak and limited protection.  In most cases clickstream data, even though it relates to an individual, does not of itself make them identifiable (unless the user is logged in, for example).  This would mean there would be no restraint on collecting or using the data, unless it were combined with other data that then rendered an individual identifiable.  That is easily done, but crucially it usually happens further upstream than the point of original collection, and away from the user.  This in turn means that the ability of the user to exercise their rights when they do become identified or identifiable is heavily curtailed.  They would likely have limited options to give or withdraw consent, and the ‘legitimate interests’ justification for processing would almost certainly dominate.  Essentially this preserves the status quo, offering no new protections, at least at the point of first collection.
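The upstream re-identification described above is trivially easy in practice. A hypothetical sketch (all identifiers and datasets invented for illustration): neither dataset alone names the browsing user, but a simple join over the shared cookie ID ties the ‘anonymous’ clickstream to a named account, far from the point of collection and out of the user’s sight:

```python
# Hypothetical 'pseudonymous' clickstream collected by a tracker.
clickstream = [
    {"cookie": "cookie-abc123", "url": "https://health.example/symptoms"},
    {"cookie": "cookie-def456", "url": "https://news.example/politics"},
]

# Login records held by a different service that sees the same cookie.
logins = {"cookie-abc123": "alice@example.com"}

# A one-line join re-identifies every user in the overlap.
identified = [
    {**view, "email": logins[view["cookie"]]}
    for view in clickstream
    if view["cookie"] in logins
]

print(identified)
# The visit to the health site is now tied to a named account;
# under the Council wording, the Regulation's protections would only
# bite at this late stage, not at the original point of collection.
```

This is why the point of first collection matters: by the time the join happens, the user has no practical way to give or withhold consent.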

So what does this mean?  In my view, if the goal of the regulation is to improve the protections for individuals in the face of technological advance then based on this analysis only the Parliament version is capable of doing this for the vast majority of data being collected on the web.

Germany Proposes Class Actions for Data Protection

One of the big differences between the USA and Europe when it comes to privacy law is in the respective enforcement regimes.

In the EU, breaches of data protection laws are investigated by Data Protection Authorities (DPAs), whose maximum powers are generally to hand out fines of a fairly limited nature.   This money either funds the activity of the DPA, or as in the case of the UK, goes into the public purse.  There is no effective redress or restitution for the victims of the infringement.

By contrast, in the USA there are several mechanisms designed to directly compensate consumers and try to ensure they are protected from future harms, notably in the case of data breaches.  The costs to companies can run into hundreds of millions in any currency, and the argument is that this can act as a significant incentive to get things right in the first place.

I have heard more than one data protection lawyer argue that some system of individual redress in European data protection law would be a significant step forward in improving enforcement.  And this was coming from people who would be as likely to represent the company in question, as they would any individual victims.

Now it seems the Germans have taken a lead, as they often do on issues of privacy in the EU, and have proposed changes to their law which would open the door to collective action, commonly known as class action.

This would mean that where a breach of data protection law had an impact on large numbers of people, those affected would be able to appoint a representative to sue for compensation.  The idea is that the potential for this to happen, and the possible size of that compensation, should act to improve practices, especially those that have only a small impact on each person but affect large numbers of people.

These of course are exactly the kind of data practices that are common in new technology businesses, where risks are often little known because they are new or may be overlooked in the interests of growth and profit.

If these proposals become law, many organisations may be forced to up their game, or face huge damages to both profits and reputation.

New Draft of Data Protection Regulation Released

Shortly before Christmas a new draft version of the Data Protection Regulation was released by the Council of Ministers.  The text is still being debated but this certainly shows the direction the ministers are heading in, so is worth some analysis.

Once approved, this will become the third version of the law, following the original produced by the Commission in 2012 and the one approved by the Parliament in 2014.

Once the Council version is finished, there will then be a final trilateral negotiation to reach the final piece of legislation. Comparing this latest Council draft with the version produced by the Parliament in particular gives some indication of how difficult that negotiation might be, and therefore how long it will take.

Key Issues:

Definition of Consent.  The Council text weakens consent by removing the requirement that it must be ‘explicit’, preferring the term ‘unambiguous’, a significant departure from both the Commission and the Parliament. Although all texts support the interpretation in Recital 25 that consent should be indicated by ‘affirmative action’, the Parliament further strengthened this by adding that ‘mere use of a service’ should not constitute consent.

This issue is particularly relevant to web services, which often seek to rely on continued browsing of a site as an indicator of consent to privacy practices. The traditional alternative is putting some mechanism in place to require users to signify consent, such as tick boxes.  However, this can put some people off using a service by creating a barrier to entry, or lead to ‘consent fatigue’, where they blindly agree to terms and conditions they haven’t read.

We have seen this battle played out before – most recently with the consent requirements in the cookie law.  I think it is safe to say that this is going to continue to be a key battleground right down to the wire.

Information Requirements. Allied to consent is the need to provide information so that data subjects can understand what it is they are consenting to. Here the Council text is far less prescriptive than the Parliament one, which sought to create a highly standardised format for information notices, with clear and consistent language and iconography. The aim was to find a model that would make privacy notices easier to understand, which many have argued is a highly laudable goal.  However the format of the notice, and especially the design of the icons, was not well received in the design community in particular.

Data Protection Impact Assessments and Data Protection Officers. The Council has embraced the ‘risk based approach’ to data protection, and nowhere is this more clear than in the modifications to the requirements for carrying out Data Protection Impact Assessments (DPIAs) and employing DPOs.  The Parliament version of the text is prescriptive, requiring DPIAs and DPOs in most circumstances, with exceptions for small businesses and small-scale data usage.  By contrast, the Council makes DPOs voluntary for most organisations and requires DPIAs only for ‘high risk’ data processing activities.

Whilst this may lift administrative burdens in many circumstances, it also leaves much greater room for interpretation, especially around what constitutes ‘high risk’, and this potentially results in greater uncertainty and widely differing practices, which in turn could lead to weaker consumer protections.

Harmonisation.  One of the original stated goals of the Regulation was to harmonise both rules and practices across the EU – creating a level competitive playing field and contributing to the Digital Single Market initiative.  This idea is particularly attractive to multi-national operators – but one of the hardest to deliver, because it reduces the authority of individual countries through their national regulator.

That makes it a highly politicised issue.  True harmony might weaken rules in one country, whilst strengthening them in others, and this has resulted in objections to the same wording, but for very different reasons – Germany and the UK being prominent examples.  The Council text has a number of provisions in it which appear designed to increase the autonomy of individual country regulators in comparison with the Parliament and Commission texts, leading to a weakening of the ‘one stop shop’ principle.

Also of significant interest in this draft is the sheer number of notes indicating the continued concerns of individual member states.  This tells us that agreement on this document may still be a long way off.

January 2015 saw the start of the six-month Latvian presidency of the EU.  Whilst the Latvians have made getting a final position from the Council their top priority, the continuing differences have already led prominent MEP Jan Albrecht, who led the Parliament’s work on the legislation, to predict that we won’t see the Regulation finalised much before the end of this year.


EU Data Protection Directive Safe Harbor

It is intriguing that, with all the recent discussion of reforming the EU Data Protection Directive, Safe Harbor, the US framework for complying with Directive 95/46/EC, has not been discussed very often. Particularly when the necessity for US-based businesses to comply with the new EU Data Protection Regulation is such a hotly debated subject. So, let’s delve in and learn a bit about the EU Data Protection Directive and Safe Harbor.

The EU Data Protection Directive, Directive 95/46/EC, was finalized in 1995. As part of the larger framework of policies about privacy and human rights, the directive regulates the sharing of personal data between citizens of the EU and others. In a nutshell, it demands that personal data only be shared if it is processed transparently (the individual knows and consented to sharing that data); it is only taken for an explicit, legitimate purpose that is clearly defined; and it is only processed in accordance with its original purpose for being collected.

The EU Data Protection Directive also mandates that personal data only be shared with countries that have similar data protection regulations; however, this was not a major concern until the Internet became more prevalent. By 2000 there were already over 360 million people online worldwide, and the number of users was increasing every day. Concern therefore arose about what this meant for EU citizens and the personal data they share with US-based organizations. So the European Commission and the US Department of Commerce produced a framework for how US-based companies could comply with these regulations.

US-based companies that wish to comply with the US version of the EU Data Protection Directive, Safe Harbor, must uphold seven principles: notice, choice, onward transfer, access, security, data integrity, and enforcement. Notice and choice are connected: people must be informed about the data collection, its uses, transfers to third parties, and how to opt out of data collection.

Onward transfer is the principle that one entity may only pass data on to a third party if both are already following all of these principles (and, of course, the original collector gave notice and obtained consent), unless that third party is contracted by the data collector to process data solely on the data collector’s behalf. Security means that the company must take reasonable measures to secure private data. Data integrity means that the data must be relevant to the purpose for which it was collected. Access means that people must have access to their data and be able to easily correct inaccurate personal data. Finally, enforcement is the requirement that these principles be enforced by a third party.

After this framework was established in the US, the EU issued a final Commission decision, 2000/520/EC, declaring “the adequacy of the protection provided by the safe harbour privacy principles”. Since then, however, the Safe Harbor framework has been heavily criticized, leaving one to wonder whether, once the EU Data Protection Directive has been reformed, Safe Harbor will be reformed too, or completely replaced with a new framework.