Pokémon Go and Location Privacy

There is one species of Pokémon that even the most dedicated Pokémon Go players are unlikely ever to catch, and that of course makes it all the more desirable.

Privachu like to be left alone to go about their lives. They are not unfriendly and can be quite gregarious. They are also not as rare as one might think, given how difficult they are to get hold of. What makes Privachu different from all other Pokémon is that they choose when and how to reveal themselves, rather than simply broadcasting their location to anyone who might want to find them. And of course they will only reveal themselves to those they trust not to pass the information on to people they do not want to be found by.

OK, they don’t exist really, I’ve just made them up (though if anyone from Niantic wants to create Privachu, I am willing to be reasonable on the royalties – do get in touch).

Pokémon Go, the augmented reality, location-based mobile game, is currently taking the world by storm, but it has been the source of significant concern about the amount of personal data collected by the app and how that data may be shared. This is especially important because the game is played largely by children.

Much of the early privacy concern focused on the fact that users appeared to be required to give Niantic, the company behind the game, full access to their Google account (one of the main ways of registering for the game), which would include all their contacts and any documents stored in Google Docs.

However, it fairly quickly emerged that this was actually the result of a configuration error, which was rapidly corrected, and that Niantic did not make use of, or try to access, any of the extra information it did not need to verify the identity of the player. Nevertheless, even this short-lived issue may have affected millions of people, and it should serve as a salutary lesson in putting privacy thinking at the heart of the user experience design process.

The long-term privacy issues with Pokémon Go, however, clearly centre on location. Of course, location-based digital services have been around for at least as long as the smartphone itself. Aside from the obvious ubiquity of connectivity, location-driven services are in many ways the smartphone's killer app, the one that makes it worth all the investment.

What is perhaps different about Pokémon Go is that it is not simply collecting location data – it is actively incentivising large numbers of people to visit particular locations where Pokémon can be caught.

Yes, there are big questions around the privacy implications of sharing (selling) location information with third parties, and those questions are already giving rise to investigations, notably in the USA and Germany.

What I think is more interesting is this: how are decisions made about where to place PokéStops, and which Pokémon are to be found there? There is huge potential here for a kind of targeted manipulation – the encouragement of particular audiences and profiles to visit specific locations. Niantic would be crazy not to see the potential in selling this capability, and I would be very surprised if on some level they are not already doing it, or thinking about doing it. There will be a powerful profit motive for it. Want to drive more visitors to your location? Pay for a particular Pokémon to make an appearance – or your competitor will.

Then of course there are also the unintended applications of the data. There have already been stories of crimes, even a murder, linked to the location data elements of the game. How long before the first major hack is uncovered?

Pokémon Go is going to be an interesting privacy story for quite some time, I think – not simply because of its huge popularity, though in no small part because of that. The use of location data is only going to grow over the coming years, and the issues are only going to get more complex. The popularity of Pokémon Go, and the huge amounts of data it generates, will almost certainly make it a pioneering proving ground for both the problems and, hopefully, the solutions.

Meanwhile, if you’d like to know where to find Privachu, you will have to wait for them to reach out, when they have learnt to trust you.

GDPR Compliance Means Re-visiting Cookie Consent

Remember the cookie law?  Ticked that box ages ago and not thought about it since?

Well, it is now time to re-evaluate your solution, because the game has changed. The ePrivacy Directive which gave us the cookie law is currently the subject of a public consultation, but that is not really the issue.

The fact is that the GDPR, which is now law but subject to a two-year grace period before enforcement, has already tightened up the rules as well as increased the penalties for getting it wrong.

There may be a while to go yet, and we may see some guidance from regulators, but I think they will have other issues on their collective agendas.  So it is really important to start thinking about the changes you will need to make now, especially for companies that have a lot of websites.

So what does GDPR mean for cookie consent?

Cookies can be personal data. The GDPR explicitly states that online identifiers, even if they are pseudonymous, even if they do not directly identify an individual, will be personal data if there is potential for an individual to be identified or singled out.  Any persistent cookie that is unique to the device by virtue of its attributes or stored values fits the criteria for personal data.  That means most cookies, and certainly the most useful ones for site owners. This is the basis for cookie consent being about GDPR compliance now, as well as the existing cookie laws. For more of the details on this argument, see the blog post on the Cookie Law website.
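To make that concrete, here is a minimal sketch (in TypeScript, with hypothetical names) of the kind of first-party cookie most analytics tools set: a random identifier, unique to the browser, persisted for a long period. Under the definition above, a value like this can single out an individual, so it is likely to be personal data.

```typescript
// Sketch: a typical persistent analytics cookie. The random ID is
// unique to this browser, so it can single out an individual and
// therefore fits the GDPR's criteria for personal data.
function setVisitorIdCookie(): string {
  const visitorId = crypto.randomUUID(); // e.g. "0b9f4c1e-..."
  const twoYearsInSeconds = 2 * 365 * 24 * 60 * 60;
  document.cookie =
    `visitor_id=${visitorId}; Max-Age=${twoYearsInSeconds}; Path=/; SameSite=Lax`;
  return visitorId;
}
```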

Implied consent is no longer going to be compliant. There are several reasons for this, but mainly it's because the GDPR requires the user to make an ‘affirmative action’ to signal their consent. Simply visiting a site for the first time would not qualify.

Advice to adjust browser settings won’t be enough. The GDPR says it must be as easy to withdraw consent as to give it. Telling people to block cookies if they don’t consent would not meet this criterion – it is both difficult and ineffective against non-cookie-based tracking.

‘By using this site, you accept cookies’ statements will not be compliant. If there is no genuine and free choice, then there is no valid consent. People who don’t consent also cannot be made to suffer detriment, which means you have to provide some service to those who don’t accept those terms.

Sites will need an always available opt-out. Even after getting valid consent, there must be a route for people to change their mind.  Again this comes down to the requirement that withdrawing consent must be as easy as giving it.

Soft opt-in is likely the best consent model.  This means giving an opportunity to take action before cookies are set on a first visit to a site.  As long as there is then a fair notice, continuing to browse can in most circumstances be valid consent via affirmative action.  Although see above about a persistent opt-out route. This however may not be sufficient for sites that contain health related content, or other sites where the browsing history may reveal sensitive personal data about the visitor.
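To illustrate the mechanics, here is a rough sketch of a soft opt-in flow. All function, cookie and URL names are hypothetical, not taken from any particular consent product: no non-essential cookies are set until the visitor takes an affirmative action after seeing a fair notice.

```typescript
// Soft opt-in sketch: on a first visit, show a notice and set no
// non-essential cookies. Only after an affirmative action do we
// record consent and load tracking scripts.
const CONSENT_COOKIE = "cookie_consent";

function hasConsent(): boolean {
  return document.cookie.includes(`${CONSENT_COOKIE}=granted`);
}

function loadNonEssentialScripts(): void {
  // Inject analytics/advertising tags only once consent exists.
  const tag = document.createElement("script");
  tag.src = "https://analytics.example.com/tag.js"; // placeholder URL
  tag.async = true;
  document.head.appendChild(tag);
}

function recordConsent(): void {
  const oneYearInSeconds = 365 * 24 * 60 * 60;
  document.cookie =
    `${CONSENT_COOKIE}=granted; Max-Age=${oneYearInSeconds}; Path=/`;
  loadNonEssentialScripts();
}

function showNotice(onAccept: () => void): void {
  // Render a banner explaining cookie use, with an explicit accept
  // control; a real banner would also link to a persistent opt-out.
  const banner = document.createElement("div");
  banner.textContent = "We use cookies for analytics and advertising. ";
  const accept = document.createElement("button");
  accept.textContent = "OK, continue";
  accept.addEventListener("click", () => {
    banner.remove();
    onAccept(); // the click is the affirmative action
  });
  banner.appendChild(accept);
  document.body.appendChild(banner);
}

if (hasConsent()) {
  loadNonEssentialScripts();
} else {
  showNotice(recordConsent);
}
```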

You need a response to Do Not Track browser requests. A DNT:1 signal is a valid browser setting communicating a visitor preference.  It could also be seen as an exercise of the right to object to profiling.
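Checking for the signal is a one-liner using the browser's standard navigator.doNotTrack property; what follows is a minimal sketch of honouring it by suppressing tracking altogether.

```typescript
// Honouring Do Not Track: "1" means the visitor has asked not to
// be tracked, which can be read as an objection to profiling.
function visitorObjectsToTracking(): boolean {
  return navigator.doNotTrack === "1";
}

if (visitorObjectsToTracking()) {
  // Skip loading analytics and advertising tags entirely, and
  // set no tracking cookies for this visitor.
} else {
  // ...proceed with the site's normal consent flow...
}
```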

Consent will need to be specific to different cookie purposes. Sites that use different types of cookies with different processing purposes will need valid consent mechanisms for each purpose.  This means granular levels of control, with separate consents for tracking and analytics cookies for example.
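As a sketch of what per-purpose consent state might look like (the purpose names and storage format here are illustrative only):

```typescript
// Granular consent: a separate flag per processing purpose, so
// tracking can be refused while analytics is accepted (or vice
// versa). Strictly necessary cookies need no consent flag.
interface ConsentState {
  analytics: boolean;
  tracking: boolean;
}

function saveConsent(state: ConsentState): void {
  const oneYearInSeconds = 365 * 24 * 60 * 60;
  document.cookie =
    `consent=${encodeURIComponent(JSON.stringify(state))}; ` +
    `Max-Age=${oneYearInSeconds}; Path=/`;
}

function isAllowed(purpose: keyof ConsentState): boolean {
  const match = document.cookie.match(/(?:^|;\s*)consent=([^;]+)/);
  if (!match) return false; // no recorded choice: default to "no"
  const state = JSON.parse(decodeURIComponent(match[1])) as ConsentState;
  return state[purpose] === true;
}

// Example: only load the analytics tag if that purpose was consented to.
if (isAllowed("analytics")) {
  // ...inject analytics script...
}
```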

Most sites right now would fail on many of these criteria.  But you will only need to fail on one of them to risk getting a fine under the GDPR.  It’s time to take action.

Consent under the General Data Protection Regulation

The consent of the individual for use of their information has long been a cornerstone of privacy and data protection law around the world.  It is widely seen as one of the simplest and most transparent ways to ensure fair and legal processing.  Yet in many ways consent has come under increasing attack over its suitability to achieve this in a balanced way.  In a digital world, with ever more personal data being collected and analysed, on ever smaller screens – or, in the case of many Internet of Things (IoT) devices, no screen at all – the utility, validity and viability of consent-based data processing is regularly questioned, even if the alternatives seem paternalistic or sneaky.

With this in mind it only seems right to delve into the consent provisions laid out in the General Data Protection Regulation (GDPR) and see what we find.  I’m not going to promise a complete analysis here of all the aspects of the regulation that touch on, or are touched by, the issue of consent, but hopefully I will cover the most salient, practical points of concern.

The Definition

Article 4 of the GDPR provides the core definition of consent as:

any freely given, specific, informed and unambiguous indication of his or her wishes by which the data subject, either by a statement or by a clear affirmative action, signifies agreement to personal data relating to them being processed;

Although the final text only requires consent to be explicit for certain types of data processing, the definition here sets quite a high bar for all forms of consent.

Affirmative Action

Notably, we have this idea of “a clear affirmative action”, and in Recital 25 this is spelled out in terms of both what is and what isn’t valid:

This could include ticking a box when visiting an Internet website, choosing technical settings for information society services or by any other statement or conduct which clearly indicates in this context the data subject’s acceptance of the proposed processing of their personal data.

Silence, pre-ticked boxes or inactivity should therefore not constitute consent.

This last element particularly seems to destroy the notion of ‘implied consent’ where simply using a service, particularly a digital one, can be taken as an indication of agreement.

So the subject must take an action, and that action will have to be a clear indication of consent.  This would appear to rule out any other actions a user might make on their device that could easily be misinterpreted, a subject I may return to at a later date.

Freely Given

There is a particularly high bar for determining whether or not consent is freely given and this may create the greatest difficulties for certain types of digital services.

There must be a “genuine and free choice”, which is particularly emphasised in Article 7(4):

When assessing whether consent is freely given, utmost account shall be taken of the fact whether, among others, the performance of a contract, including the provision of a service, is made conditional on the consent to the processing of data that is not necessary for the performance of this contract.

Many so-called ‘free’ web services rely on monetisation through behavioural advertising, which itself means profiling of visitors.  If access to those services is made conditional on allowing profiling – then there can be no valid consent for the profiling activity.

One of the recent trends we have seen is publishers preventing visitors who use ad-blockers from viewing content.  This strategy may have to be re-thought, particularly as Recital 32 makes clear: “consent should not be regarded as freely-given if the data subject… is unable to refuse or withdraw consent without detriment.”

Article 7(3) also makes the point that “It shall be as easy to withdraw consent as give it.”

When taken in conjunction with the first point about affirmative action, this suggests that if consent is provided through an action like a click on a button or link, then to be freely given it must also be withdrawn through a similarly simple and easily accessible action.

Specific and Informed

For consent to data processing to be specific, it must be separated from other types of consent and actions.  This might mean, for example, that agreeing to the terms of service for delivery of an item you have bought online should be a separate action from agreeing to have your data shared with third parties for marketing purposes.

In addition, being informed means knowing about all the different purposes of processing and knowing the identity of the data controller, as a bare minimum.  It also means being informed of one’s rights, such as the ability to withdraw consent or to object to some types of processing, like profiling.

Although these kinds of provisions have been around a long time, the requirements to meet them are much more defined in the GDPR.  There is a long history of smaller websites in particular cutting and pasting privacy notices from other sources without much thought.  That kind of approach will carry much higher risk under the GDPR.  To produce a valid notice, organisations will have to have a thorough knowledge of their uses of personal data.

Demonstrating Consent

One of the many significant changes introduced by the GDPR is the move towards greater organisational accountability and a shifting of the burden of proof for compliance.

So one of the conditions for valid consent, in Article 7(1), states: “the controller shall be able to demonstrate that consent was given by the data subject to the processing of their personal data.”

This means not just recording the fact that someone ticked a box in a form, but having an audit trail that links the action to any notice and the actual processing of the data concerned.
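As an illustration, an audit record for a single consent event might capture something like the following. The field names are my own, not mandated by the Regulation:

```typescript
// Sketch of a consent audit record: it ties the affirmative action
// to the notice that was shown and the purposes agreed to, and it
// leaves room to record a later withdrawal.
interface ConsentRecord {
  subjectRef: string;      // reference to the data subject's account
  timestamp: string;       // ISO 8601 time of the affirmative action
  action: string;          // e.g. "ticked marketing opt-in checkbox"
  noticeVersion: string;   // which version of the privacy notice was shown
  purposes: string[];      // e.g. ["email-marketing", "profiling"]
  withdrawnAt?: string;    // populated if consent is later withdrawn
}

const example: ConsentRecord = {
  subjectRef: "user-48151623",
  timestamp: "2016-03-01T10:42:00Z",
  action: "ticked marketing opt-in checkbox on signup form",
  noticeVersion: "privacy-notice-v3",
  purposes: ["email-marketing"],
};
```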

Failure to be able to verify consent records in some way will itself be a breach of the requirements for legal consent. This not only exposes the organisation to a risk of enforcement, it can also potentially render large swathes of personal data useless for any purposes that are reliant on consent.

Administrative Fines

It is well known that the GDPR creates the ability for regulators to impose huge fines on organisations for compliance failures.  What has been less publicised is the granularity of detail of how these fines might be meted out.

In the UK we saw throughout 2015 how the ICO handed out its largest fines for unsolicited (read unconsented) marketing.  The GDPR strengthens the hand of regulators for this type of enforcement.

So in Article 79 we see that infringements of the basic principles of processing “including conditions for consent” can be subject to the highest level of fines, which may be the higher of 20 million Euros or 4% of “total worldwide turnover of the preceding financial year”. Ouch.

Conclusion

For many businesses, this area of compliance has until now been among the least likely to be well managed, and the most likely to be bending or breaking the rules.  Under the GDPR, legally valid, documented consent could well become one of the most important things to get right.

If you need any help preparing for the GDPR, and particularly with issues around use and proof of consent, please get in touch today.

Optanon GDPR Compliance Manager

We have been working for several months now on a new platform to help organisations assess their readiness to comply with the EU General Data Protection Regulation (GDPR).

GDPR Compliance Manager will be released later this year as part of the stable of Optanon brand products that currently includes our Website Auditor and Cookie Consent solutions.

The platform will enable organisations to work out what changes they will need to put in place to meet the requirements of the GDPR before it comes into force.  In addition it provides planning and documentation functionality to support a change programme as well as produce the accountability documentation that will be required.

We will be releasing more information in the coming weeks and months, but for now, here is a preview screen shot.

[Screenshot: preview of Optanon GDPR Compliance Manager]

If you would like to know more about how Optanon GDPR Compliance Manager might help you, and arrange a demo, please give us a call or drop us an email.

General Data Protection Regulation Top Ten Issues

The ink is barely dry on the draft, but the  EU General Data Protection Regulation (GDPR) looks set to change the regulatory environment for personal information not just in the EU, but around the world. Its aim is to create a legal infrastructure for the use of personal data that is fit for purpose, both today and in the future.

The GDPR was designed to increase legal certainty with regards to information flows both within the EU’s borders and beyond. It also introduces stronger consumer protections, with requirements for greater transparency and accountability about how data is used by businesses, not-for-profits and governments alike.

This is intended to give individuals increased trust in data practices.  Consumer research in the last few years has shown consistently high levels of concern and lack of trust in this area, and this is believed to be a potential brake on the future growth of digital technologies.

However, in order to achieve these goals the GDPR does come with some stings in its tail. It places much greater requirements on businesses to communicate effectively with customers, and obtain much clearer consent for the use of their data.  Organisations also have to provide customer choice mechanisms, and there is a greater emphasis on documenting data processing activity. And then of course there are the fines.

At over 200 pages it is a very wide-ranging instrument.  However, for those who haven’t had time to read it yet, these are what we think the top 10 issues for most organisations will be.

1.  A broader definition of Personal Data

As we predicted earlier, the scope of what constitutes ‘personal data’ has explicitly been broadened to include any information ‘relating to’ an individual. This specifically includes ‘online identifiers’ so cookies and the advertising IDs seen in the mobile eco-system will be caught up, along with anything that contributes to identifying an individual, or links to such identifying information. This has some widespread implications for online tracking in particular.

2.  A higher bar for consent

Whilst the final text shied away from explicit consent as a requirement, except when special categories of (sensitive) data are concerned, there is still much emphasis on gaining consent through active user mechanisms like tick boxes.

A key part of the test of the validity of consent is whether consumers understand what they are agreeing to, and are given a meaningful choice. There is also a significant shift in the burden of proof.  You will need to be able to provide evidence that you obtained consent from specific data subjects, which is going to require much better record keeping for many organisations.

3.  Data Protection Officers

Although not a universal requirement, many organisations will be required to appoint a Data Protection Officer (DPO) to oversee data uses and ensure compliance with the law. They will be mandatory in the public sector, but for private sector organisations the key test will be whether the organisation is involved in “systematic monitoring of data subjects on a large scale”; however, it is not clear at this time how ‘large scale’ will be interpreted.

Earlier, more detailed requirements for the skills and experience of the DPO, and guarantees over their employment, have been dropped, but a key issue in the short to medium term will be a lack of the right people to fill such roles.

DPOs however can be outsourced, which may create a market for new services, especially to cater for the needs of smaller businesses.  The DPO responsibilities can also be given to someone alongside other work within the organisation, as long as this does not create a conflict of interest.  So training existing staff into the role could be a viable option for many.

4.  Transparency and Accountability

The GDPR scraps the need for controllers to register with their Data Protection Authority (DPA), but replaces this with a requirement to both better inform data subjects about practices and rights, and to keep records that can be made available on request – such as in the event of a data breach or a compliance complaint.  Such records are about demonstrating that the organisation has thought through the impact of its systems and processes, and made informed choices about how to comply with the GDPR.  The Data Protection or Privacy Impact Assessment (PIA) is one example of such documentation.  It is intended that a PIA will show that an organisation has considered the risks associated with its particular personal data practices, and taken reasonable steps to control or mitigate them.

There are also new requirements on the level of detail that organisations must provide to data subjects about their practices, as well as a need to make sure that this information is both accessible and easy to understand. In particular there is a need to explain the logic behind decisions made on the basis of analysing personal data – which may have particular significance in some sectors that have relied on such processes being largely secret. Organisations are also expected to inform subjects about their rights and how to exercise them.

5.  Data Protection by Design and Default

Although references to this have been cut back in comparison with earlier versions of the text, the GDPR requires that the design of systems and processes gives consideration to compliance with the principles of data protection. Particular emphasis is placed on the ideas of collecting only the data necessary to fulfil specific purposes, discarding it when it is no longer required, and protecting data subject rights.

It also sets up the possibility for the development of certifications and codes of practice that organisations can follow to help meet these requirements.  Keep an eye out for these as they develop.  In particular we expect DPAs to get involved in this area: they will be losing their registration fees and will therefore need new sources of income.  In the UK the Information Commissioner’s Office (ICO) has already been developing this idea, so expect it to continue. Trade bodies are also likely to have a role to play here.

6.  The Right to Erasure and Data Portability

These new data subject rights are likely to pose challenges for many organisations. The right to erasure is a clarification of the much talked about ‘right to be forgotten’.   Although the circumstances when the right can be exercised have been made clearer, the balancing against other rights and obligations is still needed.

The right to have a copy of your data in a machine readable form to transfer to another provider may be difficult at first, but it could also lead to better systems interoperability in the longer term – which is already a growing technology trend.  In particular this provision may facilitate the development of the market for ‘personal data stores’, an idea that has long been talked about, but not yet fully realised as providers have struggled with sustainable and scalable business models.

7.  Removal of Subject Access Request Fees

Data subjects have a right to know whether or not an organisation is processing their personal data, what that data is, and the purposes of the processing.  The GDPR removes the ability to charge an upfront fee for providing such information.  The currently allowable fees don’t come close to covering the cost of handling a Subject Access Request (SAR), but they are seen as a deterrent to time-wasters, and it is feared that removing them could open the floodgates to many more SARs, pushing up costs.

Companies will be allowed to charge for subsequent copies of the same data, which may reduce the risk of this to some extent. However, it may be worth investing in making sure you can respond to such requests as efficiently as possible, which will not be easy in many cases.

8.  Reporting Data Breaches

Data controllers will be required to report data breaches to their DPA, unless the breach is unlikely to represent a risk to the rights and freedoms of the individuals concerned. This qualification may be difficult to judge, however, so in many cases it will be safer to notify. The notification must be made within 72 hours of becoming aware of the breach, unless there are exceptional circumstances, which will have to be justified.

Where the risk to individuals is high, the data subjects themselves will also need to be notified, although no specific timescale is set for this.  It is also worth noting that the DPA can instruct an organisation to inform data subjects if it has not already done so, so we can expect further guidance on the circumstances in which this would be the correct course.

9.  Fines

The GDPR very deliberately raises the bar in terms of the ability of DPAs to issue fines for breaches of the rules.  These can go as high as 4% of global turnover.  Not only are the fines designed to ensure data protection becomes a board-level issue; by taking into account worldwide revenues, they also seek to sidestep attempts by multinationals to engage in fine-avoidance business structures.

It is also worth noting that fines can be levied without the necessity to demonstrate harm – although the largest ones will likely be reserved for cases where data subjects have directly suffered damages.

10.  Data Processor Responsibilities

Organisations that only process data on the instructions of their clients are not directly covered by the current data protection regime.  Their actions are assumed to be governed by agreement with the customer, who is the data controller and therefore directly responsible for the actions of the processor. However, this all changes under the GDPR, and processors will now have direct legal obligations and responsibilities.

In particular this means that processors can in certain circumstances be held directly liable and be required to pay compensation to a data subject. It will therefore become very important to establish the contractual relationships and liabilities of the different parties in a controller/processor relationship, and the costs of some services by processors may rise to offset additional risks and insurance costs.

We hope you find this useful.  In future posts we will look at what you can do to prepare, as well as examining each of these areas in more detail.

In the meantime, if you have any questions or would like to know more about how the GDPR might affect your business, do get in touch and we will be happy to help.

The GDPR Has Landed

After nearly four years, and some of the most intense negotiations and lobbying in EU history, agreement on the text of what some are calling the most significant piece of privacy legislation ever was reached on Tuesday night (Dec 15th 2015).  Tweets from various key players even equated the agreement with the election of a new Pope, with talk of white smoke going up.

This was followed up this morning (Thursday 17th Dec.) with a vote by the European Parliament’s Justice Committee to formally adopt the text – although the support was not unanimous.  There will be a few more hurdles to get over but these are largely expected to be rubber stamping exercises.

It is expected that the Regulation will become EU law in the early part of 2016, with a two-year lead-in period before it is enforced.  The final wording of the document has been released, although it will be subject to cleaning up in terms of clause numbering, and of course it will also have to be translated into every language of the EU.

Time to Get to Work

With the text in place, the real work for organisations of making sure they will be in compliance with the new rules can now begin.  There will undoubtedly be a lot of analysis of the requirements in the months to come, to work out exactly what it all means, and we of course will bring you as much of this as we can.

One of the things that makes the legislation so significant is its global reach.  Any organisation, regardless of its location in the world, that touches the personal data of EU residents and citizens in any way, is going to be affected by this law.

Although there are parts of it for everyone from privacy fundamentalists to Silicon Valley libertarians to dislike, almost all agree that this represents a new era of strong data protection and privacy for the digital age.

I suspect many privacy people around the world are going to be spending a big chunk of the holiday season working out how to hit the ground running come the New Year. I know I will.

Happy Christmas!

Touchnote Hack – A Notification Failure?

Touchnote, the personalised postcard company, has reported that it is the latest UK online service to have lost the personal information of millions of individuals as a result of an attack on its systems.

It seems I am one of them, but it looks like they are not planning on telling me.  This seems to me a big hole in their breach notification strategy.

I am not one of their customers, but somebody I know is and I have received postcards via their service.  So my address details are in their systems.  I know that my data is part of the breach only because the person who is the account holder told me they had received an email confirming that the information in their account was amongst the stolen records – and that includes my address.

However, Touchnote seems not to be too concerned about this, as its online notice about the incident contains the following:

We also confirm the card recipient’s name and postal address regrettably has been stolen as part of this data theft. However there is no action required by the recipient as this information alone cannot cause identity breach.

Now, I have taken care to avoid my address information being widely available.  I opted out of the edited electoral register and registered with the Mail Preference Service.  As a result I don’t get much promotional mail – which I am happy about, as it saves everybody time and money, not to mention the environment.

I am now expecting this to change as the criminals will no doubt seek to get a return on their investment by selling my details on.

Worse than this, an obvious attack would now be to send me some mail, perhaps pretending to be Touchnote, to try and sell me some kind of identity protection.  It is not too hard to imagine how a cleverly worded letter, perhaps referencing the online news about the hack, could find enough people who would fall for some con that will result in them handing over money to the criminals. It would be even more effective if the criminals also had copies of the images used in past postcards – although this appears not to be the case.

Consider that the nature of the Touchnote service probably means the addresses of lots of elderly relatives who love to get personalised postcards of the grandchildren on holiday. There are already too many stories of the exploitation of the elderly and vulnerable in this way to not consider this a serious threat. Then of course such people also get added to ‘suckers’ lists and further monetised.

So Touchnote may be doing a reasonable job of informing its customers, but logically these represent less than half of the people who have had their information compromised.  At the moment it appears Touchnote has not thought about the risks to the rest.

It is worth noting that under the new breach notification rules in the forthcoming EU GDPR, all data subjects have a right to be notified when a breach takes place.  Companies need to realise that this means a lot more people than just their paying customers.

Editorial Note: A correction was made at 15.30 on the date of publication.  The original article stated it was not known if Touchnote had lost uploaded photos.  It was later stated that no photos were accessed illegally in the hack. This is now clear in the article.

Building Websites on Privacy by Design Principles

One of the many significant changes being introduced by the European General Data Protection Regulation (GDPR) is the requirement to adopt principles of privacy by design (PbD) when creating or revising processes or technology.

Given that websites are regularly being re-designed and developed, often by out-sourced agencies, it is quite likely that when the requirements become law, web development projects will be the biggest, most immediate category of impacted software development activities.

Websites are also often the first and only point of contact between an organisation and its prospects and customers, who are also usually the most numerous category of data subject.  The website therefore sets the tone for the brand and its attitude to privacy principles.

With this in mind, it is rapidly going to become important to understand what this will mean for web design. This article provides a brief overview of the issues that businesses and web designers will need to think about.

Overview of PbD

Much more than an empty phrase buried in a long legal document, Privacy by Design (PbD) is a concept developed by the respected Canadian privacy regulator Ann Cavoukian back in the 1990s.  It provides a framework of seven principles to guide the development of new systems and processes handling personal information.

In brief, following these principles means building privacy into the design of a system as the default setting, ensuring personal data is kept secure and destroyed when it is no longer needed, providing users with transparency and meaningful choice with respect to the use of their data, and avoiding unnecessary trade-offs between privacy and other interests.

More detail is available here (PDF).

Looking at these principles, it is easy to see that the vast majority of websites would fail even the most lenient test of their application. More than that, when recently presented with a requirement for a partial implementation of these principles, pushed on to them through the law, many website owners exhibited strong resistance.

I am of course talking about the ePrivacy Directive, aka cookie law.  The requirement for consent for the use of cookies embodies the PbD principles of privacy by default, transparency and meaningful choice.

More than four years after it first came into effect, and although more and more responsible companies are moving in the direction of greater privacy choices, the law is given little more than lip service by millions of other sites.

More than that, it is frequently derided by web professionals, many of whom have as little understanding of the law as they accuse lawmakers of having of technology.

The Impact on Website Design

A privacy by design approach to web design and development needs to take into account the two broad modes by which visitor privacy is impacted:

  • Volunteered personal data
  • Automated personal data collection

The volunteered data part is relatively easy, and many websites tackle this reasonably well, although there are a few things to look out for.  It is the automated bit that presents more challenges.  We will however look at both.

Volunteered Data

The most obvious source of volunteered data is visitors submitting their information through web forms.  Though on the surface this is a straightforward case of getting consent through a privacy policy and checkbox, there are a few things to consider:

The site must be clear about all the potential uses for the data, not just the uses the subject expects or is providing their details for.  Where any of those uses are additional to the core reason for collection, the ‘privacy as the default’ principle would require that the data subject opt in to those additional uses – and not just to all future uses collectively, but to each specific use.

Even where opt-in consent has been obtained, there would also need to be an easily accessible option/control mechanism to opt out again at any time.

What happens to the submitted form data is another crucial design issue.  Is it emailed, sent to a CRM, stored in the website database? It is common for all three to happen, resulting in multiple copies of the data.  But if you are sending the data to another system, leaving it in the web database as well is an unnecessary security vulnerability, and one many sites are exposed to. If you are not using the data operationally in your site (such as for logging in), clear down the data submitted through forms on a regular basis, as sketched below.
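Here is a minimal sketch of such a clear-down, assuming the site keeps submissions in a relational table called form_submissions with an exported flag. All names, and the MySQL backend, are hypothetical:

```typescript
// Scheduled clear-down of form submissions that have already been
// forwarded to the system of record (CRM, email, etc.). Run this
// periodically, e.g. from a nightly cron job.
import mysql from "mysql2/promise";

async function purgeForwardedSubmissions(retentionDays = 30): Promise<void> {
  const conn = await mysql.createConnection(process.env.DATABASE_URL!);
  try {
    // Delete only copies that are both exported and older than the
    // retention window, so operational data is untouched.
    await conn.execute(
      `DELETE FROM form_submissions
       WHERE exported = 1
         AND submitted_at < NOW() - INTERVAL ? DAY`,
      [retentionDays]
    );
  } finally {
    await conn.end();
  }
}
```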

Not all volunteered data is captured directly through web forms however.  Can people set a language preference on your site? Do any interactions result in content being personalised? This could be considered volunteered personal data.  How are people notified about this? How is the information saved, for how long?  These are all valid PbD considerations.

Automatically Collected Data

This is generally the largest volume of data generated from your website.  Much of it happens through the use of cookies and other mechanisms which are set and read by the various applications your website uses.

In Europe there are specific rules around visitor consent and the use of cookies.  We will not go into the detail of those requirements here, except to point out that by applying privacy by design thinking to the use of cookies, you will likely also be largely compliant with the cookie rules.

Of course not all cookies contain or can be considered as personal data.  However the extended scope of the definition of personal data under the GDPR means that many types of cookies will likely fall directly and clearly in scope of the new rules.

In particular, cookies that act as unique device or user identifiers – such as those used for online tracking and user login – are likely to be considered personal data under the EU GDPR.

This will therefore mean a need to evaluate every element of your website that sets cookies, and to identify whether those cookies carry personal data.  The next stage would be to consider whether privacy-friendly alternatives exist or, if not, how to implement user controls.

This has particular implications for technologies that set third party cookies.  It would no longer be possible to argue that ‘we are not responsible for third party cookies’. PbD requires site owners to shift the focus from the cookies per se to the decision to use the underlying technology. No website owner can realistically say ‘we are not responsible for the technology we add to our website’.

A PbD approach to site design therefore requires that every bit of the technology infrastructure of a site will need to be evaluated for its impact on privacy, and require the provision of suitable default settings, notice and control.

PbD principles would suggest that you can’t just add a standard Facebook Like button to your pages by default. You would need to ask users to opt-in to such features, whilst also making sure that they are aware of the privacy implications of doing so.
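One common way to honour this, sometimes called the ‘two-click’ pattern, is to render a neutral placeholder and only load the third-party code after the visitor opts in. A sketch follows; the SDK URL is Facebook's public one, everything else is illustrative:

```typescript
// Opt-in ("click to load") social widget: the visitor's browser
// makes no request to the third party, and receives none of its
// cookies, until they actively choose to enable the feature.
function mountLikeButtonPlaceholder(container: HTMLElement): void {
  const enable = document.createElement("button");
  enable.textContent = "Enable Like button (loads content from Facebook)";
  enable.addEventListener("click", () => {
    container.removeChild(enable);
    // Only now is the third-party script injected and executed.
    const sdk = document.createElement("script");
    sdk.src = "https://connect.facebook.net/en_GB/sdk.js";
    sdk.async = true;
    container.appendChild(sdk);
    // ...plus the markup the SDK expects for the button (omitted)...
  });
  container.appendChild(enable);
}
```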

This also holds true for a vast array of technologies and services that are provided by third parties as scripts and code embedded into pages.  Analytics, videos, music, discussion forums, and of course advertising – all of these page elements are typically served up from separate host domains that are more or less invisible to the average visitor.

All of the most common examples of these services involve some level of personal data collection. A requirement to follow PbD principles means giving consideration to the impact of these throughout the process of development.  This is no easy task as many technologies designed to be integrated into other sites, are not clear about their data collection practices.

The Impact on the User Interface

PbD requires a thorough examination of the architecture of a website and its privacy impacts. It also requires mechanisms for visitors to be able to make realistic privacy choices.

This of course means that there is a need for interfaces to support such choices.  And this may be one of the greatest challenges for web design.

The kind of notices that we have seen arising from attempts to comply with the cookie law will not readily suffice – they are neither granular enough nor do they present enough choice.  What will be needed are more dynamic interfaces, showing and hiding content and functionality based on the choices made.

Such interfaces are not uncommon – the best web design already configures content and services around users; this is what ‘personalisation’ is. However, interface personalisation is generally not clear to the user, especially when and why it occurs.  Privacy by Design means not only making the fact of personalisation explicit, but also providing explicit choices to visitors about whether or not it should take place.

Allied to this, designers will also have to consider whether they want to give access to content and services to people who make privacy choices that go against the economics of the services being provided.

So if a visitor comes to a free news site that is paid for by privacy-invasive advertising, yet chooses not to have the advertising, designers will have to decide whether that visitor should be denied access to the content.

Conclusions

The aim of this article has been to raise just a few of the issues that are going to face the web design profession once the new European data protection rules are finalised.

Clearly of course these are not just decisions for ‘designers’ in the traditional sense – they are also examples of some fundamental questions for digital strategy.

The new law will mean there will be no getting away from questions like these when it comes to a new web build.  So the time is also fast approaching when some answers will be needed.

Safe Harbour Update

After days of anticipation following the decision by the EU’s highest court to strike down the Safe Harbour mechanism for transferring personal data to the US, the Data Protection Authorities have now spoken.

The Article 29 Working Party, the body that represents the collective voice of the EU’s privacy enforcers issued a statement on the 16 October.  They have promised not to rock the boat just yet, but that if a viable alternative to Safe Harbour is not found by the end of January 2016, there is a clear warning to batten down the hatches and prepare for a storm of enforcement.

OK, enough of the puns.

What this amounts to is more pressure to finish negotiations on Safe Harbour 2.0.  Transfers based on Safe Harbour are now unlawful, they state; however, Standard Contractual Clauses and Binding Corporate Rules remain valid tools.

Nonetheless, there seems to be a recognition that these are not completely watertight (OK, one more), as the DPAs will continue to consider what the court judgement means for these other transfer tools, and their continued use is qualified as allowable ‘during this period’.

They also emphasise the fact that the ‘massive and indiscriminate surveillance’ unearthed by Edward Snowden remains an unresolved issue at the heart of the problem.

With this in mind, readers should also take a look at the blog from Microsoft’s respected Chief Legal Officer, Brad Smith.  Addressing the issues that we have touched on ourselves, about the problems of jurisdictional boundaries and the global web, his suggestions for a way forward are highly practical.

At the heart is the idea that a citizen’s legal protections should follow their data wherever it is stored.  If this could become the basis of new international agreements, many of the issues could be resolved, including processes for lawful access by security and government agencies.

Sounds like plain sailing to me.

Safe Harbour Falls into the Atlantic

The Safe Harbour scheme that provides the legal underpinning for significant volumes of personal data use by the world’s largest technology companies has just been declared invalid by Europe’s highest court.  So does that mean the internet is going to grind to a halt as billions of data transactions get held up at the border? No, but there are going to be some changes in the background to make sure the information keeps flowing.

First, a bit of back story

The EU-US Safe Harbour (or Harbor on the other side of the pond) scheme was put in place about 15 years ago to make up for the fact that US privacy laws were judged to not provide an ‘adequate’ level of protection for EU residents when their personal data was transferred to US businesses for any reason.

Basically it requires the US firms to self-certify that they will be held to a set of privacy principles designed to provide the protections that are lacking in US law.  Some 4,500 firms have until now been relying on the scheme, including many of the internet’s technology giants.

The decision by the EU’s highest legal authority, the European Court of Justice (ECJ), to kill off Safe Harbour has come about through a case brought against Facebook by an Austrian student, Max Schrems, now being hailed as a hero by many privacy advocacy groups.

In the light of Edward Snowden’s 2013 revelations about the extent of mass surveillance by the US security agencies, which allegedly involved unrestricted access to personal data held by Facebook and others, Schrems argued that the protections of Safe Harbour were inadequate.

The Court essentially agreed, noting that the NSA having unlimited access to personal data, combined with the absence of any provision for an EU resident to take legal action against that access, represents a compromise of the fundamental rights to privacy enshrined in EU law.

With that decision the walls of the Safe Harbour crumbled into the Atlantic Ocean.

So what happens now?

Safe Harbour has been heavily relied upon, largely because it was the easiest route for US companies to legally import personal information from the EU, but it was never the only route.  What happens now is that those companies will need to put other mechanisms in place.  The next best method is what are known as ‘Model Contract Clauses’ – standardised terms and conditions.  Although not complex for most companies to adopt, this does involve a lot of paperwork and admin, so it can take time and be costly.

For larger companies, and especially those for whom data is their stock in trade, the disruption is likely to be minimal.  It is likely to be smaller US businesses for whom this decision will be a bigger additional burden.  Fortunately the EU Data Protection Authorities (DPAs) who will be charged with policing the transition, look likely to be reasonable in giving time for changes to be made.

However, this is unlikely to be the end of the story.  As other notable commentators have pointed out, neither model contract clauses, nor their more difficult cousin, Binding Corporate Rules, contain any protections against US intelligence intrusion greater than Safe Harbour.  So, in the short term, these are equally at risk of being legally challenged.

There is however some light at the end of the tunnel. Negotiations for a replacement to Safe Harbour have been under way for two years now.  Although the talks have seemingly been bogged down in the endgame for some months, this decision is likely to increase the pressure to get them finalised.  The new agreement does contain critical rights of legal redress for EU residents that were missing from the original scheme.

However, the light is not all that bright.  Another part of the decision was to clarify that national DPAs have complete freedom to decide if their laws are being complied with or not.  Which means that even with a new scheme in place and agreed to by the majority, a single DPA could still challenge standardised agreements if they felt national law was being infringed.

Of course, all of this is also set to change again when the Data Protection Regulation is finalised – and who knows what impact this decision will have on those negotiations. As for Max Schrems and Facebook, their battle is also not yet over.  The decision on whether or not Facebook has actually breached EU law now goes back to Ireland’s Data Protection Commissioner, because Facebook’s EU operations are based there.

Suffice to say – we are a long way from hearing the end of this story.