Pokémon Go and Location Privacy

There is one species of Pokémon that even the most dedicated Pokémon Go players are unlikely ever to catch, and that of course makes it all the more desirable.

Privachu like to be left alone to go about their lives. They are not unfriendly and can be quite gregarious. Nor are they as rare as one might think, given how difficult they are to get hold of. What makes Privachu different from all other Pokémon is that they choose when and how to reveal themselves, rather than simply broadcasting their location to anyone who might want to find them. And of course they will only reveal themselves to others they trust not to pass the information on to people they do not want to be found by.

OK, they don’t exist really, I’ve just made them up (though if anyone from Niantic wants to create Privachu, I am willing to be reasonable on the royalties – do get in touch).

Pokémon Go, the augmented reality, location-based mobile game, is currently taking the world by storm, but it has also been the source of significant concern about the amount of personal data collected by the app, and how that data may be shared. This is especially important because the game is played largely by children.

Much of the early privacy concern focused on the fact that users appeared to be required to give Niantic, the company behind the game, full access to their Google account (one of the main ways of registering in the game), which would include all their contacts and any documents stored in Google Docs.

However, it was fairly quickly revealed that this was actually the result of a configuration error, which was rapidly corrected, and that Niantic did not make use of, or try to access, any of the extra information it didn’t need to verify the identity of the player. Nevertheless, even this short-lived issue might have impacted millions of people, and should serve as a salutary lesson in putting privacy thinking at the heart of the user experience design process.

The long-term privacy issues with Pokémon Go, however, clearly centre on location. Of course, location-based digital services have been around for at least as long as the smartphone itself. Aside from the obvious ubiquity of connectivity, location-driven services are the smartphone’s killer app, the one that in many ways makes it worth all the investment.

What is perhaps different about Pokémon Go is that it is not simply collecting location data; it is actively incentivising large numbers of people to visit particular locations where Pokémon can be caught.

Yes, there are big questions around the privacy implications of sharing (or selling) location information with third parties, and those questions are already giving rise to investigations, notably in the USA and Germany.

What I think is more interesting is – how are decisions made about where to place PokéStops, and what Pokémon are to be found there? There is a huge potential here for a kind of targeted manipulation, the encouragement of particular audiences and profiles to visit specific locations. Niantic would be crazy if they didn’t see the potential in selling this capability, and I would be very surprised if on some level they are not already either doing it or thinking about doing it. There will be a powerful profit motive for it. Want to drive more visitors to your location? Pay for a particular Pokémon to make an appearance, or your competitor will.

Then of course there are also the unintended applications of the data. There have already been stories of crimes, even a murder, linked to the location data elements of the game. How long before the first major hack is uncovered?

Pokémon Go is going to be an interesting privacy story for quite some time, I think. Not simply because of its huge popularity, though that plays no small part: the use of location data is only going to grow over the coming years, and the issues are only going to get more complex. The popularity of Pokémon Go, and the huge amounts of data it generates, will almost certainly make it a pioneering proving ground for both the problems and, hopefully, the solutions.

Meanwhile, if you’d like to know where to find Privachu, you will have to wait for them to reach out, when they have learnt to trust you.

Optanon GDPR Compliance Manager

We have been working for several months now on a new platform to help organisations assess their readiness to comply with the EU General Data Protection Regulation (GDPR).

GDPR Compliance Manager will be released later this year as part of the stable of Optanon brand products that currently includes our Website Auditor and Cookie Consent solutions.

The platform will enable organisations to work out what changes they will need to put in place to meet the requirements of the GDPR before it comes into force. In addition, it provides planning and documentation functionality to support a change programme, as well as producing the accountability documentation that will be required.

We will be releasing more information in the coming weeks and months, but for now, here is a preview screenshot.


If you would like to know more about how Optanon GDPR Compliance Manager might help you, and arrange a demo, please give us a call or drop us an email.

Privacy and Social Media: Incompatible or Indispensable?

The growth of social media platforms, and particularly their seeming indispensability to the lives of the digital natives, is often used as evidence of the death of both the desire for privacy and its attendant social relevance. In a post-Facebook world, aren’t privacy worries increasingly confined to the old folks’ home and a few wonks? Nobody reads privacy policies, so nobody cares.  QED.

Europe’s data privacy rules are about to be updated for the social media age.  A lot of effort over many years has gone into re-writing them.  Some say they will become too restrictive, others not protective enough of consumers’ interests, but all agree they will include the potential for massively increased fines for non-compliance.  But why go to all that effort if nobody really cares anymore?

In October 2014 the highly respected Samaritans, a charity trying to stop vulnerable people from hurting and killing themselves, released the Samaritans Radar app with no small amount of fanfare. Anyone worried about a friend could sign up to get an alert if that friend posted something on Twitter that the Radar algorithm interpreted as a need for help. Sounds great, doesn’t it? The Samaritans were very proud, taking the public data of tweets and putting it to good use to look out for vulnerable people.

There was an immediate outcry from privacy experts; the app was taken down within a few days under public pressure, and was also investigated by the UK data protection regulator, the Information Commissioner’s Office (ICO).

Why? All they wanted to do was to use publicly available information to help people help friends they might be concerned about.

The problem was a failure to look at the full picture.  The app was making judgements about the mental health of people without their knowledge and sharing it with a third party.  Anyone could get this analysis on anyone else, regardless of their actual motives and relationship with the person concerned.

The app was withdrawn before a full investigation could take place, not because of the risk of enforcement but the much bigger potential risk to reputation, which might have undermined the trust the Samaritans rely on to do their very valuable and important work. However the ICO still concluded that the app “did risk causing distress to individuals and was unlikely to be compliant with the DPA” [The UK Data Protection Act].

This extreme example highlights some important issues.  Data privacy laws are complex, and though they may fail to keep up with changes in technology, there are some underlying principles that reflect long established social norms and cultural expectations.  Practices may change quickly on the surface, but deep seated values shift much more slowly.

The world of social media sits at the fulcrum of the balance between the private and the public. This means that having a sophisticated understanding of what is both legal and acceptable is vital to the success of social platforms. People don’t read privacy policies because they rely on trust much more than terms and conditions.  Established privacy principles and laws play a vital role in building and maintaining that trust.  However trust can be lost very quickly, at a cost much higher than any regulatory fine, if the platform is perceived to have breached it.

Social platforms should pay attention to data privacy laws not just to avoid enforcement, but because those laws say something very important about culture and expectations. Platforms might be able to ignore some of the rules some of the time and get away with it for a while, but in the long term my bet is that, faced with a choice between privacy and any individual platform, privacy will win out.

This article was originally published on the Global Marketing Alliance website.

New Draft of Data Protection Regulation Released

Shortly before Christmas a new draft version of the Data Protection Regulation was released by the Council of Ministers.  The text is still being debated but this certainly shows the direction the ministers are heading in, so is worth some analysis.

Once it is approved, this will become the third version of the law, following on from the original produced by the Commission in 2012, then the one approved by the parliament in 2014.

Once the Council version is finished, there will be a final trilateral negotiation to agree the definitive piece of legislation. Comparing this latest Council draft with the version produced by the Parliament in particular gives some indication of how difficult that negotiation might be, and therefore how long it will take.

Key Issues:

Definition of Consent.  The Council text weakens consent by removing the requirement that it must be ‘explicit’, preferring the term ‘unambiguous’, a significant departure from both the Commission and the Parliament. Although all texts support the interpretation in Recital 25 that consent should be indicated by ‘affirmative action’, the Parliament further strengthened this by adding that ‘mere use of a service’ should not constitute consent.

This issue is particularly relevant to web services, which often seek to rely on continued browsing of a site as an indicator of consent to privacy practices. The traditional alternative is putting some mechanism in place to require users to signify consent, such as tick boxes. However, this can put some people off using a service by creating a barrier to entry, or lead to ‘consent fatigue’, where they blindly agree to terms and conditions they haven’t read.

We have seen this battle played out before – most recently with the consent requirements in the cookie law.  I think it is safe to say that this is going to continue to be a key battleground right down to the wire.

Information Requirements. Allied to consent is the need to provide information so that data subjects can understand what it is they are consenting to. Here the Council text is far less prescriptive than the Parliament one, which sought to create a highly standardised format for information notices, with clear and consistent language and iconography. The aim was to find a model that would make privacy notices easier to understand, which many have argued is a highly laudable goal. However, the format of the notice, and especially the design of the icons, was not well received, particularly in the design community.

Data Protection Impact Assessments and Data Protection Officers. The Council has embraced the ‘risk based approach’ to data protection, and this is nowhere more clear than in the modifications to the requirements for carrying out Data Protection Impact Assessments and employing DPOs. The Parliament version of the text is prescriptive in its requirements, with DPIAs and DPOs being required in most circumstances, with exceptions for small businesses and small-scale data usage. By contrast, the Council makes DPOs voluntary for most organisations and requires DPIAs only for ‘high risk’ data processing activities.

Whilst this may lift administrative burdens in many circumstances, it also leaves much greater room for interpretation, especially around what constitutes ‘high risk’, and this potentially results in greater uncertainty and widely differing practices, which in turn could lead to weaker consumer protections.

Harmonisation.  One of the original stated goals of the Regulation was to harmonise both rules and practices across the EU – creating a level competitive playing field and contributing to the Digital Single Market initiative.  This idea is particularly attractive to multi-national operators – but one of the hardest to deliver, because it reduces the authority of individual countries through their national regulator.

That makes it a highly politicised issue.  True harmony might weaken rules in one country, whilst strengthening them in others, and this has resulted in objections to the same wording, but for very different reasons – Germany and the UK being prominent examples.  The Council text has a number of provisions in it which appear designed to increase the autonomy of individual country regulators in comparison with the Parliament and Commission texts, leading to a weakening of the ‘one stop shop’ principle.

Also of significant interest in this draft are the sheer number of notes indicating the continued concerns of individual member states.  This tells us that agreement on this document may still be a long way from being reached.

January 2015 saw the start of the six-month Latvian presidency of the EU, and whilst they have made getting a final position from the Council their top priority, the continuing differences have already led prominent MEP Jan Albrecht, who led the Parliament’s work on the legislation, to predict that we won’t see finalisation of the Regulation much before the end of this year.

What is High Risk Data Processing?

The idea of a ‘risk based approach’ to privacy and data protection compliance has been around for a number of years, and is increasingly being embraced by regulators and legislators.

The latest draft wording of Chapter IV of the GDPR agreed by the Council of Ministers puts the risk based approach in a very central role. Under this draft, a significant range of legal obligations only come into effect if the data processing represents a high risk to the rights and freedoms of the individual. These include the need to conduct a Privacy Impact Assessment, report a data breach, or, in some cases, appoint a Data Protection Officer.

So working out whether or not your organisation is doing any processing that could be seen as high risk is very important.  Which means there needs to be some kind of objective measure of what high risk activity is.

We get some steers from the Regulation in this respect. Activities that create a risk of ‘discrimination, identity theft, fraud or financial loss’ are given as clear examples of high risks. So let’s look at one of the most common of these problems, identity theft. What kind of processing can create a risk of identity theft?

Traditionally identity theft is thought of as activities like opening a bank account, taking out a loan, getting a passport, driving licence, or obtaining state benefits, all done in someone else’s name, principally for personal gain or wider fraud purposes.

It is easy to see how this can be damaging, and there are various existing checks and balances at banks and government agencies to make this difficult, including a requirement to provide quite rich and varied data when first establishing or proving your identity to the agency involved.

However, identity increasingly also encompasses our online presence. Someone else being able to take control of your social media accounts, or to impersonate you in places where you have no pre-existing social identity, especially if that involves aspects of your real-world identity (such as a photo), could be seen as a form of identity theft that is equally or even more damaging to the individual. If someone can take over and damage my reputation using my stolen online identity, that could actually do more long-term damage financially, through lost earnings opportunities, than a one-time fraudulent charge on my credit card.

So how easy would it be to take control of some aspect of my online identity, or impersonate me online?

Online identities are generally protected by login gateways, and these are primarily limited to a username and a password. Often the username is an email address, partly because it is then guaranteed to be unique. We are also told frequently how much we re-use passwords across different services, as well as how easy those passwords can be to crack.

It is common practice amongst online criminals, once they have obtained login details from one service, to attempt to re-use them across a multitude of others. This means that your online identity is only as secure as the most insecure site you use it on.

So even if as an organisation you are confident in your own security and use appropriate encryption standards, you have no way of defending against the same login credentials being obtained via another, less secure service, and therefore used for identity theft.
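To make the attack chain concrete, here is a minimal Python sketch of the scenario described above. The site names, email address, and wordlist are all invented for illustration; the point is simply that a password stored as an unsalted MD5 hash on one weak service can be cracked with a basic dictionary attack and then tried, as-is, against every other service the victim uses.

```python
import hashlib

# A leaked credential database from a hypothetical, poorly secured site:
# passwords stored as unsalted MD5 hashes (a practice still seen in real breaches).
leaked = {"alice@example.com": hashlib.md5(b"sunshine1").hexdigest()}

# A tiny dictionary of common passwords, standing in for the multi-million
# entry wordlists attackers actually use.
wordlist = ["password", "123456", "sunshine1", "qwerty"]

def crack(md5_hash, candidates):
    """Return the plaintext whose unsalted MD5 digest matches, if any."""
    for candidate in candidates:
        if hashlib.md5(candidate.encode()).hexdigest() == md5_hash:
            return candidate
    return None

# The attacker recovers the plaintext from the weakly hashed leak...
for email, md5_hash in leaked.items():
    password = crack(md5_hash, wordlist)
    if password:
        # ...and can now try the same email/password pair on every other
        # service the victim uses: credential stuffing in a nutshell.
        print(f"Cracked {email}: now reusable on any other site")
```

This is exactly why your own encryption standards are not a complete defence: the matching credentials may have been harvested somewhere entirely outside your control.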

It therefore follows that any application that relies on a user-created login identity, especially if that includes an email address, should then be considered as high risk data processing leading to a significant risk of identity theft.

It might also be argued that any processing of email addresses, even on their own, creates a risk of identity theft, given the general role they play in online identity.

So that would impact almost all organisations that are operating online – as the vast majority will at some point collect email addresses.

And which organisations are least likely to adequately secure that information from loss or theft? The answer: small organisations and start-ups with limited experience and budgets, whose prime focus is on getting their innovations to market, or marketing to customers.

And yet these are precisely the organisations that the Council of Ministers has argued needn’t be subject to the same level of scrutiny or administrative burden. The reality, however, is that if they don’t get it right, they increase risks elsewhere in the online ecosystem.

The idea of a risk based approach to data protection is a very interesting one. It encourages a focus on those aspects of operations that have the most potential to create harm. However, you have to bear in mind that high risk data processing is not necessarily rare or uncommon, and does not only take place within large companies.

Privacy Impact Assessments and the DPR

One of the key obligations the EU Data Protection Regulation will impose on organisations is a requirement to conduct what are officially called Data Protection Impact Assessments but are more commonly known as Privacy Impact Assessments, or PIAs for short.

PIAs are not a new concept; they have in fact been used in some countries and specific industry sectors for several years. For example, they are used widely in big IT companies like IBM and HP, and they are already mandatory in the UK for most public sector bodies.

The big change however is that under the DPR, many more smaller organisations will have to carry them out for a lot of their data processing activities, or at least be able to justify when a PIA is not necessary in certain circumstances (spoiler alert: cost alone will not be a valid reason).

The problem, for many people, is that Privacy Impact Assessments have a reputation for being time consuming, requiring a lot of managerial and expert input and extensive analysis by privacy law experts, and, as a result, for being expensive. Those who would rather avoid having to produce them often paint a picture of a box-ticking exercise that gets in the way of innovation and progress, particularly for small companies.

However there is a bit of a chicken-and-egg situation going on here.  PIAs are generally big and expensive because they are used for big budget projects where the privacy issues are complex and decisions taken could impact thousands if not millions of people.  In such situations it is absolutely right that considerable effort is taken to reduce risks that could lead to significant problems for large numbers of people.

However, it is perfectly possible to apply the same principles to smaller projects in a way that is both manageable and proportionate.

By asking the right questions of the right people at the right stage in the development cycle, and using the principles of triage-based assessment, an organisation can quickly distinguish between different levels of risk, and then use that information to decide where more effort is justified.
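As a sketch of what such a triage step might look like in practice, a screening questionnaire can be reduced to a simple scoring function. The questions, weights, and thresholds below are illustrative assumptions only, not drawn from any official methodology; real screening criteria would come from your own legal and privacy review.

```python
# Hypothetical PIA screening questions, each weighted by how strongly it
# signals a known risk factor.
SCREENING_QUESTIONS = {
    "processes_sensitive_data": 3,   # health, beliefs, sexuality, etc.
    "involves_children": 3,
    "large_scale_processing": 2,
    "shares_with_third_parties": 2,
    "uses_new_technology": 1,
    "collects_location_data": 1,
}

def triage(answers):
    """Map yes/no screening answers to a coarse risk level.

    answers: dict mapping question name -> bool. Thresholds are illustrative.
    """
    score = sum(weight for q, weight in SCREENING_QUESTIONS.items() if answers.get(q))
    if score >= 5:
        return "high"      # a full PIA with expert input is justified
    if score >= 2:
        return "medium"    # a lightweight PIA or targeted review
    return "low"           # record the screening answers and move on

# Example: a small app sharing location data with a third party.
print(triage({"collects_location_data": True, "shares_with_third_parties": True}))
```

The value of the approach is less in any particular scoring scheme than in forcing the questions to be asked early, and producing a recorded answer that can justify spending more, or less, effort on the full assessment.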

As the use of personal data becomes ever more central to economic growth and society at large, the organisational costs of losing or misusing it are increasing.  The headlines are about regulatory fines, but the real cost is loss of trust from citizens and customers. This is something which small companies in particular can struggle to recover from as they tend to have a less robust brand reputation to see them through.

A well designed PIA can quickly and efficiently distinguish between high and low risk data practices and allows smaller organisations to focus precious resources where they can have the biggest positive impact, whilst avoiding being side-tracked by trivia.

Far from being a threat to innovation in small businesses, a PIA can actually help them learn from the experience of large companies, and even help them punch above their weight.

New Online Audit Powers in France

France’s Data Protection Authority, the CNIL, has been given powers to carry out online audits to detect violations of the Data Protection Act, and to instruct data controllers to make changes.

Back in April this year, it signalled that this was coming when it announced its intentions to use such audits in its plan for the coming year.

This effectively means they will be able to scan websites and mobile apps for infringements at a distance, and the first time a company knows there is a problem may be when they get an enforcement notice.

It is not yet clear how these remote audits will take place or what technology might be involved, but the CNIL has indicated that it will carry out around 200 online audits this year.

This may be seen as a signal for a more pro-active approach to data protection enforcement – looking for compliance issues before they result in harms, or any complaints are received.

If this approach becomes more widespread, it will become more important than ever for companies to conduct their own regular audits and reviews of their online applications, to make sure they spot any problems first.
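As a very rough illustration of the kind of check such an internal audit might automate, a script could flag cookies a page sets before any consent has been given. The cookie names and the essential/non-essential split below are assumptions for the example; in practice the observed cookies would be captured with an HTTP client or headless browser, and the classification would come from your own cookie audit.

```python
# Cookies observed on a first, consent-free page load (hypothetical names).
observed_cookies = ["session_id", "_ga", "_fbp", "csrf_token"]

# Cookies the site has classified as strictly necessary; anything else
# set before consent is a potential compliance problem.
ESSENTIAL = {"session_id", "csrf_token"}

def audit_cookies(cookies, essential):
    """Return, sorted, the cookies set before consent that are not strictly necessary."""
    return sorted(set(cookies) - essential)

violations = audit_cookies(observed_cookies, ESSENTIAL)
for name in violations:
    print(f"Non-essential cookie set before consent: {name}")
```

Run regularly against your own pages, even a simple check like this gives you a chance of finding the issue before a regulator's scanner does.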

Our Privacy Auditor, which can be trialled as part of our free DPR toolkit, will help companies do just that.