General Data Protection Regulation Top Ten Issues

The ink is barely dry on the draft, but the EU General Data Protection Regulation (GDPR) looks set to change the regulatory environment for personal information not just in the EU, but around the world. Its aim is to create a legal infrastructure for the use of personal data that is fit for purpose, both today and in the future.

The GDPR was designed to increase legal certainty with regards to information flows both within the EU’s borders and beyond. It also introduces stronger consumer protections, with requirements for greater transparency and accountability about how data is used by businesses, not-for-profits and governments alike.

This is intended to give individuals increased trust in data practices.  Consumer research in the last few years has shown consistently high levels of concern and lack of trust in this area, and this is believed to be a potential brake on the future growth of digital technologies.

However, in order to achieve these goals the GDPR does come with some stings in its tail. It places much greater requirements on businesses to communicate effectively with customers, and obtain much clearer consent for the use of their data.  Organisations also have to provide customer choice mechanisms, and there is a greater emphasis on documenting data processing activity. And then of course there are the fines.

At over 200 pages it is a very wide-ranging instrument.  However, for those who haven’t had time to read it yet, here are what we think will be the top 10 issues for most organisations.

1.  A broader definition of Personal Data

As we predicted earlier, the scope of what constitutes ‘personal data’ has explicitly been broadened to include any information ‘relating to’ an individual. This specifically includes ‘online identifiers’, so cookies and the advertising IDs seen in the mobile ecosystem will be caught, along with anything that contributes to identifying an individual, or links to such identifying information. This has widespread implications for online tracking in particular.

2.  A higher bar for consent

Whilst the final text shied away from requiring explicit consent, except where special categories of (sensitive) data are concerned, there is still much emphasis on gaining consent through active user mechanisms like tick boxes.

A key part of the test of the validity of consent is whether consumers understand what they are agreeing to, and are given a meaningful choice. There is also a significant shift in the burden of proof.  You will need to be able to provide evidence that you obtained consent from specific data subjects, which is going to require much better record keeping for many organisations.
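
To make the record-keeping point concrete, the sketch below shows one shape a consent record could take. It is a minimal illustration in Python; the field names and structure are our own assumptions, since the Regulation prescribes no particular format, only that consent be demonstrable.

```python
# A minimal sketch of a consent record kept as evidence. The schema is
# illustrative, not prescribed by the GDPR.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    subject_id: str        # your internal identifier for the data subject
    purpose: str           # the specific purpose consented to
    wording_shown: str     # the exact text the subject saw
    mechanism: str         # how consent was signalled
    given_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    withdrawn_at: Optional[datetime] = None  # consent can later be withdrawn

record = ConsentRecord(
    subject_id="cust-1842",
    purpose="email marketing",
    wording_shown="Tick here to receive our monthly newsletter.",
    mechanism="unticked checkbox on signup form v3",
)
```

The essential point is being able to show, for a specific person, what they agreed to, when, and through what mechanism.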

3.  Data Protection Officers

Although not a universal requirement, many organisations will be required to appoint a Data Protection Officer (DPO) to oversee data uses and ensure compliance with the law. They will be mandatory in the public sector, but for private sector organisations the key test will be whether the organisation is involved in “systematic monitoring of data subjects on a large scale”.  It is not yet clear how ‘large scale’ will be interpreted.

Earlier, more detailed requirements for the skills and experience of the DPO, and guarantees over their employment, have been dropped, but a key issue in the short to medium term will be a shortage of the right people to fill such roles.

DPOs can, however, be outsourced, which may create a market for new services, especially to cater for the needs of smaller businesses.  The DPO responsibilities can also be given to someone alongside other work within the organisation, as long as this does not create a conflict of interest.  So training existing staff for the role could be a viable option for many.

4.  Transparency and Accountability

The GDPR scraps the need for controllers to register with their Data Protection Authority (DPA), but replaces this with a requirement to both better inform data subjects about practices and rights, and to keep records that can be made available on request – such as in the event of a data breach or a compliance complaint.  Such records are about demonstrating that the organisation has thought through the impact of its systems and processes, and made informed choices about how to comply with the GDPR.  The Data Protection or Privacy Impact Assessment (PIA) is one example of such documentation.  It is intended that a PIA will show that an organisation has considered the risks associated with its particular personal data practices, and taken reasonable steps to control or mitigate them.

There are also new requirements on the level of detail that organisations must provide to data subjects about their practices, as well as a need to make sure that this information is both accessible and easy to understand. In particular there is a need to explain the logic behind decisions made on the basis of analysing personal data – which may have particular significance in some sectors that have relied on such processes being largely secret. Organisations are also expected to inform subjects about their rights and how to exercise them.

5.  Data Protection by Design and Default

Although references to this have been cut back in comparison with earlier versions of the text, the GDPR requires that systems and processes be designed with the principles of data protection in mind. Particular emphasis is placed on collecting only the data necessary to fulfil specific purposes, discarding it when it is no longer required, and protecting data subject rights.

It also sets up the possibility for the development of certifications and codes of practice that organisations can follow to help meet these requirements.  Keep an eye out for these as they develop.  In particular we expect DPAs to get involved in this area: they will be losing their registration fees and therefore needing new sources of income.  In the UK the Information Commissioner’s Office (ICO) has already been developing this idea, so expect it to continue. Trade bodies are also likely to have a role to play here.

6.  The Right to Erasure and Data Portability

These new data subject rights are likely to pose challenges for many organisations. The right to erasure is a clarification of the much talked about ‘right to be forgotten’.  Although the circumstances in which the right can be exercised have been made clearer, it still needs to be balanced against other rights and obligations.

The right to have a copy of your data in a machine readable form to transfer to another provider may be difficult at first, but it could also lead to better systems interoperability in the longer term – which is already a growing technology trend.  In particular this provision may facilitate the development of the market for ‘personal data stores’, an idea that has long been talked about, but not yet fully realised as providers have struggled with sustainable and scalable business models.
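
As a thought experiment, a portability export need not be complicated to begin with. The sketch below is a minimal Python illustration of the idea – pull together what you hold on one person and emit it as structured JSON. The store and field names are invented for the example.

```python
# A minimal sketch of a machine-readable export in the spirit of data
# portability. The data stores and field names are illustrative.
import json

def export_subject_data(subject_id, profile_store, order_store):
    """Assemble everything held on one subject into a portable document."""
    export = {
        "subject_id": subject_id,
        "profile": profile_store.get(subject_id, {}),
        "orders": order_store.get(subject_id, []),
    }
    return json.dumps(export, indent=2, default=str)

profiles = {"cust-1842": {"name": "A. Example", "email": "a@example.com"}}
orders = {"cust-1842": [{"item": "widget", "date": "2015-06-01"}]}
print(export_subject_data("cust-1842", profiles, orders))
```

In practice the hard part is not the serialisation but knowing where all of a subject’s data lives – which is exactly the kind of mapping the GDPR’s documentation requirements encourage.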

7.  Removal of Subject Access Request Fees

Data subjects have a right to know whether or not an organisation is processing their personal data, what that data is and the purposes of the processing.  The GDPR removes the ability to charge an upfront fee for providing such information.  Current allowable fees don’t come close to covering the cost of a Subject Access Request (SAR), but they are seen as a deterrent to time wasters, and it is feared that removing them could open the floodgates to many more SARs, pushing up costs.

Companies will be allowed to charge for subsequent copies of the same data, which may reduce the risk of this to some extent. However, it may be worth investing in making sure you can respond to such requests as efficiently as possible, which will not be easy in many cases.

8.  Reporting Data Breaches

Data controllers will be required to report a data breach to their DPA, unless it is unlikely to represent a risk to the rights and freedoms of the individuals concerned. This qualification may be difficult to judge, so in many cases it will be safer to notify. The notice must be made within 72 hours of becoming aware of the breach, unless there are exceptional circumstances, which will have to be justified.

Where the risk to individuals is high, the data subjects themselves will also need to be notified, although no specific timescale is set for this.  It is also worth noting that the DPA can instruct an organisation to inform data subjects if it hasn’t already, so we can expect to see further guidance on the circumstances in which it would be correct to do so.

9.  Fines

The GDPR very deliberately raises the bar in terms of the ability of DPAs to issue fines for breaches of the rules.  These can go as high as 4% of global turnover.  Not only are such fines designed to ensure data protection becomes a board-level issue; by taking worldwide revenues into account, they also seek to sidestep attempts by multinationals to engage in fine-avoidance business structures.

It is also worth noting that fines can be levied without the necessity to demonstrate harm – although the largest ones will likely be reserved for cases where data subjects have directly suffered damages.

10.  Data Processor Responsibilities

Organisations that only process data on instructions from their client are not directly covered by the current data protection regime.  Their actions are assumed to be governed by their agreement with the customer, who, as the data controller, is directly responsible for the actions of the processor. This all changes under the GDPR: processors now have direct legal obligations and responsibilities.

In particular this means that processors can in certain circumstances be held directly liable and be required to pay compensation to a data subject. It will therefore become very important to establish the contractual relationships and liabilities of the different parties in a controller/processor relationship, and the price of some processor services may rise to offset the additional risk and insurance costs.


We hope you find this useful.  In future posts we will look at more details of what you can do to prepare, as well as looking into each of these areas in more detail.

In the meantime, if you have any questions and would like to know more about how the GDPR might affect your business, do get in touch and we will be happy to help.


Learning From Ashley Madison


The recent theft and subsequent leaking of the personal information of users of the Ashley Madison dating site for married people, and the other brands in its stable, is not the biggest data breach the world has seen in the last few years, but it is quite probably the most controversial.

Whatever people choose to think about the basic premise of the business, or the people that signed up to its services, the hacking and subsequent release of the data is illegal and quite likely to lead to serious harm for some of the users of the site.

It is therefore right that everybody who deals in the handling of personal data should look to see what they can learn from this event.

Context is King

The sensitivity of information, and therefore the lengths one should go to in order to protect it, often depends more on context than on the information itself.

Email addresses are personal information, yet they aren’t generally thought of as particularly sensitive or in need of close protection.  After all, they exist for communication, so they are designed to be shared.

An email address stored in a database of people supposedly looking for an extra-marital fling, however, is a completely different ball game.  As has been pointed out elsewhere, some of the emails leaked indicate users in Saudi Arabia, where adultery is a capital offence.  Though we don’t know it yet, the hackers may have condemned some people to death.

At the very least it seems inevitable that the marriages and careers of many people will be ruined.  It doesn’t even matter if no wrongdoing took place: the suspicion created by the mere presence of an email address in the data will be enough to change some people’s lives forever.

Transparency, Transparency, Transparency

Amazingly enough, the privacy policy on the site is not that long or complicated.  However, it is clear that different versions are served up to different users.  On first access I noted my location was recorded as the UK, and I got a policy from Praecellens Limited, operating out of Cyprus.  However, I could switch my location to the USA, and then be served the policy from Avid Dating Life Inc. of Canada.

What strikes me is that even a cursory reading rings huge alarm bells.  For a start, the Cyprus policy, presumably intended for EU readers, is different, but it still uses US-style language, with lots of references to PII rather than Personal Data. So immediately it seems like a half-hearted job.

More importantly, it makes clear that although some information ‘may be considered as sensitive’ – the policy allows for any personal information to be sold to unspecified third parties for marketing purposes.  At the same time the policy also stresses how important privacy is to the business.

Of course we know that nobody reads privacy policies, and this seems to prove it.  I find it difficult to believe that anyone contemplating embarking on a clandestine affair would knowingly agree to such unspecified information sharing, which could easily lead to legal disclosure of their use of the site.  All of which tells me that there need to be clearer ways of surfacing this kind of information, and clearer indications of consent – something of course being called for under the EU Data Protection Regulation.

Beware the All Seeing Cookie

Running a very brief scan over a few of the public pages on the site we identified trackers from Google, Facebook and Twitter on the ‘Infidelity News’ blog.  These are all organisations that can tie online behaviour directly to real identities, meaning the site is directly leaking at the very least ‘interest’ data about identified individuals in a way that could immediately impact their wider social profiles unless they are extremely careful.
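
The scan itself needs nothing sophisticated. The sketch below is a rough Python illustration of the approach – fetch a page and flag script sources served from well-known tracking domains. The domain list here is a small illustrative sample, not an authoritative catalogue of trackers.

```python
# A rough sketch of a quick tracker scan: list third-party script
# sources on a page and flag known tracking domains.
import re
import urllib.request

KNOWN_TRACKERS = {
    "google-analytics.com",
    "connect.facebook.net",
    "platform.twitter.com",
}

def scan_for_trackers(url):
    """Return script sources on the page that match known tracker domains."""
    with urllib.request.urlopen(url) as response:
        html = response.read().decode("utf-8", errors="replace")
    # Pull the src attribute out of every <script> tag.
    sources = re.findall(r'<script[^>]+src=["\']([^"\']+)', html, re.IGNORECASE)
    return [src for src in sources
            if any(domain in src for domain in KNOWN_TRACKERS)]

for hit in scan_for_trackers("https://example.com"):
    print("Tracker found:", hit)
```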

However, the site clearly ignores EU cookie law requirements for consent.  It doesn’t even notify visitors, let alone give them some control.  Yet this is very clearly the sort of site that users might want to keep out of their browsing history.  Not giving users simple controls is not only a breach of the cookie rules, it shows either a cavalier attitude to privacy, or ignorance of the power of the cookie to identify individuals.

Privacy is not Security

It also seems that, despite the proclaimed importance of privacy, little thought was put into it when designing the system.

Email addresses were allowed onto the system unverified – breaking data protection rules about accuracy of data, as well as opening up non-users of the system to potential harm. Although the company claims that sensitive information is encrypted at rest on disk, as noted above, in this case even emails are sensitive, and were clearly not encrypted – or at least not encrypted well enough to prevent their release.
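
Verifying addresses before treating them as belonging to a user is a solved problem – double opt-in. The sketch below is a minimal Python illustration of the flow; the in-memory token store and the commented-out mail call are stand-ins for real infrastructure.

```python
# A minimal sketch of double opt-in email verification: the address is
# only trusted once its owner clicks a single-use, expiring link.
import secrets
from datetime import datetime, timedelta, timezone
from typing import Optional

pending = {}  # token -> (email, expiry); a real system would persist this

def start_verification(email):
    token = secrets.token_urlsafe(32)
    pending[token] = (email, datetime.now(timezone.utc) + timedelta(hours=24))
    # send_mail(email, f"https://example.com/verify?token={token}")
    return token

def confirm_verification(token) -> Optional[str]:
    """Return the verified address, or None if the token is bad or expired."""
    entry = pending.pop(token, None)
    if entry is None:
        return None
    email, expiry = entry
    return email if datetime.now(timezone.utc) < expiry else None
```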

Similarly, it has been widely reported that the password reset feature can be used to reveal whether a given email address is registered on the site.
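
The standard defence against this kind of account enumeration is to respond identically whether or not the address is on file. A minimal sketch, with the user store and mailer stubbed out as assumptions:

```python
# A minimal sketch of an enumeration-safe password reset: the response
# is identical whether or not the address is registered.
def send_reset(email):
    # Stub: a real system would email a single-use, expiring reset token.
    pass

def request_password_reset(email, users):
    if email in users:
        send_reset(email)  # only registered users actually get an email
    # Same response either way, so the feature leaks nothing.
    return "If that address is registered, a reset link has been sent."

print(request_password_reset("a@example.com", {"a@example.com": "..."}))
```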

Some reports have suggested that security on the site was generally better than on many others, which rather neatly highlights that security and privacy are two different realms. I don’t know whether or not the company carried out any kind of privacy impact or risk assessment.  However, it seems obvious now that not enough attention was paid to privacy concerns in the development of the platform and its services.

A Watershed Event?

The nature of the business makes it an obvious target for malicious attack. More thought about privacy would not have made a breach any less likely to happen, but it might have reduced its impact.

The very nature of the potential damage here could in fact become a force for change in the way that the law looks at privacy harms.  Most law courts reduce harm in data breaches to financial loss.  Many actions fail because direct financial harm is very difficult to establish.

In this case, financial harm is likely to be way down the priority list of members.  It will be the harm to their personal lives – in many cases irreparable – that will almost certainly be the focus of the inevitable law suits.  How the courts deal with this could open the door for the wider recognition of non-financial harms in breaches of privacy – and that may make this a watershed event.

UPDATE 24 August:

Sadly, just three days after writing this post, my worst fears appear to have come true: two Ashley Madison users who had their personal details published, have reportedly taken their own lives as a direct result. My deepest sympathies go out to their loved ones.

Privacy and Social Media: Incompatible or Indispensable?

The growth of social media platforms, and particularly their seeming indispensability to the lives of the digital natives, is often used as evidence of the death of both the desire for privacy and its attendant social relevance. In a post-Facebook world, aren’t privacy worries increasingly confined to the old folks’ home and a few wonks? Nobody reads privacy policies, so nobody cares.  QED.

Europe’s data privacy rules are about to be updated for the social media age.  A lot of effort over many years has gone into re-writing them.  Some say they will become too restrictive, others not protective enough of consumers’ interests, but all agree they will include the potential for massively increased fines for non-compliance.  But why go to all that effort if nobody really cares anymore?

In October 2014 the highly respected Samaritans, a charity trying to stop vulnerable people from hurting and killing themselves, released the Samaritans Radar app with no small amount of fanfare.  Anyone worried about a friend could sign up to get an alert if they posted something on Twitter that the Radar algorithm interpreted as a sign that they needed help.  Sounds great, doesn’t it?  The Samaritans were very proud, taking the public data of tweets and putting it to good use to look out for vulnerable people.

There was an immediate outcry from privacy experts; the app was taken down within a few days under public pressure, and was also investigated by the UK data protection regulator, the Information Commissioner’s Office (ICO).

Why? All they wanted to do was to use publicly available information to help people help friends they might be concerned about.

The problem was a failure to look at the full picture.  The app was making judgements about people’s mental health without their knowledge and sharing them with a third party.  Anyone could get this analysis on anyone else, regardless of their actual motives and relationship with the person concerned.

The app was withdrawn before a full investigation could take place, not because of the risk of enforcement but the much bigger potential risk to reputation, which might have undermined the trust the Samaritans rely on to do their very valuable and important work. However the ICO still concluded that the app “did risk causing distress to individuals and was unlikely to be compliant with the DPA” [The UK Data Protection Act].

This extreme example highlights some important issues.  Data privacy laws are complex, and though they may fail to keep up with changes in technology, there are some underlying principles that reflect long established social norms and cultural expectations.  Practices may change quickly on the surface, but deep seated values shift much more slowly.

The world of social media sits at the fulcrum of the balance between the private and the public. This means that having a sophisticated understanding of what is both legal and acceptable is vital to the success of social platforms. People don’t read privacy policies because they rely on trust much more than terms and conditions.  Established privacy principles and laws play a vital role in building and maintaining that trust.  However trust can be lost very quickly, at a cost much higher than any regulatory fine, if the platform is perceived to have breached it.

Social platforms should pay attention to data privacy laws not just to avoid enforcement but because they say something very important about culture and expectations.  They might be able to ignore some of the rules some of the time and get away with it for a while, but in the long term my bet is that, faced with a choice between privacy and any individual platform, privacy will win out.

This article was originally published on the Global Marketing Alliance website: http://www.the-gma.com/privacy-and-social-media-incompatible-or-indispensable

Lessons from London’s Leading Privacy Conference

The annual Data Protection Intensive, organised by the International Association of Privacy Professionals (IAPP), is a two-day conference bringing together leading privacy experts from many different countries and industries. This year’s conference, which took place in mid-April, was my first, and I found it very enjoyable and informative.

Privacy and Data Protection have been growing in importance to business in the last few years for a number of reasons.  Consumer data is now a key asset for many types of organisations. Its increasing availability, volume and granularity, coupled with the low cost of storage and analysis, has made it a valuable commodity and increasingly a source of competitive differentiation.  At the same time Edward Snowden, high-profile hacking and cyber security breaches, and debates about privacy vs. freedom of speech have all played their part in making data protection a mainstream media story, raising both awareness and fear over the potential dangers of its misuse.

The proposals for the new EU General Data Protection Regulation, and what it might mean for compliance programmes, were unsurprisingly at the top of the agenda at the conference this year, as we inch ever closer to an agreed text.  Much time has been, and continues to be, given over to analysing the proposals in minute detail. However, to my mind the real take-aways from the conference were all about the big picture.

  • The hockey-stick curve of growth in IAPP membership is a testament to the fact that, contrary to what some would like us to believe, privacy is not dead but thriving, and some may even say it is on the edge of entering a golden era.
  • Even in the biggest and most privacy-mature businesses in attendance there was still a sense of plenty of room to improve and evolve.
  • The dominance of a legalistic approach to privacy management is on the wane, with a move towards more of a business-needs focus.
  • Privacy management activity is still quite low on the corporate agenda, which means budgets are very tight.
  • There is a need for tools and technologies to make privacy management more effective and efficient.
  • Many organisations think of their key privacy issues and solutions as being unique or special to them in some way.

This last point is critical in my view and represents a potential barrier to the one above it – the development of new tools and technologies.  Whilst uniqueness is often true at the detail level, it is unlikely to be true at the broader level of organisational activity.

Take for example the use of customer data for marketing.  A large proportion of organisations will essentially hold the same data about customers (contact details, purchase histories etc.), and use it in very similar ways (segmenting, targeting, upselling), even if the details of what they hold and the way they use it are different.

A lot of companies will also have very similar processes for handling employee data, and a similar set of partner relationships for payroll, recruitment etc. When you look at specific verticals, and certain types of data use within them – like health providers and financial services – the similarities are likely to become even more pronounced.

I believe that one of the biggest challenges for the privacy profession may be to get past the ‘Not Invented Here’ syndrome when it comes to privacy management.  This means learning to focus on those similarities rather than differences, which is key to opening up new opportunities for shared learning, better benchmarking, and a greater understanding of difficult issues like consumer privacy risks.

It is also when you have recognised similarities that you can start to leverage technology more to handle the standard, routine aspects of any task – which of course frees up human resources to deal with the more difficult, individual issues.

PIAs (Privacy Impact Assessments) are a good example of where technology can standardise and streamline the process of gathering information and enabling privacy teams to make better informed decisions.  By reducing the time and cost involved in managing PIAs, it becomes easier to carry them out more frequently and in smaller projects. This in turn could be one of the most effective ways of both increasing awareness of privacy issues within the organisation as well as encouraging the adoption of more privacy centric systems and processes.

New Draft of Data Protection Regulation Released

Shortly before Christmas a new draft version of the Data Protection Regulation was released by the Council of Ministers.  The text is still being debated but this certainly shows the direction the ministers are heading in, so is worth some analysis.

Once approved, this will become the third version of the text, following on from the original produced by the Commission in 2012, and then the one approved by the Parliament in 2014.

Once the Council version is finished, there will be a trilateral negotiation to agree the final piece of legislation. Comparing this latest Council draft with the version produced by the Parliament in particular gives some indication of how difficult that negotiation might be, and therefore how long it will take.

Key Issues:

Definition of Consent.  The Council text weakens consent by removing the requirement that it must be ‘explicit’, preferring the term ‘unambiguous’, a significant departure from both the Commission and Parliament. Although all texts support the interpretation in Recital 25 that consent should be indicated by ‘affirmative action’, the Parliament further strengthened this by adding that ‘mere use of a service’ should not constitute consent.

This issue is particularly relevant to web services, which often seek to rely on continued browsing of a site as an indicator of consent to privacy practices. The traditional alternative is putting some mechanism in place to require users to signify consent – such as tick boxes.  However this can put some people off using a service by creating a barrier to entry, or lead to ‘consent fatigue’ – where they blindly agree to terms and conditions they haven’t read.

We have seen this battle played out before – most recently with the consent requirements in the cookie law.  I think it is safe to say that this is going to continue to be a key battleground right down to the wire.

Information Requirements. Allied to consent is the need to provide information so that data subjects can understand what it is they are consenting to. Here the Council text is far less prescriptive than the Parliament one, which sought to create a highly standardised format for information notices, with clear and consistent language and iconography. The aim was to find a model that would make privacy notices easier to understand, which many have argued is a highly laudable goal.  However the format of the notice, and especially the design of the icons, was not well received in the design community in particular.

Data Protection Impact Assessments and Data Protection Officers. The Council has embraced the ‘risk based approach’ to data protection, and nowhere is this clearer than in the modifications to the requirements for carrying out Data Protection Impact Assessments and employing DPOs.  The Parliament version of the text is prescriptive, requiring DPIAs and DPOs in most circumstances, with exceptions for small businesses and small-scale data usage.  By contrast the Council makes DPOs voluntary for most organisations and requires DPIAs only for ‘high risk’ data processing activities.

Whilst this may lift administrative burdens in many circumstances, it also leaves much greater room for interpretation, especially around what constitutes ‘high risk’, and this potentially results in greater uncertainty and widely differing practices, which in turn could lead to weaker consumer protections.

Harmonisation.  One of the original stated goals of the Regulation was to harmonise both rules and practices across the EU – creating a level competitive playing field and contributing to the Digital Single Market initiative.  This idea is particularly attractive to multi-national operators – but one of the hardest to deliver, because it reduces the authority of individual countries through their national regulator.

That makes it a highly politicised issue.  True harmony might weaken rules in one country, whilst strengthening them in others, and this has resulted in objections to the same wording, but for very different reasons – Germany and the UK being prominent examples.  The Council text has a number of provisions in it which appear designed to increase the autonomy of individual country regulators in comparison with the Parliament and Commission texts, leading to a weakening of the ‘one stop shop’ principle.

Also of significant interest in this draft are the sheer number of notes indicating the continued concerns of individual member states.  This tells us that agreement on this document may still be a long way from being reached.

January 2015 saw the start of the six-month Latvian presidency of the EU, and whilst they have made getting a final position from the Council their top priority, the continuing differences have already led prominent MEP Jan Albrecht, who led the Parliament’s work on the legislation, to predict that we won’t see finalisation of the Regulation much before the end of this year.

What is High Risk Data Processing?

The idea of a ‘risk based approach’ to privacy and data protection compliance issues has been around for a number of years, and is increasingly being embraced by regulators and legislators.

The latest draft wording of Chapter IV of the GDPR agreed by the Council of Ministers puts the risk based approach in a very central role. Under this draft, a significant range of legal obligations only come into effect if the data processing represents a high risk to the rights and freedoms of the individual.  This includes the need to conduct a Privacy Impact Assessment, report data breaches, or in some cases appoint a Data Protection Officer.

So working out whether or not your organisation is doing any processing that could be seen as high risk is very important.  Which means there needs to be some kind of objective measure of what high risk activity is.

We get some steers from the Regulation in this respect.  Activities that create a risk of ‘discrimination, identity theft, fraud or financial loss’ are given as clear examples of high risks.  So let’s look at one of the most common of these problems, identity theft.  What kind of processing can create a risk of identity theft?

Traditionally identity theft is thought of as activities like opening a bank account, taking out a loan, getting a passport, driving licence, or obtaining state benefits, all done in someone else’s name, principally for personal gain or wider fraud purposes.

It is easy to see how this can be damaging, and there are various existing checks and balances at banks and government agencies to make this difficult, including a requirement to provide quite rich and varied data when first establishing or proving your identity to the agency involved.

However, identity increasingly also encompasses the online world.  Someone else being able to take control of your social media presence, or to impersonate you in places where you have no pre-existing social identity, especially if that involves aspects of your real world identity (such as a photo), could be seen as a form of identity theft that is equally or even more damaging to the individual.  If someone can take over and damage my reputation using my stolen online identity, that could actually do more long term damage financially, due to lost earnings opportunities, than a one-time fraudulent charge on my credit card.

So how easy would it be to take control of some aspect of my online identity, or impersonate me online?

Online identities are generally protected by login gateways, and these are primarily limited to a username and a password. Often the username is an email address, partly because it is then guaranteed to be unique. We are also told frequently how much we re-use passwords across different services, as well as how easy those passwords can be to crack.

It is common practice amongst online criminals that when they have obtained login details from one service, they attempt to re-use them across a multitude of others.  This means that your online identity is only as secure as the most insecure site you use it on.

So even if as an organisation you are confident in your own security and use appropriate encryption standards, you have no way of defending against the same login credentials being obtained via another, less secure service, and therefore used for identity theft.
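
You cannot stop users re-using passwords elsewhere, but you can at least ensure that a leak of your own credential store does not hand attackers ready-to-use passwords. A minimal sketch using PBKDF2 from Python’s standard library – the iteration count is an illustrative figure, not a recommendation:

```python
# A minimal sketch of password storage with a per-user salt and a slow
# key-derivation function, so a leaked database is costly to crack.
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative; tune to your own hardware budget

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
```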

It therefore follows that any application that relies on a user-created login identity, especially if that includes an email address, should be considered to be carrying out high risk data processing, creating a significant risk of identity theft.

It might also be argued that any processing of email addresses, even on their own, creates a risk of identity theft, given the general role they play in online identity.

So that would impact almost all organisations that are operating online – as the vast majority will at some point collect email addresses.

And what are the organisations least likely to adequately secure that information from loss or theft?  The answer is small organisations and start-ups with limited experience and budgets, and who have a prime focus on getting their innovations to market, or marketing to customers.

And yet, these are precisely the organisations that the Council of Ministers also argued needn’t be subject to the same level of scrutiny or administrative burden.  However, the reality is if they don’t get it right, they increase risks elsewhere in the online ecosystem.

The idea of a risk based approach to data protection is a very interesting one.  It encourages a focus on those aspects of operations that have the most potential to create harm. However you have to take into account that high risk data processing does not necessarily mean rare or uncommon, or that it only takes place within large companies.

Privacy Impact Assessments and the DPR

One of the key obligations the EU Data Protection Regulation will impose on organisations is a requirement to conduct what are officially called Data Protection Impact Assessments but are more commonly known as Privacy Impact Assessments, or PIAs for short.

PIAs are not a new concept; they have in fact been used in some countries and specific industry sectors for several years.  For example they are used widely in big IT companies like IBM and HP, and they are already mandatory in the UK for most public sector bodies.

The big change however is that under the DPR, many more smaller organisations will have to carry them out for a lot of their data processing activities, or at least be able to justify when a PIA is not necessary in certain circumstances (spoiler alert: cost alone will not be a valid reason).

The problem with this for many people is that Privacy Impact Assessments have a reputation for being time-consuming, requiring a lot of managerial and expert input and extensive analysis by privacy law experts, and, as a result, for being expensive.  Those who would rather avoid having to produce them often paint a picture of a box-ticking exercise that gets in the way of innovation and progress, particularly for small companies.

However there is a bit of a chicken-and-egg situation going on here.  PIAs are generally big and expensive because they are used for big budget projects where the privacy issues are complex and decisions taken could impact thousands if not millions of people.  In such situations it is absolutely right that considerable effort is taken to reduce risks that could lead to significant problems for large numbers of people.

However, it is perfectly possible to apply the same principles to smaller projects in a way that is both manageable and proportionate.

By asking the right questions to the right people at the right stage in the development cycle and using the principles of triage-based assessment, an organisation can quickly distinguish between different levels of risk – and then use that information to decide where more effort is justified.
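
To make the triage idea concrete, the sketch below shows one possible shape for such a screening step in Python. The questions, weights and thresholds are entirely illustrative assumptions – the point is the mechanism, not the numbers.

```python
# A minimal sketch of triage-based PIA screening: a few yes/no
# questions, a simple weighted score, and a risk band that decides
# whether a fuller assessment is justified.
SCREENING_QUESTIONS = {
    "processes_sensitive_data": 3,       # health, beliefs, sexuality etc.
    "large_scale_monitoring": 3,
    "collects_data_on_children": 3,
    "automated_decision_making": 2,
    "data_shared_with_third_parties": 2,
    "new_or_untested_technology": 1,
}

def triage(answers):
    """Map yes/no answers to a risk band for the project."""
    score = sum(weight for question, weight in SCREENING_QUESTIONS.items()
                if answers.get(question, False))
    if score >= 5:
        return "high - full PIA recommended"
    if score >= 2:
        return "medium - targeted review of the flagged areas"
    return "low - record the screening outcome and proceed"

print(triage({"processes_sensitive_data": True,
              "data_shared_with_third_parties": True}))  # high band
```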

As the use of personal data becomes ever more central to economic growth and society at large, the organisational costs of losing or misusing it are increasing.  The headlines are about regulatory fines, but the real cost is loss of trust from citizens and customers. This is something which small companies in particular can struggle to recover from as they tend to have a less robust brand reputation to see them through.

A well designed PIA can quickly and efficiently distinguish between high and low risk data practices and allows smaller organisations to focus precious resources where they can have the biggest positive impact, whilst avoiding being side-tracked by trivia.

Far from being a threat to innovation in small businesses, a PIA can actually help them learn from the experience of large companies, and even help them punch above their weight.