Facebook Data Practices Breaking EU Law


With limited information and an absence of meaningful user choice, Facebook’s behavioural profiling and resulting targeted advertising practices do not meet the requirements for legally valid consent, according to a new report (PDF) from experts at the University of Leuven and Brussels Free University.

The report was commissioned by the Belgian Data Protection Authority (the CPP) and is an in-depth analysis of both data practices and the notices and policies given to users to explain them.

Some of the interesting findings of the report are:

  • Opt-out controls for advertising are neither clear nor comprehensive enough to meet the standard of consent under data protection law.
  • User privacy settings can lead to a false sense of control, as they do not really limit how Facebook can use data for advertising purposes, only what other users can see.
  • Facebook’s use of user-generated content for commercial purposes is particularly singled out, as controls to prevent this are largely absent.
  • Many of the contract terms could be considered as unfair practices under consumer protection laws, including the licences to re-use copyright material claimed by Facebook.
  • Some adverts may be considered equivalent to direct marketing communications, which by law require opt-out controls that are not provided.
  • Use of location data should be subject to opt-in rather than opt-out consent.

There was also an example of how user sentiment can be easily twisted or stripped of its context. An advert for a fitness programme appeared to have been endorsed by a user who had in fact been critical of it.

The authors also highlighted a point that is often overlooked: Facebook is not so much a social network as it is a ‘vast advertising network’ – and this has become especially true as it has acquired other businesses like Instagram and WhatsApp. These two in particular have helped it obtain more detailed information about its users.

Additionally, following the acquisition of the ad platform company Atlas Solutions from Microsoft, it has created a new opportunity to advertise to people while they surf, or ‘off platform’ as it is often called.

This capability is something I have commented on elsewhere – and I understand the research team will be looking at it in more depth in the future.

It’s Not the TV that’s Smart

Samsung came under fire recently when it was revealed that the privacy policy on its latest line of Smart TVs warned anyone bothering to read it not to say anything of a sensitive nature within range, because it might just send your conversation off to some unspecified other company for analysis.

Then just a few days later it came to light that the company was investigating an error that caused unexpected adverts to interrupt some programmes. One suspects the concern here was more about whether those ads were being properly paid for than about any privacy issues the error might raise.

Some people might also remember how one expert uncovered the fact that another brand of Smart TV, this time from LG, was transmitting viewing data back to corporate HQ, even after the owner had changed a setting supposedly designed to stop it.

To my mind, what these stories all really reveal is that these devices are basically mislabelled.  They are not smart at all.  And I don’t mean that in the sense of ‘a bit less clever than we have been told’ either.

Smart in this context is really just a euphemism for ‘Connected’.  All of the processing that makes the TV appear smart is being done somewhere else – and in the case of Samsung’s voice recognition feature, by another company entirely.

The same is true for almost any device being sold with a Smart label.  The smart bit happens somewhere else, so in order to be smart the device has to send out loads of data – much of which can be personal in itself, or used to infer things about us that we would rather it didn’t.  Try disconnecting your smart device from the internet, and you quickly discover that it is pretty dumb.

My guess is that a lot of people kind of realise this fact.  Of course the device manufacturers would also claim that they are being open with buyers about this, by pointing to the ubiquitous but never read privacy policy which we all dutifully agree to in order to make our devices actually work.

But here is the thing.  If they were called Connected TVs, it would be a little daily reminder of the reality: that your screen is gathering just as much input about you and your preferences as it is providing output, if not more.  It would also tell us that the bit we own, the bit that actually sits in our home, is in fact subservient to a data-crunching server somewhere else in the world, and we have no idea what it knows, or thinks it knows, about us. It would remind us that far from having bought something we value for a fixed and fair price, we have in fact signed up to give something away: our data, which has an ever-increasing value to other people who will strive to exploit it for their own maximum gain. Where do you think the smart is in that?

A Smart TV, however, makes it sound as if the bit of hardware I own is the clever thing, which of course makes me clever for buying it.

I’d like to own a Smart TV, one that can understand what I like to watch, give me suggestions for new shows, maybe record things automatically when it knows I am going to miss them.  But my Smart TV has to work for my benefit only, and it has to be able to do so without sending loads of data off to who knows where.  Instead, I would expect it to collect data from the web to my advantage.  It would find ways of skipping over the ads if I wanted it to – because it was working for my interests, not those of someone else’s business model. My Smart TV would be smart enough to hold all my data locally, jealously guarding my privacy above all other interests if I wanted it to, but trading it if that was to my advantage.

My Smart TV would be both mine and smart in every sense of those words.  I’d be happy to pay a good sum of money for it.  Unfortunately, it doesn’t exist.

As for the current crop of Connected TVs, you would have to pay me a princely sum to put one of those in my living room in exchange for my information. Who’d like to make the first offer?

The Data Trust Deficit

The public has a broad mistrust of institutions, both government and private sector, when it comes to the sharing and use of their personal data.  These are the findings of a new UK survey conducted by Ipsos MORI and the Royal Statistical Society.

In one of the most in-depth surveys into the issue of data privacy in the UK, the findings reveal significant differences in attitudes to the sharing of personal information, depending both on who it is shared with and for what purpose.  However, trust in data use is generally lower than broader trust in the same organisations.

In general, GPs had the highest level of trust – but even there only 41% of people gave them a high trust score.

Online retailers, mobile phone and internet companies, i.e. those that generally make the most frequent and visible use of personal data, scored amongst the lowest, at between 6% and 13% trust. However, bottom of the pile were the media and press – with a dismal 4% giving them a high trust score.

How the data might be used also has a significant bearing on attitudes, including taking into account whether or not the data is anonymised.

So sharing government-controlled data for publicly funded research has a relatively high level of approval (although still only 50%), while opposition to the sale of anonymised health records to private companies reached 84% where the motive was seen as making money for the government.

People who stated a low level of trust were also asked the reason for their mistrust.  The biggest overall fear was that organisations were not being open and honest about how they were using data.  This is very much in line with our ideas around the importance of increased transparency of data usage.

Very much allied to this is the general belief that governments and companies were benefiting more than people from data use.



It was also clear that even when people could not see any obvious harm in data sharing, they still found it creepy.

The key message of this survey for me is that if we are truly going to realise the benefits of the digitisation of society, including the promises of Big Data and the Internet of Things, then a lot more work needs to be done to inform and empower the people whose information is needed to make it all work.

Restoring Trust Through Transparency

Reset the Net

It is becoming ever more widely recognised that there is a serious issue with consumer trust in the online world.

No doubt the revelations of state surveillance from Edward Snowden have been a big part of that, but it would be wrong to conclude that all, or even most, mistrust is directed at the activities of governments.

A study by the World Economic Forum, conducted in 2012 (before Snowden) found that 67% of internet users believe companies ask for too much information online.

More recently, phone operator Orange found that 78% of consumers do not trust companies to use their personal data responsibly, and that they believe businesses hold too much information about their behaviour and preferences.

Equally telling, fully 67% of consumers believe that the benefits of ubiquitous tracking of purchasing and browsing histories are felt more by business than by themselves.

The study also showed that the momentum is going in the wrong direction – trust is decreasing year on year.

This lack of trust is increasingly leading to action.  The ICO recently reported that half of people they surveyed decided not to download a mobile app because of privacy concerns.

There are also big campaigns, such as Reset the Net (whose campaign image we have used above), to encourage consumers to use more privacy centred technologies.

The calls from business to find ways to fix this problem are getting louder.  Many companies are beginning to realise that trust in online data collection is vital for the bottom line, and those that can win the trust war will benefit the most.

What everybody is also increasingly realising is that one of the most important elements of the solution to the trust problem is transparency. This idea features heavily in both the Orange and World Economic Forum analyses.

This reminds me of the old aphorism: Justice must not only be done, it must be seen to be done.

A lot of companies are actually pretty good at protecting their customers’ information, and take privacy very seriously.  The problem is they are not very good at communicating this.  They leave it up to the lawyers and bury the detail in privacy policies.

This model is clearly failing and now is the time to fix it. So if you want to join the leaders and ensure the trust of your visitors, come and talk to us about how to increase your data transparency.

Companies that don’t get this right will find more and more of their customers hitting the off switch.