All posts in “data protection”

Facebook, Google face first GDPR complaints over “forced consent”

After two years coming down the pipe at tech giants, Europe’s new privacy framework, the General Data Protection Regulation (GDPR), is now being applied — and long-time Facebook privacy critic Max Schrems has wasted no time in filing four complaints relating to (certain) companies’ ‘take it or leave it’ stance when it comes to consent.

The complaints have been filed on behalf of (unnamed) individual users — with one filed against Facebook; one against Facebook-owned Instagram; one against Facebook-owned WhatsApp; and one against Google’s Android.

Schrems argues that the companies are using a strategy of “forced consent” to continue processing the individuals’ personal data — when in fact the law requires that users be given a free choice unless consent is strictly necessary for provision of the service. (And, well, Facebook claims its core product is social networking — rather than farming people’s personal data for ad targeting.)

“It’s simple: Anything strictly necessary for a service does not need consent boxes anymore. For everything else users must have a real choice to say ‘yes’ or ‘no’,” Schrems writes in a statement.

“Facebook has even blocked accounts of users who have not given consent,” he adds. “In the end users only had the choice to delete the account or hit the “agree”-button — that’s not a free choice, it more reminds of a North Korean election process.”

We’ve reached out to all the companies involved for comment and will update this story with any response. Update: Facebook has now sent the following statement, attributed to its chief privacy officer, Erin Egan: “We have prepared for the past 18 months to ensure we meet the requirements of the GDPR. We have made our policies clearer, our privacy settings easier to find and introduced better tools for people to access, download, and delete their information. Our work to improve people’s privacy doesn’t stop on May 25th. For example, we’re building Clear History: a way for everyone to see the websites and apps that send us information when you use them, clear this information from your account, and turn off our ability to store it associated with your account going forward.”

Schrems most recently founded a not-for-profit digital rights organization to focus on strategic litigation around the bloc’s updated privacy framework, and the complaints have been filed via this crowdfunded NGO — which is called noyb (aka ‘none of your business’).

As we pointed out in our GDPR explainer, the provision in the regulation allowing for collective enforcement of individuals’ data rights is an important one, with the potential to strengthen the implementation of the law by enabling non-profit organizations such as noyb to file complaints on behalf of individuals — thereby helping to redress the imbalance between corporate giants and consumer rights.

That said, the GDPR’s collective redress provision is a component that Member States can choose to derogate from, which helps explain why the first four complaints have been filed with data protection agencies in Austria, Belgium, France and Hamburg in Germany — jurisdictions whose agencies also have a strong record of defending privacy rights.

Given that the Facebook companies involved in these complaints have their European headquarters in Ireland it’s likely the Irish data protection agency will get involved too. And it’s fair to say that, within Europe, Ireland does not have a strong reputation for defending data protection rights.

But the GDPR allows for DPAs in different jurisdictions to work together in instances where they have joint concerns and where a service crosses borders — so noyb’s action looks intended to test this element of the new framework too.

Under the penalty structure of GDPR, major violations of the law can attract fines as large as 4% of a company’s global annual turnover, which in the case of Facebook or Google implies they could be on the hook for more than a billion euros apiece — if they are deemed to have violated the law, as the complaints argue.

That said, given how new the rules are, some EU regulators may well tread softly on the enforcement front — at least in the first instance, giving companies the benefit of the doubt and/or a chance to come into compliance if they are deemed to be falling short of the new standards.

However, in instances where companies themselves appear to be attempting to deform the law with a willfully self-serving interpretation of the rules, regulators may feel they need to act swiftly to nip any disingenuousness in the bud.

“We probably will not immediately have billions of penalty payments, but the corporations have intentionally violated the GDPR, so we expect a corresponding penalty under GDPR,” writes Schrems.

Only yesterday, for example, Facebook founder Mark Zuckerberg — speaking in an onstage interview at the VivaTech conference in Paris — claimed his company hasn’t had to make any radical changes to comply with GDPR, and further claimed that a “vast majority” of Facebook users are willingly opting in to targeted advertising via its new consent flow.

“We’ve been rolling out the GDPR flows for a number of weeks now in order to make sure that we were doing this in a good way and that we could take into account everyone’s feedback before the May 25 deadline. And one of the things that I’ve found interesting is that the vast majority of people choose to opt in to make it so that we can use the data from other apps and websites that they’re using to make ads better. Because the reality is if you’re willing to see ads in a service you want them to be relevant and good ads,” said Zuckerberg.

He did not mention that the dominant social network does not offer people a free choice on accepting or declining targeted advertising. The new consent flow Facebook revealed ahead of GDPR only offers the ‘choice’ of quitting Facebook entirely if a person does not want to accept targeted advertising. Which, well, isn’t much of a choice given how powerful the network is. (Additionally, it’s worth pointing out that Facebook continues tracking non-users — so even deleting a Facebook account does not guarantee that Facebook will stop processing your personal data.)

Asked about how Facebook’s business model will be affected by the new rules, Zuckerberg essentially claimed nothing significant will change — “because giving people control of how their data is used has been a core principle of Facebook since the beginning”.

“The GDPR adds some new controls and then there’s some areas that we need to comply with but overall it isn’t such a massive departure from how we’ve approached this in the past,” he claimed. “I mean I don’t want to downplay it — there are strong new rules that we’ve needed to put a bunch of work into making sure that we complied with — but as a whole the philosophy behind this is not completely different from how we’ve approached things.

“In order to be able to give people the tools to connect in all the ways they want and build community, a lot of the philosophy that is encoded in a regulation like GDPR is really how we’ve thought about all this stuff for a long time. So I don’t want to understate the areas where there are new rules that we’ve had to go and implement but I also don’t want to make it seem like this is a massive departure in how we’ve thought about this stuff.”

Zuckerberg faced a range of tough questions on these points from the EU parliament earlier this week. But he avoided answering them in any meaningful detail.

So EU regulators are essentially facing a first test of their mettle — i.e. whether they are willing to step up and defend the line of the law against big tech’s attempts to reshape it in their business model’s image.

Privacy laws are nothing new in Europe but robust enforcement of them would certainly be a breath of fresh air. And now at least, thanks to GDPR, there’s a penalties structure in place to provide incentives as well as teeth, and spin up a market around strategic litigation — with Schrems and noyb in the vanguard.

Schrems also makes the point that small startups and local companies are less able to use the kind of strong-arm ‘take it or leave it’ tactics on users that big tech can deploy to extract consent, on account of the reach and power of its platforms — arguing there’s a competition concern that GDPR should also help to redress.

“The fight against forced consent ensures that the corporations cannot force users to consent,” he writes. “This is especially important so that monopolies have no advantage over small businesses.”

Image credit: noyb.eu

Instapaper on pause in Europe to fix GDPR compliance “issue”

Remember Instapaper? The Pinterest-owned, read-it-later bookmarking service is taking a break in Europe — apparently while it works on achieving compliance with the region’s updated privacy framework, GDPR, which will start being applied from tomorrow.

Instapaper’s notification does not say how long the self-imposed outage will last.

The European Union’s General Data Protection Regulation updates the bloc’s privacy framework, most notably by bringing in supersized fines for data violations, which in the most serious cases can scale up to 4% of a company’s global annual turnover.

So it significantly ramps up the risk of, for example, having sloppy security, or consent flows that aren’t clear and specific enough (if indeed consent is the legal basis you’re relying on for processing people’s personal information).

That said, EU regulators are clearly going to tread softly on the enforcement front in the short term. And any major fines are only going to hit the most serious violations and violators — and only down the line when data protection authorities have received complaints and conducted thorough investigations.

So it’s not clear exactly why Instapaper believes it needs to pause its service to European users. It’s also had plenty of time to prepare to be compliant — given the new framework was agreed at the back end of 2015. We’ve reached out to Pinterest with questions and will update this story with any response.

In an exchange on Twitter, Pinterest product engineering manager Brian Donohue — who, prior to the acquisition, was Instapaper’s CEO — flagged that the product’s privacy policy “hasn’t been changed in several years”. But he declined to specify exactly what the company feels its compliance issue is — saying only: “We’re actively working to resolve the issue.”

In a customer support email that we reviewed, the company also told one European user: “We’ve been advised to undergo an assessment of the Instapaper service to determine what, if any, changes may be appropriate but to restrict access to IP addresses in the EU as the best course of action.”

“We’re really sorry for any inconvenience, and we are actively working on bringing the service back online for residents in Europe,” it added.

The product’s privacy policy is one of the clearer T&Cs we’ve seen. It also states that users can already access “all your personally identifiable information that we collect online and maintain”, as well as saying people can “correct factual errors in your personally identifiable information by changing or deleting the erroneous information” — which, assuming those statements are true, looks pretty good for complying with portions of GDPR that are intended to give consumers more control over their personal data.

Instapaper also already lets users delete their accounts. And if they do that it specifies that “all account information and saved page data is deleted from the Instapaper service immediately” (though it also cautions that “deleted data may persist in backups and logs until they are deleted”).

In terms of what Instapaper does with users’ data, its privacy policy claims it does not share the information “with outside parties except to the extent necessary to accomplish Instapaper’s functionality”.

But it’s also not explicitly clear from the policy whether or not it’s passing information to its parent company Pinterest, for example, so perhaps it feels it needs to add more detail there.

Another possibility is that Instapaper is working on compliance with GDPR’s data portability requirement. Though the service has offered export options for years, perhaps it feels these need to be more comprehensive.

As is inevitable ahead of a major regulatory change there’s a good deal of confusion about what exactly must be done to comply with the new rules. And that’s perhaps the best explanation for what’s going on with Instapaper’s pause.

Though, again, there’s plenty of official and detailed guidance from data protection agencies to help.

Unfortunately it’s also true that there’s a lot of unofficial and dubious quality advice from a cottage industry of self-styled ‘GDPR consultants’ that have sprung up with the intention of profiting off of the uncertainty. So — as ever — do your due diligence when it comes to the ‘experts’ you choose.

Where to watch Zuckerberg’s meeting with EU MEPs on Tuesday

The Facebook founder Mark Zuckerberg’s meeting with elected representatives of the European Union’s ~500 million citizens will be livestreamed after all, it was confirmed today.

MEPs had been angered by the original closed door format of the meeting, which was announced by the EU parliament’s president last week. But on Friday a majority of the political groups in the parliament had pushed for it to be broadcast online.

This morning president Antonio Tajani confirmed that Facebook had agreed to the 1hr 15 minute hearing being livestreamed.

A Facebook spokesperson also sent us this short statement today: “We’re looking forward to the meeting and happy for it to be live streamed.”

When is the meeting?

The meeting will take place on Tuesday May 22, from 18.15 to 19.30 CET. If you want to tune in from the US, the meeting is scheduled to start at 9.15 PT / 12.15 ET.

Tajani’s announcement last week said it would start earlier, at 17.45 CET, so the meeting appears to have been pushed back by half an hour. We’ve asked Facebook whether Zuckerberg will meet in private with the parliament’s Conference of Presidents prior to the livestream being switched on and will update this story with any response.

Where to watch it online?

According to Tajani’s spokesperson, the meeting will be broadcast on the EU parliament’s website. At the time of writing it’s not yet listed in the EPTV schedule — but we’re expecting it to be viewable here.

Who will be meeting with Zuckerberg?

The Facebook founder will meet with EU parliament president Tajani, along with leaders of the parliament’s eight political groups, and with Claude Moraes, the chair of the EU parliament’s Civil Liberties, Justice and Home Affairs (LIBE) committee.

It’s worth noting that the meeting is not a formal hearing, such as the sessions with Zuckerberg in the US Senate and Congress last month. Nor is it a full LIBE committee hearing — discussions remain ongoing for Facebook representatives to meet with the full LIBE committee at a later date.

What will Zuckerberg be asked about?

In the wake of the Cambridge Analytica data misuse scandal, MEPs are keen to discuss concerns related to social media’s impact on election processes with Zuckerberg.

Indeed, the impact of disinformation spread via social media is also the topic of an ongoing enquiry by the UK parliament’s DCMS committee, which spent some five hours grilling Facebook’s CTO last month — though Zuckerberg has thrice declined the committee’s summons, preferring to meet with EU parliamentarians instead.

Other topics on the agenda will include privacy and data protection — with Moraes likely to ask about how Facebook’s business model impacts EU citizens’ fundamental rights, and how EU regulations might need to evolve to keep pace, as he explained to us on Friday.

Some of the political group leaders are also likely to bring up concerns around freedom of expression as pressure in the region has ramped up on online platforms to get faster at policing hate speech.

Zuckerberg will meet with European parliament in private next week

Who says privacy is dead? Facebook’s founder Mark Zuckerberg has agreed to take European parliamentarians’ questions about how his platform impacts the privacy of hundreds of millions of European citizens — but only behind closed doors. Where no one except a handful of carefully chosen MEPs will bear witness to what’s said.

The private meeting will take place on May 22 at 17.45CET in Brussels. After which the president of the European Parliament, Antonio Tajani, will hold a press conference to furnish the media with his version of events.

It’s just a shame that journalists are being blocked from being able to report on what actually goes on in the room.

And that members of the public won’t be able to form their own opinions about how Facebook’s founder responds to pressing questions about what Zuckerberg’s platform is doing to their privacy and their fundamental rights.

Because the doors are being closed to journalists and citizens.

Even the intended contents of the meeting are being glossed over in public — with the purpose of the chat vaguely couched as “to clarify issues related to the use of personal data” in a statement by Tajani (below).

The impact of Facebook’s platform on “electoral processes in Europe” is the only discussion point that’s specifically flagged.

Given Zuckerberg has thrice denied requests from UK lawmakers to take questions about his platform in a public hearing, we can only assume the company made the CEO’s appearance in front of EU parliamentarians conditional on the meeting being closed.

Zuckerberg did agree to public sessions with US lawmakers last month, following a major global privacy scandal related to user data and political ad targeting.

But evidently the company’s sense of accountability doesn’t travel very far. (Despite a set of ‘privacy principles’ that Facebook published with great fanfare at the start of the year — one of which reads: ‘We are accountable’. Albeit Facebook didn’t specify to whom or what exactly it feels accountable.)

We’ve reached out to Facebook to ask why Zuckerberg will not take European parliamentarians’ questions in a public hearing. And indeed whether Mark can find the time to hop on a train to London afterwards to testify before the DCMS committee’s enquiry into online disinformation — and will update this story with any response.

As Vera Jourova, the European commissioner for justice and consumers, put it in a tweet, it’s a pity the Facebook founder does not believe all Europeans deserve to know how their data is handled by his company. Just a select few, holding positions of elected office.

A pity or, well, a shame.

Safe to say, not all MEPs are happy with the arrangement…

And according to an EU parliament source, around half the groups wanted an open hearing with the Committee on Civil Liberties, Justice and Home Affairs — with only a small majority of the Conference of Presidents agreeing to a closed meeting.

But let’s at least be thankful that Zuckerberg has shown us, once again, how very much privacy matters — to him personally.

Facebook faces fresh criticism over ad targeting of sensitive interests

Is Facebook trampling over laws that regulate the processing of sensitive categories of personal data by failing to ask people for their explicit consent before it makes sensitive inferences about their sex life, religion or political beliefs? Or is the company merely treading uncomfortably and unethically close to the line of the law?

An investigation by the Guardian and the Danish Broadcasting Corporation has found that Facebook’s platform allows advertisers to target users based on interests related to political beliefs, sexuality and religion — all categories that are marked out as sensitive information under current European data protection law.

And indeed under the incoming GDPR, which will apply across the bloc from May 25.

The joint investigation found Facebook’s platform had made sensitive inferences about users — allowing advertisers to target people based on inferred interests including communism, social democrats, Hinduism and Christianity. All of which would be classed as sensitive personal data under EU rules.

And while the platform offers some constraints on how advertisers can target people against sensitive interests — not allowing advertisers to exclude users based on a specific sensitive interest, for example (Facebook having previously run into trouble in the US for enabling discrimination via ethnic affinity-based targeting) — such controls are beside the point if you take the view that Facebook is legally required to ask for a user’s explicit consent to processing this kind of sensitive data up front, before making any inferences about a person.

Indeed, it’s very unlikely that any ad platform can lawfully put people into buckets with sensitive labels like ‘interested in social democrat issues’ or ‘likes communist pages’ or ‘attends gay events’ without asking them to let it do so first.

And Facebook is not asking first.

Facebook argues otherwise, of course — claiming that the information it gathers about people’s affinities/interests, even when they entail sensitive categories of information such as sexuality and religion, is not personal data.

In a response statement to the media investigation, a Facebook spokesperson told us:

Like other Internet companies, Facebook shows ads based on topics we think people might be interested in, but without using sensitive personal data. This means that someone could have an ad interest listed as ‘Gay Pride’ because they have liked a Pride associated Page or clicked a Pride ad, but it does not reflect any personal characteristics such as gender or sexuality. People are able to manage their Ad Preferences tool, which clearly explains how advertising works on Facebook and provides a way to tell us if you want to see ads based on specific interests or not. When interests are removed, we show people the list of removed interests so that they have a record they can access, but these interests are no longer used for ads. Our advertising complies with relevant EU law and, like other companies, we are preparing for the GDPR to ensure we are compliant when it comes into force.

Expect Facebook’s argument to be tested in the courts — likely in the very near future.

As we’ve said before, the GDPR lawsuits are coming for the company, thanks to beefed up enforcement of EU privacy rules, with the regulation providing for fines as large as 4% of a company’s global turnover.

Facebook is not the only online people profiler, of course, but it’s a prime target for strategic litigation both because of its massive size and reach (and the resulting power over web users flowing from a dominant position in an attention-dominating category) and because of its nose-thumbing attitude to compliance with EU regulations thus far.

The company has faced a number of challenges and sanctions under existing EU privacy law — though for its operations outside the US it typically refuses to recognize any legal jurisdiction except corporate-friendly Ireland, where its international HQ is based.

And, from what we’ve seen so far, Facebook’s response to GDPR ‘compliance’ is no new leaf. Rather it looks like privacy-hostile business as usual; a continued attempt to leverage its size and power to force a self-serving interpretation of the law — bending rules to fit its existing business processes, rather than reconfiguring those processes to comply with the law.

The GDPR is one of the reasons why Facebook’s ad microtargeting empire is facing greater scrutiny now, with just weeks to go before civil society organizations are able to take advantage of fresh opportunities for strategic litigation allowed by the regulation.

“I’m a big fan of the GDPR. I really believe that it gives us — as the court in Strasbourg would say — effective and practical remedies,” law professor Mireille Hildebrandt tells us. “If we go and do it, of course. So we need a lot of public litigation, a lot of court cases to make the GDPR work but… I think there are more people moving into this.

“The GDPR created a market for these sort of law firms — and I think that’s excellent.”

But it’s not the only reason. Another reason why Facebook’s handling of personal data is attracting attention is the result of tenacious press investigations into how one controversial political consultancy, Cambridge Analytica, was able to gain such freewheeling access to Facebook users’ data — as a result of Facebook’s lax platform policies around data access — for, in that instance, political ad targeting purposes.

All of which eventually blew up into a major global privacy storm, this March, though criticism of Facebook’s privacy-hostile platform policies dates back more than a decade at this stage.

The Cambridge Analytica scandal at least brought Facebook CEO and founder Mark Zuckerberg in front of US lawmakers, facing questions about the extent of the personal information it gathers; what controls it offers users over their data; and how he thinks Internet companies should be regulated, to name a few. (Pro tip for politicians: You don’t need to ask companies how they’d like to be regulated.)

The Facebook founder has also finally agreed to meet EU lawmakers — though UK lawmakers’ calls have been ignored.

Zuckerberg should expect to be questioned very closely in Brussels about how his platform is impacting Europeans’ fundamental rights.

Sensitive personal data needs explicit consent

Facebook infers affinities linked to individual users by collecting and processing interest signals their web activity generates, such as likes on Facebook Pages or what people look at when they’re browsing outside Facebook — off-site intel it gathers via an extensive network of social plug-ins and tracking pixels embedded on third party websites. (According to information released by Facebook to the UK parliament this week, during just one week of April this year its Like button appeared on 8.4M websites; the Share button appeared on 931,000 websites; and its tracking Pixels were running on 2.2M websites.)

But here’s the thing: Both the current and the incoming EU legal frameworks for data protection set the bar for consent to processing so-called special category data equally high — at “explicit” consent.

What that means in practice is Facebook needs to seek and secure separate consents from users (such as via a dedicated pop-up) for collecting and processing this type of sensitive data.

The alternative is for it to rely on another special condition for processing this type of sensitive data. However the other conditions are pretty tightly drawn — relating to things like the public interest; or the vital interests of a data subject; or for purposes of “preventive or occupational medicine”.

None of which would appear to apply if, as Facebook is doing, you’re processing people’s sensitive personal information just to target them with ads.

Ahead of GDPR, Facebook has started asking users who have chosen to display political opinions and/or sexuality information on their profiles to explicitly consent to that data being public.

Though even there its actions are problematic, as it offers users a ‘take it or leave it’-style choice — saying they must either remove the info entirely or leave it and thereby agree that Facebook can use it to target them with ads.

Yet EU law also requires that consent be freely given. It cannot be conditional on the provision of a service.

So Facebook’s bundling of service provision and consent will also likely face legal challenges, as we’ve written before.

“They’ve tangled the use of their network for socialising with the profiling of users for advertising. Those are separate purposes. You can’t tangle them like they are doing in the GDPR,” says Michael Veale, a technology policy researcher at University College London, emphasizing that GDPR allows for a third option that Facebook isn’t offering users: Allowing them to keep sensitive data on their profile but that data not be used for targeted advertising.

“Facebook, I believe, is quite afraid of this third option,” he continues. “It goes back to the Congressional hearing: Zuckerberg said a lot that you can choose which of your friends every post can be shared with, through a little in-line button. But there’s no option there that says ‘do not share this with Facebook for the purposes of analysis’.”

Returning to how the company synthesizes sensitive personal affinities from Facebook users’ Likes and wider web browsing activity, Veale argues that EU law does not recognize the kind of distinction Facebook is seeking to draw — i.e. between inferred affinities and personal data — and that the company is thus trying to redraw the law in its favor.

“Facebook say that the data is not correct, or self-declared, and therefore these provisions do not apply. Data does not have to be correct or accurate to be personal data under European law, and trigger the protections. Indeed, that’s why there is a ‘right to rectification’ — because incorrect data is not the exception but the norm,” he tells us.

“At the crux of Facebook’s challenge is that they are inferring what is arguably “special category” data (Article 9, GDPR) from non-special category data. In European law, this data includes race, sexuality, data about health, biometric data for the purposes of identification, and political opinions. One of the first things to note is that European law does not govern collection and use as distinct activities: Both are considered processing.

“The pan-European group of data protection regulators have recently confirmed in guidance that when you infer special category data, it is as if you collected it. For this to be lawful, you need a special reason, which for most companies is restricted to separate, explicit consent. This will be often different than the lawful basis for processing the personal data you used for inference, which might well be ‘legitimate interests’, which didn’t require consent. That’s ruled out if you’re processing one of these special categories.”

“The regulators even specifically give Facebook Like inference as an example of inferring special category data, so there is little wiggle room here,” he adds, pointing to an example used by regulators of a study that combined Facebook Like data with “limited survey information” — and from which it was found that researchers could accurately predict a male user’s sexual orientation 88% of the time; a user’s ethnic origin 95% of the time; and whether a user was Christian or Muslim 82% of the time.

Which underlines why these rules exist — given the clear risk of breaches to human rights if big data platforms can just suck up sensitive personal data automatically, as a background process.

The overarching aim of GDPR is to give consumers greater control over their personal data not just to help people defend their rights but to foster greater trust in online services — and for that trust to be a mechanism for greasing the wheels of digital business. Which is pretty much the opposite approach to sucking up everything in the background and hoping your users don’t realize what you’re doing.

Veale also points out that under current EU law even an opinion on someone is their personal data… (per this Article 29 Working Party guidance, emphasis ours):

From the point of view of the nature of the information, the concept of personal data includes any sort of statements about a person. It covers “objective” information, such as the presence of a certain substance in one’s blood. It also includes “subjective” information, opinions or assessments. This latter sort of statements make up a considerable share of personal data processing in sectors such as banking, for the assessment of the reliability of borrowers (“Titius is a reliable borrower”), in insurance (“Titius is not expected to die soon”) or in employment (“Titius is a good worker and merits promotion”).

We put that specific point to Facebook — but at the time of writing we’re still waiting for a response. (Nor would Facebook provide a public response to several other questions we asked around what it’s doing here, preferring to limit its comment to the statement at the top of this post.)

Veale adds that the WP29 guidance has been upheld in recent CJEU cases such as Nowak — which he says emphasized that, for example, annotations on the side of an exam script are personal data.

He’s clear about what Facebook should be doing to comply with the law: “They should be asking for individuals’ explicit, separate consent for them to infer data including race, sexuality, health or political opinions. If people say no, they should be able to continue using Facebook as normal without these inferences being made on the back-end.”

“They need to tell individuals about what they are doing clearly and in plain language,” he adds. “Political opinions are just as protected here, and this is perhaps more interesting than race or sexuality.”

“They certainly should face legal challenges under the GDPR,” agrees Paul Bernal, senior lecturer in law at the University of East Anglia, who is also critical of how Facebook is processing sensitive personal information. “The affinity concept seems to be a pretty transparent attempt to avoid legal challenges, and one that ought to fail. The question is whether the regulators have the guts to make the point: It undermines a quite significant part of Facebook’s approach.”

“I think the reason they’re pushing this is that they think they’ll get away with it, partly because they think they’ve persuaded people that the problem is Cambridge Analytica, as rogues, rather than Facebook, as enablers and supporters. We need to be very clear about this: Cambridge Analytica are the symptom, Facebook is the disease,” he adds.

“I should also say, I think the distinction between ‘targeting’ being OK and ‘excluding’ not being OK is also mostly Facebook playing games, and trying to have their cake and eat it. It just invites gaming of the systems really.”

Facebook claims its core product is social media, rather than data-mining people to run a highly lucrative microtargeted advertising platform.

But if that’s true, why is it entangling its core social functions with its ad-targeting apparatus, and telling people they can’t have a social service unless they agree to interest-based advertising?

It could support the service with other types of advertising, which don’t depend on background surveillance that erodes users’ fundamental rights. But it’s choosing not to offer that option. All you can ‘choose’ is all or nothing. Not much of a choice.

Facebook telling people that if they want to opt out of its ad targeting they must delete their account is neither a route to obtain meaningful (and therefore lawful) consent — nor a very compelling approach to counter criticism that its real business is farming people.

The issues at stake here for Facebook, and for the shadowy background data-mining and brokering of the online ad targeting industry as a whole, are clearly far greater than any one data misuse scandal or any one category of sensitive data. But Facebook’s decision to retain people’s sensitive personal data for ad targeting without asking for consent up-front is a telling sign of something gone very wrong indeed.

If Facebook doesn’t feel confident asking its users whether what it’s doing with their personal data is okay or not, maybe it shouldn’t be doing it in the first place.

At the very least it’s a failure of ethics, even if the final judgement on Facebook’s self-serving interpretation of EU privacy rules will have to wait for the courts to decide.