All posts in “Social”

Twitter lets advertisers “take over” the Explore tab

Twitter is ready to squeeze a lot more money out of its trending topics. After minimizing its mediocre Moments feature and burying it inside the renamed Explore tab, Twitter is now starting to test Promoted Trend Spotlight ads. These place a big visual banner, with a GIF or image background, atop Explore for the first two times you visit that day, before settling back into the Trends list; the first batch comes from Disney in the US.

These powerful new ad units demote organic content in Explore, which could make it less useful for getting a grip on what’s up in the world at a glance. But they could earn Twitter strong revenue by being much more eye-catching than the traditional Timeline ads that people often skip past. That could further fuel Twitter’s turnaround after it soundly beat revenue estimates in Q1 with $665 million. Its share price of about $44 is near its 52-week high, and almost 3X its low for the year.

“We are continuing to explore new ways to enhance our takeover offerings and give brands more high-impact opportunities to drive conversation and brand awareness on our platform,” a Twitter spokesperson told TechCrunch.

The Promoted Trend Spotlight ads are bought as an add-on to the existing Promoted Trends ads that are inserted amongst the list of Twitter’s most popular topics. When tapped, they open a feed of tweets about that trend, with one of the advertiser’s related tweets at the top. Back in February, AdAge reported whispers of a new visual redesign for Promoted Trends. You can view a demo of the experience below.

Anthy Price, Disney’s Executive Vice President for Media, provided TechCrunch with a statement, saying “The Promoted Trend Spotlight on Twitter allowed us to prominently highlight Winnie the Pooh & celebrate the launch of ticket sales for Christopher Robin while four of the characters took over major Disney handles on the platform to engage with fans.”

Historically, Twitter’s biggest problem was that people skimmed past ads. The old unfiltered Timeline trained users to pick and choose what they read, looking past anything that didn’t seem relevant including paid marketing. But with the shift to an algorithmic Timeline and bigger focus on video, Twitter has slowly retrained users to expect relevant content in every slot. Explore’s design with imagery at the top followed by a text list of Trends pulls attention to where these new Spotlight ads sit. With better monetization, Twitter will now have to concentrate on building better ways to get users to open Explore instead of just their feed, notifications, and DMs.

[embedded content]

Hold for the drop: Twitter to purge locked accounts from follower metrics

Twitter is making a major change aimed at cleaning up the spammy legacy of its platform.

This week it will globally purge accounts it has previously locked (i.e. after suspecting them of being spammy) — by removing the accounts from users’ follower metrics.

Which in plain language means Twitter users with lots of followers are likely to see their follower counts take a noticeable hit in the coming days. So hold tight for the drop.

Late last month, Twitter flagged smaller changes to follower counts, also as part of its series of platform-purging anti-spam measures — warning users they might see their counts fluctuate more now that counts are displayed in near real-time (a change intended to prevent spambots and follow scams from artificially inflating account metrics).

But the global purge of locked accounts from user account metrics looks like it’s going to be a rather bigger deal, putting some major dents in certain high-profile users’ follower counts — and some major dents in celeb egos.

Hence Twitter has blogged again. “Follower counts are a visible feature, and we want everyone to have confidence that the numbers are meaningful and accurate,” writes Twitter’s Vijaya Gadde, legal, policy and trust & safety lead, flagging the latest change.

“Most people will see a change of four followers or fewer; others with larger follower counts will experience a more significant drop.”

It will certainly be interesting to see whether the change substantially dents the Twitter follower counts of high-profile users — such as Katy Perry (109,609,073 Twitter followers at the time of writing); Donald Trump (53,379,873); Taylor Swift (85,566,010); Elon Musk (22,329,075); and Beyoncé (15,303,191), to name a few of the platform’s most followed users.

Check back in a week to see how their follower counts look.

“We understand this may be hard for some, but we believe accuracy and transparency make Twitter a more trusted service for public conversation,” adds Gadde.

Twitter is also warning that while “the most significant changes” will happen in the next few days, users’ follower counts “may continue to change more regularly as part of our ongoing work to proactively identify and challenge problematic accounts”.

The company says it locks accounts if it detects sudden changes in account behavior — such as tweeting “a large volume of unsolicited replies or mentions, Tweeting misleading links, or if a large number of accounts block the account after mentioning them” — behavior that may indicate an account has been hacked or taken over by a spambot.
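The heuristics Twitter describes can be pictured as a rule-based scorer over behavioral signals. Below is a minimal, purely illustrative Python sketch; the signal names and thresholds are invented for the example and are not Twitter’s actual values or code:

```python
# Hypothetical sketch of rule-based spam-signal scoring of the kind Twitter
# describes. All field names and thresholds here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AccountActivity:
    unsolicited_mentions_24h: int   # replies/mentions to accounts with no prior interaction
    misleading_link_reports: int    # tweets flagged for deceptive URLs
    blocks_after_mention: int       # accounts that blocked this one after being mentioned

def should_lock(activity: AccountActivity) -> bool:
    """Lock if any single signal is extreme, or several weaker signals
    co-occur (a pattern suggesting a takeover by a spambot)."""
    strong = (
        activity.unsolicited_mentions_24h > 500
        or activity.blocks_after_mention > 50
    )
    weak_signals = sum([
        activity.unsolicited_mentions_24h > 100,
        activity.misleading_link_reports > 5,
        activity.blocks_after_mention > 10,
    ])
    return strong or weak_signals >= 2

print(should_lock(AccountActivity(600, 0, 3)))   # mention flood -> True
print(should_lock(AccountActivity(50, 1, 2)))    # normal-looking activity -> False
```

The point of combining weak signals is that no single behavior (a burst of mentions, a few blocks) is conclusive on its own, but several together are a stronger indicator of compromise.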

It says it may also lock accounts if it sees email and password combinations from other services posted online and believes that information could put the security of an account at risk.

After locking an account, Twitter contacts the owner to try to confirm they still have control of it. If the owner does not reply to confirm, the account stays locked — and will soon also be removed from follower counts globally.
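A credential-stuffing check of the kind described above can be sketched as follows. This is a hypothetical illustration, not Twitter’s code: the storage layout, hash parameters and lock flow are all assumptions, but the core idea — verify a leaked password against the stored hash and lock on a match — is standard practice.

```python
# Illustrative sketch: when a leaked email/password pair surfaces online,
# check it against the stored hash and lock the account pending confirmation.
# Class names, hash scheme and layout are assumptions for the example.

import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 with a per-user salt; iteration count is illustrative
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

class UserStore:
    def __init__(self):
        self._users = {}  # email -> [salt, password_hash, locked]

    def register(self, email: str, password: str) -> None:
        salt = os.urandom(16)
        self._users[email] = [salt, hash_password(password, salt), False]

    def check_leak(self, email: str, leaked_password: str) -> bool:
        """Return True (and lock the account) if the leaked password matches."""
        rec = self._users.get(email)
        if rec is None:
            return False
        salt, stored, _ = rec
        if hmac.compare_digest(stored, hash_password(leaked_password, salt)):
            rec[2] = True  # lock pending owner confirmation
            return True
        return False

store = UserStore()
store.register("user@example.com", "hunter2")
print(store.check_leak("user@example.com", "hunter2"))  # True: account locked
print(store.check_leak("user@example.com", "other"))    # False: no match
```

Using `hmac.compare_digest` rather than `==` avoids leaking timing information during the comparison.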

Twitter emphasizes that locked accounts already cannot Tweet, like or Retweet, and are not served ads. But removing them from follower counts is an important additional step that it’s great to see Twitter making — albeit at long last.

Twitter also specifies that locked accounts that have not reset their password in more than one month were already excluded from Twitter’s MAU and DAU counts — so today it reiterates the CFO’s recent message that this change won’t affect its own platform usage metrics.

The company has been going through what — this time — looks to be a serious house-cleaning process for some months now, after years and years of criticism for failing to effectively tackle rampant spam and abuse on its platform.

In March, Twitter CEO Jack Dorsey also put out a call for ideas to help it capture, measure and evaluate healthy interactions on its platform and the health of public conversations generally — saying: “Ultimately we want to have a measurement of how it affects the broader society and public health, but also individual health, as well.”

Timehop admits that additional personal data was compromised in breach

Timehop is admitting that additional personal information was compromised in a data breach on July 4.

The company first acknowledged the breach on Sunday, saying that users’ names, email addresses and phone numbers had been compromised. Today it said that additional information, including date of birth and gender, was also taken.

To understand what happened, and what Timehop is doing to fix things, I spoke to CEO Matt Raoul, COO Rick Webb and the security consultant that the company hired to manage its response. (The security consultant agreed to be interviewed on-the-record on the condition that they not be named.)

To be clear, Timehop isn’t saying that there was a separate breach of its data. Instead, the team has discovered that more data was taken in the already-announced incident.

Why didn’t they figure that out sooner? In an updated version of its report (which was also emailed to customers), the company put it simply: “Because we messed up.” It goes on:

In our enthusiasm to disclose all we knew, we quite simply made our announcement before we knew everything. With the benefit of staff who had been vacationing and unavailable during the first four days of the investigation, and a new senior engineering employee, as we examined the more comprehensive audit on Monday of the actual database tables that were stolen it became clear that there was more information in the tables than we had originally disclosed. This was precisely why we had stated repeatedly that the investigation was continuing and that we would update with more information as soon as it became available.

In both the email and my interviews, the Timehop team noted that the service does not have any financial information from users, nor does it perform the kinds of detailed behavioral tracking that you might expect from an ad-supported service. The team also emphasized that users’ “memories” — namely, the older social media posts that people use Timehop to rediscover — were not compromised.

How can they be sure, particularly since some of the compromised data was overlooked in the initial announcement? Well, the breach affected one specific database, while the memories are stored separately.

“That stuff is what we cared about, that stuff was protected,” Webb said. The challenge is, “We have to make a mental note to think about everything else.”


The breach occurred when someone accessed a database in Timehop’s cloud infrastructure that was not protected by two-factor authentication, though Raoul insisted that the company was already using two-factor quite broadly — it’s just that this “fell through the cracks.”

It’s also worth noting that while 21 million accounts were affected, Timehop had varying amounts of data about different users. For example, it says that 18.6 million email addresses were compromised (down from the “up to 21 million” addresses first reported), compared to 15.5 million dates of birth. In total, the company says 3.3 million records were compromised that included names, email addresses, phone numbers and DOBs.

None of those things may seem terribly sensitive (anyone with a copy of my business card and access to Google could probably get that information about me), but the security consultant acknowledged that in the “very, very small percentage” of cases where the records included full names, email addresses, phone numbers and DOBs, “identity theft becomes more likely,” and he suggested that users take standard steps to protect themselves, including password-protecting their phones.

Meanwhile, the company says that it worked with the social media platforms to detect activity that used the compromised authorization tokens, and it has not found anything suspicious. At this point, all of the tokens have been deauthorized (requiring users to re-authorize all of their accounts), so it shouldn’t be an ongoing issue.
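Bulk deauthorization of the kind Timehop describes can be illustrated with a toy token store: every stored authorization token is invalidated at once, forcing users to re-authorize their accounts. Everything here (class names, token format, API) is an assumption for illustration, not Timehop’s actual implementation:

```python
# Hedged sketch of post-breach bulk token revocation. The store layout and
# method names are hypothetical; the pattern (invalidate everything, force
# re-authorization) is what Timehop describes doing.

import secrets

class TokenStore:
    def __init__(self):
        self._tokens = {}  # user_id -> active token

    def authorize(self, user_id: str) -> str:
        """Issue a fresh token for a user, replacing any previous one."""
        token = secrets.token_urlsafe(32)
        self._tokens[user_id] = token
        return token

    def is_valid(self, user_id: str, token: str) -> bool:
        return self._tokens.get(user_id) == token

    def revoke_all(self) -> int:
        """Deauthorize every outstanding token; returns how many were revoked."""
        count = len(self._tokens)
        self._tokens.clear()
        return count

store = TokenStore()
t = store.authorize("alice")
assert store.is_valid("alice", t)
store.revoke_all()
print(store.is_valid("alice", t))  # False: Alice must re-authorize
```

Because the compromised tokens are simply gone from the store, even an attacker who copied them can no longer use them.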

As for other steps Timehop is taking to prevent future breaches, the security consultant told me the company is already in the process of ensuring that two-factor authentication is adopted across the board and encrypting its databases, as well as improving the process of deploying code to address security issues.

In addition, the company has shared the IP addresses used in the attack with law enforcement, and it will be sharing its “indicators of compromise” with partners in the security community.


Everyone acknowledged that Timehop made real mistakes, both in its security and in the initial communication with customers. (As the consultant put it, “They made a schoolboy mistake by not doing two-factor authentication.”) However, they also suggested that their response was guided, in part, by the accelerated disclosure timeline required by Europe’s GDPR regulations.

The security consultant told me, “We haven’t had the time [for the] fine-toothed comb kinds of things we normally want to do,” like an in-depth forensic analysis. Those things will happen, he said — but thanks to GDPR, the company needed to make the announcement before it had all the information.

And overall, the consultant said he’s been impressed by Timehop’s response.

“I think it really says a lot to their integrity that they decided to go fully public the second they knew it was a breach,” he said. “I want to point out these guys responded within 24 hours with a full-on incident response and secured their environments. That’s better than so many companies.”

Facebook under fresh political pressure as UK watchdog calls for “ethical pause” of ad ops

The UK’s privacy watchdog revealed yesterday that it intends to fine Facebook the maximum possible (£500k) under the country’s 1998 data protection regime for breaches related to the Cambridge Analytica data misuse scandal.

But that’s just the tip of the regulatory missiles now being directed at the platform and its ad-targeting methods — and indeed, at the wider big data economy’s corrosive undermining of individuals’ rights.

Alongside yesterday’s update on its investigation into the Facebook-Cambridge Analytica data scandal, the Information Commissioner’s Office (ICO) has published a policy report — entitled Democracy Disrupted? Personal information and political influence — in which it sets out a series of policy recommendations related to how personal information is used in modern political campaigns.

In the report it calls directly for an “ethical pause” around the use of microtargeting ad tools for political campaigning — to “allow the key players — government, parliament, regulators, political parties, online platforms and citizens — to reflect on their responsibilities in respect of the use of personal information in the era of big data before there is a greater expansion in the use of new technologies”.

The watchdog writes [emphasis ours]:

Rapid social and technological developments in the use of big data mean that there is limited knowledge of – or transparency around – the ‘behind the scenes’ data processing techniques (including algorithms, analysis, data matching and profiling) being used by organisations and businesses to micro-target individuals. What is clear is that these tools can have a significant impact on people’s privacy. It is important that there is greater and genuine transparency about the use of such techniques to ensure that people have control over their own data and that the law is upheld. When the purpose for using these techniques is related to the democratic process, the case for high standards of transparency is very strong.

Engagement with the electorate is vital to the democratic process; it is therefore understandable that political campaigns are exploring the potential of advanced data analysis tools to help win votes. The public have the right to expect that this takes place in accordance with the law as it relates to data protection and electronic marketing. Without a high level of transparency – and therefore trust amongst citizens that their data is being used appropriately – we are at risk of developing a system of voter surveillance by default. This could have a damaging long-term effect on the fabric of our democracy and political life.

It also flags a number of specific concerns attached to Facebook’s platform and its impact upon people’s rights and democratic processes — some of which are sparking fresh regulatory investigations into the company’s business practices.

“A significant finding of the ICO investigation is the conclusion that Facebook has not been sufficiently transparent to enable users to understand how and why they might be targeted by a political party or campaign,” it writes. “Whilst these concerns about Facebook’s advertising model exist generally in relation to its commercial use, they are heightened when these tools are used for political campaigning. Facebook’s use of relevant interest categories for targeted advertising and its Partner Categories service are also cause for concern. Although the service has ceased in the EU, the ICO will be looking into both of these areas, and in the case of partner categories, commencing a new, broader investigation.”

The ICO says its discussions with Facebook for this report focused on “the level of transparency around how Facebook user data and third party data is being used to target users, and the controls available to users over the adverts they see”.

Among the concerns it raises about what it dubs Facebook’s “very complex” online targeting advertising model are [emphasis ours]:

Our investigation found significant fair-processing concerns both in terms of the information available to users about the sources of the data that are being used to determine what adverts they see and the nature of the profiling taking place. There were further concerns about the availability and transparency of the controls offered to users over what ads and messages they receive. The controls were difficult to find and were not intuitive to the user if they wanted to control the political advertising they received. Whilst users were informed that their data would be used for commercial advertising, it was not clear that political advertising would take place on the platform.

The ICO also found that despite a significant amount of privacy information and controls being made available, overall they did not effectively inform the users about the likely uses of their personal information. In particular, more explicit information should have been made available at the first layer of the privacy policy. The user tools available to block or remove ads were also complex and not clearly available to users from the core pages they would be accessing. The controls were also limited in relation to political advertising.

The company has been criticized for years for confusing and complex privacy controls. But during the investigation, the ICO says it was also not provided with “satisfactory information” from the company to understand the process it uses for determining what interest segments individuals are placed in for ad targeting purposes.

“Whilst Facebook confirmed that the content of users’ posts were not used to derive categories or target ads, it was difficult to understand how the different ‘signals’, as Facebook called them, built up to place individuals into categories,” it writes.

Similar complaints of foot-dragging responses to information requests related to political ads on its platform have also been directed at Facebook by a parliamentary committee that’s running an inquiry into fake news and online disinformation — and in April the chair of the committee accused Facebook of “a pattern of evasive behavior”.

So the ICO is not alone in feeling that Facebook’s responses to requests for specific information have lacked the specific information being sought. (CEO Mark Zuckerberg also annoyed the European Parliament with highly evasive responses to its highly detailed questions this spring.)

Meanwhile, a European media investigation in May found that Facebook’s platform allows advertisers to target individuals based on interests related to sensitive categories such as political beliefs, sexuality and religion — which are categories that are marked out as sensitive information under regional data protection law, suggesting such targeting is legally problematic.

The investigation found that Facebook’s platform enables this type of ad targeting in the EU by making sensitive inferences about users — inferred interests including communism, social democrats, Hinduism and Christianity. And its defense against charges that what it’s doing breaks regional law is that inferred interests are not personal data.

However the ICO report sends a very chill wind rattling towards that fig leaf, noting “there is a concern that by placing users into categories, Facebook have been processing sensitive personal information – and, in particular, data about political opinions”.

It further writes [emphasis ours]:

Facebook made clear to the ICO that it does ‘not target advertising to EU users on the basis of sensitive personal data’… The ICO accepts that indicating a person is interested in a topic is not the same as formally placing them within a special personal information category. However, a risk clearly exists that advertisers will use core audience categories in a way that does seek to target individuals based on sensitive personal information. In the context of this investigation, the ICO is particularly concerned that such categories can be used for political advertising.

The ICO believes that this is part of a broader issue about the processing of personal information by online platforms in the use of targeted advertising; this goes beyond political advertising. It is clear from academic research conducted by the University of Madrid on this topic that a significant privacy risk can arise. For example, advertisers were using these categories to target individuals with the assumption that they are, for example, homosexual. Therefore, the effect was that individuals were being singled out and targeted on the basis of their sexuality. This is deeply concerning, and it is the ICO’s intention as a concerned authority under the GDPR to work via the one-stop-shop system with the Irish Data Protection Commission to see if there is scope to undertake a wider examination of online platforms’ use of special categories of data in their targeted advertising models.

So, essentially, the regulator is saying it will work with other EU data protection authorities to push for a wider, structural investigation of online ad targeting platforms which put users into categories based on inferred interests — and certainly where those platforms are allowing targeting against special categories of data (such as data related to racial or ethnic origin, political opinions, religious beliefs, health data, sexuality).

Another concern the ICO raises that’s specifically attached to Facebook’s business is transparency around its so-called “partner categories” service — an option for advertisers that allows them to use third party data (i.e. personal data collected by third party data brokers) to create custom audiences on its platform.

In March, ahead of a major update to the EU’s data protection framework, Facebook announced it would be “winding down” this service over the next six months.

But the ICO is going to investigate it anyway.

“A preliminary investigation of the service has raised significant concerns about transparency of use of the [partner categories] service for political advertising and wider concerns about the legal basis for the service, including Facebook’s claim that it is acting only as a processor for the third-party data providers,” it writes. “Facebook announced in March 2018 that it will be winding down this service over a six-month period, and we understand that it has already ceased in the EU. The ICO has also commenced a broader investigation into the service under the DPA 1998 (which will be concluded at a later date) as we believe it is in the public interest to do so.”

In conclusion on Facebook the regulator asserts the company has not been “sufficiently transparent to enable users to understand how and why they might be targeted by a political party or campaign”.

“Individuals can opt out of particular interests, and that is likely to reduce the number of ads they receive on political issues, but it will not completely block them,” it points out. “These concerns about transparency lie at the core of our investigation. Whilst these concerns about Facebook’s advertising model exist in general terms in relation to its use in the commercial sphere, the concerns are heightened when these tools are used for political campaigning.”

The regulator also looked at political campaign use of three other online ad platforms — Google, Twitter and Snapchat — although Facebook gets the lion’s share of its attention in the report given the platform has also attracted the lion’s share of UK political parties’ digital spending. (“Figures from the Electoral Commission show that the political parties spent £3.2 million on direct Facebook advertising during the 2017 general election,” it notes. “This was up from £1.3 million during the 2015 general election. By contrast, the political parties spent £1 million on Google advertising.”)

The ICO is recommending that all online platforms which provide advertising services to political parties and campaigns should include experts within the sales support team who can provide political parties and campaigns with “specific advice on transparency and accountability in relation to how data is used to target users”.

“Social media companies have a responsibility to act as information fiduciaries, as citizens increasingly live their lives online,” it further writes.

It also says it will work with the European Data Protection Board, and the relevant lead data protection authorities in the region, to ensure that online platforms comply with the EU’s new data protection framework (GDPR) — and specifically to ensure that users “understand how personal information is processed in the targeted advertising model, and that effective controls are available”.

“This includes greater transparency in relation to the privacy settings, and the design and prominence of privacy notices,” it warns.

Facebook’s use of dark pattern design and A/B-tested social engineering to obtain user consent for processing their data, while obfuscating its intentions for that data, has been a long-standing criticism of the company — but one which the ICO is here signaling is very much on the regulatory radar in the EU.

So expecting new laws — as well as lots more GDPR lawsuits — seems prudent.

The regulator is also pushing for all four online platforms to “urgently roll out planned transparency features in relation to political advertising to the UK” — in consultation with both relevant domestic oversight bodies (the ICO and the Electoral Commission).

In Facebook’s case, it has been developing policies around political ad transparency — amid a series of related data scandals in recent years, which have ramped up political pressure on the company. But self-regulation looks very unlikely to go far enough (or fast enough) to fix the real risks now being raised at the highest political levels.

“We opened this report by asking whether democracy has been disrupted by the use of data analytics and new technologies. Throughout this investigation, we have seen evidence that it is beginning to have a profound effect whereby information asymmetry between different groups of voters is beginning to emerge,” writes the ICO. “We are now at a crucial juncture where trust and confidence in the integrity of our democratic process risks being undermined if an ethical pause is not taken. The recommendations made in this report — if effectively implemented — will change the behaviour and compliance of all the actors in the political campaigning space.”

Another key policy recommendation the ICO is making is to urge the UK government to legislate “at the earliest opportunity” to introduce a statutory Code of Practice under the country’s new data protection law for the use of personal information in political campaigns.

The report also essentially calls out all the UK’s political parties for data protection failures — a universal problem that’s very evidently being supercharged by the rise of accessible and powerful online platforms which have enabled political parties to combine (and thus enrich) voter databases they are legally entitled to with all sorts of additional online intelligence that’s been harvested by the likes of Facebook and other major data brokers.

Hence the ICO’s concern about “developing a system of voter surveillance by default”. And why the commissioner is pushing for online platforms to “act as information fiduciaries”.

Or, in other words, without exercising great responsibility around people’s information, online ad platforms like Facebook risk becoming the enabling layer that breaks democracy and shatters civic society.

Particular concerns being attached by the ICO to political parties’ activities include: The purchasing of marketing lists and lifestyle information from data brokers without sufficient due diligence; a lack of fair processing; and use of third party data analytics companies with insufficient checks around consent. And the regulator says it has several related investigations ongoing.

In March, the information commissioner, Elizabeth Denham, foreshadowed the conclusions in this report, telling a UK parliamentary committee she would be recommending a code of conduct for political use of personal data, and pushing for increased transparency around how and where people’s data is flowing — telling MPs: “We need information that is transparent, otherwise we will push people into little filter bubbles, where they have no idea about what other people are saying and what the other side of the campaign is saying. We want to make sure that social media is used well.”

The ICO says now that it will work closely with government to determine the scope of the Code. It also wants the government to conduct a review of regulatory gaps.

We’ve reached out to the Cabinet Office for a government response to the ICO’s recommendations.

A Facebook spokesman declined to answer specific questions related to the report — instead sending us this short statement, attributed to its chief privacy officer, Erin Egan: “As we have said before, we should have done more to investigate claims about Cambridge Analytica and take action in 2015. We have been working closely with the ICO in their investigation of Cambridge Analytica, just as we have with authorities in the US and other countries. We’re reviewing the report and will respond to the ICO soon.”

Here’s the ICO’s summary of its ten policy recommendations:

1) The political parties must work with the ICO, the Cabinet Office and the Electoral Commission to identify and implement a cross-party solution to improve transparency around the use of commonly held data.

2) The ICO will work with the Electoral Commission, Cabinet Office and the political parties to launch a version of its successful Your Data Matters campaign before the next General Election. The aim will be to increase transparency and build trust and confidence amongst the electorate on how their personal data is being used during political campaigns.

3) Political parties need to apply due diligence when sourcing personal information from third party organisations, including data brokers, to ensure the appropriate consent has been sought from the individuals concerned and that individuals are effectively informed in line with transparency requirements under the GDPR. This should form part of the data protection impact assessments conducted by political parties.

4) The Government should legislate at the earliest opportunity to introduce a statutory Code of Practice under the DPA2018 for the use of personal information in political campaigns. The ICO will work closely with Government to determine the scope of the Code.

5) It should be a requirement that third party audits be carried out after referendum campaigns are concluded to ensure personal data held by the campaign is deleted, or if it has been shared, the appropriate consent has been obtained.

6) The Centre for Data Ethics and Innovation should work with the ICO and the Electoral Commission to conduct an ethical debate in the form of a citizen jury to understand further the impact of new and developing technologies and the use of data analytics in political campaigns.

7) All online platforms providing advertising services to political parties and campaigns should include experts within the sales support team who can provide political parties and campaigns with specific advice on transparency and accountability in relation to how data is used to target users.

8) The ICO will work with the European Data Protection Board (EDPB), and the relevant lead Data Protection Authorities, to ensure online platforms’ compliance with the GDPR – that users understand how personal information is processed in the targeted advertising model and that effective controls are available. This includes greater transparency in relation to the privacy settings and the design and prominence of privacy notices.

9) All of the platforms covered in this report should urgently roll out planned transparency features in relation to political advertising to the UK. This should include consultation and evaluation of these tools by the ICO and the Electoral Commission.

10) The Government should conduct a review of the regulatory gaps in relation to the content, provenance and jurisdictional scope of political advertising online. This should include consideration of requirements for digital political advertising to be archived in an open data repository to enable scrutiny and analysis of the data.

UK’s Information Commissioner will fine Facebook the maximum £500K over Cambridge Analytica breach

Facebook continues to face fallout over the Cambridge Analytica scandal, which revealed how user data was stealthily obtained by way of quizzes and then appropriated for other purposes, such as targeted political advertising. Today, the U.K. Information Commissioner’s Office (ICO) announced that it would be issuing the social network with its maximum fine, £500,000 ($662,000), after it concluded that it “contravened the law” — specifically the 1998 Data Protection Act — “by failing to safeguard people’s information.”

The ICO is clear that Facebook effectively broke the law by failing to keep users’ data safe: its systems allowed Dr Aleksandr Kogan, who developed an app called “This is your digital life” on behalf of Cambridge Analytica, to scrape the data of up to 87 million Facebook users. This included accessing the friends’ data of the individual accounts that had engaged with Dr Kogan’s app.

The ICO’s inquiry first started in May 2017 in the wake of the Brexit vote and questions over how parties could have manipulated the outcome using targeted digital campaigns.

Damian Collins, the MP who chairs the Digital, Culture, Media and Sport Committee that has been undertaking the investigation, said that as a result the DCMS committee will now demand more information from Facebook, including which other apps might also have been involved or used in a similar way by others, as well as what potential links all of this activity might have had to Russia. He is also gearing up to demand a full, independent investigation of the company, rather than the internal audit that Facebook has so far provided. A full statement from Collins is below.

The fine, and the follow-up questions that U.K. government officials are now asking, are a signal that Facebook — after months of grilling on both sides of the Atlantic amid a wider investigation — is not yet off the hook in the U.K. This will come as good news to those who watched the hearings (and non-hearings) in Washington, London and European Parliament and felt that Facebook and others walked away relatively unscathed. The reverberations are also being felt in other parts of the world. In Australia, a group earlier today announced that it was forming a class action lawsuit against Facebook for breaching data privacy as well. (Australia has also been conducting a probe into the scandal.)

The ICO also put forward three questions alongside its announcement of the fine, which it will now be seeking answers to from Facebook. In its own words:

  1. Who had access to the Facebook data scraped by Dr Kogan, or any data sets derived from it?
  2. Given Dr Kogan also worked on a project commissioned by the Russian Government through the University of St Petersburg, did anyone in Russia ever have access to this data or data sets derived from it?
  3. Did organisations who benefited from the scraped data fail to delete it when asked to by Facebook, and if so where is it now?

The DCMS committee has been conducting a wider investigation into disinformation and data use in political campaigns and it plans to publish an interim report on it later this month.

Collins’ full statement:

Given that the ICO is saying that Facebook broke the law, it is essential that we now know which other apps that ran on their platform may have scraped data in a similar way. This cannot be left to a secret internal investigation at Facebook. If other developers broke the law we have a right to know, and the users whose data may have been compromised in this way should be informed.

Facebook users will be rightly concerned that the company left their data far too vulnerable to being collected without their consent by developers working on behalf of companies like Cambridge Analytica. The number of Facebook users affected by this kind of data scraping may be far greater than has currently been acknowledged. Facebook should now make the results of their internal investigations known to the ICO, our committee and other relevant investigatory authorities.

Facebook state that they only knew about this data breach when it was first reported in the press in December 2015. The company has consistently failed to answer the questions from our committee as to who at Facebook was informed about it. They say that Mark Zuckerberg did not know about it until it was reported in the press this year. In which case, given that it concerns a breach of the law, they should state who was the most senior person in the company to know, why they decided people like Mark Zuckerberg didn’t need to know, and why they didn’t inform users at the time about the data breach. Facebook need to provide answers on these important points. These important issues would have remained hidden, were it not for people speaking out about them. Facebook’s response during our inquiry has been consistently slow and unsatisfactory.

The receivers of SCL Elections should comply with the law and respond to the enforcement notice issued by the ICO. It is also disturbing that AIQ have failed to comply with their enforcement notice.

Facebook has been in the crosshairs of the ICO over other data protection issues before, and has not come out well.