All posts in “Privacy”

FTC tells ISPs to disclose exactly what information they collect on users and what it’s for

The Federal Trade Commission, in what could be considered a prelude to new regulatory action, has issued an order to several major internet service providers requiring them to share every detail of their data collection practices. The information could expose patterns of abuse or otherwise troubling data use against which the FTC — or states — may want to take action.

The letters requesting info (detailed below) went to Comcast, Google, T-Mobile, and both the fixed and wireless sub-companies of Verizon and AT&T. These “represent a range of large and small ISPs, as well as fixed and mobile Internet providers,” an FTC spokesperson said. I’m not sure which is meant to be the small one, but I welcome any information the agency can extract from any of them.

Since the Federal Communications Commission abdicated its role in enforcing consumer privacy at these ISPs when it and Congress allowed the Broadband Privacy Rule to be overturned, others have taken up the torch, notably California and even individual cities like Seattle. But for enterprises spanning the nation, national-level oversight is preferable to a patchwork approach, and so it may be that the FTC is preparing to take a stronger stance.

To be clear, the FTC already has consumer protection rules in place and could already go after an internet provider if it were found to be abusing the privacy of its users — you know, selling their location to anyone who asks or the like. (Still no action there, by the way.)

But the evolving media and telecom landscape, in which we see enormous companies devouring one another to best provide as many complementary services as possible, requires constant reevaluation. As the agency writes in a press release:

The FTC is initiating this study to better understand Internet service providers’ privacy practices in light of the evolution of telecommunications companies into vertically integrated platforms that also provide advertising-supported content.

Although the FTC is always extremely careful with its words, this statement gives a good idea of what it’s concerned about. If Verizon (our parent company’s parent company) wants to offer not just the connection you get on your phone, but the media you request, the ads you are served, and the tracking you never heard of, it needs to show that these businesses are not somehow shirking rules behind the scenes.

For instance, if Verizon Wireless says it doesn’t collect or share information about what sites you visit, but the mysterious VZ Snooping Co (fictitious, I should add) scoops all that up and then sells it for peanuts to its sister company, that could amount to a deceptive practice. Of course it’s rarely that simple (though don’t rule it out), but the only way to be sure is to comprehensively question everyone involved and carefully compare the answers with real-world practices.

How else would we catch shady zero-rating practices, zombie cookies, backdoor deals, or lip service to existing privacy laws? It takes a lot of poring over data and complaints by the detail-oriented folks at these regulatory bodies to find things out.

To that end, the letters to ISPs ask for a whole boatload of information on companies’ data practices. Here’s a summary:

  • Categories of personal information collected about consumers or devices, including the purposes, methods, and sources of collection
  • How the data has been or is being used
  • Third parties that provide or are provided this data, and what limitations are imposed thereupon
  • How such data is combined with other types of information, and how long it is retained
  • Internal policies and practices limiting access to this information by employees or service providers
  • Any privacy assessments done to evaluate associated risks and policies
  • How data is aggregated, anonymized, or deidentified (and how those terms are defined)
  • How aggregated data is used, shared, etc.
  • “[A]ny data maps, inventories, or other charts, schematics, or graphic depictions” of information collection and storage
  • The total number of consumers who have “visited or otherwise viewed or interacted with” the privacy policy
  • Whether consumers are given any choice in the collection and retention of data, and what the default choices are
  • The total number and percentage of users that have exercised such a choice, and what choices they made
  • Whether consumers are incentivized (or threatened) into opting into data collection, and how those programs work
  • Any process for allowing consumers to “access, correct, or delete” their personal information
  • Data deletion and retention policies for such information

Substantial, right?

Needless to say some of this information may not be particularly flattering to ISPs. If only 1 percent of consumers have ever chosen to share their information, for instance, that reflects badly on sharing it by default. And if data is capable of being combined across categories or services to de-anonymize users, even potentially, that’s another major concern.

The FTC representative declined to comment on whether there would be any collaboration with the FCC on this endeavor, whether it was preliminary to any other action, and whether it can or will independently verify the information provided by the ISPs contacted. That’s an important point, considering how poorly these same companies represented their coverage data to the FCC for its yearly broadband deployment report. A reality check would be welcome.

You can read the rest of the letter here (PDF).

Android users’ security and privacy at risk from shadowy ecosystem of pre-installed software, study warns

A large-scale independent study of pre-installed Android apps has cast a critical spotlight on the privacy and security risks that preloaded software poses to users of the Google-developed mobile platform.

The researchers behind the paper, which has been published in preliminary form ahead of a future presentation at the IEEE Symposium on Security and Privacy, unearthed a complex ecosystem of players with a primary focus on advertising and “data-driven services” — which they argue the average Android user is unlikely to be aware of (and likely lacks the ability to uninstall or evade, given the baked-in software’s privileged access to data and resources).

The study, which was carried out by researchers at the Universidad Carlos III de Madrid (UC3M) and the IMDEA Networks Institute, in collaboration with the International Computer Science Institute (ICSI) at Berkeley and Stony Brook University in New York, encompassed more than 82,000 pre-installed Android apps across more than 1,700 devices manufactured by 214 brands, according to the IMDEA institute.

“The study shows, on the one hand, that the permission model on the Android operating system and its apps allow a large number of actors to track and obtain personal user information,” it writes. “At the same time, it reveals that the end user is not aware of these actors in the Android terminals or of the implications that this practice could have on their privacy. Furthermore, the presence of this privileged software in the system makes it difficult to eliminate it if one is not an expert user.”

An example of a well-known app that can come pre-installed on certain Android devices is Facebook.

Earlier this year the social network giant was revealed to have inked an unknown number of agreements with device makers to preload its app. The company has claimed these pre-installs are just placeholders unless or until a user chooses to actively engage with and download the Facebook app — but Android users essentially have to take those claims on trust, with no ability to verify them (short of finding a friendly security researcher to conduct a traffic analysis) nor to remove the app from their device themselves. Facebook pre-loads can only be disabled, not deleted entirely.

The company’s preloads also sometimes include a handful of other Facebook-branded system apps which are even less visible on the device and whose function is even more opaque.

Facebook previously confirmed to TechCrunch there’s no ability for Android users to delete any of its preloaded Facebook system apps either.

“Facebook uses Android system apps to ensure people have the best possible user experience including reliably receiving notifications and having the latest version of our apps. These system apps only support the Facebook family of apps and products, are designed to be off by default until a person starts using a Facebook app, and can always be disabled,” a Facebook spokesperson told us earlier this month.

But the social network is just one of scores of companies involved in a sprawling, opaque and seemingly interlinked data gathering and trading ecosystem that Android supports and which the researchers set out to shine a light into.

In all, the researchers identified 1,200 developers behind the pre-installed software in the data-set they examined, as well as more than 11,000 third-party libraries (SDKs). Many of the preloaded apps were found to display what the researchers dub potentially dangerous or undesired behavior.

The data-set underpinning their analysis was collected via crowd-sourcing methods — using a purpose-built app (called Firmware Scanner), and pulling data from the Lumen Privacy Monitor app. The latter provided the researchers with visibility on mobile traffic flow — via anonymized network flow metadata obtained from its users. 

They also crawled the Google Play Store to compare their findings on pre-installed apps with publicly available apps — and found that just 9% of the package names in their dataset were publicly indexed on Play. 
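
As a rough illustration of that comparison (a sketch, not the researchers’ actual tooling), checking whether a package name is publicly indexed boils down to asking whether its Play Store details page exists. The second sample package name below is invented:

```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Probe the public Play Store listing for a package name. A 404 means the
// package is not publicly indexed, which the study found to be the case
// for roughly 91% of the pre-installed apps in its dataset.
fun isIndexedOnPlay(packageName: String): Boolean {
    val conn = URL("https://play.google.com/store/apps/details?id=$packageName")
        .openConnection() as HttpURLConnection
    conn.requestMethod = "HEAD"
    return try {
        conn.responseCode == HttpURLConnection.HTTP_OK
    } finally {
        conn.disconnect()
    }
}

fun main() {
    // "com.vendor.example.telemetry" is a made-up package name for the demo.
    val preinstalled = listOf("com.android.chrome", "com.vendor.example.telemetry")
    val indexed = preinstalled.count(::isIndexedOnPlay)
    println("$indexed of ${preinstalled.size} package names are publicly indexed on Play")
}
```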

Another concerning finding relates to permissions. In addition to the standard permissions defined in Android (i.e. those the user can control), the researchers say they identified more than 4,845 owner or “personalized” permissions defined by different actors in the manufacture and distribution of devices.

So that means they found systematic workarounds of Android’s user permission model, enabled by scores of commercial deals cut in a non-transparent, data-driven background software ecosystem.

“This type of permission allows the apps advertised on Google Play to evade Android’s permission model to access user data without requiring their consent upon installation of a new app,” writes the IMDEA institute.
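
To make the mechanism concrete, here’s a minimal sketch of how a vendor-defined custom permission can gate access to user data without ever triggering a runtime consent prompt. The permission name and function are invented for illustration, not taken from the study:

```kotlin
import android.content.Context
import android.content.pm.PackageManager

// Hypothetical vendor-defined ("personalized") permission of the kind the
// study catalogs. A pre-installed app can declare it in its manifest, e.g.:
//   <permission android:name="com.vendor.example.READ_NETWORK_LOGS"
//               android:protectionLevel="signature" />
// Because this is not a "dangerous" runtime permission, the device owner
// is never shown a consent prompt for it.
const val VENDOR_PERMISSION = "com.vendor.example.READ_NETWORK_LOGS"

// Gate a data-sharing entry point on the custom permission: only callers
// holding the vendor permission get through -- typically apps signed with
// the same key, i.e. the vendor's own pre-installed partners.
fun callerMayReadLogs(context: Context): Boolean =
    context.checkCallingPermission(VENDOR_PERMISSION) ==
        PackageManager.PERMISSION_GRANTED
```

Nothing in that pattern is exotic by itself; the researchers’ point is that thousands of such permissions exist across devices, with no central registry of what they gate or who holds them.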

The top-line conclusion of the study is that the supply chain around Android’s open source model is characterized by a lack of transparency — which in turn has allowed an ecosystem rife with potentially harmful behaviors, and even backdoored access to sensitive data, to grow unchecked and become established, all without most Android users’ consent or awareness. (On the latter front the researchers carried out a small-scale survey of consent forms of some Android phones to examine user awareness.)

tl;dr the phrase ‘if it’s free you’re the product’ is a too trite cherry atop a staggeringly large yet entirely submerged data-gobbling iceberg. (Not least because Android smartphones don’t tend to be entirely free.)

“Potential partnerships and deals — made behind closed doors between stakeholders — may have made user data a commodity before users purchase their devices or decide to install software of their own,” the researchers warn. “Unfortunately, due to a lack of central authority or trust system to allow verification and attribution of the self-signed certificates that are used to sign apps, and due to a lack of any mechanism to identify the purpose and legitimacy of many of these apps and custom permissions, it is difficult to attribute unwanted and harmful app behaviors to the party or parties responsible. This has broader negative implications for accountability and liability in this ecosystem as a whole.”

The researchers go on to make a series of recommendations intended to address the lack of transparency and accountability in the Android ecosystem — including suggesting the introduction and use of certificates signed by globally-trusted certificate authorities, or a certificate transparency repository “dedicated to providing details and attribution for certificates used to sign various Android apps, including pre-installed apps, even if self-signed”.

They also suggest Android devices should be required to document all pre-installed apps, plus their purpose, and name the entity responsible for each piece of software — and do so in a manner that is “accessible and understandable to users”.

“[Android] users are not clearly informed about third-party software that is installed on their devices, including third-party tracking and advertising services embedded in many pre-installed apps, the types of data they collect from them, the capabilities and the amount of control they have on their devices, and the partnerships that allow information to be shared and control to be given to various other companies through custom permissions, backdoors, and side-channels. This necessitates a new form of privacy policy suitable for preinstalled apps to be defined and enforced to ensure that private information is at least communicated to the user in a clear and accessible way, accompanied by mechanisms to enable users to make informed decisions about how or whether to use such devices without having to root their devices,” they argue, calling for overhaul of what’s long been a moribund T&Cs system, from a consumer rights point of view.

In conclusion they couch the study as merely scratching the surface of “a much larger problem”, saying their hope for the work is to bring more attention to the pre-installed Android software ecosystem and encourage more critical examination of its impact on users’ privacy and security.

They also write that they intend to continue to work on improving the tools used to gather the data-set, as well as saying their plan is to “gradually” make the data-set itself available to the research community and regulators to encourage others to dive in.  

Telegram adds ‘delete everywhere’ nuclear option to private chats — killing chat history

Telegram has added a feature that lets a user delete messages in one-to-one private chats, after the fact, and not only from their own inbox.

The new ‘nuclear option’ delete feature allows a user to selectively delete their own messages and/or messages sent by the other person in the chat. They don’t even have to have composed the original message or begun the thread to do so. They can just decide it’s time.

Let that sink in.

All it now takes is a few taps to wipe all trace of a historical one-to-one communication — from both your own inbox and the inbox of whoever else you were chatting with (assuming they’re also running the latest version of Telegram’s app).

NB: An earlier version of this article incorrectly stated this can be done in group chats too. However only a group admin has the power to blanket ‘delete everywhere’ messages in a group chat; non-admin members of a group chat can only delete messages from their own inboxes so only an admin can purge group chat history.  

Just over a year ago Facebook’s founder Mark Zuckerberg was criticized for silently and selectively testing a similar feature by deleting messages he’d sent from his interlocutors’ inboxes — leaving absurdly one-sided conversations. The episode was dubbed yet another Facebook breach of user trust.

Facebook later rolled out a much diluted Unsend feature — giving all users the ability to recall a message they’d sent but only within the first 10 minutes.

Telegram has gone much, much further. This is a perpetual, universal unsend of anything in a private chat.

The “delete any message in both ends in any private chat, anytime” feature has been added in an update to version 5.5 of Telegram — which the messaging app bills as offering “more privacy”, among a slate of other updates including search enhancements and more granular controls.

To delete a message from both ends a user taps on the message, selects ‘delete’ and is then offered a choice of ‘delete for [the name of the other person in the chat]’ or ‘delete for me’. Selecting the former deletes the message everywhere, while the latter just removes it from your own inbox.

Explaining the rationale for adding such a nuclear option via a post to his public Telegram channel yesterday, founder Pavel Durov argues the feature is necessary because of the risk of old messages being taken out of context — suggesting the problem is getting worse as the volume of private data stored by chat partners continues to grow exponentially.

“Over the last 10-20 years, each of us exchanged millions of messages with thousands of people. Most of those communication logs are stored somewhere in other people’s inboxes, outside of our reach. Relationships start and end, but messaging histories with ex-friends and ex-colleagues remain available forever,” he writes.

“An old message you already forgot about can be taken out of context and used against you decades later. A hasty text you sent to a girlfriend in school can come haunt you in 2030 when you decide to run for mayor.”

Durov goes on to claim that the new wholesale delete gives users “complete control” over messages, regardless of who sent them.

However that’s not really what it does. More accurately, it removes control from everyone in any private chat, and opens the door to the most paranoid, the lowest common denominator, and/or a sort of general entropy/anarchy — allowing anyone in a private thread to edit or even completely nuke the chat history at any moment, if they so wish.

The feature could allow for self-serving, selectively silent and/or malicious edits that are intended to gaslight/screw with others, such as by making them look mad or bad. (A quick screengrab later and a ‘post-truth’ version of a chat thread is ready for sharing elsewhere, where it could be passed off as a genuine conversation even though it’s manipulated and therefore fake.)

Or else the motivation for editing chat history could be a genuine concern over privacy, such as to be able to remove sensitive or intimate stuff — say after a relationship breaks down.

Or just for kicks/the lolz between friends.

Either way, whoever deletes first seizes control of the chat history — taking control away from the other person in the process. RIP consent. This is possible because Telegram’s implementation of the super delete feature covers all messages, not just your own, and literally removes all trace of the deleted comms.

So unlike on rival messaging app WhatsApp — which also lets users delete a message for everyone in a chat after the fact of sending it (though in that case the delete everywhere feature is strictly limited to messages a person sent themselves) — there is no notification automatically baked into the chat history to record that a message was deleted.

There’s no record, period. The ‘record’ is purged. There’s no sign at all there was ever a message in the first place.

We tested this — and, well, wow.

It’s hard to think of a good reason not to create, at the very least, a record that a message was deleted — which would offer a check on misuse.

But Telegram has not offered anything. Anyone can secretly and silently purge the private record.

Again, wow.

There’s also no way for a user to recall a deleted message after deleting it (even the person who hit the delete button). At face value it appears to be gone for good. (A security audit would be required to determine whether a copy lingers anywhere on Telegram’s servers for standard chats; only its ‘secret chats’ feature uses end-to-end encryption which it claims “leave no trace on our servers”.)

In our tests on iOS we also found that no notification is sent when a message is deleted from a Telegram private chat, so the other person in an old convo might simply never notice changes have been made, or not until long after. After all, human memory is far from perfect, and old chat threads are exactly the sort of fast-flowing communication medium where it’s really easy to forget the exact details of what was said.

Durov makes that point himself in defence of enabling the feature, arguing in favor of it so that silly stuff you once said can’t be dredged back up to haunt you.

But it cuts both ways. (The other way being the ability for the sender of an abusive message to delete it and pretend it never existed, for example, or for a flasher to send and subsequently delete dick pics.)

The feature is so powerful there’s clearly massive potential for abuse. Whether that’s by criminals using Telegram to sell drugs or traffic other stuff illegally, and hitting the delete everywhere button to cover their tracks and purge any record of their nefarious activity; or by coercive/abusive individuals seeking to screw with a former friend or partner.

The best way to think of Telegram now is that all private communications in the app are essentially ephemeral.

Anyone you’ve ever chatted to one-on-one could decide to delete everything you said (or they said) and go ahead without your knowledge, let alone your consent.

The lack of any notification that a message has been deleted will certainly open Telegram to accusations it’s being irresponsible by offering such a nuclear delete option with zero guard rails. (And, indeed, there’s no shortage of angry comments on its tweet announcing the feature.)

Though the company is no stranger to controversy and has structured its business intentionally to minimize the risk of it being subject to any kind of regulatory and/or state control, with servers spread opaquely all over the world, and a nomadic development operation which sees its coders regularly switch the country they’re working out of for months at a time.

Durov himself acknowledges there is a risk of misuse of the feature in his channel post, where he writes: “We know some people may get concerned about the potential misuse of this feature or about the permanence of their chat histories. We thought carefully through those issues, but we think the benefit of having control over your own digital footprint should be paramount.”

Again, though, that’s a one-sided interpretation of what’s actually being enabled here. Because the feature inherently removes control from anyone it’s applied to. So it only offers ‘control’ to the person who first thinks to exercise it. Which is in itself a form of massive power asymmetry.

For historical chats the person who deletes first might be someone with something bad to hide. Or it might be the most paranoid person with the best threat awareness and personal privacy hygiene.

But suggesting the feature universally hands control to everyone simply isn’t true.

It’s an argument in line with a libertarian way of thinking that lauds the individual as having agency — and therefore seeks to empower the person who exercises it. (And Durov is a long-time advocate of libertarianism, so the design choice meshes with his personal philosophy.)

On a practical level, the presence of such a nuclear delete on Telegram’s platform arguably means the only sensible option for users who don’t want to abandon the platform is to proactively delete all private chats on a regular, rolling basis — to minimize the risk of potential future misuse and/or manipulation of their chat history. (Albeit, what doing that will do to your friendships is a whole other question.)

Users may also wish to back up their own chats, because they can no longer rely on Telegram to do that for them.

While, at the other end of the spectrum — for those wanting to be really sure they totally nuke all trace of a message — there are a couple of practical pitfalls that could throw a spanner in the works.

In our tests we found Telegram’s implementation did not delete push notifications. So with recently sent and deleted messages it was still possible to view the content of a deleted message via a persisting push notification even after the message itself had been deleted within the app.

Though of course, for historical chats — which is where this feature is being aimed; aka rewriting chat history — there’s not likely to be any push notifications still floating around months or even years later to cause a headache.

The other major issue is that the feature is unlikely to function properly on earlier versions of Telegram. So if you go ahead and ‘delete everywhere’, there’s no way to try and delete a message again if it was not successfully purged at the other end because the other person in the chat was still running an older version of the app.

Plus of course if anyone has screengrabbed your private chats with them already there’s nothing you can do about that.

In terms of wider impact, the nuclear delete might also have the effect of encouraging more screengrabbing (or other backups) — as users hedge against future message manipulation and/or purging. Or to make sure they have a record of any abusive messages.

That would just create more copies of your private messages in places you can’t control at all, and where they could potentially leak if the person creating the backups doesn’t secure them properly — so the whole thing risks being counterproductive to privacy and security, really. Because users are being encouraged to mistrust everything.

Durov claims he’s comfortable with the contents of his own Telegram inbox, writing on his channel that “there’s not much I would want to delete for both sides” — while simultaneously claiming that “for the first time in 23 years of private messaging, I feel truly free and in control”.

The truth is the sensation of control he’s feeling is fleeting and relative.

In another test we performed we were able to delete private messages from Durov’s own inbox, including missives we’d sent to him in a private chat and one he’d sent us. (At least, in so far as we could tell — not having access to Telegram servers to confirm; but the delete option was certainly offered and content (both ours and his) disappeared from our end after we hit the relevant purge button.)

Only Durov could confirm for sure that the messages have gone from his end too. And most probably he’d have trouble doing so as it would require incredible memory for minor detail. But the point is if the deletion functioned as Telegram claims it does, purging equally at both ends, then Durov was not in control at all because we reached right into his inbox and selectively rubbed some stuff out. He got no say at all.

That’s a funny kind of agency and a funny kind of control.

One thing certainly remains in Telegram users’ control: The ability to choose your friends — and choose who you talk to privately.

Turns out you need to exercise that power very wisely.

Otherwise, well, other encrypted messaging apps are available…

Facebook staff raised concerns about Cambridge Analytica in September 2015, per court filing

Further details have emerged about when and how much Facebook knew about data-scraping by the disgraced and now defunct Cambridge Analytica political data firm.

Last year a major privacy scandal hit Facebook after it emerged CA had paid GSR, a developer with access to Facebook’s platform, to extract personal data on as many as 87 million Facebook users without proper consent.

Cambridge Analytica’s intention was to use the data to build psychographic profiles of American voters to target political messages — with the company initially working for Ted Cruz’s and later Donald Trump’s presidential campaigns.

But employees at Facebook appear to have raised internal concerns about CA scraping user data in September 2015 — i.e. months earlier than Facebook previously told lawmakers it became aware of the GSR/CA breach (December 2015).

The latest twist in the privacy scandal has emerged via a redacted court filing in the U.S. — where the District of Columbia is suing Facebook in a consumer protection enforcement case.

Facebook is seeking to have documents pertaining to the case sealed, while the District argues there is nothing commercially sensitive to require that.

In its opposition to Facebook’s motion to seal the document, the District includes a redacted summary (screengrabbed below) of the “jurisdictional facts” it says are contained in the papers Facebook is seeking to keep secret.

According to the District’s account, a Washington, DC-based Facebook employee warned others in the company about Cambridge Analytica’s data-scraping practices as early as September 2015.

Under questioning in Congress last April, Mark Zuckerberg was asked directly by congressman Mike Doyle when Facebook had first learned about Cambridge Analytica using Facebook data — and whether specifically it had learned about it as a result of the December 2015 Guardian article (which broke the story).

Zuckerberg responded with a “yes” to Doyle’s question.

Facebook repeated the same line to the U.K.’s Digital, Culture, Media and Sport (DCMS) committee last year, over a series of hearings with less senior staffers.

Damian Collins, the chair of the DCMS committee — which made repeat requests for Zuckerberg himself to testify in front of its enquiry into online disinformation, only to be repeatedly rebuffed — tweeted yesterday that the new detail could suggest Facebook “consistently mislead” the British parliament.

The DCMS committee has previously accused Facebook of deliberately misleading its enquiry on other aspects of the CA saga, with Collins taking the company to task for displaying a pattern of evasive behavior.

The earlier charge that it misled the committee refers to a hearing in Washington in February 2018 — when Facebook sent its U.K. head of policy, Simon Milner, and its head of global policy management, Monika Bickert, to field DCMS’ questions — where the pair failed to inform the committee about a legal agreement Facebook had made with Cambridge Analytica in December 2015.

The committee’s final report was also damning of Facebook, calling for regulators to instigate antitrust and privacy probes of the tech giant.

Meanwhile, questions have continued to be raised about Facebook’s decision to hire GSR co-founder Joseph Chancellor, who reportedly joined the company around November 2015.

The question now is: if Facebook knew there were concerns about CA data-scraping prior to hiring Chancellor, the co-founder of the company that sold scraped Facebook user data to CA, why did it go ahead and hire him?

The GSR co-founder has never been made available by Facebook to answer questions from politicians (or press) on either side of the pond.

Last fall he was reported to have quietly left Facebook, with no comment from Facebook on the reasons behind his departure — just as it had never explained why it hired him in the first place.

But the new timeline that has emerged of what Facebook knew when makes those questions more pressing than ever.

Reached for a response to the details contained in the District of Columbia’s court filing, a Facebook spokeswoman sent us this statement:

Facebook was not aware of the transfer of data from Kogan/GSR to Cambridge Analytica until December 2015, as we have testified under oath.

In September 2015 employees heard speculation that Cambridge Analytica was scraping data, something that is unfortunately common for any internet service. In December 2015, we first learned through media reports that Kogan sold data to Cambridge Analytica, and we took action. Those were two different things.

Facebook did not engage with questions about any of the details and allegations in the court filing.

A little later in the court filing, the District of Columbia writes that the documents Facebook is seeking to seal are “consistent” with its allegations that “Facebook has employees embedded within multiple presidential candidate campaigns who… knew, or should have known… [that] Cambridge Analytica [was] using the Facebook consumer data harvested by [GSR’s] [Aleksandr] Kogan throughout the 2016 [United States presidential] election.”

It goes on to suggest that Facebook’s concern to seal the document is “reputational,” arguing — in another redacted segment (below) — that it might “reflect poorly” on Facebook that a DC-based employee had flagged Cambridge Analytica months prior to news reports of its improper access to user data.

“The company may also seek to avoid publishing its employees’ candid assessments of how multiple third-parties violated Facebook’s policies,” it adds, chiming with arguments made last year by GSR’s Kogan, who suggested the company failed to enforce the terms of its developer policy, telling the DCMS committee it therefore didn’t have a “valid” policy.

As we’ve reported previously, the U.K.’s data protection watchdog — which has an ongoing investigation into CA’s use of Facebook data — was passed information by Facebook as part of that probe, which showed that three “senior managers” had been involved in email exchanges, prior to December 2015, concerning the CA breach.

It’s not clear whether these exchanges are the same correspondence the District of Columbia has obtained and which Facebook is seeking to seal, or whether there were multiple email threads raising concerns about Cambridge Analytica.

The ICO passed the correspondence it obtained from Facebook to the DCMS committee — which last month said it had agreed at the request of the watchdog to keep the names of the managers confidential. (The ICO also declined to disclose the names or the correspondence when we made a Freedom of Information request last month — citing rules against disclosing personal data and its ongoing investigation into CA meaning the risk of release might be prejudicial to its investigation.)

In its final report, the committee said this internal correspondence indicated “profound failure of governance within Facebook” — writing:

[I]t would seem that this important information was not shared with the most senior executives at Facebook, leading us to ask why this was the case. The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.

We reached out to the ICO for comment on the information to emerge via the District of Columbia suit, and also to the Irish Data Protection Commission, the lead DPA for Facebook’s international business, which currently has 15 open investigations into Facebook or Facebook-owned businesses related to various security, privacy and data protection issues.

An ICO spokesperson told us: “We are aware of these reports and will be considering the points made as part of our ongoing investigation.”

Last year the ICO issued Facebook with the maximum possible fine under U.K. law for the CA data breach.

Shortly after, Facebook announced it would appeal, saying the watchdog had not found evidence that any U.K. users’ data was misused by CA.

A date for the hearing of the appeal set for earlier this week was canceled without explanation. A spokeswoman for the tribunal court told us a new date would appear on its website in due course.

This report was updated with comment from the ICO.

Opera’s VPN returns to its Android browser

Opera has a couple of tumultuous years behind it, but it looks like the Norwegian browser maker (now in the hands of a Chinese consortium) is finding its stride again and refocusing its efforts on its flagship mobile and desktop browsers. Before the sale, Opera offered a useful stand-alone and built-in VPN service. Somehow, the built-in VPN stopped working after the acquisition. My understanding is that this had something to do with the company being split into multiple parts, with the VPN service ending up on the wrong side of that divide. Today, it’s officially bringing this service back as part of its Android app.

The promise of the new Opera VPN in Opera for Android 51 is that it will give you more control over your privacy and improve your online security, especially on unsecured public WiFi networks. Opera says it uses 256-bit encryption and doesn’t keep a log or retain any activity data.

Since Opera now has Chinese owners, not everybody is going to feel comfortable using this service, though. When I asked the Opera team about this earlier this year at MWC in Barcelona, the company stressed that it is still based in Norway and operates under that country’s privacy laws. The message being that it may be owned by a Chinese consortium but that it’s still very much a Norwegian company.

If you do feel comfortable using the VPN, though, then getting started is pretty easy (I’ve been testing it in the beta version of Opera for Android for a while). Simply head to the settings menu, flip the switch, and you are good to go.

“Young people are being very concerned about their online privacy as they increasingly live their lives online,” said Wallman. “We want to make VPN adoption easy and user-friendly, especially for those who want to feel more secure on the Web but are not aware on how to do it. This is a free solution for them that works.”

What’s important to note here is that the point of the VPN is to protect your privacy, not to give you a way to route around geo-restrictions (though you can do that, too). That means you can’t choose a specific country as an endpoint, only ‘America,’ ‘Asia,’ and ‘Europe.’