All posts in “Privacy”

UK spies using social media data for mass surveillance


Privacy rights group Privacy International says it has obtained evidence for the first time that UK spy agencies are collecting social media information on potentially millions of people.

It has also obtained letters it says show the intelligence agencies’ oversight body had not been informed that UK intelligence agencies had shared bulk databases of personal data with foreign governments, law enforcement and industry — raising concerns about effective oversight of the mass surveillance programs.

The documents have come out as a result of an ongoing legal challenge Privacy International has brought against UK intelligence agencies’ use of bulk personal data collection as an investigatory power. (The group also has various other active legal challenges, including to state hacking).

It now says the Investigatory Powers Commissioner’s Office (IPCO) oversight body “sought immediate inspection when secret practices came to light” as a result of its litigation.

The use by UK spooks of so-called bulk personal datasets (BPDs) — aka massive databases of personal information — was only publicly revealed in March 2015, via an Intelligence and Security Committee report, which also raised various concerns about their use.

Although the report revealed the existence of BPDs, it was heavily redacted, for example scrubbing info on exactly how many BPDs are held by the different agencies. Nor was it clear exactly where the agencies were sourcing the bulk data from.

It did specify that the stored and searchable data can include details such as an individual’s religion, racial or ethnic origin, political views, medical condition, sexual orientation, and legally privileged, journalistic or “otherwise confidential” information. It also specified that BPDs “vary in size from hundreds to millions of records”, and can be acquired by “overt and covert channels”.

A key concern of the committee at the time was that rules governing use of the datasets had not been defined in legislation (although the UK government has since passed a new investigatory powers framework that enshrines various state surveillance bulk powers in law).

But at the time of the report, privacy issues and other safeguards pertaining to BPDs had not been considered in public or in parliament.

Access to BPD data had also been authorized internally, without ministerial approval, and there were no legal penalties for misuse. Perhaps unsurprisingly, the report also revealed that all the intelligence agencies had dealt with cases of inappropriate access to BPDs.

The documents obtained by Privacy International now put a little more meat on the bones of BPDs. “New disclosure reveals that the UK intelligence agencies hold databases of our social media data,” the group writes today. “This is the first confirmed concrete example of the type of information collected by the UK intelligence agencies and held in large databases.

“The social media database potentially includes information about millions of people,” it further writes, adding: “It remains unclear exactly what aspects of our communications they hold and what other types of information the government agencies are collecting, beyond the broad unspecific categories previously identified such as ‘biographical details’, ‘commercial and financial activities’, ‘communications’, ‘travel data’, and ‘legally privileged communications’.”

In one of the new documents, a draft report from last month summarizing the findings of a 2017 audit of the operation of BPDs, the IPCO (which only took over oversight duties for UK investigatory powers last month) makes a direct reference to “social media data” when discussing how agencies handle different BPD databases, indicating that content from consumer social networks such as Facebook and Twitter is indeed ending up in spy agencies’ bulk databases. (No services are mentioned by name.)

Additional documents in the new bundle obtained by Privacy International show the IPCO flagging the role of private contractors that are given ‘administrator’ access to the information UK intelligence agencies collect, and raising concerns that there are currently no safeguards in place to prevent misuse of the systems by third party contractors.

Part of the UK government’s defense to the group’s legal challenge over intelligence agencies’ use of BPDs is that there are effective safeguards in place to prevent misuse. But Privacy International’s contention is that the new documents show otherwise, with the IPCO stating the Commissioner was never made aware of any practice of GCHQ sharing bulk data with industry.

Commenting in a statement, Privacy International solicitor Millie Graham Wood said: “The intelligence agencies’ practices in relation to bulk data were previously found to be unlawful. After three years of litigation, just before the court hearing we learn not only are safeguards for sharing our sensitive data non-existent, but the government has databases with our social media information and is potentially sharing access to this information with foreign governments.

“The risks associated with these activities are painfully obvious. We are pleased the IPCO is keen to look at these activities as a matter of urgency and [hope] the report [will be] publicly available in the near future.”

The six additional documents were disclosed to Privacy International on October 13. The group also notes it is back in court today for the BPD litigation.

A full list of the disclosures and documents pertaining to its bulk personal datasets challenge can be found here.

Apple responds to Senator Franken’s Face ID privacy concerns


Apple has now responded to a letter sent last month by Senator Al Franken, in which he asked the company to provide more information about the incoming Face ID authentication technology that is baked into its top-of-the-range iPhone X, due to go on sale early next month.

As we’ve previously reported, Face ID raises a range of security and privacy concerns because it encourages smartphone consumers to use a facial biometric for authenticating their identity, and specifically a sophisticated, full three-dimensional model of their face.

And while the tech is limited to one flagship iPhone for now, with other new iPhones retaining the physical home button plus fingerprint Touch ID biometric combo that Apple launched in 2013, that’s likely to change in future.

After all, Touch ID arrived on a single flagship iPhone before migrating onto additional Apple hardware, including the iPad and Mac. So Face ID will surely also spread to other Apple devices in the coming years.

That means if you’re an iOS user it may be difficult to avoid the tech being baked into your devices. So the Senator is right to be asking questions on behalf of consumers, even if most of what he’s asking has already been publicly addressed by Apple.

Last month Franken flagged what he dubbed “substantial questions” about how “Face ID will impact iPhone users’ privacy and security, and whether the technology will perform equally well on different groups of people”, asking Apple to offer “clarity to the millions of Americans who use your products”, to explain how it had weighed the privacy and security issues pertaining to the tech itself, and to detail any additional steps taken to protect users.

Here’s the full list of 10 questions the Senator put to the company:

1. Apple has stated that all faceprint data will be stored locally on an individual’s device as opposed to being sent to the cloud.

a. Is it currently possible – either remotely or through physical access to the device – for either Apple or a third party to extract and obtain usable faceprint data from the iPhone X?

b. Is there any foreseeable reason why Apple would decide to begin storing such data remotely?

2. Apple has stated that it used more than one billion images in developing the Face ID algorithm. Where did these one billion face images come from?

3. What steps did Apple take to ensure its system was trained on a diverse set of faces, in terms of race, gender, and age? How is Apple protecting against racial, gender, or age bias in Face ID?

4. In the unveiling of the iPhone X, Apple made numerous assurances about the accuracy and sophistication of Face ID. Please describe again all the steps that Apple has taken to ensure that Face ID can distinguish an individual’s face from a photograph or mask, for example.

5. Apple has stated that it has no plans to allow any third party applications access to the Face ID system or its faceprint data. Can Apple assure its users that it will never share faceprint data, along with the tools or other information necessary to extract the data, with any commercial third party?

6. Can Apple confirm that it currently has no plans to use faceprint data for any purpose other than the operation of Face ID?

7. Should Apple eventually determine that there would be reason to either begin storing faceprint data remotely or use the data for a purpose other than the operation of Face ID, what steps will it take to ensure users are meaningfully informed and in control of their data?

8. In order for Face ID to function and unlock the device, is the facial recognition system “always on,” meaning does Face ID perpetually search for a face to recognize? If so:

a. Will Apple retain, even if only locally, the raw photos of faces that are used to unlock (or attempt to unlock) the device?

b. Will Apple retain, even if only locally, the faceprints of individuals other than the owner of the device?

9. What safeguards has Apple implemented to prevent the unlocking of the iPhone X when an individual other than the owner of the device holds it up to the owner’s face?

10. How will Apple respond to law enforcement requests to access Apple’s faceprint data or the Face ID system itself?

In its response letter, Apple first points the Senator to existing public info, noting it has published a Face ID security white paper and a Knowledge Base article to “explain how we protect our customers’ privacy and keep their data secure”. It adds that this “detailed information” provides answers to “all of the questions you raise”.

But it also goes on to summarize how Face ID facial biometrics are stored, writing: “Face ID data, including mathematical representations of your face, is encrypted and only available to the Secure Enclave. This data never leaves the device. It is not sent to Apple, nor is it included in device backups. Face images captured during normal unlock operations aren’t saved, but are instead immediately discarded once the mathematical representation is calculated for comparison to the enrolled Face ID data.”

It further specifies in the letter that: “Face ID confirms attention by detecting the direction of your gaze, then uses neural networks for matching and anti-spoofing so you can unlock your phone with a glance.”

It also reiterates its prior claim that the chance of a random person being able to unlock your phone because their face fooled Face ID is approximately 1 in 1,000,000 (vs 1 in 50,000 for the Touch ID tech). After five unsuccessful match attempts a passcode will be required to unlock the device, it further notes.
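Taken at face value, those odds make Face ID roughly 20 times harder to fool by chance than Touch ID. Here's a quick back-of-the-envelope sketch (assuming each attempt is an independent random trial, which is a simplification) of how the five-attempt limit bounds an attacker's chances:

```swift
import Foundation

// Apple's published per-attempt false-match odds.
let faceIDRate = 1.0 / 1_000_000
let touchIDRate = 1.0 / 50_000

// A passcode is forced after five failed match attempts.
let maxAttempts = 5.0

// P(at least one false match in n tries) = 1 - (1 - p)^n, roughly n*p for tiny p.
let faceIDRisk = 1.0 - pow(1.0 - faceIDRate, maxAttempts)
let touchIDRisk = 1.0 - pow(1.0 - touchIDRate, maxAttempts)

print(faceIDRisk)  // ~0.000005, i.e. about 5 in a million before lockout
print(touchIDRisk) // ~0.0001, roughly 20x higher
```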

“Third-party apps can use system provided APIs to ask the user to authenticate using Face ID or a passcode, and apps that support Touch ID automatically support Face ID without any changes. When using Face ID, the app is notified only as to whether the authentication was successful; it cannot access Face ID or the data associated with the enrolled face,” it continues.
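For developers, the system-provided flow Apple describes is its LocalAuthentication framework, which hands the calling app only a pass/fail verdict. Here's a minimal sketch of that flow (the function name and prompt string are illustrative, not from Apple's letter):

```swift
import LocalAuthentication

// Ask the system to authenticate the user with Face ID or Touch ID.
// The app only learns whether authentication succeeded; it never
// sees the enrolled biometric data, which stays in the Secure Enclave.
func authenticateUser(completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?

    // One policy covers both Touch ID and Face ID, which is why apps
    // that support Touch ID automatically support Face ID.
    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                                    error: &error) else {
        completion(false) // no biometrics available; fall back to a passcode flow
        return
    }

    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Unlock your account") { success, _ in
        completion(success)
    }
}
```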

On questions about the accessibility of Face ID technology, Apple writes: “The accessibility of the product to people of diverse races and ethnicities was very important to us. Face ID uses facial matching neural networks that we developed using over a billion images, including IR and depth images collected in studies conducted with the participants’ informed consent.”

The company had already made the “billion images” claim during its Face ID presentation last month, although it’s worth noting that it’s not saying — and has never said — it trained the neural networks on images of a billion different people.

Indeed, Apple goes on to tell the Senator that it relied on a “representative group of people” — though it does not confirm exactly how many individuals, writing only that: “We worked with participants from around the world to include a representative group of people accounting for gender, age, ethnicity and other factors. We augmented the studies as needed to provide a high degree of accuracy for a diverse range of users.”

There’s obviously an element of commercial sensitivity at this point, in terms of Apple cloaking its development methods from competitors. So you can understand why it’s not disclosing more exact figures. But of course Face ID’s robustness in the face of diversity remains to be proven (or disproven) when iPhone X devices are out in the wild.

Apple also specifies that it has trained a neural network to “spot and resist spoofing” to defend against attempts to unlock the device with photos or masks, before concluding the letter with an offer to brief the Senator further if he has more questions.

Notably, Apple hasn’t engaged with Senator Franken’s question about responding to law enforcement requests. But given that enrolled Face ID data is stored locally on a user’s device in the Secure Enclave as a mathematical model, the technical architecture of Face ID has been structured to ensure Apple never takes possession of the data, and it could not therefore hand over something it does not hold.

The fact Apple’s letter does not literally spell that out is likely down to the issue of law enforcement and data access being rather politically charged.

In his response to the letter, Senator Franken appears satisfied with the initial engagement, though he also says he intends to take the company up on its offer to be briefed in more detail.

“I appreciate Apple’s willingness to engage with my office on these issues, and I’m glad to see the steps that the company has taken to address consumer privacy and security concerns. I plan to follow up with Apple to find out more about how it plans to protect the data of customers who decide to use the latest generation of iPhone’s facial recognition technology,” he writes.

“As the top Democrat on the Privacy Subcommittee, I strongly believe that all Americans have a fundamental right to privacy,” he adds. “All the time, we learn about and actually experience new technologies and innovations that, just a few years back, were difficult to even imagine. While these developments are often great for families, businesses, and our economy, they also raise important questions about how we protect what I believe are among the most pressing issues facing consumers: privacy and security.”

Mobile phone companies appear to be providing your number and location to anyone who pays


You may remember that last year, Verizon (which owns Oath, which owns TechCrunch) was punished by the FCC for injecting information into its subscribers’ traffic that allowed them to be tracked without their consent. That practice appears to be alive and well despite being disallowed in a ruling last March: companies seem to be able to request your number, location, and other details from your mobile provider quite easily.

The possibility was discovered by Philip Neustrom, co-founder of Shotwell Labs, who documented it in a blog post earlier this week. He found a pair of websites which, if visited from a mobile data connection, report back in no time with numerous details: full name, billing zip code, current location (as inferred from cell tower data), and more.

It appears to be similar to the Unique Identifier Header (UIDH) used by Verizon. The UIDH was appended to HTTP requests made by Verizon customers, allowing websites they visited to see their location, billing data and so on (if they paid Verizon for the privilege, naturally). The practice, in common use by carriers for a decade or more, was highlighted in the last few years, and eventually the FCC required Verizon (and by extension other mobile providers) to get positive consent before implementing it.
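Because the UIDH was injected by the carrier in transit, it was invisible on the device itself; the only way to spot it was to have a server echo your request headers back at you. A hedged sketch of that check (the echo endpoint here is hypothetical; substitute any service that returns the headers it received):

```swift
import Foundation

// Hypothetical endpoint that echoes back the HTTP headers it received.
// Carrier header injection only happens on a mobile data connection,
// so run this over cellular, not wi-fi.
let url = URL(string: "https://example.com/echo-headers")!

URLSession.shared.dataTask(with: url) { data, _, _ in
    guard let data = data,
          let body = String(data: data, encoding: .utf8) else { return }

    // Verizon's injected tracking header was named X-UIDH.
    if body.contains("X-UIDH") {
        print("Carrier tracking header present in outbound traffic")
    } else {
        print("No X-UIDH header observed")
    }
}.resume()
```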

Now, this is not to say that the whole thing is some huge scam: that data could be very useful to, for instance, an administrator who wants to be sure that an employee’s phone is actually in the location its IP seems to indicate. Why bother with a text-based one-time password if a service can verify you’re you by querying your mobile provider? It’s at least a reasonable possibility.

And that’s what companies like Payfone and Danal are using it for; furthermore, users of their services would by definition be opting into this kind of tracking, so there’s no problem there.

I asked Payfone CEO Rodger Desai for a little clarification. He wrote back in an email:

There is a very rigorous framework of security and data privacy consent. The main issue is that with all the legitimate mobile change events fraudsters get in… For example, if you download a mobile banking app today, the bank is not sure if it is you on your new phone or someone acting as you – the fraudster only needs your bank password. PC techniques like certificates and device printing don’t work well – since it is a new phone.

But as Neustrom found out, mobile providers don’t appear to be working very hard to verify that consent. Both sites provide demos of their functionality, pinging mobile providers for data and presenting it to you.

Of course, if you want the demo to work, you kind of opt into the tracking as well. But where’s the text or email from the mobile provider asking you for verification? It seems that this kind of request could be made fraudulently by many means, since the providers don’t verify them in any way other than a few programmatic ones (matching IPs, etc).

Without rigorous consent standards, mobile companies may as well be selling the data indiscriminately the same way they were before advocacy groups took them to task for it. For now there doesn’t appear to be a way to officially opt out — but there also doesn’t appear to be a clear and present danger, such as an obvious scammer or wholesaler using this technique.

I’ve asked T-Mobile, AT&T, and Verizon whether they participate in this kind of program, providing subscriber details to anyone who pays, and who, in turn, may provide it to others. I’ve also asked the FCC if this practice is of concern to them. I’ll update this post if I hear back.

Featured Image: Zap Art/Getty Images

User outcry prompts OnePlus to scale back its excessive data collection


Earlier this week, it was revealed that independent phone maker OnePlus was collecting all manner of information from phones running its OxygenOS, without telling users, of course. Caught red-handed, the company is backing off from the opt-out data collection program, giving users a choice up front instead of one buried in the options.

The offending telemetry was discovered earlier this week, when software engineer Christopher Moore happened to snoop on his phone’s traffic for a hacking challenge. He noticed that the device was phoning home to OnePlus when it crashed — which is expected and benign — but also every time the phone was woken up or put to sleep — which is odd and intrusive.

Looking closer, he found that the device was also repeatedly sending its IMEI, phone number, serial number, wi-fi network and MAC address, and numerous other metrics. Having the option to send this information with, say, a bug report would be understandable, but it was sending this information every time an app was launched.

OnePlus said at the time that the data was to “fine tune our software according to user behavior” and “provide better after-sales support.” It could be partially turned off in advanced settings, or totally removed with a command line tool.

Of all phone manufacturers, of course, OnePlus probably has the users most likely to go snooping around for this kind of stuff, so it’s strange that such plainly intrusive metrics would be employed. Users were clearly bothered, so yesterday OnePlus provided a more substantial response on its support forums.

After the standard “We take our users – and their data privacy – very seriously” boilerplate and assuring people that this was all a big misunderstanding, OnePlus co-founder Carl Pei explained the practical steps the company was taking:

By the end of October, all OnePlus phones running OxygenOS will have a prompt in the setup wizard that asks users if they want to join our user experience program. The setup wizard will clearly indicate that the program collects usage analytics. In addition, we will include a terms of service agreement that further explains our analytics collection. We would also like to share we will no longer be collecting telephone numbers, MAC Addresses and WiFi information.

He also notes that the company never sent this information to any third parties, which is good. But opting out of the “user experience program” doesn’t appear to stop telemetry data from being sent — it just means “your usage analytics will not be tied to your device information.” Users may prefer to know that their data is not being collected at all, but for now that option appears to be limited to the same command-line tools as it was before.

Twitter explains why Rose McGowan’s account got suspended

Image: Actress Rose McGowan attends a premiere. (Imeh Akpanudosen/Getty Images)

Twitter is finally breaking its silence on why Rose McGowan was temporarily suspended from its platform.

The actress, who has been vocal about Harvey Weinstein as dozens of women have alleged he sexually harassed and assaulted them, announced on Instagram Wednesday night that Twitter suspended her account for 12 hours for violating the platform’s rules.

Though it was unclear at the time which tweet violated Twitter’s rules, the official Twitter Safety account clarified McGowan had included a private phone number in one of her tweets, which is prohibited in the site’s Privacy Policy.

Twitter’s account explained the company has been in touch with McGowan’s team and temporarily locked the account as a result of the tweet that included the phone number. 

The site’s policy reads: “Posting another person’s private and confidential information is a violation of the Twitter Rules.” It goes on to list personal phone numbers as an example of private information.

Twitter said McGowan’s account has since been unlocked and the offending tweet has been removed.

McGowan reached a settlement with Weinstein in the late ’90s, but she has been especially vocal about the Hollywood producer since other women came forward with their own similar experiences.

McGowan didn’t hold back condemning Weinstein on the platform, so when Twitter suspended her account without explanation it led to a great deal of backlash.

Many felt the suspension further discouraged women from speaking up about sexual assault, but Twitter explained it is “proud to empower and support the voices on our platform, especially those that speak truth to power.”

“We stand with the brave women and men who use Twitter to share their stories and will work hard every day to improve our processes to protect those voices.”

In the future, Twitter claimed it would be “clearer about these policies and decisions” and the site’s CEO Jack Dorsey acknowledged the team needs to be “a lot more transparent in our actions in order to build trust.”

The way Twitter enforces its own rules has come under fire many times before, most notably in reference to President Trump’s behavior on the platform, such as when he threatened North Korea with violence.

Last month Twitter announced that though it holds all accounts to the same rules, it assesses the “newsworthiness” of a tweet that’s been reported.

It seems Twitter will have to work a lot harder to ensure its rules are upheld equally in the future.
