All posts in “Privacy”

Apple’s Tim Cook slams Silicon Valley over ‘false promises’ and ‘chaos’

Apple CEO Tim Cook had a few words of wisdom for Stanford’s 2019 graduating class.

During his commencement speech at Stanford University on Sunday, Cook praised technology’s role in “remaking society,” but also warned about letting it go unchecked. He urged graduates to be fearless in building things, but to also take responsibility for their effects on society.

Without naming any specific companies, Cook threw shade at Silicon Valley for thinking “good intentions excuse away horrible outcomes.”

The Apple CEO indirectly called out companies such as Facebook and Theranos for abusing their positions of power without first considering the consequences.

“Lately it seems, this industry is becoming known for a less noble innovation: the belief that you can claim credit without accepting responsibility,” said Cook. “We see it every day now, with every data breach, every privacy violation, every blind-eye-turned-to-hate-speech. Fake news poisoning our national conversation. The false miracles in exchange for a drop of your blood.”

Several times, Cook reiterated the importance of tech companies thinking thoroughly about what they’re building before unleashing it onto the world. “If you build a chaos factory, you can’t dodge responsibility for the chaos; taking responsibility means having the courage to think things through,” he said.

“Our problems in technology, in politics, wherever are human problems,” Cook said. “From the Garden of Eden to today, it’s our humanity that got us into this mess and it’s our humanity that’s going to have to get us out.”

“If you build a chaos factory, you can’t dodge responsibility for the chaos.”

Cook also dedicated a good chunk of his speech to the importance of privacy in the face of increasing digital surveillance. Again, without calling out specific companies (hello, Facebook!), Cook said ignoring privacy will lead to a world of self-censorship.

“If we accept it as normal and unavoidable, that everything in our lives can be aggregated, sold, or even leaked in a hack, then we lose so much more than data,” said Cook. “We lose the freedom to be human.”

“Think about what’s at stake. Everything you write, everything you say, every topic of curiosity, every stray thought, every impulsive purchase, every moment of frustration or weakness, every gripe or complaint, every secret shared in confidence,” said Cook. “In a world without digital privacy, even if you’ve done nothing wrong other than think differently, you begin to censor yourself. Not entirely at first — just a little, bit by bit. To risk less, to hope less, to imagine less, to dare less, to create less, to try less, to talk less, to think less.”


Cook warned the “chilling effect of digital surveillance” would be profound and that it “touches everything.”

Cook’s Stanford speech echoes many he’s given over the years, in which he’s championed the importance of privacy.

At WWDC, Apple stressed it’s making privacy a foundational part of its products and services, rather than a feature consumers have to enable. One such new privacy service is “Sign in with Apple,” an alternative to Facebook and Google social logins that doesn’t track you across the internet.

Though Cook’s speech largely focused on advising graduates to take responsibility for their creations, he also briefly touched on the bravery of those in the Stonewall riots as well as what Steve Jobs’ death taught him. “When the dust settled, all I knew was that I was going to have to be the best version of myself that I could be.”

Oh, and fun fact: Cook spent four years on the sailing team at his alma mater, Auburn University. “Tying knots is hard,” he said. The more you know…


Every secure messaging app needs a self-destruct button

The growing presence of encrypted communications apps makes a lot of communities safer and stronger. But the possibility of physical device seizure and government coercion is growing as well, which is why every such app should have some kind of self-destruct mode to protect its users and their contacts.

End-to-end encryption like that used by Signal and (if you opt into it) WhatsApp is great at preventing governments and other malicious actors from accessing your messages while they are in transit. But as with nearly all cybersecurity matters, physical access to the device, the user, or both changes things considerably.

For example, take this Hong Kong citizen who was forced to unlock their phone and reveal their followers and other messaging data to police. It’s one thing to do this with a court order to see if, say, a person was secretly cyberstalking someone in violation of a restraining order. It’s quite another to use as a dragnet for political dissidents.

This particular protestor ran a Telegram channel that had a number of followers. But it could just as easily be a Slack room for organizing a protest, or a Facebook group, or anything else. For groups under threat from oppressive government regimes it could be a disaster if the contents or contacts from any of these were revealed to the police.

Just as you should be able to choose exactly what you say to police, you should be able to choose how much your phone can say as well. Secure messaging apps should be the vanguard of this capability.

There are already some dedicated “panic button” type apps, and Apple has thoughtfully developed an “emergency mode” (activated by hitting the power button five times quickly) that locks the phone to biometrics and will wipe it if it is not unlocked within a certain period of time. That’s effective against “Apple pickers” trying to steal a phone or during border or police stops where you don’t want to show ownership by unlocking the phone with your face.

Those are useful and we need more like them — but secure messaging apps are a special case. So what should they do?

The best-case scenario, where you have all the time in the world and internet access, isn’t really an important one. You can always delete your account and data voluntarily. What needs work is deleting your account under pressure.

The next best-case scenario is that you have perhaps a few seconds or at most a minute to delete or otherwise protect your account. Signal is very good about this: The deletion option is front and center in the options screen, and you don’t have to input any data. WhatsApp and Telegram require you to put in your phone number, which is not ideal — fail to do this correctly and your data is retained.

Signal, left, lets you get on with it. You’ll need to enter your number in WhatsApp (right) and Telegram.

Obviously it’s also important that these apps don’t let users accidentally and irreversibly delete their account. But perhaps there’s a middle road whereby you can temporarily lock it for a preset time period, after which it deletes itself if not unlocked manually. Telegram does have self-destructing accounts, but the shortest self-destruct period you can set is a month.

What really needs improvement is emergency deletion when your phone is no longer in your control. This could be a case of device seizure by police, or perhaps being forced to unlock the phone after you have been arrested. Whatever the case, there need to be options for a user to delete their account outside the ordinary means.

Here are a few options that could work (a rough sketch of the poison PIN idea follows the list):

  • Trusted remote deletion: Selected contacts are given the ability via a one-time code or other method to wipe each other’s accounts or chats remotely, no questions asked and no notification created. This would let, for instance, a friend who knows you’ve been arrested remotely remove any sensitive data from your device.
  • Self-destruct timer: Like Telegram’s feature, but better. If you’re going to a protest, or have been “randomly” selected for additional screening or questioning, you can just tell the app to delete itself after a certain duration (as little as a minute perhaps) or at a certain time of the day. Deactivate any time you like, or stall for the five required minutes for it to trigger.
  • Poison PIN: In addition to a normal unlock PIN, users can set a poison PIN that when entered has a variety of user-selectable effects. Delete certain apps, clear contacts, send prewritten messages, unlock or temporarily hard-lock the device, etc.
  • Customizable panic button: Apple’s emergency mode is great, but it would be nice to be able to attach conditions like the poison PIN’s. Sometimes all someone can do is smash that button.
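To make the poison PIN idea a bit more concrete, here is a minimal, hypothetical sketch in Python. None of this reflects any real app's implementation; the names (PoisonPinConfig, handle_unlock) and the stand-in wipe actions are invented purely for illustration.

```python
# Hypothetical sketch of a "poison PIN" unlock flow for a secure messaging app.
# Names and actions here are illustrative only, not any real app's API.

import hashlib
import hmac
import os
from dataclasses import dataclass, field
from typing import Callable, List


def hash_pin(pin: str, salt: bytes) -> bytes:
    # Slow, salted hash so a stored PIN can't be trivially brute-forced.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 200_000)


@dataclass
class PoisonPinConfig:
    salt: bytes
    normal_pin_hash: bytes
    poison_pin_hash: bytes
    # Actions to run silently when the poison PIN is entered.
    poison_actions: List[Callable[[], None]] = field(default_factory=list)


def handle_unlock(pin: str, cfg: PoisonPinConfig) -> bool:
    """Return True if the app should unlock normally."""
    entered = hash_pin(pin, cfg.salt)
    if hmac.compare_digest(entered, cfg.poison_pin_hash):
        # To an observer this looks like a failed unlock attempt, but the
        # user's chosen emergency actions run quietly in the background.
        for action in cfg.poison_actions:
            action()
        return False
    return hmac.compare_digest(entered, cfg.normal_pin_hash)


if __name__ == "__main__":
    salt = os.urandom(16)
    cfg = PoisonPinConfig(
        salt=salt,
        normal_pin_hash=hash_pin("1234", salt),
        poison_pin_hash=hash_pin("9999", salt),
        poison_actions=[
            lambda: print("wiping chats..."),       # stand-in for deleting messages
            lambda: print("clearing contacts..."),  # stand-in for clearing the contact list
        ],
    )
    print(handle_unlock("1234", cfg))  # True: normal unlock
    print(handle_unlock("9999", cfg))  # False: poison PIN entered, actions ran silently
```

The key property is that, to anyone watching, entering the poison PIN looks like an ordinary unlock attempt, while the user's chosen emergency actions run silently in the background.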

Obviously these open new avenues for calamity and abuse as well, which is why they will need to be explained carefully and perhaps initially hidden in “advanced options” and the like. But overall I think we’ll be safer with them available.

Eventually these roles may be filled by dedicated apps or by the developers of the operating systems on which they run, but it makes sense for the most security-forward app class out there to be the first in the field.

Facebook collected device data on 187,000 users using banned snooping app

Facebook obtained personal and sensitive device data on about 187,000 users of its now-defunct Research app, which Apple banned earlier this year after the app violated its rules.

The social media giant said in a letter to lawmakers — which TechCrunch obtained — that it collected data on 31,000 users in the U.S., including 4,300 teenagers. The rest of the collected data came from users in India.

Earlier this year, a TechCrunch investigation found both Facebook and Google were abusing their Apple-issued enterprise developer certificates, which are designed to let employees run iPhone and iPad apps used only inside the company. The investigation found the companies were building and distributing apps to consumers outside Apple’s App Store, in violation of Apple’s rules. The apps paid users in return for collecting data on how participants used their devices and for insight into their app habits, gained by accessing all of the network data flowing in and out of their devices.

Apple banned the apps by revoking Facebook’s enterprise developer certificate — and later Google’s. In doing so, the revocation knocked offline both companies’ fleets of internal iPhone and iPad apps that relied on the same certificates.

But in response to lawmakers’ questions, Apple said it didn’t know how many devices installed Facebook’s rule-violating app.

“We know that the provisioning profile for the Facebook Research app was created on April 19, 2017, but this does not necessarily correlate to the date that Facebook distributed the provisioning profile to end users,” said Timothy Powderly, Apple’s director of federal affairs, in his letter.

Facebook said the app dated back to 2016.

TechCrunch also obtained the letters sent by Apple and Google to lawmakers in early March, which were never made public.

These “research” apps relied on willing participants to download the app from outside the App Store and use the Apple-issued developer certificates to install it. The apps would then install a root network certificate, allowing them to collect all of the data flowing out of the device — like web browsing histories, encrypted messages, and mobile app activity, potentially including data from participants’ friends — for competitive analysis.

A response by Facebook about the number of users involved in Project Atlas. (Image: TechCrunch)

In Facebook’s case, the research app — dubbed Project Atlas — was a repackaged version of its Onavo VPN app, which Facebook was forced to remove from Apple’s App Store last year for gathering too much device data.

Just this week, Facebook relaunched its research app as Study, only available on Google Play and for users who have been approved through Facebook’s research partner, Applause. Facebook said it would be more transparent about how it collects user data.

Facebook’s vice-president of public policy Kevin Martin defended the company’s use of enterprise certificates, saying it “was a relatively well-known industry practice.” When asked, a Facebook spokesperson didn’t quantify this further. Later, TechCrunch found dozens of apps that used enterprise certificates to evade the App Store.

Facebook previously said it “specifically ignores information shared via financial or health apps.” In its letter to lawmakers, Facebook stuck to its guns, saying its data collection was focused on “analytics,” but confirmed “in some isolated circumstances the app received some limited non-targeted content.”

“We did not review all of the data to determine whether it contained health or financial data,” said a Facebook spokesperson. “We have deleted all user-level market insights data that was collected from the Facebook Research app, which would include any health or financial data that may have existed.”

But Facebook didn’t say what kind of data that content included, only that the app didn’t decrypt “the vast majority” of data sent by a device.

Facebook describing the type of data it collected — including “limited, non-targeted content.” (Image: TechCrunch)

Google’s letter, penned by public policy vice-president Karan Bhatia, did not provide a number of devices or users, saying only that its app was a “small scale” program. When reached, a Google spokesperson did not comment by our deadline.

Google also said it found “no other apps that were distributed to consumer end users,” but confirmed several other apps were used by the company’s partners and contractors; those apps no longer rely on enterprise certificates.

Google explaining which of its apps were improperly using Apple-issued enterprise certificates. (Image: TechCrunch)

Apple told TechCrunch that both Facebook and Google “are in compliance” with its rules as of the time of publication. At its annual developer conference last week, the company said it now “reserves the right to review and approve or reject any internal use application.”

Facebook’s willingness to collect this data from teenagers — despite constant scrutiny from press and regulators — demonstrates how valuable the company considers market research on its competitors. With its paid research program restarted under greater transparency, the company continues to leverage its data collection to stay ahead of its rivals.

Facebook and Google came off worse in the enterprise app abuse scandal, but critics said that Apple’s revocation of the enterprise certificates shows it retains too much control over what content customers can have on their devices.

The Justice Department and the Federal Trade Commission are said to be examining the big four tech giants — Apple, Amazon, Facebook, and Google-owner Alphabet — for potentially falling foul of U.S. antitrust laws.

Facebook’s new Study app pays adults for data after teen scandal

Facebook shut down its Research and Onavo programs after TechCrunch exposed how the company paid teenagers for root access to their phones to gain market data on competitors. Now Facebook is relaunching its paid market research program, but this time with principles — namely transparency, fair compensation, and safety. The goal? To find out what other competing apps and features Facebook should buy, copy, or ignore.

Today Facebook releases its “Study From Facebook” app for Android only. Some adults 18+ in the US and India will be recruited by ads on and off Facebook to willingly sign up to let Facebook collect extra data from them in exchange for a monthly payment. They’ll be warned that Facebook will gather what apps are on their phone, how much time they spend using those apps, the app activity names of features they use in other apps, plus their country, device, and network type.

Facebook promises it won’t snoop on user IDs, passwords, or any of participants’ content, including photos, videos, or messages. It won’t sell participants’ info to third parties, use it to target ads, or add it to their accounts or the behavior profiles the company keeps on each user. Yet while Facebook writes that “transparency” is a major part of “approaching market research in a responsible way,” it refuses to tell us how much participants will be paid.

“Study From Facebook” could give the company critical insights for shaping its product roadmap. If it learns everyone is using screensharing social network Squad, maybe it will add its own screensharing feature. If it finds group video chat app Houseparty is on the decline, it might not worry about cloning that functionality. Or if it finds Snapchat’s Discover mobile TV shows are retaining users for a ton of time, it might amp up teen marketing of Facebook Watch. But it also might rile up regulators and politicians who already see it as beating back competition through acquisitions and feature cloning.

An Attempt To Be Less Creepy

TechCrunch’s investigation from January revealed that Facebook had been quietly operating a Research program codenamed Atlas that paid users ages 13 to 35 up to $20 per month in gift cards in exchange for root access to their phone so it could gather all their data for competitive analysis. That included everything the Study app grabs, but also their web browsing activity, and even encrypted information since the app required users to install a VPN that routed all their data through Facebook. It even had the means to collect private messages and content shared — potentially including data owned by their friends.

Facebook’s Research app also abused Apple’s enterprise certificate program designed for distributing internal use-only apps to employees without the App Store or Apple’s approval. Facebook originally claimed it obeyed Apple’s rules, but Apple quickly disabled Facebook’s Research app and also shut down its enterprise certificate, temporarily breaking Facebook’s internal test builds of its public apps as well as the shuttle times and lunch menu apps employees rely on.

In the aftermath of our investigation, Facebook shut down its Research program. It then also announced in February that it would shut down its Onavo Protect app on Android, which branded itself as a privacy app providing a free VPN instead of paying users while it collected tons of data on them. After giving users until May 9th to find a replacement VPN, Onavo Protect was killed off.

This embarrassing string of events stemmed from unprincipled user research. Now Facebook is trying to correct its course and revive its paid data collection program, but with more scruples.

How Study From Facebook Works

Unlike Onavo or Facebook Research, users can’t freely sign up for Study. They have to be recruited through ads Facebook will show on its own app and others to both 18+ Facebook users and non-users in the US and India. That should keep out grifters and make sure the studies stay representative of Facebook’s user base. Eventually Facebook plans to extend the program to other countries.

If users click through the ad, they’ll be brought to the website of Facebook’s research operations partner Applause, which clearly identifies Facebook’s involvement, unlike Facebook Research, which hid that fact until users were fully registered. There, users will learn that the Study app is opt-in, what data they’ll give up in exchange for what compensation, and that they can opt out at any time. They’ll need to confirm their age and have a PayPal account, which is only supposed to be available to users 18 and over, and Facebook will cross-check the age to make sure it matches the person’s Facebook profile if they have one. They won’t have to sign an NDA like with the Facebook Research program.

Anyone can download the Study From Facebook app from Google Play, but only those who’ve been approved through Applause will be able to log in and unlock the app. It will again explain what Facebook will collect, and ask for data permissions. The app will send periodic notifications to users reminding them they’re selling their data to Facebook and offering them an opt-out. Study From Facebook will use standard Google-approved APIs and won’t use a VPN, SSL bumping, root access, enterprise certificates, or permission profiles installed on the device, unlike the Research program that ruffled feathers.

Users will all be paid the same amount to their PayPal accounts, but Facebook wouldn’t say how much it’s dealing out, or even whether it was in the ballpark of cents, dollars, or hundreds of dollars per month. That seems like a stark departure from its stated principle of transparency. This matters because Facebook earns billions in profit per quarter. It has the cash to potentially offer so much to Study participants that it effectively coerces them into giving up their data. The $10 to $20 per month it was paying Research participants seems reasonable in the US, but that’s enough money in India to make people act against their better judgement.

The launch shows Facebook’s boldness despite the threat of antitrust regulation focused on how it has suppressed competition through acquisitions and copying. Democratic presidential candidates could use Study From Facebook as a talking point, noting how the company’s huge profits, earned from its social network dominance, afford it a way to buy private user data to entrench its lead.

At 15 years old, Facebook is at risk of losing touch with what the next generation wants out of their phones. Rather than trying to guess based on their activity on its own app, it’s putting its huge wallet to work so it can pay for an edge on the competition.

AI security startup Darktrace’s CEO defeats buzzword bingo with trust and transparency

It takes a lot of trust to let a company come in and install a mystery box on your network to monitor for threats. It’s like inviting in a security guard to sit in your living room to make sure nobody breaks in.

Yet that’s exactly what Darktrace does. (The box, not the security guard.)

The Cambridge U.K.-founded company, now with a second headquarters in San Francisco, assumes that any network can be breached. Instead of looking at the perimeter of a network, Darktrace uses artificial intelligence (AI) and machine learning to scan and identify security weaknesses and malicious traffic inside a company’s network.

Traditional network monitoring typically relies on signature-based threat detection, matching traffic against known malicious files, but malware can be easily modified to evade detection. Instead, Darktrace builds up a profile of the network to understand what the baseline “normal” looks like, so it can spot and identify potential issues, like large amounts of data exfiltration or suspect devices.
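For a sense of how baseline profiling differs from signature matching, here is a toy sketch in Python (emphatically not Darktrace's actual system) that learns a device's normal outbound traffic volume and flags large deviations, such as a sudden spike that could indicate data exfiltration.

```python
# Toy illustration of baseline ("normal") traffic profiling versus signature matching.
# This is not Darktrace's algorithm; it only shows the general idea of flagging
# behavior that deviates sharply from a device's learned baseline.

import statistics


class DeviceBaseline:
    """Learns a device's typical outbound traffic and flags big deviations."""

    def __init__(self, window: int = 100):
        self.window = window
        self.samples: list[float] = []  # outbound bytes observed per interval

    def observe(self, outbound_bytes: float) -> None:
        # Keep a rolling window of recent observations as the "normal" baseline.
        self.samples.append(outbound_bytes)
        if len(self.samples) > self.window:
            self.samples.pop(0)

    def is_anomalous(self, outbound_bytes: float, threshold: float = 4.0) -> bool:
        # Not enough history yet to judge; assume normal.
        if len(self.samples) < 10:
            return False
        mean = statistics.mean(self.samples)
        stdev = statistics.pstdev(self.samples) or 1.0
        z_score = (outbound_bytes - mean) / stdev
        return z_score > threshold  # e.g. a sudden large burst of data leaving the device


if __name__ == "__main__":
    baseline = DeviceBaseline()
    for i in range(50):
        baseline.observe(5_000 + (i % 10) * 200)   # typical: roughly 5-7 KB out per interval
    print(baseline.is_anomalous(6_000))            # False: within normal variation
    print(baseline.is_anomalous(500_000_000))      # True: ~500 MB spike, flag for review
```

Real systems model far richer features (ports, peers, timing, device type), but the principle is the same: flag behavior that deviates sharply from the learned baseline rather than matching against a list of known bad signatures.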

But how do you win over those who see a sea of meaningless buzzwords? How can you differentiate between the smoke and mirrors and the real deal?

“No one wants the black box making decisions without them knowing what it’s doing,” said Nicole Eagan, Darktrace’s co-founder and chief executive, in a call with TechCrunch.

“So, let them have visibility,” she said.

Darktrace’s founders have roots in U.K. and U.S. intelligence, and they took what they knew of cybersecurity threats to the private sector, where the new battleground had opened up. In the past half-decade of its existence, the company has gained major clients on its roster — from telcos to banks, tech giants and car makers — supported by 900 staff in over 40 offices around the world.

About a quarter of its customers are in financial services, said Eagan. But it takes a lot for companies in heavily regulated industries like financial services to trust a mystery device on their networks, where data and security are tightly controlled.