All posts in “United Kingdom”

Amazon adds loads more branded Dash buttons in UK

Amazon has doubled the total selection of branded Dash buttons available to UK members of its Prime subscription service, to more than 100, just over a year after launching the push-button wi-fi gizmos, which let people reorder a specific product via its ecommerce marketplace at the press of a button.

The first Dash buttons launched in the UK in August last year. Amazon now says Dash button orders have delivered more than 160,000 cups of coffee and almost 300,000 rolls of toilet paper in the market.

In typical Amazon fashion, though, it’s not breaking out any hard sales metrics for the buttons, which cost £4.99 apiece (though users then get a £4.99 discount on their first Dash push order — so sticking these things all over your white goods comes at essentially zero additional cost, assuming you’re already locked into Amazon’s Prime membership program).

Reordering toilet roll is the most popular Dash push for UK users, according to the ecommerce giant, followed by dishwasher tablets, cat litter, cat food, beer, mouthwash and baby wipes. So this is most definitely a gadget to file under ‘utility & convenience’ (not ‘shiny & sexy’).

Among the new brands willingly sticking themselves on Dash buttons are Bold, Cillit Bang, English Tea Shop, evian, Febreze, Flash, Gaviscon, Harringtons, Head & Shoulders, Pampers, Purina Gourmet, SMA, Tampax, Vet’s Best and Waterwipes.

The full list of new (and existing) UK Dash buttons can be found here.

For fast moving consumer goods brands, which inevitably have stacks of similarly priced rival products vying to catch consumers’ eyes on shop shelves, the chance to peel away and monopolize consumers’ attention in their own homes is clearly the equivalent of catnip.

Add in the fact Dash also reduces friction for repeat orders of their product and, well, there’s really no down side as far as the brands are concerned. Dash buttons for every kind of staple seems inevitable — at least until some kind of instant reordering gets integrated into products themselves.

Until then an unknown number of Brits are apparently comfortable pebble-dashing their homes with stick-on buttons. Or at least happy to put a Dash button for reordering bog roll somewhere near the toilet (hopefully in close proximity to soap and hot water).

GCHQ Cyber Accelerator doubles down for second intake

A cyber security accelerator with links to the UK’s GCHQ intelligence agency is doubling down for a second program that’s larger and longer than the inaugural bootcamp which kicked off in January.

The second cohort, announced today, will go through a nine-month program vs the original three. There’s also more of them: nine startups vs seven. And more cash on the table for selected teams, with £25,000 apiece vs the original £5k grant.

Startups in the first cohort were not required to give up any equity to participate, with neither GCHQ nor Wayra investing at that point. We’ve asked whether that situation has changed for the second batch of teams now that the program has been expanded and will update this story with any response. Update: No change, but see below for a quick Q&A with a spokesman for the accelerator.

The expanded program will offer selected teams access to technological and security expertise from GCHQ, the National Cyber Security Centre and Telefónica, which is the partner organization running the accelerator program (under its Wayra UK bootcamp banner), as well as the usual mix of mentoring, business services and office space.

The nine startups selected for the program play in a wide range of areas, from online age verification to security skills training, blockchain cybercrime and IoT (in)security.

They are:

  • Cybershield detects phishing and spear phishing, and alerts employees before they mistakenly act on deceptive emails 
  • Elliptic detects and investigates cybercrime involving crypto-currencies, enabling it to identify illicit blockchain activity and provide intelligence to financial institutions and law enforcement agencies
  • ExactTrak supplies embedded technology that protects data and devices, giving the user visibility and control even when the devices are turned off
  • Intruder provides a proactive security monitoring platform for Internet-facing systems and businesses, detecting system weaknesses before hackers do
  • Ioetec provides a plug-and-play cloud service solution to connect Internet of Things devices with end-to-end authenticated, encrypted security
  • RazorSecure provides advanced intrusion and anomaly detection for aviation, rail and automotive sectors
  • Secure Code Warrior has built a hands-on, gamified Software-as-a-Service learning platform to help developers write secure code
  • Trust Elevate solves the problem of age verification and parental consent for young adults and children in online transactions
  • Warden helps businesses protect their users from hacks in real time by monitoring for suspicious activity 

For cyber security startups joining the program it’s proximity to the UK’s domestic spy agency and the chance to impress spooks — and potentially tap into a chunk of the £165 million ($250M) Defence and Cyber Innovation Fund announced by the government two years ago — that is surely the biggest draw here.

The government said the aim of the fund was to widen procurement for security technologies via investing in cyber security and defense startups. It has been said to be “loosely inspired” by In-Q-Tel — aka the CIA’s VC arm.

A parliamentary question to the UK secretary of state for defense last month, asking how much of the money had been allocated so far and for what purposes, suggests around £10M per year apiece is being made available for defense and for cyber security related support, including investing in startups.

“£10 million out of the £155 million is available in this financial year to the Defence Innovation Fund, to support innovative procurement across Defence. The Fund is harnessing the best ideas from inside and outside of Defence through activities such as themed competitions and the Open Call for Innovation, delivered using the Defence and Security Accelerator,” said Harriett Baldwin, responding to the parliamentary question.

“The government also allocated £10 million to establish a Cyber Innovation Fund. This supports the UK’s national security requirements by providing innovative start-ups with financial and procurement support,” she added.

The GCHQ Cyber Accelerator is part of a wider £1.9 billion investment aimed at significantly transforming the UK’s cyber security capabilities via a national strategy.


TC: It’s a big jump from three months to a nine-month program. Was three months judged to be just too short?
Spokesman: After the successful first phase of the program, we believe we can develop the start-ups even further via a longer program, ensuring the companies gain maximum advantage of this opportunity.

TC: Where is the funding coming from? Is this all UK government money?
Spokesman: The Accelerator is funded through the National Cyber Security Program, delivered through the Department of Digital, Culture, Media and Sport and the NCSC. Wayra UK and Telefónica provide additional funding support and activities to further increase the benefit for the cohort.

TC: Where are the teams from? Presumably not all from the UK?
Spokesman: All of the companies are UK-registered companies. The founders include British, Spanish, Venezuelan and Irish nationals, and we received applications from all around the world.

One of the requirements is that they be UK-registered in order to grow the UK cyber ecosystem and support the NCSC’s mission to make the UK the safest place to live and work online.

TC: Can you also confirm whether Wayra (or GCHQ) is taking any equity in the teams this time around?
Spokesman: Neither GCHQ, the NCSC or DCMS will be taking equity in any of the companies. However, our accelerator partner (Wayra) and other companies supporting the start-ups are welcome to invest if they wish and the companies agree to this, but this is not a requirement for entry to the program.

Featured Image: GCHQ/Crown Copyright

UK spies using social media data for mass surveillance

Privacy rights group Privacy International says it has obtained evidence for the first time that UK spy agencies are collecting social media information on potentially millions of people.

It has also obtained letters it says show the intelligence agencies’ oversight body had not been informed that UK intelligence agencies had shared bulk databases of personal data with foreign governments, law enforcement and industry — raising concerns about effective oversight of the mass surveillance programs.

The documents have come out as a result of an ongoing legal challenge Privacy International has brought against UK intelligence agencies’ use of bulk personal data collection as an investigatory power. (The group also has various other active legal challenges, including to state hacking).

It says now that the Investigatory Powers Commissioner’s Office (IPCO) oversight body “sought immediate inspection when secret practices came to light” as a result of its litigation.

The use by UK spooks of so-called bulk personal datasets (BPDs) — aka massive databases of personal information — was only publicly revealed in March 2015, via an Intelligence and Security Committee report, which also raised various concerns about their use.

Although the report revealed the existence of BPDs it was heavily redacted — for example scrubbing info on exactly how many BPDs are held by the different agencies. Nor was it clear where exactly agencies were sourcing the bulk data from.

It did specify that the stored and searchable data can include details such as an individual’s religion, racial or ethnic origin, political views, medical condition, sexual orientation, and legally privileged, journalistic or “otherwise confidential” information. It also specified that BPDs “vary in size from hundreds to millions of records”, and can be acquired by “overt and covert channels”.

A key concern of the committee at the time was that rules governing use of the datasets had not been defined in legislation (although the UK government has since passed a new investigatory powers framework that enshrines various state surveillance bulk powers in law).

But at the time of the report, privacy issues and other safeguards pertaining to BPDs had not been considered in public or parliament.

Access to BPD data had also been authorized internally, without ministerial approval. And there were no legal penalties for misuse — perhaps unsurprisingly, the report also revealed that all the intelligence agencies had dealt with cases of inappropriate access to BPDs.

The documents obtained by Privacy International now put a little more meat on the bones of BPDs. “New disclosure reveals that the UK intelligence agencies hold databases of our social media data,” the group writes today. “This is the first confirmed concrete example of the type of information collected by the UK intelligence agencies and held in large databases.

“The social media database potentially includes information about millions of people,” it further writes, adding: “It remains unclear exactly what aspects of our communications they hold and what other types of information the government agencies are collecting, beyond the broad unspecific categories previously identified such as ‘biographical details’, ‘commercial and financial activities’, ‘communications’, ‘travel data’, and ‘legally privileged communications’.”

In one of the new documents — a draft report from last month summarizing the findings of a 2017 audit of the operation of BPDs — the IPCO, which only took over oversight duties for UK investigatory powers last month, makes explicit reference to “social media data” when discussing how agencies handle different BPD databases, indicating that content from consumer social networks such as Facebook and Twitter is indeed ending up within spy agencies’ bulk databases. (Though no services are mentioned by name.)

Additional documents in the new bundle obtained by Privacy International show the IPCO flagging the role of private contractors that are given ‘administrator’ access to the information UK intelligence agencies collect — and raising concerns that there are currently no safeguards in place to prevent misuse of the systems by third party contractors.

Part of the UK government’s defense to the group legal challenge over intelligence agencies’ use of BPDs is that there are effective safeguards in place to prevent misuse. But Privacy International’s contention is that the new documents show otherwise — with the IPCO stating the Commissioner was never made aware of any practice of GCHQ sharing bulk data with industry.

Commenting in a statement, Privacy International solicitor Millie Graham Wood said: “The intelligence agencies’ practices in relation to bulk data were previously found to be unlawful. After three years of litigation, just before the court hearing we learn not only are safeguards for sharing our sensitive data non-existent, but the government has databases with our social media information and is potentially sharing access to this information with foreign governments.

“The risks associated with these activities are painfully obvious. We are pleased the IPCO is keen to look at these activities as a matter of urgency and the report is publicly available in the near future.”

The six additional documents were disclosed to Privacy International on October 13. The group also notes it is back in court today for the BPDs litigation.

A full list of the disclosure and documents pertaining to its bulk personal datasets challenge can be found here.

UK gives WhatsApp another spanking over e2e crypto

The UK government has once again bared its anti-technology teeth in public, leaning especially heavily on messaging platform WhatsApp for its use of end-to-end encryption security tech, and calling it out for enabling criminals to communicate in secret.

Reuters reported yesterday that UK Home Secretary Amber Rudd had called out end-to-end encryption services “like WhatsApp”, claiming they are being used by paedophiles and other criminals and pressurizing the companies to stop enabling such people from operating outside the law.

“I do not accept it is right that companies should allow them and other criminals to operate beyond the reach of law enforcement. We must require the industry to move faster and more aggressively. They have the resources and there must be greater urgency,” Rudd reportedly added.

Earlier this week she also admitted she doesn’t really understand e2e encryption.

Asked about her understanding of the technology at the Conservative Party conference, Rudd came out with this gem: “I don’t need to understand how encryption works to understand how it’s helping the criminals. I will engage with the security services to find the best way to combat that.”

She also complained about being ridiculed by the tech industry for not understanding the technologies she’s seeking to regulate, whilst apparently doubling down on the ignorance that has attracted said mockery.

This of course led to more mockery…

You can see the problem with this strategy. Unless you’re the UK government, evidently.

But what exactly is Rudd trying to get WhatsApp to do? The company has repeatedly pointed out it can’t hand over decrypted message content because e2e crypto means it doesn’t hold the keys to decrypt and access the content.

Which is exactly the point of e2e encryption, and also explains why it’s better for data security.
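
To illustrate the point, here’s a toy sketch of the underlying idea (purely illustrative: WhatsApp actually uses the Signal protocol, and the prime, cipher and key handling below are deliberately simplified and insecure). The two endpoints each derive the same shared key from exchanged public values, while a relay that sees only those public values and the ciphertext holds no key at all:

```python
# Toy end-to-end encryption sketch: Diffie-Hellman key agreement plus a
# throwaway XOR stream cipher. Illustrative only, never use in practice.
import hashlib
import secrets

P = 2**127 - 1  # a Mersenne prime; far too small for real use, fine for a demo
G = 3

def keypair():
    """Generate a private exponent and the matching public value."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_key(my_priv, their_pub):
    """Both endpoints compute G^(a*b) mod P, then hash it down to 32 bytes."""
    s = pow(their_pub, my_priv, P)
    return hashlib.sha256(s.to_bytes((s.bit_length() + 7) // 8, "big")).digest()

def xor_stream(key, data):
    """Toy symmetric cipher: XOR against a hash-derived keystream."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

a_priv, a_pub = keypair()  # one endpoint
b_priv, b_pub = keypair()  # the other endpoint

k_a = shared_key(a_priv, b_pub)
k_b = shared_key(b_priv, a_pub)
assert k_a == k_b  # the endpoints agree on a key the relay never sees

ciphertext = xor_stream(k_a, b"meet at noon")  # all the relay ever holds
# The relay has a_pub, b_pub and ciphertext: no private key, no plaintext.
print(xor_stream(k_b, ciphertext))  # b'meet at noon' at the receiving end
```

In the real protocol the keys also ratchet forward with every message, but the property that matters here is the same: there is no serverside key for WhatsApp to hand over.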

The Facebook-owned company reportedly rejected a government demand it come up with technical solutions to enable intelligence agencies to access e2e encrypted WhatsApp messages this summer (per a Sky News report).

And an e2e encryption system with a backdoor wouldn’t be an e2e encryption system, as Rudd apparently can’t understand. (She wrote some other confusing words on that topic this summer.)

Meanwhile Facebook’s Sheryl Sandberg has tried to sell governments on the notion that access to its — doubtless high resolution — metadata should be enough for their counterterrorism/crime-fighting needs.

(Note for Rudd: U.S. intelligence agencies have previously said they kill people based on metadata, so Sandberg probably has a point. But maybe you don’t fully grasp what metadata is either?)

Yesterday Reuters also quoted UK security minister Ben Wallace, whose brief covers counterterrorism and comms data legislation, bashing on services that use e2e encryption for preventing security services from tracking and catching criminals because “we can’t get into these communications”.

Wallace also reportedly had this to say: “There are other ways I can’t talk about which we think they can help us more without necessarily entering into end-to-end encryption. So we think they can do more.”

What “other ways” is the government thinking of? A backdoor into an e2e encrypted messaging platform given any other name would still be, er, a backdoor. Unless you’re just getting your hands on an unlocked device and reading the plain text messages that way. (Which is of course one possible workaround for security services to access e2e encrypted comms.)

We asked WhatsApp (and Facebook) for comment on the government’s latest attacks on its messaging platform. Neither replied.

But when politicians seem intent on ignoring how your technology works while simultaneously asking your technologists to make the tech do what they want (which also happens to be: Destroy the security promise that your service is founded on) you can’t really blame them for not wanting to engage in conversation on this topic.

Security researcher and former Facebook staffer Alec Muffett, who worked on deploying e2e crypto for its ‘Secret Conversations’ feature, did have this to say when we asked for his thoughts: “If the Snowden affair has taught us anything it’s that government will internally redefine any distasteful term such as ‘backdoor’ so that it arguably does not apply to what they wish to achieve. I strongly suspect that state officials themselves do not have technical or specific plans, so much as a set of ‘desired outcomes’ which they will pressure the communications providers to deliver. For the rest of us, any ‘feature’ which breaks the promise that is implicit in the name of ‘end-to-end encryption’ is rightly called a ‘backdoor’ and should be resisted.”

Amen to that.

Meanwhile rumors suggest Rudd is gearing up for a potential leadership fight, if/when current UK PM Theresa May is finally unseated by the Brexit mess she has managed to exacerbate.

So Rudd’s views on e2e crypto — and her apparent willingness to continue to misunderstand how technologies work — should worry us all.

At this week’s party conference she unveiled plans to tighten the law around viewing terrorist content online, with proposals to increase the maximum jail term for repeatedly viewing such content, including via a streaming service, to 15 years.

So the current political trajectory in the UK is for greater control and regulation of the Internet. At the same time as the government is pushing hard to undermine the security of online data.

Again, that should worry us all — not least because other governments are watching the UK’s example, and some appear to be taking inspiration to make their own moves against encryption.

As if Rudd wasn’t enough, another Tory leadership contender in waiting — current foreign secretary Boris Johnson — appears to have an even more butterfingered grasp of digital infrastructure than she does (at least Rudd has taken a lot of meetings with tech firms lately, albeit without necessarily learning a great deal).

Also speaking at the Conservative Party conference this week, Johnson reportedly suggested the UK could diverge from the EU’s data protection standards, post-Brexit — i.e. should he become the next UK PM.

Where on Earth has Johnson got the idea that the UK would want to do things differently in the area of “data”? What can he be thinking to go out on such a strange limb?

His comments come despite the UK’s data protection watchdog sweating hard to inform UK businesses they do indeed need to comply with the incoming GDPR — and will need to continue to comply even after the country leaves the bloc (because, you know, complying with required standards is oil in the engine of trade).

And despite UK digital minister Matt Hancock stating multiple times the government is aiming to essentially mirror EU data protection regulations — precisely to ensure there is no cliff edge as far as data flows are concerned.

If the UK does not meet EU data protection standards once it leaves the bloc, UK businesses and startups will face being instantly cut off from selling into European markets.

The UK will also likely need to negotiate its own data transfer agreement with the US, which has its own data agreement with the EU. So UK businesses could be cut off from the US market too if a quick agreement can’t be put in place (whereas mirroring EU data protection regs would probably make some kind of UK-US Privacy Shield copy-paste job quicker and easier to pull off).

Apparently none of the complexities of international data regulation have arrived beneath Johnson’s blonde mop. Expect that grand landing at some point in the very far-flung future.

Instead we find only a vague grasp on “data” — tightly coupled with a telling political stiffness for “doing things differently”.

And when button-pushing politicians have such a childish grasp on technology at the same time as powerful technologists are demonstrably failing to factor politics into their platforms we should all be rightly and highly concerned about the resulting societal outcomes.

Tech giants told to remove extremist content much faster

Tech giants are once again being urged to do more to tackle the spread of online extremism on their platforms. Leaders of the UK, France and Italy are taking time out at a UN summit today to meet with Google, Facebook and Microsoft.

This follows an agreement in May for G7 nations to take joint action on online extremism.

The possibility of fining social media firms which fail to meet collective targets for illegal content takedowns has also been floated by the heads of state. Earlier this year the German government proposed a regime of fines for social media firms that fail to meet local takedown targets for illegal content.

The Guardian reports today that the UK government would like to see the time it takes for online extremist content to be removed greatly speeded up — from an average of 36 hours down to just two.

That’s a considerably narrower timeframe than the 24-hour window for performing such takedowns agreed within a voluntary European Commission code of conduct, which the four major social media platforms signed up to in 2016.

Now the group of European leaders, led by the UK Prime Minister Theresa May, apparently want to go even further by radically squeezing the window of time before content must be taken down — and they apparently want to see evidence of progress from the tech giants in a month’s time, when their interior ministers meet at the G7.

According to UK Home Office analysis, ISIS shared 27,000 links to extremist content in the first five months of 2017 and, once shared, the material remained available online for an average of 36 hours. That, says May, is not good enough.

Ultimately the government wants companies to develop technology to spot extremist material early and prevent it being shared in the first place — something UK Home Secretary Amber Rudd called for earlier this year.

In June, the tech industry banded together to offer a joint front on this issue, under the banner of the Global Internet Forum to Counter Terrorism (GIFCT) — which they said would collaborate on engineering solutions, sharing content classification techniques and effective reporting methods for users.

The initiative also includes sharing counterspeech practices — another string for the companies to pluck publicly as they respond to pressure to do more to eject terrorist propaganda from their platforms.

In response to the latest calls from European leaders to enhance online extremism identification and takedown systems, a GIFCT spokesperson provided the following responsibility-distributing statement:

Combatting terrorism requires responses from government, civil society and the private sector, often working collaboratively. The Global Internet Forum to Counter Terrorism was founded to help do just this and we’ve made strides in the past year through initiatives like the Shared Industry Hash Database.  We’ll continue our efforts in the years to come, focusing on new technologies, in-depth research, and best practices. Together, we are committed to doing everything in our power to ensure that our platforms are not used to distribute terrorist content.
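
The Shared Industry Hash Database mentioned in that statement can be sketched roughly as follows (assumed mechanics, since GIFCT hasn’t published its design; real deployments match perceptual fingerprints that survive re-encoding, whereas this simplified sketch uses an exact cryptographic hash). One platform flags a piece of content and contributes its hash; other platforms can then match uploads against the shared set without the content itself ever being exchanged:

```python
# Minimal sketch of hash-database content matching (assumed mechanics).
import hashlib

shared_hash_db = set()  # hashes of content flagged by any participating platform

def share_hash(content: bytes) -> None:
    """One platform flags content; only its hash enters the shared database."""
    shared_hash_db.add(hashlib.sha256(content).hexdigest())

def flag_known(content: bytes) -> bool:
    """At upload time, any platform checks new content against the shared set."""
    return hashlib.sha256(content).hexdigest() in shared_hash_db

share_hash(b"extremist-video-bytes")          # flagged once, on platform A
print(flag_known(b"extremist-video-bytes"))   # True: blocked at upload on platform B
print(flag_known(b"benign-video-bytes"))      # False: unknown content passes through
```

Note that a cryptographic hash only catches byte-identical copies; that is why systems such as PhotoDNA compute perceptual hashes instead, so the same video re-encoded at a different bitrate still matches.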

Monika Bickert, Facebook’s director of global policy management, is also speaking at today’s meeting with European leaders — and she’s slated to talk up the company’s investments in AI technology, while also emphasizing that the problem cannot be fixed by tech alone.

“Already, AI has begun to help us identify terrorist imagery at the time of upload so we can stop the upload, understand text-based signals for terrorist support, remove terrorist clusters and related content, and detect new accounts created by repeat offenders,” Bickert was expected to say today.

“AI has tremendous potential in all these areas — but there still remain those instances where human oversight is necessary. AI can spot a terrorist’s insignia or flag, but has a hard time interpreting a poster’s intent. That’s why we have thousands of reviewers, who are native speakers in dozens of languages, reviewing content — including content that might be related to terrorism — to make sure we get it right.”

In May, following various media reports about moderation failures on a range of issues (not just online extremism), Facebook announced it would be expanding the number of human reviewers it employs — adding 3,000 to the existing 4,500 people it has working in this capacity. Although it’s not clear over what time period those additional hires were to be brought in.

But the vast size of Facebook’s platform — which passed two billion users in June — means even a team of 7,500 people, aided by the best AI tools that money can build, surely faces a forlorn task in trying to keep on top of the sheer volume of user generated content being distributed daily on its platform.

And even if Facebook is prioritizing takedowns of extremist content (vs moderating other types of potentially problematic content), it’s still facing a staggeringly massive haystack of content to sift through, with only a tiny team of overworked (yet, says Bickert, essential) human reviewers attached to this task, at a time when political thumbscrews are being turned on tech giants to get much better at nixing online extremism — and fast.

If Facebook isn’t able to deliver the hoped for speed improvements in a month’s time it could raise awkward political questions about why it’s not able to improve its standards, and perhaps invite greater political scrutiny of the small size of its human moderation team vs the vast size of the task they have to do.

Yesterday, ahead of meeting the European leaders, Twitter released its latest Transparency Report covering government requests for content takedowns, in which it claimed some big wins in using its own in-house technology to automatically identify pro-terrorism accounts — including specifying that it had also been able to suspend the majority of these accounts (~75%) before they were able to tweet.

The company, which has only around 328M monthly active users (and inevitably a far smaller volume of content to review vs Facebook), revealed it had closed nearly 300,000 pro-terror accounts in the past six months, and said government reports of terrorism accounts had dropped 80 per cent since its prior report.

Twitter argues that terrorists have shifted much of their propaganda efforts elsewhere — pointing to messaging platform Telegram as the new tool of choice for ISIS extremists. This is a view backed up by Charlie Winter, senior research fellow at the International Center for the Study of Radicalization and Political Violence (ICSR).

Winter tells TechCrunch: “Now, there’s no two ways about it — Telegram is first and foremost the centre of gravity online for the Islamic State, and other Salafi jihadist groups. Places like Twitter, YouTube and Facebook are all way more inhospitable than they’ve ever been to online extremism.

“Yes there are still pockets of extremists using these platforms but they are, in the grand scheme of things, and certainly compared to 2014/2015 vanishingly small.”

Discussing how Telegram is responding to extremism propaganda, he says: “I don’t think they’re doing nothing. But I think they could do more… There’s a whole set of channels which are very easily identifiable as the keynotes of Islamic State propaganda determination, that are really quite resilient on Telegram. And I think that it wouldn’t be hard to identify them — and it wouldn’t be hard to remove them.

“But were Telegram to do that the Islamic State would simply find another platform to use instead. So it’s only ever going to be a temporary measure. It’s only ever going to be reactive. And I think maybe we need to think a little bit more outside the box than just taking the channels down.”

“I don’t think it’s a complete waste of time [for the government to still be pressurizing tech giants over extremism],” Winter adds. “I think that it’s really important to have these big ISPs playing a really proactive role. But I do feel like policy or at least rhetoric is stuck in 2014/2015 when platforms like Twitter were playing a much more important role for groups like the Islamic State.”

Indeed, Twitter’s latest Transparency Report shows that the vast majority of recent government reports pertaining to its content involve complaints about “abusive behavior”.  Which suggests that, as Twitter shrinks its terrorism problem, another long-standing issue — dealing with abuse on its platform — is rapidly zooming into view as the next political hot potato for it to grapple with.

Meanwhile, Telegram is an altogether smaller player than the social giants most frequently called out by politicians over online extremism — though not a tiddler by any means, announcing it had passed 100M monthly users in February 2016.

But not having a large and fixed corporate presence in any country makes the nomadic team behind the platform — led by Russian exile Pavel Durov, its co-founder — an altogether harder target for politicians to wring concessions from. Telegram is simply not going to turn up to a meeting with political leaders.

That said, the company has shown itself responsive to public criticism about extremist use of its platform. In the wake of the 2015 Paris terror attacks it announced it had closed a swathe of public channels that had been used to broadcast ISIS-related content.

It has apparently continued to purge thousands of ISIS channels since then — claiming it nixed more than 8,800 this August alone, for example. Nonetheless, this level of effort does not appear to have been enough to persuade ISIS of the need to switch to another platform with lower ‘suspension friction’ to continue spreading its propaganda. So it looks like Telegram needs to step up its efforts if it wants to ditch the dubious honor of being known as the go-to platform for ISIS et al.

“Telegram is important to the Islamic State for a great many different reasons — and other Salafi jihadist groups too, like Al-Qaeda or Harakat Ahrar ash-Sham al-Islamiyya in Syria,” says Winter. “It uses it first and foremost… for disseminating propaganda — so whether that’s videos, photo reports, newspapers, magazines and all that. It also uses it on a more communal basis, for encouraging interaction between supporters.

“And there’s a whole other layer of it that I don’t think anyone sees really, which I’m talking about in a hypothetical sense because I think it would be very difficult to penetrate, where the groups will be using it for more operational things. But again, without being in an intelligence service, I don’t think it’s possible to penetrate that part of Telegram.

“And there’s also evidence to suggest that the Islamic State actually migrates onto even more heavily encrypted platforms for the really secure stuff.”

Responding to the expert view that Telegram has become the “platform of choice for the Islamic State”, Durov tells TechCrunch: “We are taking down thousands of terrorism-related channels monthly and are constantly raising the efficiency of this process. We are also open to ideas on how to improve it further, if… the ICSR has specific suggestions.”

As Winter hints, there’s also terrorist chatter that concerns governments taking place out of public view — on encrypted communication channels. And this is another area where the UK government especially has, in recent years, ramped up political pressure on tech giants (for now European lawmakers appear generally more hesitant to push for a decrypt law; and while the U.S. has seen attempts to legislate, nothing has yet come to pass on that front).

End-to-end encryption still under pressure

A Sky News report yesterday, citing UK government sources, claimed that Facebook-owned WhatsApp had been asked by British officials this summer to come up with technical solutions to allow them to access the content of messages on its end-to-end encrypted platform to further government agencies’ counterterrorism investigations — so, effectively, to ask the firm to build a backdoor into its crypto.

This is something the UK Home Secretary, Amber Rudd, has explicitly said is the government’s intention. Speaking in June, she said the government wanted big Internet firms to work with it to limit their use of e2e encryption. And one of those big Internet firms was presumably WhatsApp.

WhatsApp apparently rejected the backdoor demand put to it by the government this summer, according to Sky’s report.

We reached out to the messaging giant to confirm or deny Sky’s report but a WhatsApp spokesman did not provide a direct response or any statement. Instead he pointed us to existing information on the company’s website — including an FAQ in which it states: “WhatsApp has no ability to see the content of messages or listen to calls on WhatsApp. That’s because the encryption and decryption of messages sent on WhatsApp occurs entirely on your device.”

He also flagged up a note on its website for law enforcement which details the information it can provide and the circumstances in which it would do so: “A valid subpoena issued in connection with an official criminal investigation is required to compel the disclosure of basic subscriber records (defined in 18 U.S.C. Section 2703(c)(2)), which may include (if available): name, service start date, last seen date, IP address, and email address.”

Facebook CSO Alex Stamos also previously told us the company would refuse to comply if the UK government handed it a so-called Technical Capability Notice (TCN) asking for decrypted data — on the grounds that its use of e2e encryption means it does not hold encryption keys and thus cannot provide decrypted data — though the wider question is really how the UK government might then respond to such a corporate refusal to comply with UK law.

Properly implemented e2e encryption ensures that the operators of a messaging platform cannot access the contents of the missives moving around the system. Although e2e encryption can still leak metadata — so it’s possible for intelligence on who is talking to whom and when (for example) to be passed by companies like WhatsApp to government agencies.

Facebook has confirmed it provides WhatsApp metadata to government agencies when served a valid warrant (as well as sharing metadata between WhatsApp and its other business units for its own commercial and ad-targeting purposes).

Talking up the counter-terror potential of sharing metadata appears to be the company’s current strategy for trying to steer the UK government away from demands it backdoor WhatsApp’s encryption — with Facebook’s Sheryl Sandberg arguing in July that metadata can help inform governments about terrorist activity.

In the UK successive governments have been ramping up political pressure on the use of e2e encryption for years — with politicians proudly declaring themselves uncomfortable with rising use of the tech. While domestic surveillance legislation passed at the end of last year has been widely interpreted as giving security agencies powers to place requirements on companies not to use e2e encryption and/or to require comms services providers to build in backdoors so they can provide access to decrypted data when handed a state warrant. So, on the surface, there’s a legal threat to the continued viability of e2e encryption in the UK.

However, the question of how the government could seek to enforce decryption on powerful tech giants — which are mostly headquartered overseas, have millions of engaged local users and sell e2e encryption as a core part of their proposition — remains open. Even with the legal power to demand it, the government would still be asking for legible data from owners of systems designed not to enable third parties to read that data.

One crypto expert we contacted for comment on the conundrum, who cannot be identified because they were not authorized to speak to the press by their employer, neatly sums up the problem for politicians squaring up to tech giants using e2e encryption: “They could close you down but do they want to? If you aren’t keeping records, you can’t turn them over.”

It’s really not clear how long the political compass will keep swinging around and pointing at tech firms to accuse them of building systems that are impeding governments’ counterterrorism efforts — whether that’s related to the spread of extremist propaganda online, or to a narrower consideration like providing warranted access to encrypted messages.

As noted above, the UK government legislated last year to enshrine expansive and intrusive investigatory powers in a new framework, called the Investigatory Powers Act — which includes the ability to collect digital information in bulk and for spy agencies to maintain vast databases of personal information on citizens who are not (yet) suspected of any wrongdoing in order that they can sift these records when they choose. (Powers that are incidentally being challenged under European human rights law.)

And with such powers on its statute books you’d hope there would be more pressure on UK politicians to take responsibility for the state’s own intelligence failures — rather than seeking to scapegoat technologies such as encryption. But the crypto wars are apparently, sad to say, a never-ending story.

On extremist propaganda, the co-ordinated political push by European leaders to get tech platforms to take more responsibility for user generated content which they’re freely distributing, liberally monetizing and algorithmically amplifying does at least have more substance to it. Even if, ultimately, it’s likely to be just as futile a strategy for fixing the underlying problem.

Because even if you could wave a magic wand and make all online extremist propaganda vanish you wouldn’t have fixed the core problem of why terrorist ideologies exist. Nor removed the pull that those extremist ideas can pose for certain individuals. It’s just attacking the symptom of a problem, rather than interrogating the root causes.

The ICSR’s Winter is generally downbeat on how the current political strategy for tackling online extremism is focusing so much attention on restricting access to content.

“[UK PM] Theresa May is always talking about removing the safe spaces and shutting down the part of the Internet where terrorists exchange instructions and propaganda and that sort of stuff, and I just feel that’s a Sisyphean task,” he tells TechCrunch. “Maybe you do get it to work on any one platform, but they’re just going to go onto a different one and you’ll have exactly the same sort of problem all over again.

“I think they are publicly making too much of a thing out of restricting access to content. And I think the role that is being described to the public that propaganda takes is very, very different to the one that it actually has. It’s much more nuanced, and much more complex than simply something which is used to ‘radicalize and recruit people’. It’s much, much more than that.

“And we’re clearly not going to get to that kind of debate in a mainstream media discourse because no one has the time to hear about all the nuances and complexities of propaganda but I do think that the government puts too much emphasis on the online space — in a manner that is often devoid of nuance and I don’t think that is necessarily the most constructive way to go about this.”