Tech giants told to remove extremist content much faster


Tech giants are once again being urged to do more to tackle the spread of online extremism on their platforms. Leaders of the UK, France and Italy are taking time out at a UN summit today to meet with Google, Facebook and Microsoft.

This follows an agreement in May for G7 nations to take joint action on online extremism.

The possibility of fining social media firms which fail to meet collective targets for illegal content takedowns has also been floated by the heads of state. Earlier this year the German government proposed a regime of fines for social media firms that fail to meet local takedown targets for illegal content.

The Guardian reports today that the UK government would like to see the time it takes to remove online extremist content greatly reduced — from an average of 36 hours down to just two.

That’s a considerably narrower timeframe than the 24-hour window for performing such takedowns agreed within a voluntary European Commission code of conduct which the four major social media platforms signed up to in 2016.

Now the group of European leaders, led by UK Prime Minister Theresa May, apparently wants to go even further by radically squeezing the window of time before content must be taken down — and they expect to see evidence of progress from the tech giants in a month’s time, when their interior ministers meet at the G7.

According to UK Home Office analysis, ISIS shared 27,000 links to extremist content in the first five months of 2017 and, once shared, the material remained available online for an average of 36 hours. That, says May, is not good enough.

Ultimately the government wants companies to develop technology to spot extremist material early and prevent it being shared in the first place — something UK Home Secretary Amber Rudd called for earlier this year.

In June, the tech industry banded together to offer a joint front on this issue, under the banner of the Global Internet Forum to Counter Terrorism (GIFCT) — saying the group would collaborate on engineering solutions, share content classification techniques and develop effective reporting methods for users.

The initiative also includes sharing counterspeech practices — another string for the companies to publicly pluck as they respond to pressure to do more to eject terrorist propaganda from their platforms.

In response to the latest calls from European leaders to enhance online extremism identification and takedown systems, a GIFCT spokesperson provided the following responsibility-distributing statement:

Combatting terrorism requires responses from government, civil society and the private sector, often working collaboratively. The Global Internet Forum to Counter Terrorism was founded to help do just this and we’ve made strides in the past year through initiatives like the Shared Industry Hash Database.  We’ll continue our efforts in the years to come, focusing on new technologies, in-depth research, and best practices. Together, we are committed to doing everything in our power to ensure that our platforms are not used to distribute terrorist content.

Monika Bickert, Facebook’s director of global policy management, is also speaking at today’s meeting with European leaders — and she’s slated to talk up the company’s investments in AI technology, while also emphasizing that the problem cannot be fixed by tech alone.

“Already, AI has begun to help us identify terrorist imagery at the time of upload so we can stop the upload, understand text-based signals for terrorist support, remove terrorist clusters and related content, and detect new accounts created by repeat offenders,” Bickert was expected to say today.

“AI has tremendous potential in all these areas — but there still remain those instances where human oversight is necessary. AI can spot a terrorist’s insignia or flag, but has a hard time interpreting a poster’s intent. That’s why we have thousands of reviewers, who are native speakers in dozens of languages, reviewing content — including content that might be related to terrorism — to make sure we get it right.”

In May, following various media reports about moderation failures on a range of issues (not just online extremism), Facebook announced it would be expanding the number of human reviewers it employs — adding 3,000 to the existing 4,500 people it has working in this capacity — though it’s not clear over what time period those additional hires were to be brought in.

But the vast size of Facebook’s platform — which passed two billion users in June — means even a team of 7,500 people, aided by the best AI tools that money can build, surely faces a forlorn task in keeping on top of the sheer volume of user generated content being distributed daily on its platform.

And even if Facebook is prioritizing takedowns of extremist content (vs moderating other types of potentially problematic content), it’s still facing a staggeringly massive haystack of content to sift through, with only a tiny team of overworked (yet, says Bickert, essential) human reviewers attached to this task, at a time when political thumbscrews are being turned on tech giants to get much better at nixing online extremism — and fast.

If Facebook isn’t able to deliver the hoped-for speed improvements in a month’s time it could raise awkward political questions about why it’s not able to improve its standards, and perhaps invite greater political scrutiny of the small size of its human moderation team vs the vast size of the task it has to do.

Yesterday, ahead of meeting the European leaders, Twitter released its latest Transparency Report covering government requests for content takedowns, in which it claimed some big wins in using its own in-house technology to automatically identify pro-terrorism accounts — including being able to suspend the majority of these accounts (~75%) before they were able to tweet.

The company, which has only around 328M monthly active users (and inevitably a far smaller volume of content to review vs Facebook), revealed it had closed nearly 300,000 pro-terror accounts in the past six months, and said government reports of terrorism accounts had dropped 80 per cent since its prior report.

Twitter argues that terrorists have shifted much of their propaganda efforts elsewhere — pointing to messaging platform Telegram as the new tool of choice for ISIS extremists. This is a view backed up by Charlie Winter, senior research fellow at the International Center for the Study of Radicalization and Political Violence (ICSR).

Winter tells TechCrunch: “Now, there’s no two ways about it — Telegram is first and foremost the centre of gravity online for the Islamic State, and other Salafi jihadist groups. Places like Twitter, YouTube and Facebook are all way more inhospitable than they’ve ever been to online extremism.

“Yes there are still pockets of extremists using these platforms but they are, in the grand scheme of things, and certainly compared to 2014/2015, vanishingly small.”

Discussing how Telegram is responding to extremist propaganda, he says: “I don’t think they’re doing nothing. But I think they could do more… There’s a whole set of channels which are very easily identifiable as the keynotes of Islamic State propaganda dissemination, that are really quite resilient on Telegram. And I think that it wouldn’t be hard to identify them — and it wouldn’t be hard to remove them.

“But were Telegram to do that the Islamic State would simply find another platform to use instead. So it’s only ever going to be a temporary measure. It’s only ever going to be reactive. And I think maybe we need to think a little bit more outside the box than just taking the channels down.”

“I don’t think it’s a complete waste of time [for the government to still be pressurizing tech giants over extremism],” Winter adds. “I think that it’s really important to have these big ISPs playing a really proactive role. But I do feel like policy or at least rhetoric is stuck in 2014/2015 when platforms like Twitter were playing a much more important role for groups like the Islamic State.”

Indeed, Twitter’s latest Transparency Report shows that the vast majority of recent government reports pertaining to its content involve complaints about “abusive behavior”.  Which suggests that, as Twitter shrinks its terrorism problem, another long-standing issue — dealing with abuse on its platform — is rapidly zooming into view as the next political hot potato for it to grapple with.

Meanwhile, Telegram is an altogether smaller player than the social giants most frequently called out by politicians over online extremism — though not a tiddler by any means, announcing it had passed 100M monthly users in February 2016.

But not having a large and fixed corporate presence in any country makes the nomadic team behind the platform — led by Russian exile Pavel Durov, its co-founder — an altogether harder target for politicians to wring concessions from. Telegram is simply not going to turn up to a meeting with political leaders.

That said, the company has shown itself responsive to public criticism about extremist use of its platform. In the wake of the 2015 Paris terror attacks it announced it had closed a swathe of public channels that had been used to broadcast ISIS-related content.

It has apparently continued to purge thousands of ISIS channels since then — claiming it nixed more than 8,800 this August alone, for example. Nonetheless, this level of effort does not appear to have been enough to persuade ISIS of the need to switch to another platform with lower ‘suspension friction’ to continue spreading its propaganda. So it looks like Telegram needs to step up its efforts if it wants to ditch the dubious honor of being known as the go-to platform for ISIS and other extremists.

“Telegram is important to the Islamic State for a great many different reasons — and other Salafi jihadist groups too, like Al-Qaeda or Harakat Ahrar ash-Sham al-Islamiyya in Syria,” says Winter. “It uses it first and foremost… for disseminating propaganda — so whether that’s videos, photo reports, newspapers, magazines and all that. It also uses it on a more communal basis, for encouraging interaction between supporters.

“And there’s a whole other layer of it that I don’t think anyone sees really which I’m talking about in a hypothetical sense because I think it would be very difficult to penetrate where the groups will be using it for more operational things. But again, without being in an intelligence service, I don’t think it’s possible to penetrate that part of Telegram.

“And there’s also evidence to suggest that the Islamic State actually migrates onto even more heavily encrypted platforms for the really secure stuff.”

Responding to the expert view that Telegram has become the “platform of choice for the Islamic State”, Durov tells TechCrunch: “We are taking down thousands of terrorism-related channels monthly and are constantly raising the efficiency of this process. We are also open to ideas on how to improve it further, if… the ICSR has specific suggestions.”

As Winter hints, there’s also terrorist chatter that concerns governments taking place out of the public view — on encrypted communication channels. And this is another area where the UK government especially has, in recent years, ramped up political pressure on tech giants (for now European lawmakers appear generally more hesitant to push for a decryption law, while the U.S. has seen attempts to legislate but nothing has yet come to pass on that front).

End-to-end encryption still under pressure

A Sky News report yesterday, citing UK government sources, claimed that Facebook-owned WhatsApp had been asked by British officials this summer to come up with technical solutions to allow them to access the content of messages on its end-to-end encrypted platform to further government agencies’ counterterrorism investigations — so, effectively, to ask the firm to build a backdoor into its crypto.

This is something the UK Home Secretary, Amber Rudd, has explicitly said is the government’s intention. Speaking in June, she said the government wanted big Internet firms to work with it to limit their use of e2e encryption. And one of those big Internet firms was presumably WhatsApp.

WhatsApp apparently rejected the backdoor demand put to it by the government this summer, according to Sky’s report.

We reached out to the messaging giant to confirm or deny Sky’s report but a WhatsApp spokesman did not provide a direct response or any statement. Instead he pointed us to existing information on the company’s website — including an FAQ in which it states: “WhatsApp has no ability to see the content of messages or listen to calls on WhatsApp. That’s because the encryption and decryption of messages sent on WhatsApp occurs entirely on your device.”

He also flagged up a note on its website for law enforcement which details the information it can provide and the circumstances in which it would do so: “A valid subpoena issued in connection with an official criminal investigation is required to compel the disclosure of basic subscriber records (defined in 18 U.S.C. Section 2703(c)(2)), which may include (if available): name, service start date, last seen date, IP address, and email address.”

Facebook CSO Alex Stamos also previously told us the company would refuse to comply if the UK government handed it a so-called Technical Capability Notice (TCN) asking for decrypted data — on the grounds that its use of e2e encryption means it does not hold encryption keys and thus cannot provide decrypted data — though the wider question is really how the UK government might then respond to such a corporate refusal to comply with UK law.

Properly implemented e2e encryption ensures that the operators of a messaging platform cannot access the contents of the missives moving around the system. Although e2e encryption can still leak metadata — so it’s possible for intelligence on who is talking to whom and when (for example) to be passed by companies like WhatsApp to government agencies.
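
To illustrate that principle, here’s a minimal sketch in Python using the PyNaCl library — emphatically not WhatsApp’s actual implementation (which is built on the Signal protocol), just a toy demonstration of why a relay server that only handles ciphertext cannot produce plaintext, while the metadata around each message remains visible to it:

```python
# Toy end-to-end encryption sketch (assumes the PyNaCl library is installed).
# Illustrative only; this is NOT WhatsApp's actual protocol.
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device; private keys never leave it.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# All the relay server ever sees and can hand over: metadata plus opaque bytes.
server_record = {
    "sender": "alice",
    "recipient": "bob",
    "timestamp": "2017-09-20T10:32:00Z",
    "payload": bytes(ciphertext),  # unreadable without Bob's private key
}

# Only Bob, holding his private key, can recover the plaintext.
plaintext = Box(bob_key, alice_key.public_key).decrypt(server_record["payload"])
assert plaintext == b"meet at noon"
```

The political impasse is visible in the sketch: the operator can log who messaged whom and when, but without a backdoor it holds nothing useful to decrypt.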

Facebook has confirmed it provides WhatsApp metadata to government agencies when served a valid warrant (as well as sharing metadata between WhatsApp and its other business units for its own commercial and ad-targeting purposes).

Talking up the counter-terror potential of sharing metadata appears to be the company’s current strategy for trying to steer the UK government away from demands it backdoor WhatsApp’s encryption — with Facebook’s Sheryl Sandberg arguing in July that metadata can help inform governments about terrorist activity.

In the UK, successive governments have been ramping up political pressure on the use of e2e encryption for years — with politicians proudly declaring themselves uncomfortable with rising use of the tech. Domestic surveillance legislation passed at the end of last year has been widely interpreted as giving security agencies powers to place requirements on companies not to use e2e encryption and/or to require comms service providers to build in backdoors so they can provide access to decrypted data when handed a state warrant. So, on the surface, there’s a legal threat to the continued viability of e2e encryption in the UK.

However it is unclear how the government could seek to enforce decryption on powerful tech giants which are mostly headquartered overseas, have millions of engaged local users and sell e2e encryption as a core part of their proposition. Even with the legal power to demand it, they’d still be asking for legible data from owners of systems designed not to enable third parties to read that data.

One crypto expert we contacted for comment on the conundrum, who cannot be identified because they were not authorized to speak to the press by their employer, neatly sums up the problem for politicians squaring up to tech giants using e2e encryption: “They could close you down but do they want to? If you aren’t keeping records, you can’t turn them over.”

It’s really not clear how long the political compass will keep swinging around and pointing at tech firms to accuse them of building systems that are impeding governments’ counterterrorism efforts — whether that’s related to the spread of extremist propaganda online, or to a narrower consideration like providing warranted access to encrypted messages.

As noted above, the UK government legislated last year to enshrine expansive and intrusive investigatory powers in a new framework, called the Investigatory Powers Act — which includes the ability to collect digital information in bulk and for spy agencies to maintain vast databases of personal information on citizens who are not (yet) suspected of any wrongdoing in order that they can sift these records when they choose. (Powers that are incidentally being challenged under European human rights law.)

And with such powers on its statute books you’d hope there would be more pressure on UK politicians to take responsibility for the state’s own intelligence failures — rather than seeking to scapegoat technologies such as encryption. But the crypto wars are, sad to say, apparently a never-ending story.

On extremist propaganda, the co-ordinated political push by European leaders to get tech platforms to take more responsibility for user generated content which they’re freely distributing, liberally monetizing and algorithmically amplifying does at least have more substance to it. Even if, ultimately, it’s likely to be just as futile a strategy for fixing the underlying problem.

Because even if you could wave a magic wand and make all online extremist propaganda vanish you wouldn’t have fixed the core problem of why terrorist ideologies exist. Nor removed the pull that those extremist ideas can pose for certain individuals. It’s just attacking the symptom of a problem, rather than interrogating the root causes.

The ICSR’s Winter is generally downbeat on how the current political strategy for tackling online extremism is focusing so much attention on restricting access to content.

“[UK PM] Theresa May is always talking about removing the safe spaces and shutting down part of the Internet where terrorists exchange instructions and propaganda and that sort of stuff, and I just feel that’s a Sisyphean task,” he tells TechCrunch. “Maybe you do get it to work on any one platform, they’re just going to go onto a different one and you’ll have exactly the same sort of problem all over again.

“I think they are publicly making too much of a thing out of restricting access to content. And I think the role that is being described to the public that propaganda takes is very, very different to the one that it actually has. It’s much more nuanced, and much more complex than simply something which is used to “radicalize and recruit people”. It’s much much more than that.

“And we’re clearly not going to get to that kind of debate in a mainstream media discourse because no one has the time to hear about all the nuances and complexities of propaganda but I do think that the government puts too much emphasis on the online space — in a manner that is often devoid of nuance and I don’t think that is necessarily the most constructive way to go about this.”

Twitter claims tech wins in quashing terror tweets


In its latest Transparency Report, which covers requests it’s received from governments pertaining to content on its platform, Twitter has reported a big decline in the number of pro-terrorism accounts being reported to it over the past six months — down 80 per cent since its last report — as well as a drop in the number of accounts it removed for terrorism-related content during this period.

Twitter claims pro-terrorism account suspensions have shrunk by a fifth in the past six months.

It also reports that the vast majority (95 per cent) of account suspensions pertaining to the promotion of terrorism resulted from use of its in-house tech tools, up from 74 per cent in the prior six-month reporting period — with government requests accounting for less than one per cent of pro-terror account suspensions.

Along with other social media platform giants, Twitter is facing increased political pressure to promptly eject terrorist content and hate speech from its platform — especially in Europe, where some countries have proposed new laws that would attach financial penalties to takedown failures, as a stick to encourage faster removals of illegal content.

~300,000 accounts nixed for terrorism in six months

Between January and June 2017, the six-month period covered by this, Twitter’s 11th Transparency Report, the tech firm said it removed a total of 299,649 pro-terrorism accounts — surfaced by both reports from governments and its own in-house tech (though the lion’s share of identifications were generated by its tech tools).

It says this represents a 20 per cent drop in terrorism-promoting Twitter accounts since the last reporting period, of July 1, 2016 through December 31, 2016.

Which — coupled with the 80 per cent drop in government agencies reporting pro-terror Twitter accounts — suggests the company is at least managing to squeeze terrorist activity on its platform, given it seems unlikely there’s been such a large reduction in globally active terrorists online over the same period. (Even as there are still hundreds of thousands of pro-terrorism Twitter accounts being created every half a year.)

The company further emphasizes it killed a majority of the pro-terrorism accounts set up on its platform before they could post anything: “Notably, 75% of these accounts were suspended before posting their first Tweet,” it writes.

Which seems a big win. And a figure to watch, to see whether Twitter is able to further increase the proportion of terrorism accounts suspended before they tweet in its next Transparency Report.

A spokeswoman for Twitter confirmed to us that this is the first time it’s published data on “that particular metric” when we asked whether there has been a rise in Twitter being able to cut off terrorist accounts before they’ve sent a single tweet.

“In the last six months we have seen our internal, spam-fighting tools play an increasingly valuable role in helping us get terrorist content off of Twitter,” she added. “Our anti-spam tools are getting faster, more efficient, and smarter in how we take down accounts that violate our TOS.”

The figure for total suspensions of pro-terrorism Twitter accounts is now approaching 1M over two years. (To be exact, the company reports 935,897 pro-terrorism account suspensions between August 1, 2015 and June 30, 2017.)

Asked for more details about the changes it’s made to its anti-terrorism tools — to apparently deliver better results — the spokeswoman told us: “We are reluctant to share details of how these tools work as we do not want to provide information that could be used to try to avoid detection.”

“We can say that these tools enable us to take signals from accounts found to be in violation of our TOS and to work to continuously strengthen and refine the combinations of signals that can accurately surface accounts that may be similar,” she added.
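
Twitter won’t detail how those tools work, but the general shape of such a system — scoring a new account’s signals against combinations of signals seen on previously suspended accounts — can be sketched in a few lines. Everything below (the signal names, the overlap metric, the threshold) is invented purely for illustration and is not a description of Twitter’s actual tooling:

```python
# Hypothetical signal-similarity sketch; signal names and threshold are invented.
SUSPENDED_ACCOUNT_SIGNALS = [
    {"signup_ip_block_x", "shared_device_id", "propaganda_hashtags"},
    {"signup_ip_block_x", "invite_link_in_bio"},
]

def overlap(candidate: set, known: set) -> float:
    """Jaccard similarity between two sets of behavioral signals."""
    return len(candidate & known) / len(candidate | known)

def looks_like_repeat_offender(signals: set, threshold: float = 0.5) -> bool:
    """Surface a new account if it closely resembles one already suspended."""
    return any(overlap(signals, known) >= threshold for known in SUSPENDED_ACCOUNT_SIGNALS)

# An account sharing most of its signals with a suspended one can be flagged
# for action before it posts its first tweet.
new_account = {"signup_ip_block_x", "shared_device_id", "propaganda_hashtags", "no_avatar"}
assert looks_like_repeat_offender(new_account)
```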

Another Twitter spokesperson also pointed to a few pieces of academic research which suggest the Islamic State terror group has shifted its social media strategy from relying on Twitter’s platform to distribute violent propaganda to utilizing the messaging platform Telegram (which lets users broadcast missives to large groups).

The spokesman also made a point of flagging how the latter has been called out for a lack of co-operation by security agencies. So the company is clearly hoping to shift the big red finger of terrorism propaganda blame onto the rival Telegram messaging platform.

Abusive behavior triggered 98% of gov’t TOS reports

In this 11th edition of its Transparency Report Twitter has also expanded the categories it breaks out in the government TOS reports section (which it added in its 10th report) to now show a breakdown across four categories of these reports — namely: Abusive Behavior, Copyright, Promotion of Terrorism, and Trademark.

This shows that the vast majority of reports Twitter is receiving from governments relate to abusive behavior on Twitter — which it says accounted for 98 per cent of global government TOS reports it received — with pro-terrorism content a very, very distant second (accounting for around 2 per cent of the reports).

This is interesting as it underlines the huge difference in how Twitter is approaching terrorism-related content vs abusive behavior — with the vast majority (92 per cent) of accounts reported for terrorism going on to be removed by Twitter from its platform vs just 13 per cent (as Twitter reports it) of those reported for abusive behavior actually being suspended.

In the report Twitter says the fact that the vast majority of abuse-related reports resulted in no content being removed is down to “a variety of reasons” —

… such as the reporter failing to identify content on Twitter or our investigation finding that the reported content did not violate our Terms. As we take an objective approach to processing global Terms of Service reports, the fact that the reporters in these cases happened to be government officials had no bearing on whether any action was taken under our Rules.

You could argue that terrorism is a rather easier category of content to identify than ‘abusive behavior’ — the latter representing something of a subjective spectrum when you’re judging a package of content delivered in tweet form (and depending, of course, on how high you dial up your ‘free speech’ setting), and certainly a more subjective call than pro-terrorism content specifically.

Though there’s no doubt Twitter is still the target of fierce criticism, including by many users, for how its platform continues to enable, for example, misogynist troll armies to pile in and harass women en masse. And such co-ordinated harassment clearly undermines the free speech rights of those being targeted. (Though Twitter has claimed to be stepping up its anti-abuse measures and tools.)

The company also continues to be criticized for racist speech on its platform. Even though its TOS expressly forbids “hateful conduct” including “on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease”.

Just this August the company was called out — in this instance by a UK parliamentary committee — for failing to act on abusive tweets, including failing to take down graphic images of suspected rape and abuse which, its critics argue, clearly violate its own community standards — which forbid inciting or engaging in “targeted abuse or harassment of others”.

In that instance the Guardian reported that the committee chair wrote to Twitter asking it to explain its methodology and timescales for removing graphic pictures and sexually explicit messages, and also asking it to provide details of the average time taken to investigate reports and take down tweets, as well as what action is being taken to speed up removals.

The MP also sought information on how many staff Twitter employs actively looking for abusive content, and for more detail on its policy on the removal of tweets and suspension of accounts.

Which are exactly the sorts of questions Twitter’s Transparency Report does not answer. Although it is at least now breaking out abusive behavior as a government TOS reports category and revealing it to be, by an overwhelming margin, the number one issue being reported by government agencies.

We can’t compare this with prior Transparency Reports as Twitter was not previously breaking government reports into specific categories. But its inclusion and prominence now does suggest politicians are feeling under pressure to take action to try to curb abuse taking place on Twitter.

Of the government-reported abusive content that Twitter did remove, the company reports the largest proportion was related to harassment and “hateful conduct” — stating: “The majority was removed for violating rules under these areas: harassment (37%), hateful conduct (35%), and impersonation (13%)”.

“The remainder of the violating content fell within other areas of our prohibitions against abusive behavior as set forth in the Twitter Rules,” it adds.

Asked if it could disclose the geographical locations where it receives the most government reports relating to abusive behavior on its platform, the Twitter spokeswoman told us it cannot provide “that level of granularity this time”.

Nor, she told us, is it able to disclose the geographies where it did take action on the minority of government reports on abusive behavior and remove accounts.

The company does not reveal in this report how many reports of abusive behavior it receives from all users generally (rather than just from government agencies). But now that it’s breaking out government agency reports of abusive behavior it should at least be possible to see how political pressure on Twitter over this issue rises (or falls) going forward.

Elsewhere in the Transparency Report, Twitter notes it has expanded its U.S. country report, adding a breakdown of California state information requests at the county level — and says it has plans to introduce this section to other states in future to help users “get a better idea of how frequently their local authorities seek user account information”.

Over the report period, it also says it received 6 per cent more global government requests for account information, which affected 3 per cent fewer accounts than in the previous period. It further notes requests originated from four new countries: Nepal, Paraguay, Panama, and Uruguay.

“In addition, we received approximately 10% more global legal requests to remove content impacting roughly 12% more accounts compared to the previous reporting period. These included requests from nine new countries: Bahrain, China, Croatia, Finland, Nepal, Paraguay, Poland, Qatar, Ukraine, and Uruguay,” it adds.

Snap joins rivals Facebook and YouTube to fight terrorism


Snap Inc has joined the Global Internet Forum to Counter Terrorism, which sees consumer internet companies cooperating to stop the spread of terrorism and extremism online. Facebook, Google’s YouTube, Microsoft and Twitter formed the GIFCT last month, and tomorrow it will host its first workshop with fellow tech companies plus government and non-governmental organizations.

The GIFCT started as an extension of the shared industry hash database that allows tech companies to share the digital fingerprints of extremist and terrorist content, such as photos and videos, so that once one identifies a piece of prohibited content, all the others can also block its upload. It’s almost like a vaccine program, where one company beats an infection, then shares how to produce antibodies with the rest of the group.
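
To make that mechanism concrete, here’s a hypothetical sketch of how a shared hash database can work. Production systems typically rely on perceptual hashes (such as Microsoft’s PhotoDNA) so that re-encoded or slightly altered copies still match; the plain SHA-256 below is used only to keep the example self-contained, and all names are illustrative rather than drawn from the GIFCT’s actual implementation:

```python
# Hypothetical shared hash database sketch; not the GIFCT's real system.
import hashlib

shared_hash_db = set()  # fingerprints contributed by all member companies

def fingerprint(content: bytes) -> str:
    """A real system would use a perceptual hash; SHA-256 keeps this self-contained."""
    return hashlib.sha256(content).hexdigest()

def flag_content(content: bytes) -> None:
    """One company identifies prohibited content and shares its fingerprint."""
    shared_hash_db.add(fingerprint(content))

def should_block_upload(content: bytes) -> bool:
    """Every other company can now catch the same file at upload time."""
    return fingerprint(content) in shared_hash_db

# Once one platform flags a video, an identical upload elsewhere is blocked.
flagged_video = b"...bytes of a known propaganda video..."
flag_content(flagged_video)
assert should_block_upload(flagged_video)
```

The vaccine analogy above maps directly onto this: one member does the expensive identification work, and it is the fingerprint — not the content itself — that gets shared with the group.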

In identical blog posts published by Facebook, YouTube, Twitter and Microsoft, the GIFCT wrote: “Our mission is to substantially disrupt terrorists’ ability to use the Internet in furthering their causes, while also respecting human rights.”

The first GIFCT workshop, to be held in San Francisco on August 1st, will host the United Kingdom Home Secretary Rt Hon Amber Rudd MP and United States Acting Secretary of Homeland Security Elaine Duke, plus representatives of the European Union, United Nations, Australia and Canada. The event’s goal is to formalize how the tech giants can collaborate with smaller companies, and what support those companies would need to get involved.

In the coming months, the group’s goals include adding three more tech companies to the hash sharing program beyond new members Snap and JustPaste.it, getting 50 companies to share their best practices for countering extremism through the Tech Against Terrorism project, and planning four knowledge-sharing workshops.

Improving automated moderation and deletion of terrorist content is critical to preventing it from slipping through the cracks. While internet giants like Facebook typically employ thousands of contractors to sift through reported content, they often have to work extraordinarily fast through endless queues of disturbing imagery that can leave them emotionally damaged. Using a shared hash database and best practices could relieve humans of some of this tough work while potentially improving the speed and accuracy with which terrorist propaganda is removed.

It’s good to see Facebook and Snap putting aside their differences for a good cause. While Snap is notorious for its secrecy, and Facebook for its copying of competitors, the GIFCT sees them openly sharing data and strategies to limit the spread of terrorist propaganda online. There is plenty of nuance to determining where free speech ends and inciting violence begins, so cooperation could improve all the member companies’ processes.

Beyond banishing content purposefully shared by terrorists, there remains the question of how algorithmically sorted content feeds like Facebook and Twitter handle the non-stop flood of news about terrorist attacks. Humans are evolutionarily disposed to seek information about danger. But when we immerse ourselves into the tragic details of any terrorist attack around the world, we can start to perceive these attacks as more frequent and dangerous than they truly are.

As former Google design ethicist Tristan Harris discusses, social networks know that we’re drawn to content that makes us outraged. As the GIFCT evolves, it would be good to see it research how news and commentary about terrorism should best be handled by curation algorithms to permit free speech, unbiased distribution of information and discussion without exploiting tragedy for engagement.

US lifts laptop ban for all remaining airlines and airports


The U.S. has now lifted entirely a controversial ban on laptops in hand luggage for passengers flying to the country from the Middle East or via certain Middle Eastern airlines, with the Department of Homeland Security professing itself satisfied with “enhanced security measures in place”.

It had already lifted the ban for three major airlines, earlier this month. But late yesterday an official tweeted that all restrictions had been lifted for remaining airlines and airports.

The laptop ban, which also barred other large electronic devices such as tablets and e-readers from hand luggage, was initiated in March. It immediately covered all flights to U.S. destinations from 10 airports in the Middle East, including major travel hubs such as Dubai, Abu Dhabi and Doha, as well as nine airlines.

The ban was said to have been introduced to increase national security, based on evaluated intelligence that indicated terrorist organizations were looking to hide explosives in consumer electronics and smuggle them onto passenger planes.

However there were questions over the timing of the ban; the choice of affected airports and airlines; and even some suggestions the motive might be economic protectionism, given U.S. airlines were not affected by a ban that created a lot of extra hassle for travelers and especially traveling business people (so might well have been bad for the business of the affected airlines).

Add to that the sight of U.S. President Donald Trump following up an earlier, highly controversial executive order — which had sought to place restrictions on travel to the U.S. from seven majority-Muslim countries — with a second prohibitive measure targeting companies from the region, and some suggested the ban was motivated by anti-Muslim prejudice.

That said, the UK also initiated a laptop ban in March, following the US’ lead — albeit, targeting a slightly different list of airlines operating direct flights into the country from the Middle East and North Africa.

We’ve confirmed with the UK’s Department for Transport that its laptop ban remains.

“To be clear, the restrictions introduced by the UK government in March currently remain in place,” a spokesman for the Department for Transport told us.

A Turkish news agency, citing diplomatic sources, has reported that the UK’s ban will soon be lifted for direct flights from Turkey. However the spokesman declined to comment on “rumor and speculation”.

Facebook, Microsoft, YouTube and Twitter form Global Internet Forum to Counter Terrorism


Today Facebook, Microsoft, YouTube and Twitter collectively announced a new partnership aimed at reducing the accessibility of internet services to terrorists. The new Global Internet Forum to Counter Terrorism adds structure to existing efforts by the companies to target and remove from major web platforms recruiting materials for terror groups.

Together, the four tech leaders say they will collaborate on engineering solutions to the problem, sharing content classification techniques and effective reporting methods for users. Each company also will contribute to both technical and policy research and share best practices for counterspeech initiatives.

Back in December of 2016, the same four companies announced the creation of a shared industry hash database. By sharing hashes with each other, the group was able to collectively identify terror accounts without each having to do the time- and resource-intensive legwork independently. This new organization creates more formal bureaucracy for improving that database.

Similarly, Facebook, Microsoft, YouTube and Twitter will be teaching smaller companies and organizations to follow in their footsteps and adopt their own proactive plans for combating terror. A portion of this training will cover key strategies for executing counterspeech programs like YouTube’s Creators for Change and Facebook’s P2P and OCCI.

All of these actions are occurring side-by-side with public sector efforts. The G7 has been vocal about the importance of combating extremism with a multi-pronged approach. Today’s partnership further solidifies the relationship between four multi-national tech companies with the aim of pushing back against terrorism on their respective platforms.
