All posts in “social media platforms”

Reports say White House has drafted an order putting the FCC in charge of monitoring social media

In the executive order, the White House says it received more than 15,000 complaints about censorship by the technology platforms. The order also includes an offer to share the complaints with the Federal Trade Commission.

As part of the order, the Federal Trade Commission would be required to open a public complaint docket and coordinate with the Federal Communications Commission on investigations of how technology companies curate their platforms — and whether that curation is politically agnostic.

Under the proposed rule, any company whose monthly user base includes more than one-eighth of the U.S. population would be subject to oversight by the regulatory agencies. A roster of companies subject to the new scrutiny would include Facebook, Google, Instagram, Twitter, Snap and Pinterest.

At issue is how broadly or narrowly companies are protected under the Communications Decency Act, which was part of the Telecommunications Act of 1996. Social media companies use the Act to shield against liability for the posts, videos or articles that are uploaded from individual users or third parties.

The Trump administration isn't the only player in Washington focused on the laws that shield social media platforms from legal liability. House Speaker Nancy Pelosi took technology companies to task earlier this year in an interview with Recode.

The criticisms may come from different sides of the political spectrum, but their focus on the ways in which tech companies could use Section 230 of the Act is the same.

The White House’s executive order would ask the FCC to disqualify social media companies from immunity if they remove or limit the dissemination of posts without first notifying the user or third party that posted the material, or if the decision from the companies is deemed anti-competitive or unfair.

The FTC and FCC had not responded to a request for comment at the time of publication.

UK watchdog eyeing PM Boris Johnson’s Facebook ads data grab

The online campaigning activities of the UK’s new prime minister, Boris Johnson, have already caught the eye of the country’s data protection watchdog.

Responding to concerns about the scope of data processing set out in the Conservative Party’s Privacy Policy being flagged to it by a Twitter user, the Information Commissioner’s Office replied that: “This is something we are aware of and we are making enquiries.”

The Privacy Policy is currently attached to an online call to action that asks Brits to tell the party what the most “important issue” to them and their family is, alongside submitting their personal data.

Anyone sending their contact details to the party is also asked to pick from a pre-populated list of 18 issues the three most important to them. The list runs the gamut from the National Health Service to Brexit, terrorism, the environment, housing, racism and animal welfare, to name a few. The online form also asks responders to select from a list how they voted at the last General Election — to help make the results “representative”. A final question asks which party they would vote for if a General Election were called today.

Speculation is rife in the UK right now that Johnson, who only became PM two weeks ago, is already preparing for a general election. His government's working majority has been reduced to just one MP after the party lost a by-election to the Liberal Democrats last week, even as the October 31 Brexit deadline fast approaches.

People who submit their personal data to the Conservative’s online survey are also asked to share it with friends with “strong views about the issues”, via social sharing buttons for Facebook and Twitter or email.

“By clicking Submit, I agree to the Conservative Party using the information I provide to keep me updated via email, online advertisements and direct mail about the Party’s campaigns and opportunities to get involved,” runs a note under the initial ‘submit — and see more’ button, which also links to the Privacy Policy “for more information”.

If you click through to the Privacy Policy you will find a laundry list of examples of types of data the party says it may collect about you — including what it describes as “opinions on topical issues”; “family connections”; “IP address, cookies and other technical information that you may share when you interact with our website”; and “commercially available data – such as consumer, lifestyle, household and behavioural data”.

“We may also collect special categories of information such as: Political Opinions; Voting intentions; Racial or ethnic origin; Religious views,” it further notes, and it goes on to claim its legal basis for processing this type of sensitive data is for supporting and promoting “democratic engagement and our legitimate interest to understand the electorate and identify Conservative supporters”.

Third party sources for acquiring data to feed its political campaigning activity listed in the policy include “social media platforms, where you have made the information public, or you have made the information available in a social media forum run by the Party” and “commercial organisations”, as well as “publicly accessible sources or other public records”.

“We collect data with the intention of using it primarily for political activities,” the policy adds, without specifying examples of what else people’s data might be used for.

It goes on to state that harvested personal data will be combined with other sources of data (including commercially available data) to profile voters — and “make a prediction about your lifestyle and habits”.

This processing will in turn be used to determine whether or not to send a voter campaign materials and, if so, to tailor the messages they contain.

In a nutshell this is describing social media microtargeting, such as Facebook ads, but for political purposes; a still unregulated practice that the UK’s information commissioner warned a year ago risks undermining trust in democracy.

Last year Elizabeth Denham went so far as to call for an ‘ethical pause’ in the use of microtargeting tools for political campaigning purposes. But a quick glance at Facebook’s Ad Library Archive — which the company launched in response to concerns about the lack of transparency around political ads on its platform, saying it will retain imprints of ads run by political parties for up to seven years — shows the polar opposite has happened.

Since last year’s warning about democratic processes being undermined by big data mining social media platforms, the ICO has also warned that behavioral ad targeting does not comply with European privacy law. (Though it said it will give the industry time to amend its practices rather than step in to protect people’s rights right now.)

Denham has also been calling for a code of conduct to ensure voters understand how and why they’re being targeted with customized political messages, telling a parliamentary committee enquiry investigating online disinformation early last year that the use of such tools “may have got ahead of where the law is” — and that the chain of entities involved in passing around voters’ data for the purposes of profiling is “much too opaque”.

“I think it might be time for a code of conduct so that everybody is on a level playing field and knows what the rules are,” she said in March 2018, adding that the use of analytics and algorithms to make decisions about the microtargeting of voters “might not have transparency and the law behind them.”

The DCMS committee later urged the government to fast-track changes to electoral law to reflect the use of powerful new voter targeting technologies — including calling for a total ban on microtargeting political ads at so-called ‘lookalike’ audiences online.

The government, then led by Theresa May, paid little heed to the committee’s recommendations.

And from the moment he arrived in Number 10 Downing Street last month, after winning a leadership vote of the Conservative Party’s membership, new prime minister Johnson began running scores of Facebook ads to test voter opinion.

Sky News reported that the Conservative Party ran 280 ads on Facebook platforms on the PM’s first full day in office. At the time of writing the party is still ploughing money into Facebook ads, per Facebook’s Ad Library Archive — shelling out £25,270 in the past seven days alone to run 2,464 ads, per Facebook’s Ad Library Report, which makes it by far the biggest UK advertiser by spend for the period.


The Tories’ latest crop of Facebook ads contain another call to action — this time regarding a Johnson pledge to put 20,000 more police officers on the streets. Any Facebook user who clicks the embedded link is redirected to a Conservative Party webpage described as a ‘New police locator’, which informs them: “We’re recruiting 20,000 new police officers, starting right now. Want to see more police in your area? Put your postcode in to let Boris know.”

But anyone who inputs their personal data into this online form will also be letting the Conservatives know a lot more about them than just that they want more police on their local beat. In small print the website notes that those clicking submit are also agreeing to the party processing their data for its full suite of campaign purposes — as contained in the expansive terms of its Privacy Policy mentioned above.

So, basically, it’s another data grab…


Political microtargeting was of course core to the online modus operandi of the disgraced political data firm, Cambridge Analytica, which infamously paid an app developer to harvest the personal data of millions of Facebook users back in 2014 without their knowledge or consent — in that case using a quiz app wrapper and Facebook’s lack of any enforcement of its platform terms to grab data on millions of voters.

Cambridge Analytica paid data scientists to turn this cache of social media signals into psychological profiles which they matched to public voter register lists — to try to identify the most persuadable voters in key US swing states and bombard them with political messaging on behalf of their client, Donald Trump.

Much like the Conservative Party is doing, Cambridge Analytica sourced data from commercial partners — in its case claiming to have licensed millions of data points from data broker giants such as Acxiom, Experian and Infogroup. (The Conservatives’ privacy policy does not specify which brokers it pays to acquire voter data.)

Aside from data, what’s key to this type of digital political campaigning is the ability, afforded by Facebook’s ad platform, for advertisers to target messages at what are referred to as ‘lookalike audiences’ — and to do so cheaply and at vast scale. Essentially, Facebook provides its own pervasive surveillance of the 2.2BN+ users on its platforms as a commercial service, letting advertisers pay to identify and target other people with a similar social media usage profile to those whose contact details they already hold, by uploading those details to Facebook.

This means a political party can data-mine its own supporter base to identify the messages that resonate best with different groups within that base, and then flip all that profiling around — using Facebook to dart ads at people who may never in their life have clicked ‘Submit — and see more‘ on a Tory webpage but who happen to share a similar social media profile to others in the party’s target database.
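The contact-upload step in that flow can be sketched in a few lines. This is an illustrative sketch, not Facebook's actual code (the function name and the supporter list are hypothetical): customer-list uploads to ad platforms, including Facebook's custom audiences, generally require identifiers to be normalized and SHA-256 hashed first, so the matching happens against hashes rather than raw email addresses.

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Trim whitespace, lowercase, then SHA-256 hash an email address.

    This mirrors the normalization ad platforms typically require before
    a contact list is uploaded for audience matching, so raw addresses
    are not sent in plain text.
    """
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# A hypothetical supporter list a party might hold.
supporters = ["Alice@Example.com ", "bob@example.org"]
hashed_audience = [normalize_and_hash(e) for e in supporters]
```

The platform then matches those hashes against its own user records and, for lookalike targeting, finds other users whose profiles resemble the matched set.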

Facebook users currently have no way of blocking being targeted by political advertisers on Facebook, nor indeed any way to generally switch off microtargeted ads, which use personal data to select marketing messages.

That’s the core ethical concern in play when Denham talks about the vital need for voters in a democracy to have transparency and control over what’s done with their personal data. “Without a high level of transparency – and therefore trust amongst citizens that their data is being used appropriately – we are at risk of developing a system of voter surveillance by default,” she warned last year.

However the Conservative Party’s privacy policy sidesteps any concerns about its use of microtargeting, with the breezy claim that: “We have determined that this kind of automation and profiling does not create legal or significant effects for you. Nor does it affect the legal rights that you have over your data.”

The software the party is using for online campaigning appears to be NationBuilder: campaign management software developed in the US a decade ago — which has also been used by the Trump campaign and by both sides of the 2016 Brexit referendum campaign (to name a few of its many clients).

Its privacy policy shares the same format and much of the same language as one used by the Scottish National Party’s yes campaign during Scotland’s independence referendum, for instance. (The SNP was an early user of NationBuilder to link social media campaigning to a new web platform in 2011, before going on to secure a majority in the Scottish parliament.)

So the Conservatives are by no means the only UK political entity to be dipping their hands in the cookie jar of social media data. Although they are the governing party right now.

Indeed, a report by the ICO last fall essentially called out all UK political parties for misusing people’s data.

Issues “of particular concern” the regulator raised in that report were:

  • the purchasing of marketing lists and lifestyle information from data brokers without sufficient due diligence around those brokers and the degree to which the data has been properly gathered and consented to;
  • a lack of fair processing information;
  • the use of third-party data analytics companies with insufficient checks that those companies have obtained correct consents for use of data for that purpose;
  • assuming ethnicity and/or age and combining this with electoral data sets they hold, raising concerns about data accuracy;
  • the provision of contact lists of members to social media companies without appropriate fair processing information and collation of social media with membership lists without adequate privacy assessments.

The ICO issued formal warnings to 11 political parties at that time, including warning the Conservative Party about its use of people’s data.

The regulator also said it would commence audits of all 11 parties starting in January. It’s not clear how far along it’s got with that process. We’ve reached out to it with questions.

Last year the Conservative Party quietly discontinued use of a different digital campaign tool for activists, which it had licensed from a US-based app developer called uCampaign. That tool had also been used in the US by Republican campaigns including Trump’s.

As we reported last year the Conservative Campaigner app, which was intended for use by party activists, linked to the developer’s own privacy policy — which included clauses granting uCampaign very liberal rights to share app users’ data, with “other organizations, groups, causes, campaigns, political organizations, and our clients that we believe have similar viewpoints, principles or objectives as us”.

Any users of the app who uploaded their phone’s address book were also handing their friends’ data straight to uCampaign to do with as it wished. A few months later, after the Conservative Campaigner app vanished from app stores, a note was put up online claiming the company was no longer supporting clients in Europe.

Instagram will now warn you before your account gets deleted, offer in-app appeals

Instagram this morning announced several changes to its moderation policy, the most significant of which is that it will now warn users if their account could become disabled before that actually takes place. This change addresses a longstanding complaint: users would launch Instagram only to find that their account had been shut down without any warning.

While it’s one thing for Instagram to disable accounts for violating its stated guidelines, the service’s automated systems haven’t always gotten things right. The company has come under fire before for banning innocuous photos, like those of mothers breastfeeding their children, for example, or art. (Or, you know, Madonna.)

Now the company says it will introduce a new notification process that will warn users if their account is at risk of being disabled. The notification will also allow them to appeal the removal of their content in some cases.

For now, users will be able to appeal moderation decisions around Instagram’s nudity and pornography policies, as well as its bullying and harassment, hate speech, drug sales, and counter-terrorism policies. Over time, Instagram will expand the appeal capabilities to more categories.

The change means users won’t be caught off guard by Instagram’s enforcement actions. Plus, they’ll be given a chance to appeal a decision directly in the app, instead of only through the Help Center as before.


In addition, Instagram says it will step up its enforcement against bad actors.

Previously, it could remove accounts that had a certain percentage of content in violation of its policies. But now it will also be able to remove accounts that have a certain number of violations within a window of time.

“Similarly to how policies are enforced on Facebook, this change will allow us to enforce our policies more consistently and hold people accountable for what they post on Instagram,” the company says in its announcement.

The changes follow a recent threat of a class-action lawsuit against the photo-sharing network led by the Adult Performers Actors Guild. The organization claimed Instagram was banning the adult performers’ accounts, even when there was no nudity being shown.

“It appears that the accounts were terminated merely because of their status as an adult performer,” James Felton, the Adult Performers Actors Guild legal counsel, told the Guardian in June. “Efforts to learn the reasons behind the termination have been futile,” he said, adding that the Guild was considering legal action.

The Electronic Frontier Foundation (EFF) also this year launched an anti-censorship campaign, TOSSed Out, which aimed to highlight how social media companies unevenly enforce their terms of service. As part of its efforts, the EFF examined the content moderation policies of 16 platforms and app stores, including Facebook, Twitter, the Apple App Store, and Instagram.

It found that only four companies — Facebook, Reddit, Apple, and GitHub — had committed to actually telling users which community guideline violation or legal request had led to their content being removed.

“Providing an appeals process is great for users, but its utility is undermined by the fact that users can’t count on companies to tell them when or why their content is taken down,” said Gennie Gebhart, EFF associate director of research, at the time of the report. “Notifying people when their content has been removed or censored is a challenge when your users number in the millions or billions, but social media platforms should be making investments to provide meaningful notice.”

Instagram’s policy change focused on cracking down on repeat offenders is rolling out now, while the ability to appeal decisions directly within the app will arrive in the coming months.

Facebook can be told to cast a wider net to find illegal content, says EU court advisor

How much of an obligation should social media platforms be under to hunt down illegal content?

An influential advisor to Europe’s top court has taken the view that social media platforms like Facebook can be required to seek out and identify posts that are equivalent to content that an EU court has deemed illegal — such as hate speech or defamation — if the comments have been made by the same user.

Platforms can also be ordered to hunt for identical repostings of the illegal content.

But there should not be an obligation for platforms to identify equivalent defamatory comments that have been posted by any user, with the advocate general opining that such a broad requirement would not ensure a fair balance between the fundamental rights concerned — flagging risks to free expression and free access to information.

“An obligation to identify equivalent information originating from any user would not ensure a fair balance between the fundamental rights concerned. On the one hand, seeking and identifying such information would require costly solutions. On the other hand, the implementation of those solutions would lead to censorship, so that freedom of expression and information might well be systematically restricted.”

We covered this referral to the CJEU last year.

It’s an interesting case that blends questions of hate speech moderation and the limits of robust political speech, given that the original 2016 complaint of defamation was made by the former leader of the Austrian Green Party, Eva Glawischnig.

An Austrian court agreed with Glawischnig that hate speech posts made about her on Facebook were defamatory and ordered the company to remove them. Facebook did so, but only in Austria. Glawischnig challenged its partial takedown and in May 2017 a local appeals court ruled that it must remove both the original posts and any verbatim repostings and do so worldwide, not just in Austria. 

Further legal appeals led to the referral to the CJEU which is being asked to determine where the line should be drawn for similarly defamatory postings, and whether takedowns can be applied globally or only locally.

On the global takedowns point, the advocate general believes that existing EU law does not present an absolute blocker to social media platforms being ordered to remove information worldwide.

“Both the question of the extraterritorial effects of an injunction imposing a removal obligation and the question of the territorial scope of such an obligation should be analysed, in particular, by reference to public and private international law,” runs the non-binding opinion.

Another element relates to the requirement under existing EU law that platforms should not be required to carry out general monitoring of information they store — and specifically whether that directive precludes platforms from being ordered to remove “information equivalent to the information characterised as illegal” when they have been made aware of it by the person concerned, third parties or another source. 

On that, the AG takes the view that the EU’s e-Commerce Directive does not prevent platforms from being ordered to take down equivalent illegal content when it’s been flagged to them by others — writing that, in that case, “the removal obligation does not entail general monitoring of information stored”.

Advocate General Maciej Szpunar’s opinion — which can be read in full here — is not the last word on the matter, with the court still to deliberate and issue its final decision (usually within three to six months of an AG opinion). However advisors to the CJEU are influential and tend to predict which way the court will jump.

We reached out to Facebook for comment. A spokesperson for the company told us:

This case raises important questions about freedom of expression online and about the role that internet platforms should play in locating and removing speech, particularly when it comes to political discussions and criticizing elected officials. We remove content that breaks the law and our priority is always to keep people on Facebook safe. However this opinion undermines the long-standing principle that one country should not have the right to limit free expression in other countries. We hope the CJEU will clarify that, even in the age of the internet, the scope of court orders from one country must be limited to its borders.

This report was updated with comment from Facebook.

UK Internet attitudes study finds public support for social media regulation

UK telecoms regulator Ofcom has published a new joint report and stat-fest on Internet attitudes and usage with the national data protection watchdog, the ICO — a quantitative study to be published annually which they’re calling the Online Nation report.

The new structure hints at the direction of travel for online regulation in the UK, following government plans set out in a recent whitepaper to regulate online harms — which will include creating a new independent regulator to ensure Internet companies meet their responsibilities.

Ministers are still consulting on whether this should be a new or existing body. But both Ofcom and the ICO have relevant interests in being involved — so it’s fitting to see joint working going into this report.

“As most of us spend more time than ever online, we’re increasingly worried about harmful content — and also more likely to come across it,” writes Yih-Choung Teh, group director of strategy and research at Ofcom, in a statement. “For most people, those risks are still outweighed by the huge benefits of the internet. And while most internet users favour tighter rules in some areas, particularly social media, people also recognise the importance of protecting free speech – which is one of the internet’s great strengths.”

While it’s not yet clear exactly what form the UK’s future Internet regulator will take, the Online Nation report does suggest a flavor of the planned focus.

The report, which is based on responses from 2,057 adult internet users and 1,001 children, flags as a top-line finding that eight in ten adults have concerns about some aspects of Internet use and further suggests the proportion of adults concerned about going online has risen from 59% to 78% since last year (though its small-print notes this result is not directly comparable with last year’s survey so “can only be interpreted as indicative”).

Another stat being highlighted is a finding that 61% of adults have had a potentially harmful online experience in the past year — rising to 79% among children (aged 12-15). (Albeit with the caveat that it’s using a “broad definition”, with experiences ranging from “mildly annoying to seriously harmful”.)

A full 83% of polled adults also expressed concern about harms to children on the Internet.

The UK government, meanwhile, has made child safety a key focus of its push to regulate online content.

At the same time the report found that most adults (59%) agree that the benefits of going online outweigh the risks, and 61% of children think the internet makes their lives better.

While Ofcom’s annual Internet reports of years past often had a fairly dry flavor, tracking usage such as time spent online on different devices and particular services, the new joint study puts more of an emphasis on attitudes to online content and how people understand (or don’t) the commercial workings of the Internet — delving into more nuanced questions, such as by asking web users whether they understand how and why their data is collected, and assessing their understanding of ad-supported business models, as well as registering relative trust in different online services’ use of personal data.

The report also assesses public support for Internet regulation — and on that front it suggests there is increased support for greater online regulation in a range of areas. Specifically it found that most adults favour tighter rules for social media sites (70% in 2019, up from 52% in 2018); video-sharing sites (64% v. 46%); and instant-messaging services (61% v. 40%).

At the same time it says nearly half (47%) of adult internet users expressed recognition that websites and social media platforms play an important role in supporting free speech — “even where some people might find content offensive”. So the subtext there is that future regulation of harmful Internet content needs to strike the right balance.

On managing personal data, the report found most Internet users (74%) say they feel confident to do so. A majority of UK adults are also happy for companies to collect their information under certain conditions — vs over a third (39%) saying they are not happy for companies to collect and use their personal information.

Those conditions look to be key, though — with only small minorities reporting they are happy for their personal data to be used to program content (17% of adult Internet users were okay with this); and to target them with ads (only 18% didn’t mind that, so most do).

Trust in online services to protect user data and/or use it responsibly also varies significantly, per the report findings — with social media definitely in the dog house on that front. “Among ten leading UK sites, trust among users of these services was highest for BBC News (67%) and Amazon (66%) and lowest for Facebook (31%) and YouTube (34%),” the report notes.

Despite low privacy trust in tech giants, more than a third (35%) of the total time spent online in the UK is on sites owned by Google or Facebook.

“This reflects the primacy of video and social media in people’s online consumption, particularly on smartphones,” it writes. “Around nine in ten internet users visit YouTube every month, spending an average of 27 minutes a day on the site. A similar number visit Facebook, spending an average of 23 minutes a day there.”

And while the report records relatively high awareness that personal data collection is happening online — finding that 71% of adults were aware of cookies being used to collect information through websites they’re browsing (falling to 60% for social media accounts; and 49% for smartphone apps) — most (69%) also reported accepting terms and conditions without reading them.

So, again, mainstream public awareness of how personal data is being used looks questionable.

The report also flags limited understanding of how search engines are funded — despite the bald fact that around half of UK online advertising revenue comes from paid-for search (£6.7BN in 2018). “[T]here is still widespread lack of understanding about how search engines are funded,” it writes. “Fifty-four per cent of adult internet users correctly said they are funded by advertising, with 18% giving an incorrect response and 28% saying they did not know.”

The report also highlights the disconnect between time spent online and digital ad revenue generated by the adtech duopoly, Google and Facebook — which it says together generated an estimated 61% of UK online advertising revenue in 2018; a share of revenue that it points out is far greater than time spent (35%) on their websites (even as those websites are the most visited by adults in the UK).

As in previous years of Ofcom ‘state of the Internet’ reports, the Online Nation study also found that Facebook use still dominates the social media landscape in the UK.

Though use of the eponymous service continues falling (from 95% of social media users in 2016 to 88% in 2018). Even as use of other Facebook-owned social properties — Instagram and WhatsApp — grew over the same period.

The report also recorded an increase in people using multiple social services — with just a fifth of social media users only using Facebook in 2018 (down from 32% in 2016). Though as noted above, Facebook still dominates time spent, clocking up way more time (~23 minutes) per user per day on average vs Snapchat (around nine minutes) and Instagram (five minutes).

A large majority (74%) of Facebook users also still check it at least once a day.

Overall, the report found that Brits have a varied online diet, though — on average spending a minute or more each day on 15 different internet sites and apps. Even as online ad revenues are not so equally distributed.

“Sites and apps that were not among the top 40 sites ranked by time spent accounted for 43% of average daily consumption,” the report notes. “Just over one in five internet users said that in the past month they had used ‘lots of websites or apps they’ve never used before’ while a third (36%) said they ‘only use websites or apps they’ve used before’.”

There is also variety when it comes to how Brits search for stuff online: while 97% of adult internet users still use search engines, the report found a range of other services also in the mix.

It found that nearly two-thirds of people (65%) go more often to specific sites to find specific things, such as a news site for news stories or a video site for videos; while 30% of respondents said they used to have a search engine as their home page but no longer do.

The high proportion of searches being registered on shopping websites/apps (61%) also looks interesting in light of the 2017 EU antitrust ruling against Google Shopping — when the European Commission found Google had demoted rival shopping comparison services in search results, while promoting its own, thereby undermining rivals’ ability to gain traffic and brand recognition.

The report findings also indicate that use of voice-based search interfaces remains relatively low in the UK, with just 10% using voice assistants on a mobile phone — and even smaller percentages tapping into smart speakers (7%) or voice AIs on connected TVs (3%).

In another finding, the report suggests recommendation engines play a major part in content discovery.

“Recommendation engines are a key way for platforms to help people discover content and products — 70% of viewing to YouTube is reportedly driven by recommendations, while 35% of what consumers purchase on Amazon comes from recommendations,” it writes. 

In overarching aggregate, the report says UK adults now spend the equivalent of almost 50 days online per year.

While, each week, 44 million Brits use the internet to send or receive email; 29 million send instant messages; 30 million bank or pay bills via the internet; 27 million shop online; and 21 million people download information for work, school or university.

The full report can be found here.