All posts in “digital media”

UK Internet attitudes study finds public support for social media regulation

UK telecoms regulator Ofcom has published a new joint report and stat-fest on Internet attitudes and usage with the national data protection watchdog, the ICO — a quantitative study to be published annually which they’re calling the Online Nation report.

The new structure hints at the direction of travel for online regulation in the UK, following government plans set out in a recent whitepaper to regulate online harms — which will include creating a new independent regulator to ensure Internet companies meet their responsibilities.

Ministers are still consulting on whether this should be a new or existing body. But both Ofcom and the ICO have relevant interests in being involved — so it’s fitting to see joint working going into this report.

“As most of us spend more time than ever online, we’re increasingly worried about harmful content — and also more likely to come across it,” writes Yih-Choung Teh, group director of strategy and research at Ofcom, in a statement. “For most people, those risks are still outweighed by the huge benefits of the internet. And while most internet users favour tighter rules in some areas, particularly social media, people also recognise the importance of protecting free speech — which is one of the internet’s great strengths.”

While it’s not yet clear exactly what form the UK’s future Internet regulator will take, the Online Nation report does suggest a flavor of the planned focus.

The report, which is based on responses from 2,057 adult internet users and 1,001 children, flags as a top-line finding that eight in ten adults have concerns about some aspects of Internet use and further suggests the proportion of adults concerned about going online has risen from 59% to 78% since last year (though its small-print notes this result is not directly comparable with last year’s survey so “can only be interpreted as indicative”).

Another stat being highlighted is a finding that 61% of adults have had a potentially harmful online experience in the past year — rising to 79% among children (aged 12-15). (Albeit with the caveat that it’s using a “broad definition”, with experiences ranging from “mildly annoying to seriously harmful”.)

A full 83% of polled adults also expressed concern about harms to children on the Internet.

The UK government, meanwhile, has made child safety a key focus of its push to regulate online content.

At the same time the report found that most adults (59%) agree that the benefits of going online outweigh the risks, and 61% of children think the internet makes their lives better.

While Ofcom’s annual Internet reports of years past often had a fairly dry flavor, tracking usage such as time spent online on different devices and particular services, the new joint study puts more of an emphasis on attitudes to online content and how people understand (or don’t) the commercial workings of the Internet — delving into more nuanced questions, such as by asking web users whether they understand how and why their data is collected, and assessing their understanding of ad-supported business models, as well as registering relative trust in different online services’ use of personal data.

The report also assesses public support for Internet regulation — and on that front it suggests there is increased support for greater online regulation in a range of areas. Specifically it found that most adults favour tighter rules for social media sites (70% in 2019, up from 52% in 2018); video-sharing sites (64% v. 46%); and instant-messaging services (61% v. 40%).

At the same time it says nearly half (47%) of adult internet users expressed recognition that websites and social media platforms play an important role in supporting free speech — “even where some people might find content offensive”. So the subtext there is that future regulation of harmful Internet content needs to strike the right balance.

On managing personal data, the report found most Internet users (74%) say they feel confident to do so. A majority of UK adults are also happy for companies to collect their information under certain conditions — vs over a third (39%) saying they are not happy for companies to collect and use their personal information.

Those conditions look to be key, though — with only small minorities reporting they are happy for their personal data to be used to program content (17% of adult Internet users were okay with this) or to target them with ads (just 18% were comfortable with that, meaning most were not).

Trust in online services to protect user data and/or use it responsibly also varies significantly, per the report findings — with social media definitely in the dog house on that front. “Among ten leading UK sites, trust among users of these services was highest for BBC News (67%) and Amazon (66%) and lowest for Facebook (31%) and YouTube (34%),” the report notes.

Despite low privacy trust in tech giants, more than a third (35%) of the total time spent online in the UK is on sites owned by Google or Facebook.

“This reflects the primacy of video and social media in people’s online consumption, particularly on smartphones,” it writes. “Around nine in ten internet users visit YouTube every month, spending an average of 27 minutes a day on the site. A similar number visit Facebook, spending an average of 23 minutes a day there.”

And while the report records relatively high awareness that personal data collection is happening online — finding that 71% of adults were aware of cookies being used to collect information through websites they’re browsing (falling to 60% for social media accounts; and 49% for smartphone apps) — most (69%) also reported accepting terms and conditions without reading them.

So, again, mainstream public awareness of how personal data is being used looks questionable.

The report also flags limited understanding of how search engines are funded — despite the bald fact that around half of UK online advertising revenue comes from paid-for search (£6.7BN in 2018). “[T]here is still widespread lack of understanding about how search engines are funded,” it writes. “Fifty-four per cent of adult internet users correctly said they are funded by advertising, with 18% giving an incorrect response and 28% saying they did not know.”

The report also highlights the disconnect between time spent online and digital ad revenue generated by the adtech duopoly, Google and Facebook — which it says together generated an estimated 61% of UK online advertising revenue in 2018; a share of revenue that it points out is far greater than time spent (35%) on their websites (even as those websites are the most visited by adults in the UK).
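To make that disconnect concrete — the ratio below is our own back-of-the-envelope calculation from the two percentages quoted above, not a figure from the report:

```python
# Google and Facebook's estimated share of 2018 UK online ad revenue (61%)
# set against their share of total UK time spent online (35%).
revenue_share = 0.61
time_share = 0.35

# Revenue share per unit of time share: how much the duopoly "over-indexes"
# on ad money relative to attention captured.
ratio = revenue_share / time_share
print(round(ratio, 2))  # ≈ 1.74, i.e. roughly 1.7x more revenue than time would suggest
```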

As in previous years of Ofcom ‘state of the Internet’ reports, the Online Nation study also found that Facebook use still dominates the social media landscape in the UK.

Though use of the eponymous service continues falling (from 95% of social media users in 2016 to 88% in 2018). Even as use of other Facebook-owned social properties — Instagram and WhatsApp — grew over the same period.


The report also recorded an increase in people using multiple social services — with just a fifth of social media users only using Facebook in 2018 (down from 32% in 2016). Though as noted above, Facebook still dominates time spent, clocking up way more time (~23 minutes) per user per day on average vs Snapchat (around nine minutes) and Instagram (five minutes).

A large majority (74%) of Facebook users also still check it at least once a day.

Overall, the report found that Brits have a varied online diet, though — on average spending a minute or more each day on 15 different internet sites and apps. Even as online ad revenues are not so equally distributed.

“Sites and apps that were not among the top 40 sites ranked by time spent accounted for 43% of average daily consumption,” the report notes. “Just over one in five internet users said that in the past month they had used ‘lots of websites or apps they’ve used before’ while a third (36%) said they ‘only use websites or apps they’ve used before’.”

There is also variety when it comes to how Brits search for stuff online, and while 97% of adult internet users still use search engines the report found a variety of other services also in the mix. 

It found that nearly two-thirds of people (65%) go more often to specific sites to find specific things, such as a news site for news stories or a video site for videos; while 30% of respondents said they used to have a search engine as their home page but no longer do.

The high proportion of searches being registered on shopping websites/apps (61%) also looks interesting in light of the 2017 EU antitrust ruling against Google Shopping — when the European Commission found Google had demoted rival shopping comparison services in search results, while promoting its own, thereby undermining rivals’ ability to gain traffic and brand recognition.

The report findings also indicate that use of voice-based search interfaces remains relatively low in the UK, with just 10% using voice assistants on a mobile phone — and even smaller percentages tapping into smart speakers (7%) or voice AIs on connected TVs (3%).

In another finding, the report suggests recommendation engines play a major part in content discovery.

“Recommendation engines are a key way for platforms to help people discover content and products — 70% of viewing to YouTube is reportedly driven by recommendations, while 35% of what consumers purchase on Amazon comes from recommendations,” it writes. 

In overarching aggregate, the report says UK adults now spend the equivalent of almost 50 days online per year.
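For a rough sense of scale — this conversion is ours, not the report’s — almost 50 days a year works out to a little over three hours online a day:

```python
# Back-of-the-envelope conversion of the report's headline figure:
# almost 50 days online per year, expressed as hours per day.
days_online_per_year = 50
hours_per_day = days_online_per_year * 24 / 365
print(round(hours_per_day, 1))  # ≈ 3.3 hours a day
```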

While, each week, 44 million Brits use the internet to send or receive email; 29 million send instant messages; 30 million bank or pay bills via the internet; 27 million shop online; and 21 million people download information for work, school or university.

The full report can be found here.

Facebook still a great place to amplify pre-election junk news, EU study finds

A study carried out by academics at Oxford University to investigate how junk news is being shared on social media in Europe ahead of regional elections this month has found individual stories shared on Facebook’s platform can still hugely outperform the most important and professionally produced news stories, drawing as much as 4x the volume of Facebook shares, likes, and comments.

The study, conducted for the Oxford Internet Institute’s (OII) Computational Propaganda Project, is intended to respond to widespread concern about the spread of online political disinformation on EU elections which take place later this month, by examining pre-election chatter on Facebook and Twitter in English, French, German, Italian, Polish, Spanish, and Swedish.

Junk news in this context refers to content produced by known sources of political misinformation — aka outlets that are systematically producing and spreading “ideologically extreme, misleading, and factually incorrect information” — with the researchers comparing interactions with junk stories from such outlets to news stories produced by the most popular professional news sources to get a snapshot of public engagement with sources of misinformation ahead of the EU vote.

As we reported last year, the Institute also launched a junk news aggregator ahead of the US midterms to help Internet users get a handle on manipulative politically-charged content that might be hitting their feeds.

In the EU the European Commission has responded to rising concern about the impact of online disinformation on democratic processes by stepping up pressure on platforms and the adtech industry — issuing monthly progress reports since January after the introduction of a voluntary code of practice last year intended to encourage action to squeeze the spread of manipulative fakes. Albeit, so far these ‘progress’ reports have mostly boiled down to calls for less foot-dragging and more action.

One tangible result last month was Twitter introducing a report option for misleading tweets related to voting ahead of the EU vote, though again you have to wonder what took it so long, given that online election interference is hardly a new revelation. (The OII study is also just the latest piece of research to bolster the age-old maxim that falsehoods fly and the truth comes limping after.)

The study also examined how junk news spread on Twitter during the pre-EU election period, with the researchers finding that less than 4% of sources circulating on Twitter’s platform were junk news (or “known Russian sources”) — with Twitter users sharing far more links to mainstream news outlets overall (34%) over the study period.

Although the Polish language sphere was an exception — with junk news making up a fifth (21%) of EU election-related Twitter traffic in that outlying case.

Returning to Facebook: while the researchers do note that many more users interact with mainstream content overall via its platform — mainstream publishers have a higher following and so “wider access to drive activity around their content”, meaning their stories “tend to be seen, liked, and shared by far more users overall” — they also point out that junk news still packs a greater per-story punch. That is likely owing to tactics such as clickbait, emotive language and outragemongering in headlines, which continue to be shown to generate more clicks and engagement on social media.

It’s also of course much quicker and easier to make some shit up vs the slower pace of doing rigorous professional journalism — so junk news purveyors can get out ahead of news events, using speed as an eyeball-grabbing strategy to further the spread of their cynical BS. (And indeed the researchers go on to say that most of the junk news sources being shared during the pre-election period “either sensationalized or spun political and social events covered by mainstream media sources to serve a political and ideological agenda”.)

“While junk news sites were less prolific publishers than professional news producers, their stories tend to be much more engaging,” they write in a data memo covering the study. “Indeed, in five out of the seven languages (English, French, German, Spanish, and Swedish), individual stories from popular junk news outlets received on average between 1.2 to 4 times as many likes, comments, and shares than stories from professional media sources.

“In the German sphere, for instance, interactions with mainstream stories averaged only 315 (the lowest across this sub-sample) while nearing 1,973 for equivalent junk news stories.”

To conduct the research the academics gathered more than 584,000 tweets related to the European parliamentary elections from more than 187,000 unique users between April 5 and April 20 using election-related hashtags — from which they extracted more than 137,000 tweets containing a URL link, which pointed to a total of 5,774 unique media sources.

Sources that were shared 5x or more across the collection period were manually classified by a team of nine multi-lingual coders based on what they describe as “a rigorous grounded typology developed and refined through the project’s previous studies of eight elections in several countries around the world”.

Each media source was coded individually by two separate coders, a technique they say successfully labeled nearly 91% of all links shared during the study period.
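The collection-and-classification steps described above — keep only tweets with links, reduce each link to its media source, and send sources shared often enough to human coders — can be sketched in miniature. Note the tweet records, domains and lowered share threshold here are all hypothetical, purely to illustrate the filtering logic (the study used a threshold of five shares and two coders per source):

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical tweet records; in the study these were gathered via
# election-related hashtags between April 5 and April 20.
tweets = [
    {"user": "a", "url": "https://example-news.com/story1"},
    {"user": "b", "url": "https://junk-outlet.net/story9"},
    {"user": "c", "url": "https://example-news.com/story2"},
    {"user": "d", "url": None},  # tweets without links are dropped
]

# Step 1: keep only tweets containing a URL.
linked = [t for t in tweets if t["url"]]

# Step 2: reduce each link to its media source (its domain).
sources = Counter(urlparse(t["url"]).netloc for t in linked)

# Step 3: sources shared at least N times go to human coders for labelling.
THRESHOLD = 2  # lowered here so the toy data produces output
to_code = [s for s, n in sources.items() if n >= THRESHOLD]
print(to_code)  # domains meeting the share threshold
```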

The five most popular junk news sources were extracted from each language sphere looked at — with the researchers then measuring the volume of Facebook interactions with these outlets between April 5 and May 5, using the NewsWhip Analytics dashboard.

They also conducted a thematic analysis of the 20 most engaging junk news stories on Facebook during the data collection period to gain a better understanding of the different political narratives favoured by junk news outlets ahead of an election.

On the latter front they say the most engaging junk narratives over the study period “tend to revolve around populist themes such as anti-immigration and Islamophobic sentiment, with few expressing Euroscepticism or directly mentioning European leaders or parties”.

Which suggests that EU-level political disinformation is a more issue-focused animal (and/or less developed) — vs the kind of personal attacks that have been normalized in US politics (and were richly and infamously exploited by Kremlin-backed anti-Clinton political disinformation during the 2016 US presidential election, for example).

This is likely also because of a lower level of political awareness attached to individuals involved in EU institutions and politics, and the multi-national state nature of the pan-EU project — which inevitably bakes in far greater diversity. (We can posit that just as it aids robustness in biological life, diversity appears to bolster democratic resilience vs political nonsense.)

The researchers also say they identified two noticeable patterns in the thematic content of junk stories that sought to cynically spin political or social news events for political gain over the pre-election study period.

“Out of the twenty stories we analysed, 9 featured explicit mentions of ‘Muslims’ and the Islamic faith in general, while seven mentioned ‘migrants’, ‘immigration’, or ‘refugees’… In seven instances, mentions of Muslims and immigrants were coupled with reporting on terrorism or violent crime, including sexual assault and honour killings,” they write.

“Several stories also mentioned the Notre Dame fire, some propagating the idea that the arson had been deliberately plotted by Islamist terrorists, for example, or suggesting that the French government’s reconstruction plans for the cathedral would include a minaret. In contrast, only 4 stories featured Euroscepticism or direct mention of European Union leaders and parties.

“The ones that did either turned a specific political figure into one of derision – such as Arnoud van Doorn, former member of PVV, the Dutch nationalist and far-right party of Geert Wilders, who converted to Islam in 2012 – or revolved around domestic politics. One such story relayed allegations that Emmanuel Macron had been using public taxes to finance ISIS jihadists in Syrian camps, while another highlighted an offer by Vladimir Putin to provide financial assistance to rebuild Notre Dame.”

Taken together, the researchers conclude that “individuals discussing politics on social media ahead of the European parliamentary elections shared links to high-quality news content, including high volumes of content produced by independent citizen, civic groups and civil society organizations, compared to other elections we monitored in France, Sweden, and Germany”.

Which suggests that attempts to manipulate the pan-EU election are either less prolific or, well, less successful than those which have targeted some recent national elections in EU Member States. And logic would suggest that co-ordinating election interference across a 28-Member State bloc does require greater co-ordination and resource vs trying to meddle in a single national election — on account of the multiple countries, cultures, languages and issues involved.

We’ve reached out to Facebook for comment on the study’s findings.

The company has put a heavy focus on publicizing its self-styled ‘election security’ efforts ahead of the EU election. Though it has mostly focused on setting up systems to control political ads — whereas junk news purveyors are simply uploading regular Facebook ‘content’ at the same time as wrapping it in bogus claims of ‘journalism’ — none of which Facebook objects to. All of which allows would-be election manipulators to pass off junk views as online news, leveraging the reach of Facebook’s platform and its attention-hogging algorithms to amplify hateful nonsense. While any increase in engagement is a win for Facebook’s ad business, so er…

On the Internet of Women with Moira Weigel

“Feminism,” the writer and editor Marie Shear famously said in an often-misattributed quote, “is the radical notion that women are people.” The genius of this line, of course, is that it appears to be entirely non-controversial, which reminds us all the more effectively of the past century of fierce debates surrounding women’s equality.

And what about in tech ethics? It would seem equally non-controversial that ethical tech is supposed to be good for “people,” but is the broader tech world and its culture good for the majority of humans who happen to be women? And to the extent it isn’t, what does that say about any of us, and about all of our technology?

I’ve known, since I began planning this TechCrunch series exploring the ethics of tech, that it would need to thoroughly cover issues of gender. Because as we enter an age of AI, with machines learning to be ever more like us, what could be more critical than addressing the issues of sex and sexism often at the heart of the hardest conflicts in human history thus far?

Meanwhile, several months before I began envisioning this series I stumbled across the fourth issue of a new magazine called Logic, a journal on technology, ethics, and culture. Logic publishes primarily on paper — yes, the actual, physical stuff, and a satisfyingly meaty stock of it, at that.

In it, I found a brief essay, “The Internet of Women,” that is a must-read, an instant classic in tech ethics. The piece is by Moira Weigel, one of Logic’s founders and currently a member of Harvard University’s “Society of Fellows” — one of the world’s most elite societies of young academics.

A fast-talking 30-something Brooklynite with a Ph.D. from Yale, Weigel combines her interest in sex, gender, and feminism with a critical and witty analysis of our technology culture.

In this first of a two-part interview, I speak with Moira in depth about some of the issues she covers in her essay and beyond: #MeToo; the internet as a “feminizing” influence on culture; digital media ethics around sexism; and women in political and tech leadership.

Greg E.: How would you summarize the piece in a sentence or so?

Moira W.: It’s an idiosyncratic piece with a couple of different layers. But if I had to summarize it in just a sentence or two I’d say that it’s taking a closer look at the role that platforms like Facebook and Twitter have played in the so-called “#MeToo moment.”

In late 2017 and early 2018, I became interested in the tensions that the moment was exposing between digital media and so-called “legacy media” — print newspapers and magazines like The New York Times and Harper’s and The Atlantic. Digital media were making it possible to see structural sexism in new ways, and for voices and stories to be heard that would have gotten buried, previously.

A lot of the conversation unfolding in legacy media seemed to concern who was allowed to say what where. For me, this subtext was important: The #MeToo moment was not just about the sexualized abuse of power but also about who had authority to talk about what in public — or the semi-public spaces of the Internet.

At the same time, it seemed to me that the ongoing collapse of print media as an industry, and really what people sometimes call the “feminization” of work in general, was an important part of the context.

When people talk about jobs getting “feminized” they can mean many things — jobs becoming lower paid, lower status, flexible or precarious, demanding more emotional management and the cultivation of an “image,” blurring the boundary between “work” and “life.”

The increasing instability and insecurity of media workplaces only makes women more vulnerable to the kinds of sexualized abuses of power the #MeToo hashtag was being used to talk about.

Social media firms agree to work with UK charities to set online harm boundaries

Social media giants, including Facebook-owned Instagram, have agreed to make financial contributions to UK charities, funding them to make recommendations that the government hopes will speed up decisions about removing content that promotes suicide, self-harm or eating disorders on their platforms.

The development follows the latest intervention by health secretary Matt Hancock, who met with representatives from Facebook, Instagram, Twitter, Pinterest, Google and others yesterday to discuss what they’re doing to tackle a range of online harms.

“Social media companies have a duty of care to people on their sites. Just because they’re global doesn’t mean they can be irresponsible,” he said today.

“We must do everything we can to keep our children safe online so I’m pleased to update the house that as a result of yesterday’s summit, the leading global social media companies have agreed to work with experts… to speed up the identification and removal of suicide and self-harm content and create greater protections online.”

However he failed to get any new commitments from the companies to do more to tackle anti-vaccination misinformation — despite saying last week that he would be heavily leaning on the tech giants to remove anti-vaccination misinformation, warning it posed a serious risk to public health.

Giving an update on his latest social media moot in parliament this afternoon, Hancock said the companies had agreed to do more to address a range of online harms — while emphasizing there’s more for them to do, including addressing anti-vaccination misinformation.

“The rise of social media now makes it easier to spread lies about vaccination so there is a special responsibility on the social media companies to act,” he said, noting that coverage for the measles, mumps and rubella vaccination in England decreased for the fourth year in a row last year — dropping to 91%.

There has been a rise in confirmed measles cases from 259 to 966 over the same period, he added.

With no sign of an agreement from the companies to take tougher action on anti-vaccination misinformation, Hancock was left to repeat their preferred talking point to MPs, segueing into suggesting social media has the potential to be a “great force for good” on the vaccination front — i.e. if it “can help us to promote positive messages” about the public health value of vaccines.

For the two other online harm areas of focus, suicide/self-harm content and eating disorders, suicide support charity Samaritans and eating disorder charity Beat were named as the two UK organizations that would be working with the social media platforms to make recommendations for when content should and should not be taken down.

“[Social media firms will] not only financially support the Samaritans to do the work but crucially Samaritans’ suicide prevention experts will determine what is harmful and dangerous content, and the social media platforms committed to either remove it or prevent others from seeing it and help vulnerable people get the positive support they need,” said Hancock.

“This partnership marks for the first time globally a collective commitment to act, to build knowledge through research and insights — and to implement real changes that will ultimately save lives,” he added.

The Telegraph reports that the value of the financial contribution from the social media platforms to the Samaritans for the work will be “hundreds of thousands” of pounds. And during questions in parliament MPs pointed out the amount pledged is tiny vs the massive profits commanded by the companies. Hancock responded that it was what the Samaritans had asked for to do the work, adding: “Of course I’d be prepared to go and ask for more if more is needed.”

The minister was also pressed from the opposition benches on the timeline for results from the social media companies on tackling “the harm and dangerous fake news they host”.

“We’ve already seen some progress,” he responded — flagging a policy change announced by Instagram and Facebook back in February, following a public outcry after a report about a UK schoolgirl whose family said she killed herself after being exposed to graphic self-harm content on Instagram.

“It’s very important that we keep the pace up,” he added, saying he’ll be holding another meeting with the companies in two months to see what progress has been made.

“We’ll expect… that we’ll see further action from the social media companies. That we will have made progress in the Samaritans being able to define more clearly what the boundary is between harmful content and content which isn’t harmful.

“In each of these areas about removing harms online the challenge is to create the right boundary in the appropriate place… so that the social media companies don’t have to define what is and isn’t socially acceptable. But rather we as society do.”

In a statement following the meeting with Hancock, a spokesperson for Facebook and Instagram said: “We fully support the new initiative from the government and the Samaritans, and look forward to our ongoing work with industry to find more ways to keep people safe online.”

The company also noted that it’s been working with expert organisations, including the Samaritans, for “many years to find more ways to do that” — suggesting it’s quite comfortable playing the familiar political game of ‘more of the same’.

That said, the UK government has made tackling online harms a stated policy priority — publishing a proposal for a regulatory framework intended to address a range of content risks earlier this month, when it also kicked off a 12-week public consultation.

Though there’s clearly a long road ahead to agree a law that’s enforceable, let alone effective.

Hancock resisted providing MPs with any timeline for progress on the planned legislation — telling parliament “we want to genuinely consult widely”.

“This isn’t really an issue of party politics. It’s a matter of getting it right so that society decides on how we should govern the Internet, rather than the big Internet companies making those decisions for themselves,” he added.

The minister was also asked by the shadow health secretary, Jonathan Ashworth, to guarantee that the legislation will include provision for criminal sentences for executives for serious breaches of their duty of care. But Hancock failed to respond to the question. 

Talk key takeaways from Facebook’s F8 with TechCrunch writers

Facebook’s annual F8 developer conference is taking over the McEnery Convention Center in San Jose this week and TechCrunch will be on the ground covering any and all announcements.

The week is sure to have its fair share of fireworks as the company’s top brass takes the stage to talk about the future of Facebook’s product offerings, privacy, developer tools and more. TechCrunch’s Josh Constine and Frederic Lardinois will be on the ground at the event. Wednesday at 2:00 pm PT, Josh and Frederic will be sharing with Extra Crunch members what they saw, what excited them most, and what the future of Facebook might look like.

Tune in to dig into what happened onstage and off and ask Josh and Frederic any and all things Facebook, social or dev tools.

To listen to this and all future conference calls, become a member of Extra Crunch. Learn more and try it for free.