
Dating apps face questions over age checks after report exposes child abuse

The UK government has said it could legislate to require age verification checks on users of dating apps, following a Sunday Times investigation into underage use of such services, published yesterday.

The newspaper found that police have investigated more than 30 cases of child rape linked to the use of dating apps, including Grindr and Tinder, since 2015. It reports that one 13-year-old boy with a profile on the Grindr app was raped or abused by at least 21 men.

The Sunday Times also found 60 further instances of child sex offences related to the use of online dating services — including grooming, kidnapping and violent assault, according to the BBC, which covered the report.

The youngest victim is reported to have been just eight years old. The newspaper obtained the data via freedom of information requests to UK police forces.

Responding to the Sunday Times’ investigation, a Tinder spokesperson told the BBC it uses automated and manual tools, and spends “millions of dollars annually”, to prevent and remove underage users and other inappropriate behaviour, saying it does not want minors on the platform.

Grindr also reacted to the report, providing the Times with a statement saying: “Any account of sexual abuse or other illegal behaviour is troubling to us as well as a clear violation of our terms of service. Our team is constantly working to improve our digital and human screening tools to prevent and remove improper underage use of our app.”

We’ve also reached out to the companies with additional questions.

The UK’s secretary of state for digital, media, culture and sport (DCMS), Jeremy Wright, dubbed the newspaper’s investigation “truly shocking”, describing it as further evidence that “online tech firms must do more to protect children”.

He also suggested the government could expand forthcoming age verification checks for accessing pornography to include dating apps — saying he would write to the dating app companies to ask “what measures they have in place to keep children safe from harm, including verifying their age”.

“If I’m not satisfied with their response, I reserve the right to take further action,” he added.

Age verification checks for viewing online porn are due to come into force in the UK in April, as part of the Digital Economy Act.

Those age checks, which are clearly not without controversy given the huge privacy considerations of creating a database of adult identities linked to porn viewing habits, have also been driven by concern about children’s exposure to graphic content online.

Last year the UK government committed to legislating on social media safety too, although it has yet to set out the detail of its policy plans. But a white paper is due imminently.

A parliamentary committee which reported last week urged the government to put a legal ‘duty of care’ on platforms to protect minors.

It also called for more robust systems for age verification. So it remains at least a possibility that some types of social media content could be age-gated in the country in future.

Last month the BBC reported on the death of a 14-year-old schoolgirl who killed herself in 2017 after being exposed to self-harm imagery on Instagram.

Following the report, Instagram’s boss met with Wright and the UK’s health secretary, Matt Hancock, to discuss concerns about the impact of suicide-related content circulating on the platform.

Last week, following the meeting, Instagram announced it would ban graphic images of self-harm.

Earlier the same week the company responded to the public outcry over the story by saying it would no longer allow suicide related content to be promoted via its recommendation algorithms or surfaced via hashtags.

Also last week, the government’s chief medical advisors called for a code of conduct for social media platforms to protect vulnerable users.

The medical experts also called for greater transparency from platform giants to support public interest-based research into the potential mental health impacts of their platforms.

Fabula AI is using social spread to spot ‘fake news’

UK startup Fabula AI reckons it’s devised a way for artificial intelligence to help user generated content platforms get on top of the disinformation crisis that keeps rocking the world of social media with antisocial scandals.

Even Facebook’s Mark Zuckerberg has sounded a cautious note about AI technology’s capability to meet the complex, contextual, messy and inherently human challenge of correctly understanding every missive a social media user might send, whether well-intentioned or its nasty flip-side.

“It will take many years to fully develop these systems,” the Facebook founder wrote two years ago, in an open letter discussing the scale of the challenge of moderating content on platforms thick with billions of users. “This is technically difficult as it requires building AI that can read and understand news.”

But what if AI doesn’t need to read and understand news in order to detect whether it’s true or false?

Step forward Fabula, which has patented what it dubs a “new class” of machine learning algorithms to detect “fake news” — in the emergent field of “Geometric Deep Learning”, where the datasets to be studied are so large and complex that traditional machine learning techniques struggle to find purchase on this ‘non-Euclidean’ space.

The startup says its deep learning algorithms are, by contrast, capable of learning patterns on complex, distributed data sets like social networks. So it’s billing its technology as a breakthrough. (It’s written a paper on the approach which can be downloaded here.)

It is, rather unfortunately, using the populist and now frowned upon badge “fake news” in its PR. But it says it’s intending this fuzzy umbrella to refer to both disinformation and misinformation. Which means maliciously minded and unintentional fakes. Or, to put it another way, a photoshopped fake photo or a genuine image spread in the wrong context.

The approach it’s taking to detecting disinformation relies not on algorithms parsing news content to try to identify malicious nonsense, but on looking at how such stuff spreads on social networks — and therefore also at who is spreading it.

There are characteristic patterns to how ‘fake news’ spreads vs the genuine article, says Fabula co-founder and chief scientist, Michael Bronstein.

“We look at the way that the news spreads on the social network. And there is — I would say — a mounting amount of evidence that shows that fake news and real news spread differently,” he tells TechCrunch, pointing to a recent major study by MIT academics which found ‘fake news’ spreads differently vs bona fide content on Twitter.

“The essence of geometric deep learning is it can work with network-structured data. So here we can incorporate heterogenous data such as user characteristics; the social network interactions between users; the spread of the news itself; so many features that otherwise would be impossible to deal with under machine learning techniques,” he continues.
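To illustrate the general idea of learning over network-structured data — in a highly simplified, hypothetical form that is in no way Fabula’s actual model — here is a single “message-passing” step, in which each user node blends its own feature vector with those of its neighbours:

```python
# Toy illustration of one message-passing step over a social graph.
# Node features might encode user characteristics (account age,
# follower count, etc.); edges are follow/retweet relations.
# This is a hypothetical sketch, not Fabula's actual code.

def message_passing_step(features, edges):
    """Average each node's own features with its neighbours' features."""
    neighbours = {node: [] for node in features}
    for src, dst in edges:            # treated as undirected for simplicity
        neighbours[src].append(dst)
        neighbours[dst].append(src)

    updated = {}
    for node, feats in features.items():
        group = [feats] + [features[n] for n in neighbours[node]]
        updated[node] = [sum(vals) / len(group) for vals in zip(*group)]
    return updated

# Three users with 2-dimensional feature vectors, connected a—b—c.
features = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 1.0]}
edges = [("a", "b"), ("b", "c")]
print(message_passing_step(features, edges))
```

A geometric deep learning model stacks many such aggregation steps with learned weights; the toy above just averages, but it shows how a node’s representation comes to reflect its position in the network rather than any one piece of content.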

Bronstein, who is also a professor at Imperial College London, with a chair in machine learning and pattern recognition, likens the phenomenon Fabula’s machine learning classifier has learnt to spot to the way infectious disease spreads through a population.

“This is of course a very simplified model of how a disease spreads on the network. In this case network models relations or interactions between people. So in a sense you can think of news in this way,” he suggests. “There is evidence of polarization, there is evidence of confirmation bias. So, basically, there are what is called echo chambers that are formed in a social network that favor these behaviours.”

“We didn’t really go into — let’s say — the sociological or the psychological factors that probably explain why this happens. But there is some research that shows that fake news is akin to epidemics.”

The tl;dr of the MIT study, which examined a decade’s worth of tweets, was that not only does the truth spread slower but also that human beings themselves are implicated in accelerating disinformation. (So, yes, actual human beings are the problem.) Ergo, it’s not all bots doing all the heavy lifting of amplifying junk online.

The silver lining of what appears to be an unfortunate quirk of human nature is that a penchant for spreading nonsense may ultimately help give the stuff away — making a scalable AI-based tool for detecting ‘BS’ potentially not such a crazy pipe-dream.

Although, to be clear, Fabula’s AI remains in development, having so far been tested internally on sub-sets of Twitter data. And the claims it’s making for its prototype model remain to be commercially tested with customers in the wild using the tech across different social platforms.

It’s hoping to get there this year, though, and intends to offer an API for platforms and publishers towards the end of the year. The AI classifier is intended to run in near real-time on a social network or other content platform, identifying BS.

Fabula envisages its own role, as the company behind the tech, as that of an open, decentralised “truth-risk scoring platform” — akin to a credit referencing agency just related to content, not cash.

Scoring comes into it because the AI generates a score for classifying content based on how confident it is it’s looking at a piece of fake vs true news.

A visualisation of a fake vs real news dissemination pattern shows users who predominantly share fake news coloured red and users who don’t share fake news at all coloured blue — which Fabula says demonstrates the clear separation into distinct groups, and “the immediately recognisable difference in spread pattern of dissemination”.

In its own tests Fabula says its algorithms were able to identify 93 percent of “fake news” within hours of dissemination — which Bronstein claims is “significantly higher” than any other published method for detecting ‘fake news’. (Their accuracy figure uses a standard aggregate measurement of machine learning classification model performance, called ROC AUC.)
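For reference, ROC AUC measures the probability that a classifier ranks a randomly chosen positive (fake) item above a randomly chosen negative (real) one — 1.0 is perfect ranking, 0.5 is chance. A minimal, self-contained sketch of the metric, computed here on invented scores that have nothing to do with Fabula’s data:

```python
# Compute ROC AUC as the probability that a randomly chosen positive
# (fake) item is scored higher than a randomly chosen negative (real)
# one; ties count half. Illustrative only — the scores are invented.

def roc_auc(labels, scores):
    pos = [s for label, s in zip(labels, scores) if label == 1]
    neg = [s for label, s in zip(labels, scores) if label == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]                # 1 = fake, 0 = real
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.1]    # model's "fake" confidence
print(roc_auc(labels, scores))             # ≈ 0.889: 8 of 9 pairs ranked correctly
```

Note that because AUC is an aggregate ranking measure, a “93 percent” figure does not directly translate into 93 percent of individual items being labelled correctly at any given decision threshold.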

The dataset the team used to train their model is a subset of Twitter’s network — comprised of around 250,000 users and containing around 2.5 million “edges” (aka social connections).

For their training dataset Fabula relied on true/fake labels attached to news stories by third party fact checking NGOs, including Snopes and PolitiFact. And, overall, pulling together the dataset was a process of “many months”, according to Bronstein. He also says that around a thousand different stories were used to train the model, adding that the team is confident the approach works on small social networks, as well as Facebook-sized mega-nets.

Asked whether he’s sure the model hasn’t been trained to identify patterns caused by bot-based junk news spreaders, he says the training dataset included some registered (and thus verified ‘true’) users.

“There is multiple research that shows that bots didn’t play a significant amount [of a role in spreading fake news] because the amount of it was just a few percent. And bots can be quite easily detected,” he also suggests, adding: “Usually it’s based on some connectivity analysis or content analysis. With our methods we can also detect bots easily.”

To further check the model, the team tested its performance over time by training it on historical data and then using a different split of test data.

“While we see some drop in performance it is not dramatic. So the model ages well, basically. Up to something like a year the model can still be applied without any re-training,” he notes, while also saying that, when applied in practice, the model would be continually updated as it keeps ingesting new stories and social media content.

Somewhat terrifyingly, the model could also be used to predict virality, according to Bronstein — raising the dystopian prospect of the API being used for the opposite purpose to that which it’s intended: i.e. maliciously, by fake news purveyors, to further amp up their (anti)social spread.

“Potentially putting it into evil hands it might do harm,” Bronstein concedes. Though he takes a philosophical view on the hyper-powerful double-edged sword of AI technology, arguing such technologies will create an imperative for a rethinking of the news ecosystem by all stakeholders, as well as encouraging emphasis on user education and teaching critical thinking.

Let’s certainly hope so. And, on the educational front, Fabula is hoping its technology can play an important role — by spotlighting network-based cause and effect.

“People now like or retweet or basically spread information without thinking too much of the potential harm or damage they’re doing to everyone,” says Bronstein, pointing again to the infectious diseases analogy. “It’s like not vaccinating yourself or your children. If you think a little bit about what you’re spreading on a social network you might prevent an epidemic.”

So, tl;dr, think before you RT.

Returning to the accuracy rate of Fabula’s model, while ~93 per cent might sound pretty impressive, if it were applied to content on a massive social network like Facebook — which has some 2.3BN+ users, uploading what could be trillions of pieces of content daily — even a seven percent failure rate would still make for an awful lot of fakes slipping undetected through the AI’s net.
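A quick back-of-envelope sketch of that arithmetic — the daily volume used here is a purely hypothetical round number, not a reported Facebook figure:

```python
# Back-of-envelope: even a small failure rate is enormous at scale.
# The daily volume is a hypothetical round number for illustration.
accuracy = 0.93
daily_items = 1_000_000_000        # hypothetical: 1BN pieces of content/day
missed = (1 - accuracy) * daily_items
print(f"{missed:,.0f} items potentially misclassified per day")
```

At a hypothetical billion items a day, a seven percent failure rate works out to tens of millions of misclassifications daily — which is why detection accuracy alone doesn’t settle the question of whether such a system can stand in for human moderation.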

But Bronstein says the technology does not have to be used as a standalone moderation system. Rather he suggests it could be used in conjunction with other approaches, such as content analysis, and thus function as another string to the bow of a wider ‘BS detector’.

It could also, he suggests, further aid human content reviewers — to point them to potentially problematic content more quickly.

Depending on how the technology gets used he says it could do away with the need for independent third party fact-checking organizations altogether because the deep learning system can be adapted to different use cases.

Example use-cases he mentions include an entirely automated filter (i.e. with no human reviewer in the loop); or to power a content credibility ranking system that can down-weight dubious stories or even block them entirely; or for intermediate content screening to flag potential fake news for human attention.

Each of those scenarios would likely entail a different truth-risk confidence score. Though most — if not all — would still require some human back-up. If only to manage overarching ethical and legal considerations related to largely automated decisions. (Europe’s GDPR framework has some requirements on that front, for example.)

Facebook’s grave failures around moderating hate speech in Myanmar — which led to its own platform becoming a megaphone for terrible ethnic violence — were very clearly exacerbated by the fact it did not have enough reviewers who were able to understand (the many) local languages and dialects spoken in the country.

So if Fabula’s language-agnostic propagation and user focused approach proves to be as culturally universal as its makers hope, it might be able to raise flags faster than human brains which lack the necessary language skills and local knowledge to intelligently parse context.

“Of course we can incorporate content features but we don’t have to — we don’t want to,” says Bronstein. “The method can be made language independent. So it doesn’t matter whether the news are written in French, in English, in Italian. It is based on the way the news propagates on the network.”

Although he also concedes: “We have not done any geographic, localized studies.”

“Most of the news that we take are from PolitiFact so they somehow regard mainly the American political life but the Twitter users are global. So not all of them, for example, tweet in English. So we don’t yet take into account tweet content itself or their comments in the tweet — we are looking at the propagation features and the user features,” he continues.

“These will be obviously next steps but we hypothesis that it’s less language dependent. It might be somehow geographically varied. But these will be already second order details that might make the model more accurate. But, overall, currently we are not using any location-specific or geographic targeting for the model.

“But it will be an interesting thing to explore. So this is one of the things we’ll be looking into in the future.”

Fabula’s approach being tied to the spread (and the spreaders) of fake news certainly means there’s a raft of associated ethical considerations that any platform making use of its technology would need to be hyper sensitive to.

For instance, if platforms could suddenly identify and label a sub-set of users as ‘junk spreaders’ the next obvious question is how will they treat such people?

Would they penalize them with limits — or even a total block — on their power to socially share on the platform? And would that be ethical or fair given that not every sharer of fake news is maliciously intending to spread lies?

What if it turns out there’s a link between — let’s say — a lack of education and a propensity to spread disinformation? As there can be a link between poverty and education… What then? Wouldn’t algorithmic content downweights risk exacerbating existing unfair societal divisions?

Bronstein agrees there are major ethical questions ahead when it comes to how a ‘fake news’ classifier gets used.

“Imagine that we find a strong correlation between the political affiliation of a user and this ‘credibility’ score. So for example we can tell with hyper-ability that if someone is a Trump supporter then he or she will be mainly spreading fake news. Of course such an algorithm would provide great accuracy but at least ethically it might be wrong,” he says when we ask about ethics.

He confirms Fabula is not using any kind of political affiliation information in its model at this point — but it’s all too easy to imagine this sort of classifier being used to surface (and even exploit) such links.

“What is very important in these problems is not only to be right — so it’s great of course that we’re able to quantify fake news with this accuracy of ~90 percent — but it must also be for the right reasons,” he adds.

The London-based startup was founded in April last year, though the academic research underpinning the algorithms has been in train for the past four years, according to Bronstein.

The patent for their method was filed in early 2016 and granted last July.

They’ve been funded by $500,000 in angel funding and about another $500,000 in total from European Research Council grants, plus academic grants from tech giants Amazon, Google and Facebook, awarded via open research competition awards.

(Bronstein confirms the three companies have no active involvement in the business. Though doubtless Fabula is hoping to turn them into customers for its API down the line. But he says he can’t discuss any potential discussions it might be having with the platforms about using its tech.)

Focusing on spotting patterns in how content spreads as a detection mechanism does have one major and obvious drawback — in that it only works after the fact of (some) fake content spread. So this approach could never entirely stop disinformation in its tracks.

Though Fabula claims detection is possible within a relatively short time frame — of between two and 20 hours after content has been seeded onto a network.

“What we show is that this spread can be very short,” he says. “We looked at up to 24 hours and we’ve seen that just in a few hours… we can already make an accurate prediction. Basically it increases and slowly saturates. Let’s say after four or five hours we’re already about 90 per cent.”

“We never worked with anything that was lower than hours but we could look,” he continues. “It really depends on the news. Some news does not spread that fast. Even the most groundbreaking news do not spread extremely fast. If you look at the percentage of the spread of the news in the first hours you get maybe just a small fraction. The spreading is usually triggered by some important nodes in the social network. Users with many followers, tweeting or retweeting. So there are some key bottlenecks in the network that make something viral or not.”

A network-based approach to content moderation could also serve to further enhance the power and dominance of already hugely powerful content platforms — by making the networks themselves core to social media regulation, i.e. if pattern-spotting algorithms rely on key network components (such as graph structure) to function.

So you can certainly see why — even above a pressing business need — tech giants are at least interested in backing the academic research. Especially with politicians increasingly calling for online content platforms to be regulated like publishers.

At the same time, there are — what look like — some big potential positives to analyzing spread, rather than content, for content moderation purposes.

As noted above, the approach doesn’t require training the algorithms on different languages and (seemingly) cultural contexts — setting it apart from content-based disinformation detection systems. So if it proves as robust as claimed it should be more scalable.

Though, as Bronstein notes, the team have mostly used U.S. political news for training their initial classifier. So some cultural variations in how people spread and react to nonsense online at least remains a possibility.

A more certain challenge is “interpretability” — aka explaining what underlies the patterns the deep learning technology has identified via the spread of fake news.

While algorithmic accountability is very often a challenge for AI technologies, Bronstein admits it’s “more complicated” for geometric deep learning.

“We can potentially identify some features that are the most characteristic of fake vs true news,” he suggests when asked whether some sort of ‘formula’ of fake news can be traced via the data, noting that while they haven’t yet tried to do this they did observe “some polarization”.

“There are basically two communities in the social network that communicate mainly within the community and rarely across the communities,” he says. “Basically it is less likely that somebody who tweets a fake story will be retweeted by somebody who mostly tweets real stories. There is a manifestation of this polarization. It might be related to these theories of echo chambers and various biases that exist. Again we didn’t dive into trying to explain it from a sociological point of view — but we observed it.”

So while, in recent years, there have been some academic efforts to debunk the notion that social media users are stuck inside filter bubbles bouncing their own opinions back at them, Fabula’s analysis of the landscape of social media opinions suggests they do exist — albeit, just not encasing every Internet user.

Bronstein says the next step for the startup is to scale its prototype to be able to deal with multiple requests so it can get the API to market in 2019 — and start charging publishers for a truth-risk/reliability score for each piece of content they host.

“We’ll probably be providing some restricted access maybe with some commercial partners to test the API but eventually we would like to make it useable by multiple people from different businesses,” says Bronstein. “Potentially also private users — journalists or social media platforms or advertisers. Basically we want to be… a clearing house for news.”

Online platforms still not clear enough about hate speech takedowns: EC

In its latest monitoring report of a voluntary Code of Conduct on illegal hate speech, which platforms including Facebook, Twitter and YouTube signed up to in Europe back in 2016, the European Commission has said progress is being made on speeding up takedowns but tech firms are still lagging when it comes to providing feedback and transparency around their decisions.

Tech companies are now assessing 89% of flagged content within 24 hours, with 72% of content deemed to be illegal hate speech being removed, according to the Commission — compared to just 40% and 28% respectively when the Code was first launched more than two years ago.

However it said today that platforms still aren’t giving users enough feedback vis-a-vis reports, and has urged more transparency from platforms — pressing for progress “in the coming months”, warning it could still legislate for a pan-EU regulation if it believes it’s necessary.

Giving her assessment of how the (still) voluntary code on hate speech takedowns is operating at a press briefing today, commissioner Vera Jourova said: “The only real gap that remains is transparency and the feedback to users who sent notifications [of hate speech].

“On average about a third of the notifications do not receive a feedback detailing the decision taken. Only Facebook has a very high standard, sending feedback systematically to all users. So we would like to see progress on this in the coming months. Likewise the companies should be more transparent towards the general public about what is happening in their platforms. We would like to see them make more data available about the notices and removals.”

“The fight against illegal hate speech online is not over. And we have no signs that such content has decreased on social media platforms,” she added. “Let me be very clear: The good results of this monitoring exercise don’t mean the companies are off the hook. We will continue to monitor this very closely and we can always consider additional measures if efforts slow down.”

Jourova flagged additional steps taken by the Commission to support the overarching goal of clearing what she dubbed a “sewage of words” off of online platforms, such as facilitating data-sharing between tech companies and police forces to help investigations and prosecutions of hate speech purveyors move forward.

She also noted it continues to provide Member States’ justice ministers with briefings on how the voluntary code is operating, warning again: “We always discuss that we will continue but if it slows down or it stops delivering the results we will consider some kind of regulation.”

Germany passed its own social media hate speech takedown law, the so-called ‘NetzDG’, in 2017, with the rules coming fully into force at the start of 2018. The law provides for fines as high as €50M for companies that fail to remove illegal hate speech within 24 hours, and has led social media platforms like Facebook to plough greater resources into locally sited moderation teams.

In the UK, meanwhile, the government announced a plan to legislate around safety and social media last year, although it has yet to publish a White Paper setting out the detail of its policy plan.

Last week a UK parliamentary committee which has been investigating the impacts of social media and screen use among children recommended the government legislate to place a legal ‘duty of care’ on platforms to protect minors.

The committee also called for platforms to be more transparent, urging them to provide bona fide researchers with access to high quality anonymized data to allow for robust interrogation of social media’s effects on children and other vulnerable users.

Debate about the risks and impacts of social media platforms for children has intensified in the UK in recent weeks, following reports of the suicide of a 14 year old schoolgirl — whose father blamed Instagram for exposing her to posts encouraging self harm, saying he had no doubt content she’d been exposed to on the platform had helped kill her.

During today’s press conference, Jourova was asked whether the Commission intends to extend the Code of Conduct on illegal hate speech to other types of content that’s attracting concern, such as bullying and suicide. But she said the executive body is not intending to expand into such areas.

She said the Commission’s focus remains on addressing content that’s judged illegal under existing European legislation on racism and xenophobia — saying it’s a matter for individual Member States to choose to legislate in additional areas if they feel a need.

“We are following what the Member States are doing because we see… to some extent a fragmented picture of different problems in different countries,” she noted. “We are focusing on what is our obligation to promote the compliance with the European law. Which is the framework decision against racism and xenophobia.

“But we have the group of experts from the Member States, in the so-called Internet forum, where we speak about other crimes or sources of hatred online. And we see the determination on the side of the Member States to take proactive measures against these matters. So we expect that if there is such a worrying trend in some Member State that will address it by means of their national legislation.”

“I will always tell you I don’t like the fragmentation of the legal framework, especially when it comes to digital because we are faced with, more or less, the same problems in all the Member States,” she added. “But it’s true that when you [take a closer look] you see there are specific issues in the Member States, also maybe related with their history or culture, which at some moment the national authorities find necessary to react on by regulation. And the Commission is not hindering this process.

“This is the sovereign decision of the Member States.”

Four more tech platforms joined the voluntary code of conduct on illegal hate speech last year: Google+, Instagram, Snapchat and Dailymotion. French gaming platform Webedia (jeuxvideo.com) also announced its participation today.

Drilling down into the performance of specific platforms, the Commission’s monitoring exercise found that Facebook assessed hate speech reports in less than 24 hours in 92.6% of cases, and a further 5.1% in less than 48 hours. The corresponding figures for YouTube were 83.8% and 7.9%; and for Twitter, 88.3% and 7.3%, respectively.

Instagram, meanwhile, managed to assess 77.4% of notifications in less than 24 hours. And Google+, which in any case closes to consumers this April, managed to assess just 60%.

In terms of removals, the Commission found YouTube removed 85.4% of reported content, Facebook 82.4% and Twitter 43.5% (the latter constituting a slight decrease in performance vs last year). While Google+ removed 80.0% of the content and Instagram 70.6%.

It argues that despite social media platforms removing illegal content “more and more rapidly”, as a result of the code, this has not led to an “over-removal” of content — pointing to variable removal rates as an indication that “the review made by the companies continues to respect freedom of expression”.

“Removal rates varied depending on the severity of hateful content,” the Commission writes. “On average, 85.5% of content calling for murder or violence against specific groups was removed, while content using defamatory words or pictures to name certain groups was removed in 58.5% of the cases.”

“This suggests that the reviewers assess the content scrupulously and with full regard to protected speech,” it adds.

It is also crediting the code with helping foster partnerships between civil society organisations, national authorities and tech platforms — on key issues such as awareness raising and education activities.

Digital influencers and the dollars that follow them

Animated characters are as old as human storytelling itself, dating back thousands of years to cave drawings that depict animals in motion. It was really in the last century, however — a period bookended by the first animated short film in 1908 and Pixar’s computer-animation success with Toy Story from 1995 onward — that animation leapt forward. Fundamentally, this period of great innovation sought to make it easier to create an animated story for an audience to passively consume in a curated medium, such as a feature-length film.

Our current century could be set for even greater advances in the art and science of bringing characters to life. Digital influencers — virtual or animated humans that live natively on social media — will be central to that undertaking. Digital influencers don’t merely represent the penetration of cartoon characters into yet another medium, much as they sprang from newspaper strips to TV and the multiplex. Rather, digital humans on social media represent the first instance in which fictional entities act in the same plane of communication as you and I — regular people — do. Imagine if stories about Mickey Mouse were told over a telephone or in personalized letters to fans. That’s the kind of jump we’re talking about.

Social media is a new storytelling medium, much as film was a century ago. As with film then, we have yet to transmit virtual characters to this new medium in a sticky way.

Which isn’t to say that there aren’t digital characters living their lives on social channels right now. The pioneers have arrived: Lil’ Miquela, Astro, Bermuda and Shudu are prominent examples. But they are still only notable for their novelty, not yet their ubiquity. They represent the output of old animation techniques applied to a new medium. This TechCrunch article did a great job describing the current digital influencer landscape.

So why haven’t animated characters taken off on social media platforms? It’s largely an issue of scale — it’s expensive and time-consuming to create animated characters and to depict their adventures. One 2017 estimate stated that a 60 to 90-second animation took about six weeks to create. An episode of animated TV takes around 13 months to produce, typically with large teams in South Korea doing much of the animation legwork. That pace simply doesn’t work in a medium that calls for new original content multiple times a day.
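The mismatch described above is easy to quantify with a back-of-envelope calculation. The production figure comes from the article’s 2017 estimate; the posting cadence is an assumption for illustration.

```python
# Back-of-envelope sketch of the scale mismatch between traditional
# animation production and social media's content demands.
weeks_per_short = 6            # ~6 weeks per 60-90 second animation (2017 estimate)
days_per_short = weeks_per_short * 7
posts_per_day = 2              # assumed cadence a social account needs (illustrative)

# Shorts one team could deliver in a year vs. posts the medium demands
shorts_per_year = 365 // days_per_short
posts_per_year = posts_per_day * 365

print(f"{shorts_per_year} shorts/year produced vs {posts_per_year} posts/year needed")
```

Even with generous assumptions, a traditional pipeline yields output two orders of magnitude short of what a daily-posting medium requires.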

Yet the technical piece of the puzzle is falling into place, which is primarily what I want to talk about today. Traditionally, virtual characters were created by a team of experts — not scalable — in the following way:

  • Create a 3D model
  • Texture the model and add additional materials
  • Rig the 3D model skeleton
  • Animate the 3D model
  • Introduce character into desired scene
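The steps above form a strictly sequential pipeline, which is part of why it resists scaling. A minimal sketch of that structure, with hypothetical stage names and a `Character` type that are illustrative rather than any real tool’s API:

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    """Hypothetical container for a virtual character as it moves
    through the traditional CGI pipeline."""
    name: str
    completed_stages: list = field(default_factory=list)

# The five stages described above; each depends on the previous one
# finishing, so the work cannot easily be parallelised or automated.
PIPELINE = [
    "model",      # create the 3D model
    "texture",    # texture the model and add materials
    "rig",        # rig the 3D model's skeleton
    "animate",    # animate the rigged model
    "composite",  # introduce the character into the desired scene
]

def run_pipeline(character: Character) -> Character:
    for stage in PIPELINE:
        # In reality each stage is weeks of specialist work;
        # here we just record that it happened, in order.
        character.completed_stages.append(stage)
    return character

avatar = run_pipeline(Character("demo_influencer"))
print(avatar.completed_stages)
```

The linear dependency chain is the bottleneck: skipping or automating any single stage still leaves the others gating throughput, which is why the tools discussed below each attack a different stage.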

Today, there are generally three types of virtual avatar: realistic high-resolution CGI avatars, stylized CGI avatars and manipulated video avatars. Each has its strengths and pitfalls, and the fast-approaching world of scaled digital influencers will likely incorporate aspects of all three.

The digital influencers mentioned above are all high-resolution CGI avatars. It’s unsurprising that this tech has breathed life into the most prominent digital influencers so far — this type of avatar offers the most creative latitude and photorealism. You can create an original character and have her carry out varied activities.

The process for their creation borrows most from the old-school CGI pipeline described above, though accelerated through the use of tools like Daz3D for animation, Moka Studio for rigging, and Rokoko for motion capture. It’s old wine in new bottles. Naturally, it shares the same bottlenecks as the old-school CGI pipeline: creating characters in this way consumes a lot of time and expertise.

Though researchers, like Ari Shapiro at the University of Southern California Institute for Creative Technologies, are currently working on ways to automate the creation of high-resolution CGI avatars, that bottleneck remains the obstacle for digital influencers entering the mainstream.

Stylized CGI avatars, on the other hand, have entered the mainstream. If you have an iPhone or use Snapchat, chances are you have one. Apple, Samsung, Pinscreen, Loom.ai, Embody Digital, Genies and Expressive.ai are just some of the companies playing in this space. These avatars, while likely to spread ubiquitously à la Bitmoji before them, are limited in scope.

While they extend the ability to create an animated character to anyone who uses an associated app, that creation and personalization is circumscribed: the avatar’s range is limited for the purposes of what we’re discussing in this article. It’s not so much a technology for creating new digital humans as it is a tool for injecting a visual shorthand for someone into the digital world. You’ll use it to embellish your Snapchat game, but storytellers will be unlikely to use these avatars to create a spiritual successor to Mickey Mouse and Buzz Lightyear (though they will be a big advertising / brand partnership opportunity nonetheless).

Video manipulation — you probably know it as deepfakes — is another piece of tech that is speeding virtual or fictional characters into the mainstream. As the name implies, however, it’s more about warping reality to create something new. Anyone who has seen Nicolas Cage’s striking features dropped onto Amy Adams’ body in a Superman film will understand what I’m talking about.

Open-source packages like this one allow almost anyone to create a deepfake (with some technical knowhow — your grandma probably hasn’t replaced her time-honored Bingo sessions with some casual deepfaking). It’s principally used by hobbyists, though recently we’ve seen startups like Synthesia crop up with business use cases. You can use deepfake tech for mimicry, but we haven’t yet seen it used for creating original characters. It shares some of the democratizing aspects of stylized CGI avatars, and there are likely many creative applications for the tech that simply haven’t been realized yet.

While none of these technology stacks on their own currently enable digital humans at scale, when combined they may make up the wardrobe that takes us into Narnia. Video manipulation, for example, could be used to scale realistic high-res characters like Lil’ Miquela through accelerating the creation of new stories and tableaux for her to inhabit. Nearly all of the most famous animated characters have been stylized, and I wouldn’t bet against social media’s Snow White being stylized too. What is clear is that the technology to create digital influencers at scale is nearing a tipping point. When we hit that tipping point, these creations will transform entertainment and storytelling.

Social media should have “duty of care” towards kids, UK MPs urge

Social media platforms are being urged to be far more transparent about how their services operate and to make “anonymised high-level data” available to researchers so the technology’s effects on users — and especially on children and teens — can be better understood.

The calls have been made in a report by the UK parliament’s Science and Technology Committee which has been looking into the impacts of social media and screen use among children — to consider whether such tech is “healthy or harmful”.

“Social media companies must also be far more open and transparent regarding how they operate and particularly how they moderate, review and prioritise content,” it writes.

Concerns have been growing about children’s use of social media and mobile technology for some years now, with plenty of anecdotal evidence and also some studies linking tech use to developmental problems, as well as distressing stories connecting depression and even suicide to social media use.

The committee writes that its dive into the topic was hindered by “the limited quantity and quality of academic evidence available”. But it also asserts: “The absence of good academic evidence is not, in itself, evidence that social media and screens have no effect on young people.”

“We found that the majority of published research did not provide a clear indication of causation, but instead indicated a possible correlation between social media/screens and a particular health effect,” it continues. “There was even less focus in published research on exactly who was at risk and if some groups were potentially more vulnerable than others when using screens and social media.”

The UK government has expressed its intention to legislate in this area, announcing a plan last May to “make social media safer” — promising new online safety laws to tackle concerns.

The committee writes that it’s therefore surprised the government has not commissioned “any new, substantive research to help inform its proposals”, and suggests it get on and do so “as a matter of urgency” — with a focus on identifying people at risk of experiencing harm online and on social media; the reasons for the risk factors; and the longer-term consequences of children’s exposure to the tech.

It further suggests the government should consider what legislation is required to improve researchers’ access to this type of data, given platforms have failed to provide enough access for researchers of their own accord.

The committee says it heard evidence of a variety of instances where social media could be “a force for good” but also received testimonies about some of the potential negative impacts of social media on the health and emotional wellbeing of children.

“These ranged from detrimental effects on sleep patterns and body image through to cyberbullying, grooming and ‘sexting’,” it notes. “Generally, social media was not the root cause of the risk but helped to facilitate it, while also providing the opportunity for a large degree of amplification. This was particularly apparent in the case of the abuse of children online, via social media.

“It is imperative that the government leads the way in ensuring that an effective partnership is in place, across civil society, technology companies, law enforcement agencies, the government and non-governmental organisations, aimed at ending child sexual exploitation (CSE) and abuse online.”

The committee suggests the government commission specific research to establish the scale and prevalence of online CSE — pushing it to set an “ambitious target” to halve reported online CSE in two years and “all but eliminate it in four”.

A duty of care

A further recommendation will likely send a shiver down tech giants’ spines, with the committee urging a duty of care principle be enshrined in law for social media users under 18 years of age to protect them from harm when on social media sites.

Such a duty would up the legal risk stakes considerably for user generated content platforms which don’t bar children from accessing their services.

The committee suggests the government could achieve that by introducing a statutory code of practice for social media firms, via new primary legislation, to provide “consistency on content reporting practices and moderation mechanisms”.

It also recommends a requirement in law for social media companies to publish detailed Transparency Reports every six months.

It is also calling for a 24-hour takedown law for illegal content, saying that platforms should have to review reports of potentially illegal content, take a decision on whether to remove, block or flag it, and relay that decision to the individual or organisation who reported it, all within 24 hours.

Germany already legislated for such a law, back in 2017 — though in that case the focus is on speeding up hate speech takedowns.

In Germany social media platforms can be fined up to €50 million if they fail to comply with the law, known by its truncated German name, NetzDG. (The EU executive has also been pushing platforms to remove terrorist related material within an hour of a report, suggesting it too could legislate on this front if they fail to moderate content fast enough.)

The committee suggests the UK’s media and telecoms regulator, Ofcom, would be well-placed to oversee how illegal content is handled under any new law.

It also recommends that social media companies use AI to identify and flag to users (or remove as appropriate) content that “may be fake” — pointing to the risk posed by new technologies such as “deep fake videos”.

More robust systems for age verification are also needed, in the committee’s view. It writes that these must go beyond “a simple ‘tick box’ or entering a date of birth”.

Looking beyond platforms, the committee presses the government to take steps to improve children’s digital literacy and resilience, suggesting PSHE (personal, social, health and economic) education should be made mandatory for primary and secondary school pupils — delivering “an age-appropriate understanding of, and resilience towards, the harms and benefits of the digital world”.

Teachers and parents should also not be overlooked, with the committee suggesting training and resources for teachers and awareness and engagement campaigns for parents.