All posts in “United Kingdom”

YC-backed Muzmatch definitely doesn’t want to be Tinder for Muslims


At first glance, YC-backed Muzmatch’s dating app might look best described as a ‘Tinder for Muslims’. But co-founders Shahzad Younas and Ryan Brodie are clear about what sets their target audience apart from the casual dating/hook-up crowd: a genuine intent to find a partner in order to get married.

Which is why, they say, they’re definitely not just cloning Tinder for Muslims.

“Our audience is super captivated, they’re so invested in this search,” says Brodie. “For a Muslim in their twenties, their upbringing has been so centered on finding a husband or wife. And that is for most Muslims. I think some people think it’s just like JCrush for Jews. But it’s totally not about that from where we stand.

“Not just market size — we’re more than 100 times larger market than the Jewish market, for example — but the real difference is the seriousness and intent. It’s not casual dating. In Islam there’s a concept where… you’re only ever going to be half way there without your spouse. So this is how central it is. This is where almost all our users come at it from.”

Some two years after the launch of the first version of the app, Muzmatch has around 200,000 users, spread across ~160 countries, and is growing around 10 per cent, month on month, according to the co-founders.

“We’ve had weddings across the world,” says Younas. “Right now about 30 people a day are leaving our app and telling us specifically I found my partner on your app or I just got engaged or we just got married.”

Growth thus far has come organically, via word-of-mouth recommendations in the Muslim community, they say. Around half of Muzmatch users are in the UK; around a third are in the US and Canada; with the rest spread all over the world. Gender-wise, roughly two-thirds are male and one-third female. The average age is mid to late twenties.

The founders say the matchmaking app has led to around 6,000 couples getting together so far — and “at least 600 confirmed weddings” — although they can’t be sure the number isn’t higher as not everyone messages them with their stories.

They tell a funny story about how they were emailed by a man from Uganda thanking them for helping him meet his wife via the app — and when they went to check exactly how many users they had in Uganda it was, well, just those two. “When it’s meant to be, it is meant to be!” says Younas.

Despite a few ‘rest of world’ successes to point to, their current “concentrated focus” is on Muslims in the West — tackling what they describe as the “key problem” for this 60-million-strong community: “low density of Muslims”. Which means that Muslim singles searching for a partner of the same faith in towns and cities in places like the UK, US and Canada are likely to face a shortage of potential mates. At least in their immediate vicinity.

These dynamics work in Muzmatch’s favor, reckons Brodie, because their target market is already geared up to put in extra work to find ‘the one’. And is also therefore likely to appreciate a tech tool that helps make their search easier.

“What’s great for us is there’s already an expectation of movement, so we’ve never had to worry about the network effect. Most dating apps, every user expects to meet the one a mile down the road — luckily for us, that expectation isn’t there, which is brilliant,” he tells TechCrunch.

Another advantage of addressing such an engaged user base, according to the founders, is that Muzmatch’s singles are incentivized to fill out their profiles with lots of detailed information — given how many criteria can be at play as part of their search (i.e. over and above just whether they find a potential partner attractive, and relating to other factors such as family, culture, tradition, religious level and so on) — and the app can then utilize all this rich user data to improve its suggested matches.

“With our app, and with the technology in the app, we’re really trying to cater to those specific needs,” says Younas, describing the difficulties Muslims in the West can have meeting a person who meets all their criteria. “We think that conventional Western dating apps don’t really cater to this.”

The business is already profitable, taking revenue via premium subscriptions and in-app purchases which offer users additional features, such as the ability to be matched with someone before they’ve liked you (as a way to try to get their attention) — though it’s free to join and use the basic app.

“Because there’s a more serious intent, people are more willing to spend vs… a casual dating app — where the expectation is almost free,” argues Younas.

And while Muzmatch’s feature-set has some basic mechanisms that would be familiar to any Tinder user, like the ability to ‘like’ or ‘pass’ on a possible match, and the ability to chat in-app with mutual matches, it also has differences that reflect the needs of its community — which Younas describes as being “essentially” without a casual dating market, as a result of marriage being “such a big part of our faith”.

So, for example, all users have to take a selfie via the app so their profile can be manually verified to help boost trust and keep out spammers; users don’t have to provide their real name though, and can choose not to display photos on their profiles or blur photos unless there’s an active match.

Users are also asked to rate others they have interacted with — and these ratings are fed into the matching algorithm, with the aim of surfacing “quality users” and promoting positive behaviors that mesh well with a community of singles that’s typically really serious about finding a life partner.
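
Muzmatch hasn’t detailed how those ratings are weighted. As a rough sketch of the general idea, blending a smoothed peer-rating score into an otherwise compatibility-driven ranking could look something like this (all names, weights and the smoothing prior below are hypothetical):

```python
# Purely illustrative sketch: Muzmatch has not published its matching algorithm.
# All names, weights and the smoothing prior below are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Profile:
    name: str
    ratings: List[float] = field(default_factory=list)  # 1-5 scores from past interactions

def quality_score(profile: Profile, prior: float = 3.0, prior_weight: int = 5) -> float:
    """Bayesian-smoothed average rating, so brand-new users are not unfairly ranked."""
    total = sum(profile.ratings) + prior * prior_weight
    count = len(profile.ratings) + prior_weight
    return total / count

def rank_candidates(candidates: List[Profile],
                    compatibility: Dict[str, float],
                    quality_weight: float = 0.3) -> List[Profile]:
    """Order candidates by a blend of compatibility (0-1) and normalised peer rating (0-1)."""
    def blended(p: Profile) -> float:
        quality = (quality_score(p) - 1) / 4  # map a 1-5 rating onto 0-1
        return (1 - quality_weight) * compatibility[p.name] + quality_weight * quality
    return sorted(candidates, key=blended, reverse=True)

if __name__ == "__main__":
    pool = [Profile("A", [5, 4, 5]), Profile("B", [2, 3]), Profile("C")]
    compat = {"A": 0.6, "B": 0.9, "C": 0.7}
    print([p.name for p in rank_candidates(pool, compat)])
```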

Female users can also opt for a chaperoning feature whereby all of their in-app chats are emailed to a wali/guardian, should they wish to observe this type of Islamic etiquette.

There are a few other differences in how males and females experience the app, such as women having more granular controls over who can see their photos, and being able to view more profiles per day before being capped (this is on account of there currently being more male users, say the founders).

“It’s transparent to both sides,” says Younas of the wali/guardian option. “So both parties in that conversation know that there’s a third party involved. And for us these are optional features we give to our users — depending on where they’re at, we don’t necessarily want to push a religious angle on people, but what we want to do is give them the option. So if you’re very religiously inclined you can pick these options.”

“For us being accessible to everyone is really the key to owning this market,” he adds. “There’s 1.8 billion Muslims across the world, and they’re very diverse — in culture, in language, in their outlooks, in particular religious etiquette, so what we’re trying to do is navigate all of that in a very — I wouldn’t necessarily say neutral way — but in a very accessible way to everybody… And so far it’s been working.”

The founders say they are intentionally making an effort to discourage the transactional dynamic that can creep into dating apps like Tinder — so, for instance, there are limits on the number of profiles a user can swipe through in a 12-hour period (although users can also pay to remove the cap); and people can also go back and revisit profiles they previously passed on, or rematch with people they previously unmatched if they change their mind later.
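
The company hasn’t published the exact cap or how it is enforced. A minimal sketch of a rolling 12-hour swipe limit, assuming a hypothetical free-tier allowance and a premium bypass, might look like this:

```python
# Purely illustrative sketch of a rolling 12-hour swipe cap; the cap value,
# window and premium bypass here are assumptions, not Muzmatch's actual limits.
import time
from collections import defaultdict, deque
from typing import Deque, Dict, Optional

WINDOW_SECONDS = 12 * 60 * 60
FREE_TIER_CAP = 30  # hypothetical allowance for free users per window

class SwipeLimiter:
    def __init__(self, cap: int = FREE_TIER_CAP, window: int = WINDOW_SECONDS):
        self.cap = cap
        self.window = window
        self._events: Dict[str, Deque[float]] = defaultdict(deque)  # user_id -> swipe timestamps

    def try_swipe(self, user_id: str, is_premium: bool = False,
                  now: Optional[float] = None) -> bool:
        """Record a swipe and return True if it is allowed; paying users bypass the cap."""
        now = time.time() if now is None else now
        if is_premium:
            return True
        recent = self._events[user_id]
        while recent and now - recent[0] > self.window:  # forget swipes older than the window
            recent.popleft()
        if len(recent) >= self.cap:
            return False
        recent.append(now)
        return True

limiter = SwipeLimiter()
print(all(limiter.try_swipe("user_1") for _ in range(FREE_TIER_CAP)))  # True: within cap
print(limiter.try_swipe("user_1"))  # False: cap reached for this window
```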

“We’ve actually had many examples of now married couples that have actually gone back and changed their minds,” says Brodie. “Unlike say on Tinder you can actually rematch someone. So you can unmatch if it didn’t work out and then in case six months later, something’s changed… you can rematch them.”

“We had a girl message us saying thank god for the rematch feature — because I wouldn’t have got with this guy if you didn’t have it,” adds Younas. “So we know this stuff works.”

Younas bootstrapped and built the initial app himself, having — as a young Muslim in London — been unimpressed with the quality of existing Muslim dating websites, which he describes as “ugly and horrible”, with a “terrible reputation”.

Brodie came on board later, after meeting Younas and being excited by the early traction for the MVP — and the pair relaunched Muzmatch last August.

With growing ambitions, they say they started to feel London was not the ideal base to try to scale a consumer app. Hence they applied and got onto Y Combinator’s program — and will be graduating in the 2017 summer batch of YC startups.

“Our ambitions have grown and grown and grown,” says Brodie. “We realized the opportunity we have here and we thought, in London at least, we weren’t going to get the ammo that we needed or the thoughts and the beliefs that you have in the West Coast of America… [YC] has got an incredible track record so we just thought let’s do this.”

While they’ve started with Muslims living in the West, their ambitions scale to the global Muslim market as a whole — seeing big potential to grow beyond their first focus on markets with a low density of Muslims.

Indeed, Brodie argues there’s even more need for a matchmaking app in majority Muslim countries, which he says already have big but — as he sees it — ineffective and often expensive matchmaking industries. So, in other words, even a high density of potential mates leaves a problem for a matchmaking app to fix.

“There’s already a huge market of matchmakers [in countries like Indonesia]. But it’s incredibly ineffective,” he argues. “It’s not just a problem in the West, where there’s low density; even in a country where nearly everyone’s a Muslim, finding partners is very difficult.”

In terms of competition, and setting aside the older generation of matchmaking websites, Brodie says there are a “few” others trying to build dating apps for Muslim singles — a quick search on the App Store brings up Minder and Salaam-Swipe as just two examples — but claims Muzmatch is at least twice as big as “our nearest app competitor”.

“Our competitors are going about this completely the wrong way,” he argues. “They are essentially repackaging Tinder for Muslims which we know just doesn’t work and is why our competition has really bad credibility in the community.”

“The key for us is we’ve tried to go about this with an understanding of the Muslim culture and the particular quirks and sensibilities in terms of how they find a partner,” adds Younas.

“And this is why, for a lot of Muslims, Western dating apps don’t work because it doesn’t really cater to that particular need and intent. So, for us, from the beginning we’ve really thought about that, and ingrained that into our design and into our product. And we think, long term, that will set us apart.”

The founders also reckon Muzmatch might stand a better chance than typical dating apps of monetizing beyond the business of matching and dating itself — by offering related services, such as, for example, helping users find a wedding venue. Which may be important if their users are pairing up and getting married relatively quickly.

“I think we have a better chance than most to achieve monetization post-match. Because just the [short] timespan [between Muslims finding a partner and getting married] and the relationship with us is so close to the events unfolding I think, longer term, this might be an interesting space for us,” says Brodie.

“Right now the Muslim market is huge, so we’re not going to run out of customers,” adds Younas.

As they head into YC demo day, the pair are looking to raise funding but Younas says they intend to “tread carefully”, given Muzmatch is already profitable — the aim is to raise to “really accelerate things but on a more sustainable level”, he says.

They want to invest in areas such as localization and growing the size of the team (from currently just the two of them), so any funding will be going towards preparing for future growth, such as by investing in headcount and backend infrastructure.

“We have global ambitions,” says Younas. “We’re not just looking at the US, Canada and the UK. We really want to be the global player for Muslims worldwide looking for a partner.”

“Without a doubt, in ten years’ time, someone will have achieved this. We want to achieve this — and part of this raise will be making sure we have the ammo to really go for it,” adds Brodie. “We’re not just a niche dating app. This is totally different.

“This is a unique product, for 1.8 billion people… Half of the world’s population of Muslims are under thirty. In countries like Saudi Arabia, two-thirds of their population are under thirty. The growth in Muslim population across the world is phenomenal.”

Wayra UK launches accelerator to tackle the ‘poverty premium’


Wayra, the Telefónica-backed accelerator network, is launching a new startup program in the UK that aims to tackle the so-called ‘poverty premium’ — whereby people on low incomes pay more for some goods and services.

The program, called Wayra Fair By Design, will support seven startups per year, falling into four broad areas: energy (primarily electricity and gas); finance; insurance; and geo-based costs, such as paying higher prices for food, transport and insurance because of where someone lives. Wayra says digital exclusion may also factor in this category.

Accepted startups can expect to receive around £70,000 in cash and services, including access to Wayra’s mentoring and investor network, as well as opportunities to work with Telefónica and its partners; and full access to co-working space at the Open Future_ North building in Oldham, which opens tomorrow.

Wayra says the program will invest in a combination of Community Interest Companies and charities, as well as private limited companies, including tech businesses. Start-ups developing solutions to open up more affordable credit options would be ideal candidates for the program, it adds.

Commenting in a statement, Gary Stewart, Director of Wayra UK, said: “It should not cost more to be poor. An entrepreneur’s central task is to offer a compelling, sustainable solution to big problems, and we can think of fewer problems bigger or more worthy of a solution than this one. We are eager to work with start-ups to make real progress in the battle against inequality.”

The program is backed by a new investment fund — called the Fair By Design Fund — which Wayra says has £8 million ready to deploy now, and a goal of raising £20 million in total — to invest in companies tackling the poverty premium, both via the accelerator program and in separate investments across the UK.

Funding is coming from a partnership between financial institution Big Society Capital, social policy research charity the Joseph Rowntree Foundation, investment fund manager Finance Birmingham and VC Ascension Ventures. The latter two will be managing the new fund.

The fund will invest in companies from seed through to Series A stage and beyond, including seeking deal-flow and co-investment opportunities from other funds, VCs and angel investors.

In another supporting statement, Chris Goulden, deputy director of policy and research at the Joseph Rowntree Foundation, added: “Reducing the cost of essential goods and services is critical for solving poverty in the UK. The poverty premium costs low-income households on average £490 a year. With higher inflation and low wage growth, tackling these premiums is vital for families struggling to make ends meet. This fund is an important step towards finding viable solutions to reducing extra costs faced by those on low incomes.”  

Facebook expands its hate-fighting counterspeech initiative in Europe


Facebook has launched a third counterspeech initiative in Europe, partnering with the not-for-profit Institute for Strategic Dialogue for the launch of the Online Civil Courage Initiative (OCCI), which is aimed at tackling online extremism and hate speech.

COO Sheryl Sandberg launched the initiative in London this morning along with Sasha Havlicek, CEO of the Institute for Strategic Dialogue, and with the UK founding partners for the initiative who are:

  • Brendan Cox, Jo Cox Foundation — an organization named after a UK MP who was murdered by a right-wing extremist last year
  • Mark Gardner, Community Security Trust — an organization that works to combat antisemitism
  • Fiyaz Mughal, Tell MAMA — a support organization for victims of anti-Muslim hate
  • Shaukat Warraich, Imams Online — an online information portal that aims to showcase positive Islamic content

The OCCI will commit financial and marketing support to UK NGOs working to counter online extremism, including the four listed above.

Facebook said the aim is to bring together experts to develop best practice and tools for people to engage in counterspeech.

The move follows similar initiatives launched by the company in Germany in January 2016 and in France in March 2017. At the initial launch in Germany Facebook pledged more than €1 million in funding for NGOs under the OCCI program.

It’s not clear if Facebook has since expanded its funding commitment for the program — we’ve asked and will update this post with any response.

In the UK the OCCI will provide:

  • Training for NGOs to help them to monitor and respond to extremist content, and a dedicated support desk so they can communicate directly with Facebook
  • Marketing support for NGOs to undertake counterspeech campaigns through Facebook’s creative shop and Facebook advertising credits
  • Best practice sharing with NGOs, government and other online services
  • Financial support for academic research on online and offline patterns of extremism — and what makes an effective response

Overall, the initiative aims to enable a community of local organisations and activists to “share campaigns, experiences, advice and challenges” — using Facebook’s own Groups feature as their networking medium.

To date, Facebook says OCCI across Europe has engaged in direct training at OCCI Counterspeech Labs and workshops with more than 100 anti-hate and anti-extremism organisations, reaching some 3.5 million people online via — you guessed it — its Facebook page.

The company has previously talked about how counterspeech training is a part of its strategy to tackle online extremism, noting this in its first Hard Questions post — which focused on what it’s doing to counter terrorism.

Hard Questions is a series of policy discussions the company announced and initiated last week, soliciting feedback from users on a variety of questions and concerns — from countering the spread of extremist content to considering whether social media is generally good for democracy.

And given Facebook’s staggering size — with the platform now having amassed nearly two billion users globally — the company has clearly reached a tipping point in terms of realizing it must at the very least be seen to be acknowledging it has a responsibility to consider the wider impacts of its platform.

The days of Zuckerberg just being able to shrug his shoulders at concerns by claiming Facebook is just a technology platform are well and truly over.

Yet it remains to be seen what practical measures and changes to how Facebook does business will flow from this series of grown-up public discussions. And cynical voices might say Facebook is seeking to turn criticism of its platform into increased engagement on its platform.

The company has certainly been facing increased attacks in recent times, including from politicians seeking to scapegoat tech platforms for not doing enough to counter extremism.

And — more broadly — for not taking their social responsibilities seriously enough.

A UK parliamentary committee recently slammed tech giants including Facebook for taking a laissez-faire attitude to content moderation, for example — and suggested the government should look at implementing fines for failures on this front. Something it has said it is considering.

Meanwhile, in Germany, a legislative proposal that includes fines of up to €50 million for social media firms failing to promptly remove illegal hate speech after a complaint has gained government backing.

Perhaps we therefore should not be surprised that Facebook revealed a new mission statement yesterday — saying it now wants to: “Give people the power to build community and bring the world closer together.”

It’s certainly a slogan that better aligns with current political priorities in a world that’s sounding increasingly divided and divisive.

And one that Facebook will surely be hoping not merely takes the heat away from its platform, but — via the likes of this expanded counterspeech initiative — works to rechannel the negative energy being directed at its platform and turn it into increased engagement on its platform.

UK and France to jointly pressure tech firms over extremist content


The leader of the UK’s new minority government, Theresa May, is in France today for talks with her French counterpart, Emmanuel Macron, and the pair are slated to launch a joint crack down on online extremism.

Under discussion is whether new legal liability is needed for tech companies that fail to remove terrorism-related content — potentially even including fines.

Speaking ahead of her trip to Paris, May said: “The counter-terrorism cooperation between British and French intelligence agencies is already strong, but President Macron and I agree that more should be done to tackle the terrorist threat online.

“In the UK we are already working with social media companies to halt the spread of extremist material and poisonous propaganda that is warping young minds. And today I can announce that the UK and France will work together to encourage corporations to do more and abide by their social responsibility to step up their efforts to remove harmful content from their networks, including exploring the possibility of creating a new legal liability for tech companies if they fail to remove unacceptable content.”

“We are united in our total condemnation of terrorism and our commitment to stamp out this evil,” she added.

The move follows the G7 meeting last month, where May pushed for collective action from the group of nations on tackling online extremism — securing agreement from the group to push for tech firms to do more. “We want companies to develop tools to identify and remove harmful materials automatically,” she said then.

Earlier this month she also called for international co-operation to regulate the Internet to — in her words — “prevent the spread of extremism and terrorist planning”. Although she was on the campaign stump at the time, and securing cross-border agreements to ‘control the Internet’ is hardly something any single political leader, however popular (and May is not that), has in their gift.

The German government has recently backed a domestic proposal to fine social media firms up to €50 million if they fail to promptly remove illegal hate speech from their platforms — within 24 hours after a complaint has been made for “obviously criminal content”, and within seven days for other illegal content.

This has yet to be adopted as legislation. But domestic fines do present a more workable route for governments to try to compel the types of action they want to see from tech firms, albeit only locally.

And while the UK and France have not yet committed to applying fines as a stick to beat social media on content moderation, they are at least eyeing such measures now.

Last month, a UK parliamentary committee urged the government to look at financial penalties for social media companies that fail on content moderation — hitting out at Facebook, YouTube and Twitter for taking a “laissez-faire approach” to moderating hate speech content on their platforms.

Facebook’s content moderation rules have also recently been criticized by child safety charities — so it’s not just terrorism related material that tech firms are facing flak for spreading via their platforms.

We’ve reached out to Facebook, Google and Twitter for comment on the latest developments here and will update this story with any response.

As well as considering creating a new legal liability for tech companies, the UK Prime Minister’s Office said today that the UK and France will lead joint work with the firms in question — including to develop tools to identify and remove harmful material automatically.

“In particular, the Prime Minister and President Macron will press relevant firms to urgently establish the industry-led forum agreed at the G7 summit last month, to develop shared technical and policy solutions to tackle terrorist content on the internet,” the PM’s office said in a statement.

Tech firms do already use tools to try to automate the identification and removal of problem content — although given the vast scale of these user generated content platforms (Facebook, for example, has close to two billion users at this point), and the huge complexity of moderating so much UGC (also factoring in platforms’ typical preference for free speech), there’s clearly no quick and easy tech fix here (the majority of accounts Twitter suspends for promoting terrorism are already identified by its internal spam-fighting tools — but extremist content clearly remains a problem on Twitter).

Earlier this year, Facebook CEO Mark Zuckerberg revealed the company is working on applying AI to try to speed up its content moderation processes, though he also warned that AI aids are “still very early in development” — adding that “many years” will be required to fully develop them.

It remains to be seen whether the threat of new liability legislation will concentrate minds among tech giants to step up their performance on content moderation. Although there are signs they are already doing more.

At the start of this month the European Commission said the firms have made “significant progress” on illegal hate speech takedowns, a year after they agreed to a voluntary Code of Conduct. Facebook also recently announced 3,000 extra moderator staff to beef up its content review team (albeit that’s still a drop in the ocean vs the two billion users it has generating content).

Meanwhile, the efficacy of politicians focusing counterterrorism efforts on cracking down on online extremism remains doubtful. And following the recent terror attacks in the UK, May, who served as Home Secretary prior to being PM, faced criticism for making cuts to frontline policing.

Speaking to the Washington Post last week in the wake of the latest terror attack in London, Peter Neumann, director of the London-based International Center for the Study of Radicalization, argued the Internet is not to blame for the recent UK attacks.  “In the case of the most recent attacks in Britain, it wasn’t about the Internet. Many of those involved were radicalized through face-to-face interactions,” he said.

Featured Image: Twin Design/Shutterstock

Facebook culls ‘tens of thousands’ of fake accounts ahead of UK election


Facebook has revealed that it has purged “tens of thousands” of fake accounts in the U.K. ahead of a general election next month.

The BBC reported this non-specific figure earlier today, with Facebook also saying it is monitoring the repeated posting of the same content or a sharp increase in messaging and flagging accounts displaying such activity.

Providing more detail on these measures, Facebook told us: “These changes help us detect fake accounts on our service more effectively — including ones that are hard to spot. We’ve made improvements to recognize these inauthentic accounts more easily by identifying patterns of activity — without assessing the content itself. For example, our systems may detect repeated posting of the same content, or an increase in messages sent. With these changes, we expect we will also reduce the spread of material generated through inauthentic activity, including spam, misinformation, or other deceptive content that is often shared by creators of fake accounts.”
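
Facebook hasn’t disclosed how these detection systems work internally. Purely as an illustration of the kind of content-agnostic pattern matching described, a minimal sketch (with made-up thresholds) could flag repeated identical posts and sudden messaging spikes like so:

```python
# Purely illustrative: the thresholds and signals below are assumptions sketching
# the kind of content-agnostic pattern detection Facebook describes, not its system.
import hashlib
from collections import Counter
from typing import List

def duplicate_post_ratio(posts: List[str]) -> float:
    """Share of posts that are exact duplicates, compared by hash only (content is never interpreted)."""
    if not posts:
        return 0.0
    hashes = Counter(hashlib.sha256(p.encode("utf-8")).hexdigest() for p in posts)
    duplicates = sum(count - 1 for count in hashes.values() if count > 1)
    return duplicates / len(posts)

def messaging_spike(daily_message_counts: List[int], multiplier: float = 5.0) -> bool:
    """True if the most recent day's message volume is far above the trailing average."""
    if len(daily_message_counts) < 2:
        return False
    *history, today = daily_message_counts
    baseline = sum(history) / len(history)
    return baseline > 0 and today > multiplier * baseline

def looks_inauthentic(posts: List[str], daily_message_counts: List[int]) -> bool:
    return duplicate_post_ratio(posts) > 0.5 or messaging_spike(daily_message_counts)

print(looks_inauthentic(["buy now"] * 8 + ["hello"], [3, 4, 2, 40]))  # True
```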

Facebook has previously been accused of liberal bias for demoting conservative views in its Trending Topics feature — which likely explains why it’s so keen to specify that the systems it’s built to try to suppress the spread of certain types of “inauthentic” content do not assess “the content itself.”

Another fake news-related tweak Facebook says it has brought to the U.K. to try to combat the spread of misinformation is to take note of whether people share an article they’ve read — with its rationale being that if a lot of people don’t share something they’ve read it might be because the information is misleading.

“We’re always looking to improve News Feed by listening to what the community is telling us. We’ve found that if reading an article makes people significantly less likely to share it, that may be a sign that a story has misled people in some way. In December, we started to test incorporating this signal into ranking, specifically for articles that are outliers, where people who read the article are significantly less likely to share it. We’re now expanding the test to the UK,” Facebook said on this.
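
Facebook hasn’t published the formula behind this signal. As a hedged illustration of the general approach, flagging articles whose share-after-read rate is an unusually low outlier versus their peers could be sketched like this (the statistic and threshold are assumptions):

```python
# Purely illustrative: Facebook has not published the formula for this ranking signal;
# the outlier test and threshold here are assumptions.
from statistics import mean, pstdev
from typing import Dict, List, Tuple

def share_after_read_rate(reads: int, shares: int) -> float:
    return shares / reads if reads else 0.0

def misleading_outliers(articles: Dict[str, Tuple[int, int]], z_cutoff: float = -1.0) -> List[str]:
    """Return article ids whose share-after-read rate is an unusually low outlier versus peers."""
    rates = {aid: share_after_read_rate(reads, shares) for aid, (reads, shares) in articles.items()}
    mu, sigma = mean(rates.values()), pstdev(rates.values())
    if sigma == 0:
        return []
    return [aid for aid, rate in rates.items() if (rate - mu) / sigma < z_cutoff]

# (reads, shares): article "c" is widely read but almost never shared afterwards
data = {"a": (10_000, 900), "b": (12_000, 1_100), "c": (50_000, 40)}
print(misleading_outliers(data))  # ['c']
```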

The company has also taken out adverts in U.K. national newspapers displaying tips to help people spot fake news — having taken similar steps in France last month prior to its presidential election.

In a statement about its approach to tackling fake news in the U.K., Facebook’s director of policy for the country, Simon Milner, claimed the company is “doing everything we can.”

“People want to see accurate information on Facebook and so do we. That is why we are doing everything we can to tackle the problem of false news,” he said. “We have developed new ways to identify and remove fake accounts that might be spreading false news so that we get to the root of the problem. To help people spot false news we are showing tips to everyone on Facebook on how to identify if something they see is false. We can’t solve this problem alone so we are supporting third party fact checkers during the election in their work with news organisations, so they can independently assess facts and stories.”

Fakebook?

A spokesperson told us that Facebook’s “how to spot” fake news ads (pictured below) are running in U.K. publications, including The Times, The Telegraph, Metro and The Guardian.

Tips the company is promoting include being skeptical of headlines; checking URLs to view the source of the information; asking whether photos look like they have been manipulated; and cross-referencing with other news sources to try to verify whether a report has multiple sources publishing it.

Facebook does not appear to be running these ads in U.K. newspapers with the largest readerships, such as The Sun and The Daily Mail, which suggests the exercise is mostly a PR drive by the company to try to be seen to be taking some very public steps to fight the fake news political hot potato.

The political temperature on this issue is not letting up for Facebook. Last month, for example, a U.K. parliamentary committee said the company must do more to combat fake news — criticizing it for not responding fast enough to complaints.

“They can spot quite quickly when something goes viral. They should then be able to check whether that story is true or not and, if it is fake, blocking it or alerting people to the fact that it is disputed. It can’t just be users referring the validity of the story. They have to make a judgment about whether a story is fake or not,” argued select committee chairman Damian Collins.

Facebook has also been under growing pressure in the U.K. for not swiftly handling complaints about the spread of hate speech, extremist and illegal content on its platform — and earlier this month another parliamentary committee urged the government to consider imposing fines on it and other major social platforms for content moderation failures in a bid to impose better moderation standards.

Add to that Facebook’s specific role in influencing elections, which will again face scrutiny later today when the BBC’s Panorama program screens an investigation of how content spread via Facebook during the U.S. election and the U.K.’s Brexit referendum — including considering how much money the social networking giant makes from fake news.

The BBC is already teasing this spectacularly awkward clip of Milner being interviewed for the program, where he is repeatedly asked how much money the company makes from fake news — and repeatedly fails to provide a specific answer.

Facebook declined to respond on this when we asked for comment on the program’s claims.

Safe to say, there are some very awkward questions for Facebook here (as there have been for Google too, recently, relating to ads being served alongside extremist content on YouTube). And while Milner says the company aspires to reduce “to zero” the money it makes from fake news, it’s clearly not yet in a position to say it does not financially benefit from the spread of misinformation.

And while it’s also true that some traditional media outlets can and do benefit from spreading falsehoods — earlier this year, for example, The Daily Mail was effectively branded a source of fake news by Wikipedia editors, who voted to exclude it as a source for the website on the grounds that the information it contains is “generally unreliable” — the issue with Facebook goes beyond having an individually skewed editorial agenda. It’s about a massively scalable distribution technology whose core philosophy is to operate without any preemptive editorial checks and balances at all.

The point is, Facebook’s staggering size, combined with the algorithmic hierarchy of its News Feed, which can create feedback loops of popularity, means its product can act as an amplification platform for fake news. And for all The Daily Mail’s evident divisiveness, it does not control a global distribution platform that’s pushing close to two billion active users.

So, really, it’s Facebook’s unprecedented reach and power that is the core of the issue here when you’re considering whether technology might be undermining democracy.

No other media outlet has ever come close to such scale. And that’s why this issue is intrinsically bound up with Facebook — because it foregrounds the vast power the platform wields, and the commensurate lack of regulation in how it applies that power.

Ads in national newspapers are therefore really best viewed as Facebook trying to influence politicians, as lawmakers wake up to the power of Facebook. So maybe there should be an eleventh tip in Facebook’s false news advert: Consider the underlying agenda.

In the U.K., Facebook says that it is working with local third-party fact-checking organization Full Fact, and with the Google News Lab-backed First Draft organization, to work with “major newsrooms to address rumors and misinformation spreading online during the UK general election” — echoing the approach it announced in Germany in January, ahead of German elections this September… although the effectiveness of that approach has already been questioned.

Facebook says full details of the U.K. initiative will be announced “in due course.” The U.K.’s surprise General Election — called by Prime Minister Theresa May late last month, despite her previously stated intent not to call an election before 2020 — presumably caught the company on the hop.

With just one month to go until polling day in the U.K. it remains to be seen whether May’s election U-turn also caught the fake political news spreaders on the hop.

Featured Image: TechCrunch