All posts in “Europe”

Index has backed Immersive Games Lab, a new startup from the founder of Tough Mudder

Immersive Games Lab, a new venture from Tough Mudder co-founder and Chairman Will Dean, has picked up around £2.5 million in seed funding, TechCrunch has learned. According to sources, London-based Index Ventures has led the round.

In a call confirming the close, Dean told me Sweet Capital and JamJar Investments (the VC fund set up by the three Innocent Drinks founders) also participated.

Developing the “next generation” of immersive group gaming, Immersive Games Lab describes itself as “part indoor theme park, part video game, part escape room” and says it will launch a new breed of “captivating group experiences” in London in early 2019.

Little else is known regarding what Immersive Games Lab’s first experience will be, although Dean told me it will be sold in retail spaces, in ticket form, and will be a blend of technology and in-person group activity. It is currently being prototyped and tested in a warehouse in North London.

More broadly, he said the idea of creating a new kind of immersive gaming experience is partly a response to the sentiment that we spend too much time on our screens, consuming social media in a way that isn’t always good for our mental health.

His previous and hugely successful venture, Tough Mudder, was all about creating a new, fun experience around exercise — and ultimately helping people become more physically active. Dean says he is keen for Immersive Games Lab to also make a positive dent in people’s lives.

The new venture also builds nicely on Dean’s track record of building an experience- and community-led consumer proposition, and the type of go-to-market strategy that requires — which is undoubtedly what caught the interest of Index and other investors, in what I understand was an oversubscribed round.

Immersive Games Lab’s other co-founder is David Spindler, who also played a key role at Tough Mudder.

Facebook finds and kills another 512 Kremlin-linked fake accounts

Two years on from the U.S. presidential election, Facebook continues to have a major problem with Russian disinformation being megaphoned via its social tools.

In a blog post today the company reveals another tranche of Kremlin-linked fake activity — saying it’s removed a total of 471 Facebook pages and accounts, as well as 41 Instagram accounts, which were being used to spread propaganda in regions where Putin’s regime has sharp geopolitical interests.

In its latest reveal of “coordinated inauthentic behavior” — aka the euphemism Facebook uses for disinformation campaigns that rely on its tools to generate a veneer of authenticity and plausibility in order to pump out masses of sharable political propaganda — the company says it identified two operations, both originating in Russia, and both using similar tactics without any apparent direct links between the two networks.

One operation was targeting Ukraine specifically, while the other was active in a number of countries in the Baltics, Central Asia, the Caucasus, and Central and Eastern Europe.

“We’re taking down these Pages and accounts based on their behavior, not the content they post,” writes Facebook’s Nathaniel Gleicher, head of cybersecurity policy. “In these cases, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action.”

Sputnik link

Discussing the Russian disinformation op targeting multiple countries, Gleicher says Facebook found that seemingly innocuous or general interest pages were in fact linked to employees of Kremlin propaganda outlet Sputnik, with some of the pages encouraging protest movements and pushing other Putin lines.

“The Page administrators and account owners primarily represented themselves as independent news Pages or general interest Pages on topics like weather, travel, sports, economics, or politicians in Romania, Latvia, Estonia, Lithuania, Armenia, Azerbaijan, Georgia, Tajikistan, Uzbekistan, Kazakhstan, Moldova, Russia, and Kyrgyzstan,” he writes. “Despite their misrepresentations of their identities, we found that these Pages and accounts were linked to employees of Sputnik, a news agency based in Moscow, and that some of the Pages frequently posted about topics like anti-NATO sentiment, protest movements, and anti-corruption.”

Facebook has included some sample posts from the removed accounts in the blog post, which show a mixture of imagery being deployed — from a photo of a rock concert, to shots of historic buildings and a snowy scene, to overtly militaristic and political protest imagery.

In all, Facebook says it removed 289 Pages and 75 Facebook accounts associated with this Russian disop, adding that around 790,000 accounts followed one or more of the removed Pages.

It also reveals that it received around $135,000 for ads run by the Russian operators (specifying this was paid for in euros, rubles, and U.S. dollars).

“The first ad ran in October 2013, and the most recent ad ran in January 2019,” it notes, adding: “We have not completed a review of the organic content coming from these accounts.”

These Kremlin-linked Pages also hosted around 190 events — with the first scheduled for August 2015, according to Facebook, and the most recent scheduled for January 2019. “Up to 1,200 people expressed interest in at least one of these events. We cannot confirm whether any of these events actually occurred,” it further notes.

Facebook adds that open source reporting and work by partners that investigate disinformation helped identify the network.

It also says it has shared information about the investigation with U.S. law enforcement, the U.S. Congress, other technology companies, and policymakers in impacted countries.

Ukraine tip-off

In the case of the Ukraine-targeted Russian disop, Facebook says it removed a total of 107 Facebook Pages, Groups, and accounts, and 41 Instagram accounts, specifying that it was acting on an initial tip-off from U.S. law enforcement.

In all, it says around 180,000 Facebook accounts were following one or more of the removed pages, while the fake Instagram accounts were being followed by more than 55,000 accounts.

Again Facebook received money from the disinformation purveyors, saying it took in around $25,000 in ad spending on Facebook and Instagram in this case — all paid for in rubles this time — with the first ad running in January 2018, and the most recent in December 2018. (Again it says it has not completed a review of content the accounts were generating.)

“The individuals behind these accounts primarily represented themselves as Ukrainian, and they operated a variety of fake accounts while sharing local Ukrainian news stories on a variety of topics, such as weather, protests, NATO, and health conditions at schools,” writes Gleicher. “We identified some technical overlap with Russia-based activity we saw prior to the US midterm elections, including behavior that shared characteristics with previous Internet Research Agency (IRA) activity.”

In the Ukraine case it says it found no Events being hosted by the pages.

“Our security efforts are ongoing to help us stay a step ahead and uncover this kind of abuse, particularly in light of important political moments and elections in Europe this year,” adds Gleicher. “We are committed to making improvements and building stronger partnerships around the world to more effectively detect and stop this activity.”

A month ago Facebook also revealed it had removed another batch of politically motivated fake accounts. In that case the network behind the pages had been working to spread misinformation in Bangladesh 10 days before the country’s general elections.

This week it also emerged that the company is extending some of its nascent election security measures to more international markets ahead of major elections in the coming months, bringing in requirements for political advertisers such as checks that an advertiser is located in the country where their ads will run.

However, in other countries that also have big votes looming this year, Facebook has yet to announce any measures to combat politically charged fakes.

German HR and recruiting platform Personio raises $40M Series B led by Index

Personio, the German HR and recruiting platform, has raised $40 million in a Series B funding. Leading the round is London-based Index Ventures, with participation from existing investors Northzone and Rocket Internet’s Global Founders.

Founded in 2015, Munich-based Personio has set out to build an “HR operating system” for small and medium-sized companies (SMEs) ranging from 10 to 2,000 employees. The cloud-based software is designed to power all of a company’s HR and recruiting processes, either via the product’s own core functionality or through its ability to integrate with third-party software.

“We believe in the benefit of a holistic HR solution that covers the entire employee life-cycle, while its functionalities need to adapt to individual customer requirements and processes,” Personio co-founder and CEO Hanno Renner tells me.

“That being said, we distinguish between the bread-and-butter HR activities which every company needs to do (e.g. recruiting, onboarding, time off management, payroll etc.) and those that are either industry-specific or rather nice-to-haves”.

Examples of those two categories are hardware-based time tracking and employee engagement, respectively. “We focus our efforts on providing a best-in-class experience for what we consider bread-and-butter HR,” adds Renner. “For more specific requirements, we let our customers choose from a growing number of integrated vertical solutions based on their needs. Data will be synced so Personio acts as the system of record for all HR information and information only needs to be entered once”.

In addition to “out of the box” third-party software integrations, Personio’s claim to offer an HR operating system is backed up by the company’s open API, which is designed to cover various use cases where accessing data stored in Personio can add further value to customers. These range from building something as simple as a Slack bot using Personio data, to connecting Personio to a company’s data warehouse, to deeper integrations with internal systems.
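As a rough sketch of the kind of integration that open API enables (say, feeding a Slack bot or a warehouse sync with the employee list), the snippet below shows the general shape in Python. The endpoint paths, credential exchange, and response structure are assumptions for illustration rather than confirmed details of Personio’s API:

```python
import requests

BASE = "https://api.personio.de/v1"  # assumed base URL, for illustration only

def get_token(client_id: str, client_secret: str) -> str:
    """Exchange API credentials for a short-lived bearer token (assumed flow)."""
    resp = requests.post(
        f"{BASE}/auth",
        json={"client_id": client_id, "client_secret": client_secret},
    )
    resp.raise_for_status()
    return resp.json()["data"]["token"]

def list_employees(token: str) -> list:
    """Fetch employee records, e.g. for a Slack bot or a data-warehouse sync."""
    resp = requests.get(
        f"{BASE}/company/employees",
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json()["data"]

if __name__ == "__main__":
    token = get_token("YOUR_CLIENT_ID", "YOUR_CLIENT_SECRET")
    for employee in list_employees(token):
        attrs = employee["attributes"]
        print(attrs["first_name"]["value"], attrs["last_name"]["value"])
```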

More broadly, Renner says this holistic approach, coupled with Personio’s workflow automation that aims to cut down on time wasted on repetitive tasks, is not only resonating with HR managers and recruiters who typically use the product for several hours per day, but is also finding use with managers, executives and other employees. The end result is that HR and recruitment processes can become much more distributed across a company.

To that end, Personio says its Series B funding will be used to help the company become Europe’s leading provider of human resources software for SMEs. It boasts more than 1,000 clients in 35 countries, with over 150,000 employees and several hundred thousand applicants currently being managed within Personio.

“We believe that now is the right timing to actively expand into further regions and the funding as well as Index expertise will certainly help making that move successful,” adds the Personio CEO. “Apart from that, we consider ourselves a product-driven company and hence want to continue to strongly invest into building the best product for our customers which will mean significantly growing our product & engineering team and potentially even opening a new office to facilitate hiring”.

Facebook urged to give users greater control over what they see

Academics at the universities of Oxford and Stanford think Facebook should give users greater transparency and control over the content they see on its platform.

They also believe the social networking giant should radically reform its governance structures and processes to throw more light on content decisions, including by looping in more external experts to steer policy.

Such changes are needed to address widespread concerns about Facebook’s impact on democracy and on free speech, they argue in a report published today, entitled “Glasnost! Nine Ways Facebook Can Make Itself a Better Forum for Free Speech and Democracy”, which includes a series of recommendations for reforming Facebook.

“There is a great deal that a platform like Facebook can do right now to address widespread public concerns, and to do more to honour its public interest responsibilities as well as international human rights norms,” writes lead author Timothy Garton Ash.

“Executive decisions made by Facebook have major political, social, and cultural consequences around the world. A single small change to the News Feed algorithm, or to content policy, can have an impact that is both faster and wider than that of any single piece of national (or even EU-wide) legislation.”

Here’s a rundown of the report’s nine recommendations:

  1. Tighten Community Standards wording on hate speech — the academics argue that Facebook’s current wording on key areas is “overbroad, leading to erratic, inconsistent and often context-insensitive takedowns”; and also generating “a high proportion of contested cases”. Clear and tighter wording could make consistent implementation easier, they believe
  2. Hire more and contextually expert content reviewers — “the issue is quality as well as quantity”, the report points out, pressing Facebook to hire more human content reviewers plus a layer of senior reviewers with “relevant cultural and political expertise”; and also to engage more with trusted external sources such as NGOs. “It remains clear that AI will not resolve the issues with the deeply context-dependent judgements that need to be made in determining when, for example, hate speech becomes dangerous speech,” they write
  3. Increase ‘decisional transparency’ — Facebook still does not offer adequate transparency around content moderation policies and practices, they suggest, arguing it needs to publish more detail on its procedures, including specifically calling for the company to “post and widely publicize case studies” to provide users with more guidance and to provide potential grounds for appeals
  4. Expand and improve the appeals process — also on appeals, the report recommends Facebook give reviewers much more context around disputed pieces of content, and also provide appeals statistics data to analysts and users. “Under the current regime, the initial internal reviewer has very limited information about the individual who posted a piece of content, despite the importance of context for adjudicating appeals,” they write. “A Holocaust image has a very different significance when posted by a Holocaust survivor or by a Neo-Nazi.” They also suggest Facebook should work, in dialogue with users, on developing an appeals due process that is “more functional and usable for the average user” — such as with the help of a content policy advisory group
  5. Provide meaningful News Feed controls for users — the report suggests Facebook users should have more meaningful controls over what they see in the News Feed, with the authors dubbing current controls “altogether inadequate” and advocating for far more, such as the ability to switch off the algorithmic feed entirely (without the chronological view being defaulted back to the algorithm when the user reloads, as is currently the case for anyone who switches away from the AI-controlled view). The report also suggests adding a News Feed analytics feature, to give users a breakdown of the sources they’re seeing and how that compares with control groups of other users. Facebook could also offer a button to let users adopt a different perspective by exposing them to content they don’t usually see, they suggest
  6. Expand context and fact-checking facilities — the report pushes for “significant” resources to be ploughed into identifying “the best, most authoritative, and trusted sources” of contextual information for each country, region and culture — to help feed Facebook’s existing (but still inadequate and not universally distributed) fact-checking efforts
  7. Establish regular auditing mechanisms — there have been some civil rights audits of Facebook’s processes (such as this one, which suggested Facebook formalizes a human rights strategy) but the report urges the company to open itself up to more of these, suggesting the model of meaningful audits should be replicated and extended to other areas of public concern, including privacy, algorithmic fairness and bias, diversity and more
  8. Create an external content policy advisory group — key content stakeholders from civil society, academia and journalism should be enlisted by Facebook for an expert policy advisory group to provide ongoing feedback on its content standards and implementation; as well as also to review its appeals record. “Creating a body that has credibility with the extraordinarily wide geographical, cultural, and political range of Facebook users would be a major challenge, but a carefully chosen, formalized, expert advisory group would be a first step,” they write, noting that Facebook has begun moving in this direction but adding: “These efforts should be formalized and expanded in a transparent manner.”
  9. Establish an external appeals body — the report also urges “independent, external” ultimate control of Facebook’s content policy, via an appeals body that sits outside the mothership and includes representation from civil society and digital rights advocacy groups. The authors note Facebook is already flirting with this idea, citing comments made by Mark Zuckerberg last November, but also warn this needs to be done properly if power is to be “meaningfully” devolved. “Facebook should strive to make this appeals body as transparent as possible… and allow it to influence broad areas of content policy… not just rule on specific content takedowns,” they warn

In conclusion, the report notes that the content issues it focuses on are not unique to Facebook but apply widely across various Internet platforms — hence growing interest in some form of “industry-wide self-regulatory body”, though it suggests that achieving that kind of overarching regulation will be “a long and complex task”.

In the meantime the academics remain convinced there is “a great deal that a platform like Facebook can do right now to address widespread public concerns, and to do more to honour its public interest responsibilities, as well as international human rights norms” — with the company front and center of the frame given its massive size (2.2BN+ active users).

“We recognize that Facebook employees are making difficult, complex, contextual judgements every day, balancing competing interests, and not all those decisions will benefit from full transparency. But all would be better for more regular, active interchange with the worlds of academic research, investigative journalism, and civil society advocacy,” they add.

We’ve reached out to Facebook for comment on their recommendations.

The report was prepared by the Free Speech Debate project of the Dahrendorf Programme for the Study of Freedom, St. Antony’s College, Oxford, in partnership with the Reuters Institute for the Study of Journalism, University of Oxford, the Project on Democracy and the Internet, Stanford University, and the Hoover Institution, Stanford University.

Last year we offered a few of our own ideas for fixing Facebook — including suggesting the company hire orders of magnitude more expert content reviewers, as well as providing greater transparency into key decisions and processes.

Sources: Email security company Tessian is closing in on a $40M round led by Sequoia Capital

Continuing a trend that VCs here in London tell me is seeing more and more European deal-flow attract the interest of top-tier Silicon Valley venture capital firms, TechCrunch has learned that email security provider Tessian is the latest to raise from across the pond.

According to multiple sources, the London-based company has closed a Series B round led by Sequoia Capital. I understand that the deal could be announced within a matter of weeks, and that the round size is in the region of $40 million. Tessian declined to comment.

Founded in 2013 by three engineering graduates from Imperial College — Tim Sadler, Tom Adams and Ed Bishop — Tessian is deploying machine learning to improve email security. Once installed on a company’s email systems, the machine learning tech analyses an enterprise’s email networks to understand normal and abnormal email sending patterns and behaviours.

Tessian then attempts to detect anomalies in outgoing emails and warns users about potential mistakes, such as a wrongly intended recipient, or nefarious employee activity, before an email is sent. More recently, the startup has begun addressing in-bound email, too. This includes preventing phishing attempts or spotting when emails have been spoofed.
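Tessian hasn’t published details of its models, but the general technique it describes (learning a sender’s normal recipient patterns, then flagging outliers before an email leaves) can be illustrated with a toy sketch. The class, threshold, and data shapes below are invented for illustration and are not Tessian’s actual implementation:

```python
from collections import Counter, defaultdict

class RecipientAnomalyModel:
    """Toy model: learn each sender's historical recipient domains, then
    flag outgoing mail addressed to a domain that sender has rarely or
    never emailed before (e.g. a mistyped address)."""

    def __init__(self, min_count: int = 3):
        self.min_count = min_count
        # sender address -> Counter of recipient domains
        self.history = defaultdict(Counter)

    def train(self, sent_emails):
        # sent_emails: iterable of (sender, recipient) address pairs
        for sender, recipient in sent_emails:
            self.history[sender][recipient.split("@")[1]] += 1

    def is_anomalous(self, sender: str, recipient: str) -> bool:
        domain = recipient.split("@")[1]
        return self.history[sender][domain] < self.min_count

model = RecipientAnomalyModel()
model.train([
    ("alice@acme.com", "bob@client.com"),
    ("alice@acme.com", "bob@client.com"),
    ("alice@acme.com", "carol@client.com"),
])
# "client.co" looks like a mistyped domain: warn before the email is sent.
print(model.is_anomalous("alice@acme.com", "bob@client.co"))   # True
print(model.is_anomalous("alice@acme.com", "dana@client.com")) # False
```

A production system would of course weigh far richer signals (message content, timing, reply chains), but the warn-before-send workflow is the same.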

Meanwhile, Tessian (formerly called CheckRecipient) raised $13 million in Series A funding just seven months ago, in a round led by London’s Balderton Capital. The company’s other investors include Accel, Amadeus Capital Partners, Crane, LocalGlobe, Winton Ventures, and Walking Ventures.