Facebook’s Brexit probe unearths three Russian-bought “immigration” ads

Facebook has provided more details about the extent of Russian digital interference related to the UK’s Brexit vote last year.

Last month the social media giant confirmed that Russian agents had used its platform to try to interfere in the UK’s referendum on EU membership — but said it had not found “significant coordination of ad buys or political misinformation targeting the Brexit vote”.

Today’s findings apparently bear out that conclusion, with Facebook claiming it’s unearthed just three ads, on which less than $1 was spent.

The Brexit related Russian-backed ads ran for four days in May, ahead of the UK’s June referendum vote, and apparently garnered around 200 views on Facebook.

It says the ads targeted both UK and US audiences — and “concerned immigration”, rather than being explicitly about the UK’s EU referendum vote.

Which appears to be in line with the strategy Kremlin agents have deployed in the US, where Russian-bought ads have targeted all sorts of socially divisive issues in an apparent attempt to drive different groups and communities further apart.

The Brexit-related ads were paid for by the same 470 Russian-backed accounts that Facebook previously revealed spent ~$100,000, between June 2016 and May 2017, to run more than 3,000 ads targeting US users.

And Facebook linked these accounts to Russia as a consequence of its investigation into Kremlin interference in the wake of the 2016 US presidential election.

For the Brexit audit, it’s worth noting that Facebook appears to have looked only at identified Internet Research Agency (IRA) pages and account profiles — the IRA being the previously unmasked Russian troll farm — so other Russian-backed accounts could have bought ads intended to meddle with Brexit without Facebook realizing it. (Although, given the level of ad buys by IRA accounts targeting US Facebook users, it’s perhaps unlikely there’s a second layer to the Russian political disinformation campaign. Albeit still possible.)

It also does not look like Facebook has attempted to measure and quantify non-paid Brexit-related disinformation posts by Russian-backed accounts — since it’s only talking in terms of “funded advertisements”. We’ve asked and will update this post with any response.

Update: TechCrunch understands that since the scope of the Electoral Commission enquiry relates to activity funded by Russia, Facebook has — thus far — limited its Brexit scrutiny to ad buys. (Thereby making its scrutiny pretty limited.)

We’ve also asked Facebook to share the three Russian-bought “immigration” ads, and to confirm whether they were anti-immigration in sentiment.

So far the company has provided us with the following extract from a letter to the Electoral Commission as commentary on its findings:

We strongly support the Commission’s efforts to regulate and enforce political campaign finance rules in the United Kingdom, and we take the Commission’s request very seriously.

Further to your request, we have examined whether any of the identified Internet Research Agency (IRA) pages or account profiles funded advertisements to audiences in the United Kingdom during the regulated period for the EU Referendum. We have determined that these accounts associated with the IRA spent a small amount of money ($0.97) on advertisements that delivered to UK audiences during that time. This amount resulted in three advertisements (each of which were also targeted to US audiences and concerned immigration, not the EU referendum) delivering approximately 200 impressions to UK viewers over four days in May 2016.

An Electoral Commission spokesperson we contacted for a response emphasized that its discussions with social media companies are at a very early stage.

The spokesperson also confirmed that Google and Twitter have both provided information in response to its request, feeding its ongoing enquiry into whether the use of digital ads and bots on social media might break existing political campaigning rules.

In a statement, the spokesperson added: “Facebook, Google and Twitter have responded to us. We welcome their cooperation. There is further work to be done with these companies in response to our request for details of campaign activity on their platforms funded from outside the UK. Following those discussions we will say more about our conclusions.”

At the time of writing, Twitter and Google had not responded to a request for details of the information they have passed to the Electoral Commission — information that, late last month, Twitter said it would be providing “in the coming weeks”.

A recent academic study of tweet data — looking at how political information diffused on Twitter specifically around the Brexit vote and the US election — identified more than 156,000 Russian accounts which mentioned #Brexit.

The study also found Russian accounts posted almost 45,000 messages pertaining to the EU referendum in the 48 hours around the vote.

Update: A Google spokesperson has now provided the following response — claiming not to have found any evidence of Russian disinformation ops. “We took a thorough look at our systems and found no evidence of this activity on our platform,” they told us.

Social media’s still unaudited role in political campaigning looks set to remain in the domestic spotlight for the foreseeable future — as the Commission continues to investigate.

Though it remains to be seen whether the body will recommend amending UK law to better regulate political activity on digital platforms.

The UK’s Prime Minister waded into the disinformation debate herself last month by publicly accusing the Russian government of seeking to “weaponize information” by planting fake stories and photoshopped images to try to sow discord in the West.

And the so-far disclosed extent of Russian divisive content targeting the US electorate — which in October Facebook admitted could have reached as many as 126 million people — should give politicians in any democracy plenty of pause for thought about major tech platforms.

Featured Image: Evgeny Gromov/Getty Images

UK to give police new powers to ground drones

The UK government has announced it will introduce draft legislation in the spring aimed at preventing unsafe or criminal use of drones.

Last year it ran a public consultation that recommended addressing safety, security and privacy challenges around drone technology.

Among the measures planned for the forthcoming Drone Bill and accompanying secondary legislation amendments are new powers for police to order an operator to ground a drone if it’s deemed necessary.

Police will also be able to seize drone parts to prove a drone has been used to commit a criminal offense, the government said yesterday.

It had already announced its intention to set out a registration plan for drones weighing 250 grams or more. Yesterday it reiterated that the incoming legislative changes will mean drone owners are required to register their devices.

They will also have to sit safety awareness tests, as well as being required to use certain apps — “so they can access the information needed to make sure any planned flight can be made safely and legally”.

In a statement, aviation minister Baroness Sugg said: “Drones have great potential and we want to do everything possible to harness the benefits of this technology as it develops. But if we are to realize the full potential of this incredibly exciting technology, we have to take steps to stop illegal use of these devices and address safety and privacy concerns.”

“Do not take this lightly — if you use a drone to invade people’s privacy or engage in disruptive behaviour, you could face serious criminal charges,” added assistant chief constable Serena Kennedy, the National Police Chiefs’ Council Lead for Criminal Misuse of Drones, in another supporting statement.

While the UK currently has a Drone Code intended to encourage drone operators to fly safely and responsibly, there have still been multiple reports of near misses between drones and aircraft — and the government clearly feels the code needs to be backed up by new laws and powers.

Yesterday it said it is considering whether to ban drones from flying near airports or above 400 feet — noting these measures could form part of the new regulations.

Safety research it published this summer found that drones weighing 400 grams or more can damage the windscreens of helicopters.

It added that it is also continuing to work “closely” with drone manufacturers to use geofencing technology to prevent drones from entering restricted zones — such as military sites.

Another problematic use of drone tech that has emerged is for smuggling contraband over prison walls. Although it’s not yet clear whether the government wants prisons to be included in the ‘no fly zones’ manufacturers bake into devices.

“These new laws strike a balance, to allow the vast majority of drone users to continue flying safely and responsibly, while also paving the way for drone technology to revolutionise businesses and public services,” added Sugg.

Also commenting in a statement, Tim Johnson, policy director at the Civil Aviation Authority, said: “Drones can bring economic and workplace safety benefits but to achieve those we need everyone flying a drone now to do so safely. We welcome plans to increase drone operator training, safety awareness and the creation of no-fly zones.”

At the same time as announcing the incoming draft drone regulations, the government revealed it’s funding a drone innovation project which launches today — inviting UK cities to get involved in R&D focused on using the tech to transform critical services, such as emergency health services and organ transport, essential infrastructure assessment and repair, and parcel delivery and logistics.

Up to five cities will be able to gain government support for carrying out some drone R&D as part of what it’s dubbed The Flying High Challenge.

The project is being run by Nesta in partnership with the Innovate UK government agency.

Study: Russian Twitter bots sent 45k Brexit tweets close to vote

To what extent — and how successfully — did Russian backed agents use social media to influence the UK’s Brexit vote? Yesterday Facebook admitted it had linked some Russian accounts to Brexit-related ad buys and/or the spread of political misinformation on its platform, though it hasn’t yet disclosed how many accounts were involved or how many rubles were spent.

Today The Times reported on research conducted by a group of data scientists in the US and UK looking at how information was diffused on Twitter around the June 2016 EU referendum vote, and around the 2016 US presidential election.

The Times reports that the study tracked 156,252 Russian accounts which mentioned #Brexit, and also found Russian accounts posted almost 45,000 messages pertaining to the EU referendum in the 48 hours around the vote.

Tho Pham, one of the report’s authors, confirmed to us in an email that the majority of those Brexit tweets were posted on June 24, 2016, the day after the vote — when around 39,000 Brexit tweets were posted by Russian accounts, according to the analysis.

But in the run-up to the referendum vote the researchers also generally found that human Twitter users were more likely to spread pro-leave Russian bot content via retweets (vs pro-remain content) — amplifying its potential impact.

From the research paper:

During the Referendum day, there is a sign that bots attempted to spread more leave messages with positive sentiment as the number of leave tweets with positive sentiment increased dramatically on that day.

More specifically, for every 100 bots’ tweets that were retweeted, about 80-90 tweets were made by humans. Furthermore, before the Referendum day, among those humans’ retweets from bots, tweets by the Leave side accounted for about 50% of retweets while only nearly 20% of retweets had pro-remain content. In the other words, there is a sign that during pre-event period, humans tended to spread the leave messages that were originally generated by bots. Similar trend is observed for the US Election sample. Before the Election Day, about 80% of retweets were in favour of Trump while only 20% of retweets were supporting Clinton.

You do have to wonder whether Brexit wasn’t something of a dry run disinformation campaign for Russian bots ahead of the US election a few months later.

The research paper, entitled Social media, sentiment and public opinions: Evidence from #Brexit and #USElection, which is authored by three data scientists from Swansea University and the University of California, Berkeley, used Twitter’s API to obtain relevant datasets of tweets to analyze.

After screening, their dataset for the EU referendum contained about 28.6M tweets, while the sample for the US presidential election contained ~181.6M tweets.

The researchers say they identified a Twitter account as Russian-related if it had Russian as its profile language but its Brexit tweets were in English.

They detected bot accounts (defined by them as Twitter users displaying ‘bot-like’ behavior) using a method that scores each account on a range of factors, such as whether it tweeted at unusual hours; its tweet volume relative to account age; and whether it repeatedly posted the same content each day.
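The paper doesn’t publish its detection code, but the scoring idea can be sketched with a toy heuristic. To be clear, the thresholds, the account structure and the three signals below are illustrative assumptions, not the study’s actual parameters:

```python
from collections import Counter

def bot_score(account):
    """Score an account on simple 'bot-like' signals (0 to 3).

    `account` is a hypothetical dict with:
      tweet_hours      - list of hours (0-23) at which tweets were posted
      tweet_count      - total number of tweets
      account_age_days - days since account creation
      tweets           - list of tweet texts
    """
    score = 0
    # Signal 1: a large share of tweets at unusual hours (here, 1am-5am)
    odd = sum(1 for h in account["tweet_hours"] if 1 <= h <= 5)
    if account["tweet_hours"] and odd / len(account["tweet_hours"]) > 0.5:
        score += 1
    # Signal 2: very high tweet volume relative to account age
    rate = account["tweet_count"] / max(account["account_age_days"], 1)
    if rate > 50:  # more than 50 tweets/day sustained over the account's life
        score += 1
    # Signal 3: repeatedly posting the same content
    counts = Counter(account["tweets"])
    if counts and counts.most_common(1)[0][1] / len(account["tweets"]) > 0.3:
        score += 1
    return score

# An account posting the same text at 3am, thousands of times a month,
# trips all three signals:
suspicious = {
    "tweet_hours": [3] * 200,
    "tweet_count": 5000,
    "account_age_days": 30,
    "tweets": ["Vote now! #Brexit"] * 200,
}
print(bot_score(suspicious))  # prints 3
```

A real classifier would weight and calibrate such signals rather than simply summing them, but the principle — flagging accounts whose posting patterns are implausible for a human — is the same.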

Around the US election, the researchers generally found a more sustained use of politically motivated bots vs around the EU referendum vote (when bot tweets peaked very close to the vote itself).

They write:

First, there is a clear difference in the volume of Russian-related tweets between Brexit sample and US Election sample. For the Referendum, the massive number of Russian-related tweets were only created few days before the voting day, reached its peak during the voting and result days then dropped immediately afterwards. In contrast, Russian-related tweets existed both before and after the Election Day. Second, during the running up to the Election, the number of bots’ Russian-related tweets dominated the ones created by humans while the difference is not significant during other times. Third, after the Election, bots’ Russian-related tweets dropped sharply before the new wave of tweets was created. These observations suggest that bots might be used for specific purposes during high-impact events.

In each data set, they found bots typically tweeting pro-Trump and pro-leave views more often than pro-Clinton and pro-remain views, respectively.

They also say they found similarities in how quickly information was disseminated around each of the two events, and in how human Twitter users interacted with bots — with human users tending to retweet bots that expressed sentiments they also supported. The researchers say this supports the view of Twitter creating networked echo chambers of opinion as users fix on and amplify only opinions that align with their own, avoiding engaging with different views.

Combine that echo chamber effect with deliberate deployment of politically motivated bot accounts and the platform can be used to enhance social divisions, they suggest.

From the paper:

These results lend supports to the echo chambers view that Twitter creates networks for individuals sharing the similar political beliefs. As the results, they tend to interact with others from the same communities and thus their beliefs are reinforced. By contrast, information from outsiders is more likely to be ignored. This, coupled by the aggressive use of Twitter bots during the high-impact events, leads to the likelihood that bots are used to provide humans with the information that closely matches their political views. Consequently, ideological polarization in social media like Twitter is enhanced. More interestingly, we observe that the influence of pro-leave bots is stronger the influence of pro-remain bots. Similarly, pro-Trump bots are more influential than pro-Clinton bots. Thus, to some degree, the use of social bots might drive the outcomes of Brexit and the US Election.

In summary, social media could indeed affect public opinions in new ways. Specifically, social bots could spread and amplify misinformation thus influence what humans think about a given issue. Moreover, social media users are more likely to believe (or even embrace) fake news or unreliable information which is in line their opinions. At the same time, these users distance from reliable information sources reporting news that contradicts their beliefs. As a result, information polarization is increased, which makes reaching consensus on important public issues more difficult.

Discussing the key implications of the research, they describe social media as “a communication platform between government and the citizenry”, and say it could act as a layer for government to gather public views to feed into policymaking.

However they also warn of the risks of “lies and manipulations” being dumped onto these platforms in a deliberate attempt to misinform the public and skew opinions and democratic outcomes — suggesting regulation to prevent abuse of bots may be necessary.

They conclude:

Recent political events (the Brexit Referendum and the US Presidential Election) have observed the use of social bots in spreading fake news and misinformation. This, coupled by the echo chambers nature of social media, might lead to the case that bots could shape public opinions in negative ways. If so, policy-makers should consider mechanisms to prevent abuse of bots in the future.

Commenting on the research in a statement, a Twitter spokesperson told us: “Twitter recognizes that the integrity of the election process itself is integral to the health of a democracy. As such, we will continue to support formal investigations by government authorities into election interference where required.”

Its general critique of external bot analysis conducted via data pulled from its API is that researchers are not privy to the full picture as the data stream does not provide visibility of its enforcement actions, nor on the settings for individual users which might be surfacing or suppressing certain content.

The company also notes that it has been adapting its automated systems to pick up suspicious patterns of behavior, and claims these systems now catch more than 3.2M suspicious accounts globally per week.

Since June 2017, it also claims it’s been able to detect an average of 130,000 accounts per day that are attempting to manipulate Trends — and says it’s taken steps to prevent that impact. (Though it’s not clear exactly what that enforcement action is.)

Since June it also says it’s suspended more than 117,000 malicious applications for abusing its API — and says the apps were collectively responsible for more than 1.5BN “low-quality tweets” this year.

It also says it has built systems to identify suspicious attempts to log in to Twitter, including signs that a login may be automated or scripted — techniques it claims now help it catch about 450,000 suspicious logins per day.

The Twitter spokesman noted a raft of other changes it says it’s been making to try to tackle negative forms of automation, including spam. Though he also flagged the point that not all bots are bad. Some can be distributing public safety information, for example.

Even so, there’s no doubt Twitter and social media giants in general remain in the political hotspot, with Twitter, Facebook and Google facing a barrage of awkward questions from US lawmakers as part of a congressional investigation probing manipulation of the 2016 US presidential election.

A UK parliamentary committee is also currently investigating the issue of fake news, and the MP leading that probe recently wrote to Facebook and Twitter to ask them to provide data about activity on their platforms around the Brexit vote.

And while it’s great that tech platforms finally appear to be waking up to the disinformation problem their technology has been enabling, in the case of these two major political events — Brexit and the 2016 US election — any action they have since taken to try to mitigate bot-fueled disinformation obviously comes too late.

Meanwhile, citizens in the US and the UK are left to live with the results of votes that appear to have been directly influenced by Russian agents using US tech tools.

Today, Ciaran Martin, the CEO of the UK’s National Cyber Security Centre (NCSC) — a branch of domestic security agency GCHQ — made public comments stating that Russian cyber operatives have attacked the UK’s media, telecommunications and energy sectors over the past year.

This follows public remarks by the UK prime minister Theresa May yesterday, who directly accused Russia’s Vladimir Putin of seeking to “weaponize information” and plant fake stories.

The NCSC is “actively engaging with international partners, industry and civil society” to tackle the threat from Russia, added Martin (via Reuters).

Asked for a view on whether governments should now be considering regulating bots if they are actively being used to drive social division, Paul Bernal, a lecturer in information technology at the University of East Anglia, suggested top down regulation may be inevitable.

“I’ve been thinking about that exact question. In the end, I think we may need to,” he told TechCrunch. “Twitter needs to find a way to label bots as bots — but that means they have to identify them first, and that’s not as easy as it seems.

“I’m wondering if you could have an ID on twitter that’s a bot some of the time and human some of the time. The troll farms get different people to operate an ID at different times — would those be covered? In the end, if Twitter doesn’t find a solution themselves, I suspect regulation will happen anyway.”

Featured Image: nevodka / iStock Editorial / Getty Images Plus

Facebook says Russia did try to meddle in Brexit vote

BuzzFeed has obtained a statement from Facebook in which the tech giant admits, for the first time, that some Russia-linked accounts may have used its platform to try to interfere in the UK’s European Union referendum vote in June 2016.

Which means Russian agents weren’t just using Facebook to meddle in the 2016 US presidential election, and in other recent elections in the West — such as those in France and Germany.

Elections are of course a huge deal but the result can at least be reversed at the ballot box in time. The in/out Brexit referendum in the UK was no such standard vote. And there is no standard process for reversing the result.

So if Kremlin agents also used Facebook to influence people in the UK to vote for Brexit that would be hugely significant — and further evidence that social media’s connective tissue can be used to drive and inflame societal divisions.

“To date, we have not observed that the known, coordinated clusters in Russia engaged in significant coordination of ad buys or political misinformation targeting the Brexit vote,” a Facebook spokesperson told BuzzFeed in a carefully worded statement.

Which raises the question: how much Russian Facebook activity did target the Brexit vote? We asked Facebook how many socially divisive Russian-backed ads ran before Brexit. Facebook declined to comment.

While its claim not to have found “significant coordination” of Russian activity ahead of the Brexit vote might sound like ‘case closed’ on the EU referendum front, the company has consistently sought to play down the impact of Facebook-distributed Russian misinformation — with CEO Mark Zuckerberg initially describing it as a “pretty crazy idea” that fake news could have influenced voters in the US election.

Nearly half a year later, after conducting an internal investigation, Facebook conceded there had been a Russian disinformation campaign during the US election — but claimed the reach of the operation was “statistically very small” in comparison with overall political activity and engagement.

Then in September another tidbit came out, when Facebook said it believed potential pro-Kremlin entities could have spent up to $150,000 on its platform to buy 3,000 ads between 2015 and 2017. It said the ads were tied to 470 accounts — some linked to a known Russian troll farm called the Internet Research Agency.

It also agreed to share the Russian-backed US political ads with congressional investigators looking into US election-related disinformation. Though it rejected calls to make all the ads public.

Finally, at the end of last month, about a year after its CEO’s denial of the potency of political disinformation on his mega platform, Facebook admitted Russian-backed content could have reached as many as 126 million people in the US.

It now estimates the number of pieces of divisive content at 80,000, after being asked by congressional investigators to report not just direct Russian-bought ads but organic posts, images, events and more, which can also of course become viral vehicles of disinformation on Facebook’s algorithmically driven platform.

So there’s a reason to be cautious about accepting at face value the company’s claim now that Russian Brexit meddling existed on its platform but was not significant.

Giving a speech yesterday, the UK prime minister set out in no uncertain tones her conviction that Russia has been using social media platforms to try to interfere with Western democracies, directly accusing Vladimir Putin of seeking to sow social division by “weaponizing information” and planting fake stories.

Multiple Twitter accounts previously linked to Russia’s Internet Research Agency have also been identified as engaging in Brexit-related tweeting, according to the Times — linking Russian-backed election meddling troll activity to the UK’s EU referendum vote too.

On Friday, Wired detailed some of the Russian-backed Twitter accounts and 2016 Brexit-related tweets — including tweets apparently seeking to conflate Islam with terrorism, and others aiming to stir up anti-immigrant sentiment such as by spreading racial slurs.

We asked Twitter how many accounts it has linked to pro-Kremlin entities that were also tweeting about Brexit ahead of the referendum vote. At the time of writing the company had not responded.

Meanwhile Russia continues to amuse itself with a spot of public Twitter trolling of the UK PM…

A UK parliamentary committee which is investigating fake news has previously requested data from Twitter and Facebook on Russian accounts which posted about the EU referendum.

Commenting on the cache of Russian tweets now linked to Brexit, Damian Collins, the MP leading the inquiry, told Wired: “I think it shows that Russian-controlled accounts have been politically active in the UK as well as America. This could just be the tip of the iceberg because we’ve only really just started looking and doing a proper detailed study of what accounts linked to Russian organisations have been doing politically.”

The UK’s Brexit vote was both a shock result and a close one, with 51.9 per cent voting to leave the EU vs 48.1 per cent voting remain.

It caused huge political upheaval — with the then UK Prime Minister resigning immediately. There was also a major drop in the value of pound sterling. (The pound remains down around 11 per cent vs the dollar and 15 per cent vs the euro.)

Meanwhile Brexit-based uncertainty continues to impact almost every aspect of day-to-day political activity in the UK, given the scale of the task facing ministers in trying to unpick more than 40 years of EU agreements — clearly deflecting the government from pursuing a wider policy agenda, as ministers’ firefighting focus stays fixed on trying to enact Brexit without causing even greater disruption to UK businesses and citizens.

Scores of European ministers and civil servants are also having to expend further resources to manage Brexit vis-a-vis their own sets of priorities and to shape whatever comes after.

The incentive for Russia to have sought to run a disinformation campaign to encourage disunity in the European Union by encouraging a vote for Brexit is clear: Instability weakens your opponents.

Whether Putin’s agents were merely dabbling with Brexit disinformation as they geared up for a more major disinformation push focused on the US election remains to be seen. But given the closeness of the Brexit vote — and the long term disruption Brexit will undoubtedly cause — then any Russia-backed interference deserves to be quantified in full.

So we’re all looking at you, Facebook.

Featured Image: Evgeny Gromov/Getty Images

Neos launches IoT powered home insurance UK-wide

What do you get if you combine the Internet of Things with the business of home insurance? UK startup Neos is hoping the answer is prevention rather than (just) payouts.

Its home insurance product is intended to lean on sensor tech and wireless connectivity to reduce home-related risks — like fire and water damage, break-ins and burglary — by having customers install a range of largely third-party Internet-connected sensors inside their home, included in the price of the insurance product. So it’s a smart home via the insured backdoor, as it were.

Customers also get an app to manage the various sensors so they can monitor and even control some of the connected components, which can include motion sensors, cameras and smoke detectors.

The Neos app is also designed to alert users to potentially problematic events — like the front door being left open or water starting to leak under their kitchen sink — the associated risk of which a little timely intervention might well mitigate.

It sees additional revenue opportunity there too — and is aiming to connect customers with repair services via its platform. So the service could help a customer who’s away on holiday arrange for a plumber to come in and fix their leaky sink, for example (there are no smart locks currently involved in the equation, though — Neos customers can name trusted keyholders to be contacted in their absence).

“The vision really is about moving insurance from a traditional claims, payout type solution… to one that’s much more preventative, and technology’s really the enabler for that,” says co-founder Matt Poll. “We also think that customers get quite a raw deal from their insurance company… for being a really good customer and not claiming… And no value.

“So what we’re trying to do is to provide value to customers throughout the term of their policy — allowing them to monitor their own homes, using our cameras and the devices that we give them. If there is an issue, they’ll get alerted. Most importantly they or us through our monitoring center and assistance service can put the things right… In that sense both the customer and us benefit if we’re successful.”

On the insurance cover front Poll claims there’s no new responsibilities being placed on customers’ shoulders — despite all the sensor kit that’s installed as part of the package. “There’s no responsibility placed on the customer. We’re really clear about that,” he tells TechCrunch. “Customers do ask this question — oh what if I don’t arm the alarm, does that mean I’m not covered? And our answer is simply of course you’re covered.”

The startup was founded 18 months ago by Poll, an ex-insurance guy, together with a more technical co-founder. The team market-tested the proposition last year in and around London, partnering with Hiscox on the insurance product for that trial. They’re now launching their own-brand insurance offering nationwide.

Neos is actually offering a range of home insurance products, including a combined contents plus buildings insurance offering (or either/or), across three pricing tiers — aiming to support different levels of coverage and different types of customers, such as flat vs house dwellers, for example, or homeowners vs tenants.

While it’s generally aiming to be tech agnostic when it comes to which smart home sensors can be used — supporting a range of third party devices — Neos has developed its own smart water valve, for example, as Poll says it couldn’t find an appropriate existing bit of IoT kit in the market for that.

“It uses machine learning to monitor an individual’s water signature within their property over a period of a couple of weeks and then we can identify from that if there’s any leaks — small or large — and most importantly if a leak does arrive the customer or our monitoring center can turn the water off remotely,” he notes.
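Poll doesn't detail the model, but the basic idea — learn a per-household usage baseline over a couple of weeks, then flag flow that falls well outside it — can be illustrated with a minimal anomaly-detection sketch. All names and thresholds below are illustrative assumptions, not Neos' actual implementation:

```python
from statistics import mean, stdev

def learn_signature(flow_samples):
    """Learn a simple per-hour baseline from ~2 weeks of flow readings.

    flow_samples: dict mapping hour-of-day (0-23) to a list of
    litres-per-minute readings observed during that hour.
    Returns {hour: (mean, stdev)} for hours with enough data.
    """
    return {
        hour: (mean(readings), stdev(readings))
        for hour, readings in flow_samples.items()
        if len(readings) >= 2
    }

def is_possible_leak(signature, hour, flow_lpm, z_threshold=3.0):
    """Flag a reading that sits far outside the learned pattern for that hour."""
    if hour not in signature:
        return False  # no baseline yet for this hour
    mu, sigma = signature[hour]
    if sigma == 0:
        return flow_lpm > mu
    return (flow_lpm - mu) / sigma > z_threshold
```

A real system would also need to distinguish a slow continuous drip from legitimate sustained use (filling a bath, say), which is presumably where the couple of weeks of training data come in.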

It’s also built its own hub to control the firmware on the third party devices its platform is integrating with. “We want to put ourselves out there to give customers the best solution for the job and move as the market moves,” says Poll on Neos’ overall philosophy towards hardware.

Despite all this additional kit to be installed in customers’ homes, Poll bills the insurance products as competitively priced (and positioned) vs more traditional insurance offerings. Neos’ prices vary from “approximately £15 to £50 per month”, which it says includes “all the necessary hardware, 24/7 monitoring and assistance plus the comprehensive insurance cover”.

“We’ve got some good early traction and I think the price point that we’ve come in at is attractive, and the value proposition is there,” says Poll, noting that the product will be on price comparison sites “by the end of this month — at the very latest”, as well as being offered through property website Zoopla, which is a distribution partner (and investor) in Neos.

He also says the insurance quote process has been radically simplified by Neos drawing on a range of publicly available data, so that potential customers don't have to answer a large number of questions just to get a quote.

“We can actually give customers a full quote from just their postcode and their address,” he says. “We use 261 different data sources… One of our partners and early investors is Zoopla. They have a lot of data that they provide us. We also use data from Landmark and Land Registry — local authority data.

“Because all this data’s publicly available. We don’t ultimately need to ask how many bedrooms or bathrooms you’ve got — in most cases we already know that data. Actually in most cases we know the square footage of your property which is a much more accurate predictor of risk anyway.”

Another strand of the go-to-market approach is working with existing insurance brands to white label Neos' offering — setting it up to scale more quickly into markets (and regulatory regimes) outside the UK.

“We’re just about to launch an Aviva-backed solution,” says Poll. “A lot of the big insurers are looking in this space but haven’t done anything… So we’ve had a lot of interest outside of just our direct Neos brand from larger insurers based here in the UK, Europe and also in the US.”

He says Neos is also hopeful of signing a “large scale partner in the US” — one of the top five home insurance companies — which would add a second strand to its white-labeling/enterprise bow if it nails that deal down.

“Markets like the US… are very different from a regulation point of view and cost of entry for a small business like us to enter, so that model makes sense. But we’re very much — certainly now and we’ll always be — focused on the Neos direct to consumer brand,” he adds.

Poll says he’s hoping for a minimum of “tens of thousands” of customers within a year’s time for Neos’ b2c play — and “ideally” significant growth above that. “If you add in the b2b play as well in terms of customers actually utilizing our platform I think the potential is significantly higher than that,” he adds.

The startup has previously raised £5m in Series A funding led by Aviva Ventures and with BBC sporting personality Gary Lineker also investing. As well as Zoopla, another strategic partnership is with Munich Re, which has also invested.

Interesting takeaways from its beta period include that customers were keen to have help installing all the sensor kit (Neos offers a fixed-price installation service for users who don't want to fit the kit themselves using its instructions), and that security concerns appeared to be a bigger smart home driver for the product than risks such as water leaks — so Neos has tweaked some of the sensor bundles it's now offering.

Poll also says customer feedback from the trial pushed Neos to fix premiums for the first three years (assuming a customer makes no claims), to reassure potential customers that it isn't seeking to use smart home hardware to lock them in to its products and then quickly inflate premiums.

“It’s interesting how customer perceptions are,” he says, arguing there’s “a mistrust of the insurance industry as a whole” — which is something else Neos is hoping can be fixed with a little IoT-enabled preventative visibility.