All posts in “United Kingdom”

Zuckerberg again snubs UK parliament over call to testify

Facebook has once again eschewed a direct request from the UK parliament for its CEO, Mark Zuckerberg, to testify to a committee investigating online disinformation — without rustling up so much as a fig-leaf-sized excuse to explain why the founder of one of the world’s most used technology platforms can’t squeeze a video call into his busy schedule and spare UK politicians’ blushes.

Which tells you pretty much all you need to know about where the balance of power lies in the global game of (essentially unregulated) U.S. tech platform giants vs (essentially powerless) foreign political jurisdictions.

At the end of an 18-page letter sent to the DCMS committee yesterday — in which Facebook’s UK head of public policy, Rebecca Stimson, provides a point-by-point response to the almost 40 questions the committee said had not been adequately addressed by CTO Mike Schroepfer in a prior hearing last month — Facebook professes itself disappointed that the CTO’s grilling was not deemed sufficient by the committee.

“While Mark Zuckerberg has no plans to meet with the Committee or travel to the UK at the present time, we fully recognize the seriousness of these issues and remain committed to providing any additional information required for their enquiry into fake news,” she adds.

So, in other words, Facebook has served up another big fat ‘no’ to the renewed request for Zuckerberg to testify — after also denying a request for him to appear before it in March, when it instead sent Schroepfer, who claimed to be unable to answer many of MPs’ questions.

At the start of this month committee chair Damian Collins wrote to Facebook saying he hoped Zuckerberg would voluntarily agree to answer questions. But the MP also took the unprecedented step of warning that if the Facebook founder did not do so the committee would issue a formal summons for him to appear the next time Zuckerberg sets foot in the UK.

Hence, presumably, that addendum line in Stimson’s letter — saying the Facebook CEO has no plans to travel to the UK “at the present time”.

The committee of course has zero powers to compel testimony from a non-UK national who is resident outside the UK — even though the platform he controls does plenty of business within the UK.

Last month Schroepfer faced five hours of close and at times angry questions from the committee, with members accusing his employer of lacking integrity and displaying a pattern of intentionally deceptive behavior.

The committee has been specifically asking Facebook to provide it with information related to the UK’s 2016 EU referendum for months — and complaining the company has narrowly interpreted its requests to sidestep a thorough investigation.

More recently research carried out by the Tow Center unearthed Russian-bought UK targeted immigration ads relevant to the Brexit referendum among a cache Facebook had provided to Congress — which the company had not disclosed to the UK committee.

At the end of the CTO’s evidence session last month the committee expressed immediate dissatisfaction — claiming there were almost 40 outstanding questions the CTO had failed to answer, and calling again for Zuckerberg to testify.

It possibly overplayed its hand slightly, though, giving Facebook the chance to serve up a detailed (if not entirely comprehensive) point-by-point reply now — and use that to sidestep the latest request for its CEO to testify.

Still, Collins expressed fresh dissatisfaction today, saying Facebook’s answers “do not fully answer each point with sufficient detail or data evidence”, and adding the committee would be writing to the company in the coming days to ask it to address “significant gaps” in its answers. So this game of political question and self-serving answer is set to continue.

In a statement, Collins also criticized Facebook’s response at length, writing:

It is disappointing that a company with the resources of Facebook chooses not to provide a sufficient level of detail and transparency on various points including on Cambridge Analytica, dark ads, Facebook Connect, the amount spent by Russia on UK ads on the platform, data collection across the web, budgets for investigations, and that shows general discrepancies between Schroepfer and Zuckerberg’s respective testimonies. Given that these were follow up questions to questions Mr Schroepfer previously failed to answer, we expected both detail and data, and in a number of cases got excuses.

If Mark Zuckerberg truly recognises the ‘seriousness’ of these issues as they say they do, we would expect that he would want to appear in front of the Committee and answer questions that are of concern not only to Parliament, but Facebook’s tens of millions of users in this country. Although Facebook says Mr Zuckerberg has no plans to travel to the UK, we would also be open to taking his evidence by video link, if that would be the only way to do this during the period of our inquiry.

For too long these companies have gone unchallenged in their business practices, and only under public pressure from this Committee and others have they begun to fully cooperate with our requests. We plan to write to Facebook in the coming days with further follow up questions.

In terms of the answers Facebook provides to the committee in its letter (plus some supporting documents related to the Cambridge Analytica data misuse scandal) there’s certainly plenty of padding on show. And deploying self-serving PR to fuzz the signal is a strategy Facebook has mastered in recent more challenging political times (just look at its ‘Hard Questions’ series to see this tactic at work).

At times Facebook’s response to political attacks certainly looks like an attempt to drown out critical points by deploying self-serving but selective data points — so, for instance, it talks at length in the letter about the work it’s doing in Myanmar, where its platform has been accused by the UN of accelerating ethnic violence as a result of systematic content moderation failures, but declines to state how many fake accounts it’s identified and removed in the market; nor will it disclose how much revenue it generates from the market.

Asked by the committee what the average time to respond to content flagged for review in the region is, Facebook also responds in the letter with the vaguest of generalized global data points — saying: “The vast majority of the content reported to us is reviewed within 24 hours.” Nor does it specify whether that global average refers to human review — or just an AI parsing the content.

Another of the committee’s questions is: ‘Who was the person at Facebook responsible for the decision not to tell users affected in 2015 by the Cambridge Analytica data misuse scandal?’ On this Facebook provides three full paragraphs of response but does not provide a direct answer specifying who decided not to tell users at that point — so either the company is concealing the identity of the person responsible or there simply was no one in charge of that kind of consideration at that time because user privacy was so low a priority for the company that it had no responsibility structures in place to enforce it.

Another question — ‘who at Facebook heads up the investigation into Cambridge Analytica?’ — does get a straight and short response, with Facebook saying its legal team, led by general counsel Colin Stretch, is the lead there.

It also claims that Zuckerberg himself only became aware of the allegations that Cambridge Analytica may not have deleted Facebook user data in March 2018, following press reports.

Asked what data it holds on dark ads, Facebook provides some information but is vague here too — saying: “In general, Facebook maintains for paid advertisers data such as name, address and banking details”, and: “We also maintain information about advertiser’s accounts on the Facebook platform and information about their ad campaigns (most advertising content, run dates, spend, etc).”

It also confirms it can retain the aforementioned data even if a page has been deleted — responding to another of the committee’s questions about how the company would be able to audit advertisers who set up pages to target political ads during a campaign and immediately deleted their presence once the election was over.

Though, given it says it only “generally” retains data, we must assume there are instances where it might not retain data — and the purveyors of dark ads are then essentially untraceable via its platform, unless it puts in place a more robust and comprehensive advertiser audit framework.

The committee also asked Facebook’s CTO whether it retains money from fraudulent ads running on its platform, such as the ads at the center of a defamation lawsuit by consumer finance personality Martin Lewis. On this Facebook says it does not “generally” return money to an advertiser when it discovers a policy violation — claiming this “would seem perverse” given the attempt to deceive users. Instead it says it makes “investments in areas to improve security on Facebook and beyond”.

Asked by the committee for copies of the Brexit ads that a Cambridge Analytica-linked data company, AIQ, ran on its platform, Facebook says it’s in the process of compiling the content and notifying the advertisers that the committee wants to see the content.

Though it does break out AIQ ad spending related to different vote leave campaigns, and says the individual campaigns would have had to grant the Canadian company admin access to their pages in order for AIQ to run ads on their behalf.

The full letter containing all Facebook’s responses can be read here.

Brexit data transfer gaps a risk for UK startups, MPs told

The uncertainty facing digital businesses as a result of Brexit was front and center during a committee session in the UK parliament today, with experts including the UK’s information commissioner responding to MPs’ questions about how and even whether data will continue to flow between the UK and the European Union once the country has departed the bloc — in just under a year’s time, per the current schedule.

The risks for UK startups vs tech giants were also flagged, with concerns voiced that larger businesses are better placed to weather Brexit-based uncertainty thanks to greater resources at their disposal to plug data transfer gaps resulting from the political upheaval.

Information commissioner Elizabeth Denham emphasized the overriding importance of the UK data protection bill being passed. Though that’s really just the baby step where the Brexit negotiations are concerned.

Parliamentarians have another vote on the bill this afternoon, during its third reading, and the legislative timetable is tight, given that the pan-EU General Data Protection Regulation (GDPR) takes direct effect on May 25 — and many provisions in the UK bill are intended to bring domestic law into line with that regulation, and complete implementation ahead of the EU deadline.

Despite the UK referendum vote to pull the country out of the EU, the government has committed to complying with GDPR — which ministers hope will lay a strong foundation for it to secure a future agreement with the EU that allows data to continue flowing, as is critical for business. Although what exactly that future data regime might be remains to be seen — and various scenarios were discussed during today’s hearing — hence there’s further operational uncertainty for businesses in the years ahead.

“Getting the data policy right is of critical importance both on the commercial side but also on the security and law enforcement side,” said Denham. “We need data to continue to flow and if we’re not part of the unified framework in the EU then we have to make sure that we’re focused and we’re robust about putting in place measures to ensure that data continues to flow appropriately, that it’s safeguarded and also that there is business certainty in advance of our exit from the EU.

“Data underpins everything that we do and it’s critically important.”

Another witness to the committee, James Mullock, a partner at law firm Bird & Bird, warned that the Brexit-shaped threat to UK-EU data flows could result in a situation akin to what happened after the long-standing Safe Harbor arrangement between the EU and the US was struck down in 2015 — leaving thousands of companies scrambling to put in place alternative data transfer mechanisms.

“If we have anything like that it would be extremely disruptive,” warned Mullock. “And it will, I think, be extremely off-putting in terms of businesses looking at where they will headquarter themselves in Europe. And therefore the long term prospects of attracting businesses from many of the sectors that this country supports so well.”

“Essentially what you’re doing is you’re putting the burden on business to find a legal agreement or a legal mechanism to agree data protection standards on an overseas recipient so all UK businesses that receive data from Europe will be having to sign these agreements or put in place these mechanisms to receive data from the European Union which is obviously one of our very major senders of data to this country,” he added of the alternative legal mechanisms fall-back scenario.

Another witness, Giles Derrington, head of Brexit policy for UK technology advocacy organization, TechUK, explained how the collapse of Safe Harbor had saddled businesses with major amounts of bureaucracy — and went on to suggest that a similar scenario befalling the UK as a result of Brexit could put domestic startups at a big disadvantage vs tech giants.

“We had a member company who had to put in place two million Standard Contractual Clauses over the space of a month or so [after Safe Harbor was struck down],” he told the committee. “The amount of cost, time, effort that took was very, very significant. That’s for a very large company.

“The other side of this is the alternatives are highly exclusionary — or could be highly exclusionary to smaller businesses. If you look at India for example, who have been trying to get an adequacy agreement with the EU for about ten years, what you’ve actually found now is a gap between those large multinationals, who can put in place binding corporate rules, standard contractual clauses, have the kind of capital to be able to do that — and it gives them an access to the European market which frankly most smaller businesses don’t have from India.

“We obviously wouldn’t want to see that in a UK tech sector which is an awful lot of startups, scale-ups, and is a key part of the ecosystem which makes the UK a tech hub within Europe.”

Denham made a similar point. “Binding corporate rules… might work for multinational companies [as an alternative data transfer mechanism] that have the ability to invest in that process,” she noted. “Codes of conduct and certification are other transfer mechanisms that could be used but there are very few codes of practice and certification mechanisms in place at this time. So, although that could be a future transfer mechanism… we don’t have codes and certifications that have been approved by authorities at this time.”

“I think it would be easier for multinational companies and large companies, rather than small businesses and certainly microbusinesses, that make up the lion’s share of business in the UK, especially in tech,” she added of the fall-back scenarios.

Giving another example of the scale of the potential bureaucracy nightmare, Stephen Hurley, head of Brexit planning and policy for UK ISP British Telecom, told the committee it has more than 18,000 suppliers. “If we were to put in place Standard Contractual Clauses it would be a subset of those suppliers but we’d have to identify where the flows of data would be coming from — in particular from the EU to the UK — and put in place those contractual clauses,” he said.

“The other problem with the contractual clauses is they’re a set form, they’re a precedent form that the Commission issues. And again that isn’t necessarily designed to deal with the modern ways of doing business — the way flows of data occurs in practice. So it’s quite a cumbersome process. And… [there’s] uncertainty as well, given they are currently under challenge before the European courts, a lot of companies now are already doing a sort of ‘belt and braces’ where even if you rely on Privacy Shield you’ll also put in place an alternative transfer mechanism to allow you to have a fall back in case one gets temporarily removed.”

A better post-Brexit scenario than every UK business having to do the bureaucratic and legal leg-work themselves would be the UK government securing a new data flow arrangement with the EU. Not least because, as Hurley mentioned, Standard Contractual Clauses are subject to a legal challenge, with legal question marks now extended to Privacy Shield too.

But what shape any such future UK-EU data transfer arrangement could take remains tbc.

The panel of witnesses agreed that personal data flows would be very unlikely to be housed within any future trade treaty between the UK and the EU. Rather, data would need to live within a separate treaty or bespoke agreement — if indeed such a deal can be achieved.

Another possibility is for the UK to receive an adequacy decision from the EC — such as the Commission has granted to other third countries (like the US). But there was consensus on the panel that some form of bespoke data arrangement would be a superior outcome — for legal reasons but also for reciprocity and more.

Mullock’s view is a treaty would be preferable as it would be at lesser risk of a legal challenge. “I’m saying a treaty is preferable to a decision but we should take what we can get,” he said. “But a treaty is the ultimate standard to aim for.”

Denham agreed, underlining how an adequacy decision would be much more limiting. “I would say that a bespoke agreement or a treaty is preferable because that implies mutual recognition of each of our data protection frameworks,” she said. “It contains obligations on both sides, it would contain dispute mechanisms. If we look at an adequacy decision by the Commission that is a one-way decision judging the standard of UK law and the framework of UK law to be adequate according to the Commission and according to the Council. So an agreement would be preferable but it would have to be a standalone treaty or a standalone agreement that’s about data — and not integrate it into a trade agreement because of the fundamental rights element of data protection.”

Such a bespoke arrangement could also offer a route for the UK to negotiate and retain some role for her office within EU data protection regulation after Brexit.

Because as it stands, with the UK set to exit the EU next year — and even if an adequacy decision was secured — the ICO will lose its seat at the table at a time when EU privacy laws are setting the new global standard, thanks to GDPR.

“Unless a role for the ICO was negotiated through a bespoke agreement or a treaty there’s no way in law at present that we could participate in the one-stop shop [element of GDPR, which allows for EU DPAs to co-ordinate regulatory actions] — which would bring huge advantages to both sides and also to British businesses,” said Denham.

“At this time when the GDPR is in its infancy, participating in shaping and interpreting the law I think is really important. And the group of regulators that sit around the table at the EU are the most influential blocs of regulators — and if we’re outside of that group and we’re an observer we’re not going to have the kind of effect that we need to have with big tech companies. Because that’s all going to be decided by that group of regulators.”

“The European Data Protection Board will set the weather when it comes to standards for artificial intelligence, for technologies, for regulating big tech. So we will be a less influential regulator, we will continue to regulate the law and protect UK citizens as we do now, but we won’t be at the leading edge of interpreting the GDPR — and we won’t be bringing British values to that table if we’re not at the table,” she added.

Hurley also made the point that if the ICO is not inside the GDPR one-stop shop mechanism then UK companies will have to choose another data protection agency within the EU to act as their lead regulator — describing this as “again another burden which we want to avoid”.

The panel was asked about opportunities for domestic divergence on elements of GDPR once the UK is outside the EU. But no one saw much advantage to be eked out outside a regulatory regime that is now responsible for the de facto global standard for data protection.

“GDPR is by no means perfect and there are a number of issues that we have with it. Having said that because GDPR has global reach it is now effectively being seen as we have to comply with this at an international level by a number of our largest members, who are rolling it out worldwide — not just Europe-wide — so the opportunities for divergence are quite limited,” said Derrington. “Particularly actually in areas like AI. AI requires massive amounts of data sets. So you can’t do that just from a UK only data-set of 60 million people if you took everyone. You need more data than that.

“If you were to use European data, which most of them would, then that will require you to comply with GDPR. So actually even if you could do things which would make it easier for some of the AI processes to happen by doing so you’d be closing off your access to the data-sets — and so most of the companies I’ve spoken to… see GDPR as that’s what we’re going to have to comply with. We’d much rather it be one rule… and to be able to maintain access to [EU] data-sets rather than just applying dual standards when they’re going to have to meet GDPR anyway.”

He also noted that about two-thirds of TechUK members are small and medium sized businesses, adding: “A small business working in AI still needs massive amounts of data.

“From a tech sector perspective, considering where data protection sits in the public consciousness now, [we] actually don’t see there being much opportunity to change GDPR. I don’t think that’s necessarily where the centre of gravity amongst the public is — if you look at the data protection bill, as it went through both houses, most of the amendments to the bill were to go further, to strengthen data protection. So actually we don’t necessarily see this idea that we will significantly walk back GDPR. And bear in mind that any company doing any work with the EU would have to comply with GDPR anyway.”

The possibility of legal challenges to any future UK-EU data arrangement was also discussed during the hearing, with Denham saying that scrutiny of the UK’s surveillance regime once it is outside the EU is inevitable — though she suggested the government will be able to win over critics if it can fully articulate its oversight regime.

“Whether the UK proceeds with an adequacy assessment or whether we go down the road of looking at a bespoke agreement or a treaty we know, as we’ve seen with the Privacy Shield, that there will be scrutiny of our intelligence services and the collection, use and retention of data. So we can expect that,” she said, before arguing the UK has a “good story” to tell on that front — having recently reworked its domestic surveillance framework and included accepting the need to make amendments to the law following legal challenges.

“Accountability, transparency and oversight of our intelligence service needs to be explained and discussed to our [EU] colleagues but there is no doubt that it will come under scrutiny — and my office was part of the most recent assessment of the Privacy Shield. And looking at the US regime. So we’re well aware of the kind of questions that are going to be asked — including our arrangement with the Five Eyes, so we have to be ready for that,” she added.

Kaptivo looks to digitally transform the lowly whiteboard

At Kaptivo, a company that’s bringing high-tech image recognition, motion capture and natural language processing technologies to the lowly whiteboard, executives are hoping that the second time is the charm.

The Cambridge, U.K.- and San Mateo, Calif.-based company began life as Light Blue Optics, and raised $50 million in financing after its launch in 2004. Light Blue Optics worked on products like Kaptivo’s whiteboard technology and an interactive touch and pen technology, which was sold earlier this year to Promethean, a global education technology solutions company.

With a leaner product line and a more focused approach to the market, Kaptivo emerged in 2016 from Light Blue Optics’ shadow and began selling its products in earnest.

Founding chief executive Nic Lawrence (the previous head of Light Blue Optics) even managed to bring in investors from his old startup to Kaptivo, raising $6 million in fresh capital from Draper Esprit (a previous backer), Benhamou Global Ventures and Generation Ventures.

“The common theme has been user interfaces,” Lawrence said. “We saw the need for a new product category. We sold off parts of our business and pushed all our money into Kaptivo.”

What began as a technology licensing business changed course when Lawrence saw a massive market opening up in technologies that could transform the humble whiteboard into a powerful tool for digital business intelligence — with the application of some off-the-shelf technology and Kaptivo’s proprietary software.

Kaptivo’s technology does more than just create a video of a conference room, Lawrence says.

“In real time we’re removing the people from the scene and enhancing the content written on the board,” he said.

Users can scribble on a whiteboard and Kaptivo’s software will use optical character recognition to differentiate between text and images. The company’s subscription service will even translate text into other languages.
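Kaptivo hasn’t detailed how its pipeline actually works, but the person-removal step Lawrence describes can be approximated with a classic computer-vision trick: a person moving in front of the board occludes any given pixel in only a minority of frames, so the per-pixel temporal median of a short frame history recovers the static board behind them. Here’s a minimal sketch of that idea — the function names and the simple contrast boost are illustrative assumptions, not Kaptivo’s actual code:

```python
import numpy as np

def remove_people(frames):
    # A moving person blocks each pixel in only a few of the frames, so
    # the per-pixel median across a short frame history recovers the
    # static whiteboard behind them.
    return np.median(np.stack(frames), axis=0)

def enhance_strokes(board, strength=2.0):
    # Treat the brightest value as the board background and amplify the
    # gap between background and ink, so faint pen strokes become
    # high-contrast while the blank board stays white.
    background = board.max()
    boosted = background - (background - board) * strength
    return np.clip(boosted, 0, 255)
```

The median works here where a simple average would not: an average smears a ghost of the person across the image, while the median discards those outlier frames entirely, provided the person keeps moving.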

The company sells its basic product with a three-year cloud subscription for $999. That’s much lower than the thousands of dollars a high-end smart conferencing system would cost, according to Lawrence. The hardware alone is $699, and a one-year subscription to its cloud services sells for $120, Lawrence said.

Kaptivo has sold more than 2,000 devices globally already and has secured major OEM partners like HP, according to a statement. Kaptivo customers include BlueJeans, Atlassian and Deloitte, as well as educational institutions including George Washington University, Stanford University and Florida Institute of Technology.

The product is integrated with Slack, Trello and BlueJeans video conferencing, Lawrence said. In the first quarter of 2018 alone, the company sold about 5,000 units.

The vision is “to augment every existing whiteboard,” Lawrence said. “You can bring [the whiteboard] into the 21st century with one of these. Workers can use their full visual creativity as part of a remote meeting.”

Massage-on-demand company, Soothe, raises $31 million

The massage-on-demand service Soothe seems to be rubbing investors the right way with the close of a new $31 million round of funding.

The Series C round from late-stage and growth capital investment firm The Riverside Company caps a busy first quarter for the massage service. It also relocated from Los Angeles to Las Vegas; named a new chief executive; and announced new geographies where its massage booking platform is now available.

As part of the new round, chief executive and founder Merlin Kauffman is stepping down from the role and assuming the mantle of executive chairman. Current chief financial officer Simon Heyrick is stepping into the chief executive role.

The former CFO of MarketShare, Heyrick has helped the company expand to more than 11,000 massage therapists in its network.

The company said the new round would help it keep massage therapists in its network, with pay that can be up to three times more than those therapists would make in their local markets.

Beyond the new financing and a new boss, Soothe also is heading to new markets, launching services in Manchester, U.K.; Australia’s Gold Coast, Pittsburgh and Hartford, Conn. (some of those places are not like the others).

Soothe isn’t the only player in the massage marketplace. New York-based Zeel also has an offering for folks who want to book massages on the fly. Zeel claims a geographic reach of 85 U.S. cities, while Soothe claims roughly 60 cities worldwide.

Facebook’s dark ads problem is systemic

Facebook’s admission to the UK parliament this week that it had unearthed unquantified thousands of dark fake ads after investigating fakes bearing the face and name of well-known consumer advice personality, Martin Lewis, underscores the massive challenge for its platform on this front. Lewis is suing the company for defamation over its failure to stop bogus ads besmirching his reputation with their associated scams.

Lewis decided to file his campaigning lawsuit after reporting 50 fake ads himself, having been alerted to the scale of the problem by consumers contacting him to ask if the ads were genuine or not. But the revelation that there were in fact associated “thousands” of fake ads being run on Facebook as a clickdriver for fraud shows the company needs to change its entire system, he has now argued.

In a response statement after Facebook’s CTO Mike Schroepfer revealed the new data-point to the DCMS committee, Lewis wrote: “It is creepy to hear that there have been 1,000s of adverts. This makes a farce of Facebook’s suggestion earlier this week that to get it to take down fake ads I have to report them to it.”

“Facebook allows advertisers to use what is called ‘dark ads’. This means they are targeted only at set individuals and are not shown in a time line. That means I have no way of knowing about them. I never get to hear about them. So how on earth could I report them? It’s not my job to police Facebook. It is Facebook’s job — it is the one being paid to publish scams.”

As Schroepfer told it to the committee, Facebook had removed the additional “thousands” of ads “proactively” — but as Lewis points out that action is essentially irrelevant given the problem is systemic. “A one off cleansing, only of ads with my name in, isn’t good enough. It needs to change its whole system,” he wrote.

In a statement on the case, a Facebook spokesperson told us: “We have also offered to meet Martin Lewis in person to discuss the issues he’s experienced, explain the actions we have taken already and discuss how we could help stop more bad ads from being placed.”

The committee raised various ‘dark ads’-related issues with Schroepfer — asking how, as with the Lewis example, a person could complain about an advert they literally can’t see?

The Facebook CTO avoided a direct answer but essentially his reply boiled down to: People can’t do anything about this right now; they have to wait until June when Facebook will be rolling out the ad transparency measures it trailed earlier this month — then he claimed: “You will basically be able to see every running ad on the platform.”

But there’s a very big difference between technically being able to see every ad running on the platform — and literally being able to see every ad running on the platform. (And, well, pity the pair of eyeballs that were condemned to that Dantean fate… )

In its PR about the new tools Facebook says a new feature — called “view ads” — will let users see the ads a Facebook Page is running, even if that Page’s ads haven’t appeared in an individual’s News Feed. So that’s one minor concession. However, while ‘view ads’ will apply to every advertiser Page on Facebook, a Facebook user will still have to know about the Page, navigate to it and click to ‘view ads’.

What Facebook is not launching is a public, searchable archive of all ads on its platform. It’s only doing that for a sub-set of ads — specifically those labeled “Political Ad”.

Clearly the Martin Lewis fakes wouldn’t fit into that category. So Lewis won’t be able to run searches against his name or face in future to try to identify new dark fake Facebook ads that are trying to trick consumers into scams by misappropriating his brand. Instead, he’d have to employ a massive team of people to click “view ads” on every advertiser Page on Facebook — and do so continuously, so long as his brand lasts — to try to stay ahead of the scammers.

So unless Facebook radically expands the ad transparency tools it has announced thus far it’s really not offering any kind of fix for the dark fake ads problem at all. Not for Lewis. Nor indeed for any other personality or brand that’s being quietly misused in the hidden bulk of scams we can only guess are passing across its platform.

Kremlin-backed political disinformation scams are really just the tip of the iceberg here. But even in that narrow instance Facebook estimated there had been 80,000 pieces of fake content targeted at just one election.

What’s clear is that without regulatory intervention the burden of proactively policing dark ads and fake content on Facebook will keep falling on users — who will now have to actively sift through Facebook Pages to see what ads they’re running and try to figure out if they look legit.

Yet Facebook has 2BN+ users globally. The sheer number of Pages and advertisers on its platform renders “view ads” an almost entirely meaningless addition, especially as cyberscammers and malicious actors are also going to be experts at setting up new accounts to further their scams — moving on to the next batch of burner accounts after they’ve netted each fresh catch of unsuspecting victims.

The committee asked Schroepfer whether Facebook retains money from advertisers it ejects from its platform for running ‘bad ads’ — i.e. after finding they were running an ad its terms prohibit. He said he wasn’t sure, and promised to follow up with an answer. Which rather suggests it doesn’t have an actual policy. Mostly it’s happy to collect your ad spend.

“I do think we are trying to catch all of these things pro-actively. I won’t want the onus to be put on people to go find these things,” he also said, which is essentially a twisted way of saying the exact opposite: That the onus remains on users — and Facebook is simply hoping to have a technical capacity that can accurately review content at scale at some undefined moment in the future.

“We think of people reporting things, we are trying to get to a mode over time — particularly with technical systems — that can catch this stuff up front,” he added. “We want to get to a mode where people reporting bad content of any kind is the sort of defense of last resort and that the vast majority of this stuff is caught up front by automated systems. So that’s the future that I am personally spending my time trying to get us to.”

Trying, want to, future… aka zero guarantees that the parallel universe he was describing will ever align with the reality of how Facebook’s business actually operates — right here, right now.

In truth this kind of contextual AI content review is a very hard problem, as Facebook CEO Mark Zuckerberg has himself admitted. And it’s by no means certain the company can develop robust systems to properly police this kind of stuff. Certainly not without hiring orders of magnitude more human reviewers than it’s currently committed to doing. It would need to employ literally millions more humans to manually check all the nuanced things AIs simply won’t be able to figure out.

Or else it would need to radically revise its processes — as Lewis has suggested — to make them a whole lot more conservative than they currently are — by, for example, requiring much more careful and thorough scrutiny of (and even pre-vetting) certain classes of high risk adverts. So yes, by engineering in friction.

In the meantime, as Facebook continues its lucrative business as usual — raking in huge earnings thanks to its ad platform (in its Q1 earnings this week it reported a whopping $11.97BN in revenue) — Internet users are left performing unpaid moderation for a massively wealthy for-profit business while simultaneously being subjected to the bogus and fraudulent content its platform is also distributing at scale.

There’s a very clear and very major asymmetry here — and one that European lawmakers, at least, look increasingly wise to.

Facebook’s habit of pointing to its massive size as the justification for why it keeps failing on so many types of issues — be it consumer safety or indeed data protection compliance — may even have interesting competition-related implications, as some have suggested.

On the technical front, Schroepfer was asked specifically by the committee why Facebook doesn’t use the facial recognition technology it has already developed — which it applies across its user-base for features such as automatic photo tagging — to block ads that are using a person’s face without their consent.

“We are investigating ways to do that,” he replied. “It is challenging to do technically at scale. And it is one of the things I am hopeful for in the future that would catch more of these things automatically. Usually what we end up doing is a series of different features would figure out that these ads are bad. It’s not just the picture, it’s the wording. What can often catch classes — what we’ll do is catch classes of ads and say ‘we’re pretty sure this is a financial ad, and maybe financial ads we should take a little bit more scrutiny on up front because there is the risk for fraud’.

“This is why we took a hard look at the hype going around cryptocurrencies. And decided that — when we started looking at the ads being run there, the vast majority of those were not good ads. And so we just banned the entire category.”

That response is also interesting, given that many of the fake ads Lewis is complaining about (which incidentally often point to offsite crypto scams) — and indeed which he has been complaining about for months at this point — fall into a financial category.

If Facebook can easily identify classes of ads using its current AI content review systems why hasn’t it been able to proactively catch the thousands of dodgy fake ads bearing Lewis’ image?

Why did it require Lewis to make a full 50 reports — and have to complain to it for months — before Facebook did some ‘proactive’ investigating of its own?

And why isn’t it proposing to radically tighten the moderation of financial ads, period?

The risks to individual users here are stark and clear. (Lewis writes, for example, that “one lady had over £100,000 taken from her”.)

Again it comes back to the company simply not wanting to slow down its revenue engines, nor take the financial hit and business burden of employing enough humans to review all the free content it’s happy to monetize. It also doesn’t want to be regulated by governments — which is why it’s rushing out its own set of self-crafted ‘transparency’ tools, rather than waiting for rules to be imposed on it.

Committee chair Damian Collins concluded one round of dark ads questions for the Facebook CTO by remarking that his overarching concern about the company’s approach is that “a lot of the tools seem to work for the advertiser more than they do for the consumer”. And, really, it’s hard to argue with that assessment.

This is not just an advertising problem either. All sorts of other issues that Facebook has been blasted for not doing enough about can also be explained as a result of inadequate content review — from hate speech, to child protection issues, to people trafficking, to ethnic violence in Myanmar, which the UN has accused its platform of exacerbating (the committee questioned Schroepfer on that too, and he lamented that it is “awful”).

In the Lewis fake ads case, this type of ‘bad ad’ — as Facebook would call it — should really be the most trivial type of content review problem for the company to fix because it’s an exceedingly narrow issue, involving a single named individual. (Though that might also explain why Facebook hasn’t bothered; albeit having ‘total willingness to trash individual reputations’ as your business M.O. doesn’t make for a nice PR message to sell.)

And of course it goes without saying there are far more — and far more murky and obscure — uses of dark ads that remain to be fully dragged into the light where their impact on people, societies and civilized processes can be scrutinized and better understood. (The difficulty of defining what is a “political ad” is another lurking loophole in the credibility of Facebook’s self-serving plan to ‘clean up’ its ad platform.)

Schroepfer was asked by one committee member about the use of dark ads to try to suppress African American votes in the US elections, for example, but he just reframed the question to avoid answering it — saying instead that he agrees with the principle of “transparency across all advertising”, before repeating the PR line about tools coming in June. Shame those “transparency” tools look so well designed to ensure Facebook’s platform remains as shadily opaque as possible.

Whatever the role of US-targeted Facebook dark ads in African American voter suppression, Schroepfer wasn’t at all comfortable talking about it — and Facebook isn’t publicly saying. Though the CTO confirmed to the committee that Facebook employs people to work with advertisers, including political advertisers, to “help them to use our ad systems to best effect”.

“So if a political campaign were using dark advertising your people helping support their use of Facebook would be advising them on how to use dark advertising,” astutely observed one committee member. “So if somebody wanted to reach specific audiences with a specific message but didn’t want another audience to [view] that message because it would be counterproductive, your people who are supporting these campaigns by these users spending money would be advising how to do that wouldn’t they?”

“Yeah,” confirmed Schroepfer, before immediately pointing to Facebook’s ad policy — claiming “hateful, divisive ads are not allowed on the platform”. But of course bad actors will simply ignore your policy unless it’s actively enforced.

“We don’t want divisive ads on the platform. This is not good for us in the long run,” he added, without shedding so much as a chink more light on any of the bad things Facebook-distributed dark ads might have already done.

At one point he even claimed not to know what the term ‘dark advertising’ meant — leading the committee member to read out the definition from Google, before noting drily: “I’m sure you know that.”

Pressed again on why Facebook can’t use facial recognition at scale to at least fix the Lewis fake ads — given it’s already using the tech elsewhere on its platform — Schroepfer played down the value of the tech for these types of security use-cases, saying: “The larger the search space you use, so if you’re looking across a large set of people the more likely you’ll have a false positive — that two people tend to look the same — and you won’t be able to make automated decisions that said this is for sure this person.

“This is why I say that it may be one of the tools but I think usually what ends up happening is it’s a portfolio of tools — so maybe it’s something about the image, maybe the fact that it’s got ‘Lewis’ in the name, maybe the fact that it’s a financial ad, wording that is consistent with a financial ads. We tend to use a basket of features in order to detect these things.”
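The scaling concern Schroepfer describes can be sketched with some back-of-the-envelope arithmetic: at any fixed per-comparison false-positive rate, the expected number of false matches grows linearly with the size of the search space. (The error rate used below is purely illustrative — it is not a figure Facebook has published.)

```python
import math

def expected_false_positives(search_space: int, fpr: float) -> float:
    """Expected number of false matches when comparing one face against
    `search_space` people, at a per-comparison false-positive rate `fpr`."""
    return search_space * fpr

# A one-in-a-million error rate (hypothetical) is negligible when matching
# against a small set of profiles...
small = expected_false_positives(1_000, 1e-6)       # ~0.001 expected false matches

# ...but across a user-base of ~2 billion the same rate implies thousands
# of expected false matches per query -- which is why fully automated
# "this is for sure this person" decisions break down at that scale.
large = expected_false_positives(2_000_000_000, 1e-6)

assert math.isclose(small, 0.001)
assert math.isclose(large, 2000.0)
```

That linear blow-up is consistent with his argument for combining face matching with a “basket” of other signals rather than relying on it alone.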

That’s also an interesting response since it was a security use-case that Facebook selected as the first of just two sample ‘benefits’ it presents to users in Europe ahead of the choice it is required (under EU law) to offer people on whether to switch facial recognition technology on or keep it turned off — claiming it “allows us to help protect you from a stranger using your photo to impersonate you”…

Yet judging by its own CTO’s analysis, Facebook’s face recognition tech would actually be pretty useless for identifying “strangers” misusing your photographs — at least without being combined with a “basket” of other unmentioned (and doubtless equally privacy-hostile) technical measures.

So this is yet another example of a manipulative message being put out by a company that is also the controller of a platform that enables all sorts of unknown third parties to experiment with and distribute their own forms of manipulative messaging at vast scale, thanks to a system designed to facilitate — nay, embrace — dark advertising.

What face recognition technology is genuinely useful for is Facebook’s own business. Because it gives the company yet another personal signal to triangulate and better understand who people on its platform are really friends with — which in turn fleshes out the user-profiles behind the eyeballs that Facebook uses to fuel its ad targeting, money-minting engines.

For profiteering use-cases the company rarely sits on its hands when it comes to engineering “challenges”. Hence its erstwhile motto to ‘move fast and break things’ — which has now, of course, morphed uncomfortably into Zuckerberg’s 2018 mission to ‘fix the platform’; thanks, in no small part, to the existential threat posed by dark ads which, up until very recently, Facebook wasn’t saying anything about at all. Except to claim it was “crazy” to think they might have any influence.

And now, despite major scandals and political pressure, Facebook is still showing zero appetite to “fix” its platform — because the issues being thrown into sharp relief are actually there by design; this is how Facebook’s business functions.

“We won’t prevent all mistakes or abuse, but we currently make too many errors enforcing our policies and preventing misuse of our tools. If we’re successful this year then we’ll end 2018 on a much better trajectory,” wrote Zuckerberg in January, underlining how much easier it is to break stuff than put things back together — or even just make a convincing show of fiddling with sticking plaster.