All posts in “data protection”

Digital minister’s app lands on data watchdog’s radar after privacy cock-up

UK digital minister Matt Hancock, who’s currently busy with legislative updates to the national data protection framework, including to bring it in line with the EU’s strict new privacy regime, nonetheless found time to launch an own-brand social networking app this week.

The eponymously titled Matt Hancock MP App.

To cut a long story short, the Matt app quickly ran into a storm of criticism for displaying an unfortunately lax attitude to privacy and data protection. Such as pasting in what appeared to be a very commercially minded privacy policy — which iOS users couldn’t even see prior to agreeing to it in order to download the app… [Insert facepalm emoji of choice]

In the words of one privacy consultant, who quickly raised concerns via Twitter: “You’d think the Digital Minister and one responsible for data protection package would get privacy right.”

Well — news just in! — the UK’s data protection watchdog isn’t entirely sure about that latter point, because it’s now looking into the app’s operation after privacy concerns were raised.

“We are checking reports about the operation of this app and have seen other similar examples of such concerns in apps as they are developed. So to help developers, we produced specific guidance on privacy in mobile apps,” an ICO spokesperson told TechCrunch in response to questions about the Matt app.

“The Data Protection Act exists to protect individuals’ privacy. Anyone developing an app needs to comply with data protection laws, ensuring privacy is at the forefront of their design,” the spokesperson added, pointing to the agency’s contact page as a handy resource for “anybody with concerns about how their personal data has been handled”.

(For the full lowdown on the Matt Hancock privacy snafu, I suggest reading The Register’s gloriously titled report: What a Hancock-up: MP’s social network app is a privacy disaster.

This forensic Twitter thread, by the aforementioned consultant, @PrivacyMatters, is also a great exploration of the myriad areas where Matt Hancock’s app appears to be messing up in data protection T&C terms.)

Here are a few screenshots of the app, for the curious…

Of course the minister didn’t intend to generate his own personal privacy snafu.

He intended the Matt Hancock App to be a place for people in his West Suffolk constituency to keep up on news about Matt Hancock, MP.

Among the touted “Core benefits for Constituents” are:

  • Never miss out on local matters via private networks
  • A safe, trusted, environment where abuse is not tolerated and user data is not exploited

But Hancock outsourced the app’s development to a UK company called Disciple Media, which builds so-called “mobile-first community platforms” for third parties — including musicians and social media influencers.

And whose privacy policy is replete with circumspect words like “may” and “including” — making it about as clear as mud what exactly the company (and indeed Matt Hancock MP) will be doing with Matt Hancock App users’ personal data.

Here’s a sample problematic para from the app’s privacy policy (emphasis ours):

when you sign up [to?] the App you provide consent so that we may disclose your personal information to the Publisher, the Publisher’s management company, agent, rights image company, the Publisher’s record label or publisher (as applicable) and any other third parties, for use in conjunction with additional user promotions or offers they may run from time to time or in relation to the sale of other goods and services. You may unsubscribe from such promotions or offers or communications at any time by following the instructions set out in such promotion or offer or communication;

If you’re wondering whether Hancock has also started his own rock band or record label: spoiler — as far as we’re aware, he hasn’t. Rather, as we understand it, the policy issued with the app was originally created for the musician clients Disciple more often works with (one example on that front: The Rolling Stones).

We also understand, from sources familiar with the matter, that the privacy policy was uploaded to the Matt app in error, and that it is in the process of being reviewed for possible amendments.

Tapping around in the app itself, other aspects also point to it having been rushed out — for example, expanding comments didn’t seem to work for some of the posts we tried. And the three dots in the upper corner of photos occasionally do nothing; occasionally ask if you want to ‘turn off notifications’; and occasionally offer both choices, plus a third option asking if you want to report a post.

Meanwhile, as others have pointed out, because the app is named after the man himself, users get the unfortunate notification that “Matt Hancock would like to access your photos” if they choose to upload an image. Awkward, to say the least.

Though it’s less clear whether reports that the app might also be breaching iOS rules — by accessing users’ photos even after they’ve denied camera roll access — stand up to scrutiny, as iOS 11 does let users grant one-time access to a photo.

Hancock’s parliamentary office is deferring all awkward questions about the Matt Hancock App to Disciple. We know because we rang and they redirected us to the company’s contact details.

We wanted to ask Hancock’s people what user data his office is harvesting, via his own-brand app, and what the data will be used for. And why Hancock decided to build the app with Disciple (which the app’s press release specifies hasn’t been paid; the company is seemingly providing the service as a donation in kind — presumably in the hope of associated publicity, so, er, careful what you wish for).

We also wanted to know what Hancock thought he could achieve by launching an own-brand app which isn’t already possible to do with pre-existing communication tools (and via constituency surgeries).

And whether the app was vetted by any government agencies prior to launch — given Hancock’s position as a sitting minister, and the potential for some wider reputational damage on account of the unfortunate juxtaposition with his ministerial portfolio.

Eventually a different Hancock staffer sent us this statement: “This app is ICO registered and GDPR compliant. It is consistent with measures in the Data Protection Bill currently before Parliament. And is App Store certified by Apple, using standard Apple technology.”

Re: GDPR, we suggest the minister reads our primer because we’re rather less confident than he apparently is that his app, as is, under this current privacy policy and structure, would pass muster under the new EU-wide standard (which comes into force in May).

As regards the why of the Matt app, the staffer sent us a line from Matt’s weekly newsletter — where he writes: “Working with a brilliant British startup called Disciple Media, I’ve launched this app to build a safe, moderated, digital community where my West Suffolk constituents and I can discuss the issues that matter to them.”

Hancock’s office did not respond to our questions about the exact data they are collecting and for what specific purposes (pro tip: That’s basically a GDPR requirement guys!).

But we’ll update this post if the minister delivers any further insights on the digital activity being done under (and in) his name. (As an aside, an email we sent to his constituency email address also bounced back with a fatal delivery error. Digital credibility score at this point: Distressingly low.)

Meanwhile, Disciple Media has so far declined to provide a public response to our questions — though they have promised a statement. Which we’ll drop in here when/if it lands.

The company is in the process of pivoting its business model from a revenue share arrangement to a SaaS monthly subscription — which a spokesman describes as “more ‘easy Squarespace for mobile/mobile web communities’ than ‘social media’”.

So — in theory at least — the business should be heading away from the need to lean on slurping app users’ personal information to power marketing revenues and keep the money rolling in. At least if it gets enough paying monthly customers (Hancock not being one of them).

We’re told it has relied on private investment thus far but is also actively seeking to raise VC.

Facebook to roll out global privacy settings hub — thanks to GDPR

Facebook COO Sheryl Sandberg has said major privacy changes are coming to the platform later this year, as it prepares to comply with the European Union’s incoming data protection regulation.

Speaking at a Facebook event in Brussels yesterday, she said the company will be “rolling out a new privacy center globally that will put the core privacy settings for Facebook in one place and make it much easier for people to manage their data” (via Reuters).

Last year the company told us it had assembled “the largest cross functional team” in the history of its family of companies to support General Data Protection Regulation (aka: GDPR) compliance.

From May 25 this year, the updated privacy framework will apply across the 28 Member State bloc — and any multinationals processing European citizens’ personal data will need to ensure they are compliant. Not least because the regulation includes beefed up liabilities for companies that fail to meet its standards. Under GDPR, penalties can scale as large as 4% of a company’s global turnover.

In Facebook’s case, based on its 2016 full year revenue, the new rules mean it could be facing fines that exceed a billion dollars — giving the company a rather more sizable incentive to ensure it meets the EU’s privacy standards and isn’t found to be playing fast and loose with users’ data.
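
For a sense of scale, here is a back-of-the-envelope sketch of that claim, assuming Facebook’s reported 2016 full-year revenue of roughly $27.6BN (a figure not stated in this piece) and GDPR’s headline 4% maximum:

```python
# Rough arithmetic check of the "exceed a billion dollars" claim.
# The revenue figure is an assumption (Facebook's reported 2016 full-year
# revenue of roughly $27.6BN); the 4% rate is GDPR's headline maximum penalty.
facebook_2016_revenue = 27.6e9   # USD, approximate
gdpr_max_rate = 0.04             # 4% of global annual turnover

max_fine = gdpr_max_rate * facebook_2016_revenue
print(f"Theoretical maximum GDPR fine: ${max_fine / 1e9:.2f}BN")  # roughly $1.10BN
```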

Sandberg said the incoming changes will give the company “a very good foundation to meet all the requirements of the GDPR and to spur us on to continue investing in products and in educational tools to protect privacy”.

“Our apps have long been focused on giving people transparency and control,” she also remarked — a claim that any long-time Facebook user might laugh at rather long and hard.

Long history of hostility to privacy

Facebook has certainly made a lot of changes to privacy and control over the years, though its focus has rarely seemed aimed at “giving people transparency and control”.

Instead, many of its shifts and tweaks have been positioned to give the company more ways to exploit user data while simultaneously nudging people to give up more privacy (and thus hand it more options for exploiting their data).

Here, for example, is an EFF assessment of a 2009 Facebook privacy change — ostensibly, Facebook claimed at the time, to give users “greater control over their information”:

These new “privacy” changes are clearly intended to push Facebook users to publicly share even more information than before. Even worse, the changes will actually reduce the amount of control that users have over some of their personal data.

Among the changes Facebook made back then was to “recommend” preselected defaults to users that flipped their settings to share the content they post to Facebook with everyone on the Internet. (This recommendation was also pushed at users who had previously specified they wanted to limit any sharing to only their “Networks and Friends”.)

Clearly that was not a pro-privacy change. As we warned at the time it could (and did) lead to “a massive privacy fiasco” — given it encouraged Facebookers to inadvertently share more than they meant to.

A mere six months later — facing a major backlash and scrutiny from the FTC — Facebook was forced to rethink, and it put out what it claimed was a set of “drastically simplified” privacy controls.

Though it still took the company until May 2014 to change the default visibility of users’ statuses and photos to ‘friends’ — i.e. rather than the awful ‘public’ default.

Following the 2009 privacy debacle, a subsequent 2011 FTC settlement barred Facebook from making any deceptive privacy claims. The company also settled with the Irish DPA at the end of the same year — after privacy complaints had sparked an audit in Europe.

So in 2012, when Facebook decided to update its privacy policy — to give itself greater control over users’ data — it was forced to email all its users about the changes, as a consequence of those earlier regulatory settlements.

But it took direct action from EU privacy campaigner Max Schrems to force Facebook to put the proposed changes up for a worldwide vote — by mobilizing opinion online and triggering a long standing Facebook policy governance clause (which the company couldn’t exactly ignore, even as the structure of the clause essentially made it impossible for a user vote to block the changes).

At the time Schrems was also campaigning for Facebook to implement an ‘Opt-In’ instead of an ‘Opt-Out’ system for all data use and features; and also for limits on use of users’ data for ads. So, in other words, for exactly the sorts of changes GDPR is likely to bring in — with its requirement, for instance, that data controllers obtain meaningful consent from users to process their personal data (or else find another legal basis for handling their data).

What’s crystal clear is that, time and again, it’s taken regulatory and/or privacy campaigner pressure to push Facebook away from user-hostile data practices.

And that prior to regulatory crackdown the company’s intent was to reduce users’ privacy by pushing them to make more of their data public.

But even since then the company has continued to act in a privacy hostile way.

Another major low in Facebook’s privacy record came in 2016, when its subsidiary company, messaging giant WhatsApp, announced a privacy U-turn — saying it would begin sharing user data with Facebook for ad-targeting purposes, including users’ phone numbers and their last seen status on the app.

This hugely controversial anti-privacy move quickly attracted the ire of European privacy regulators — forcing Facebook to partially suspend data-sharing in the region. (The company remains under scrutiny in the EU over other types of WhatsApp-Facebook data-sharing which it has not paused.)

Facebook was eventually fined $122M by the European Commission, in May last year, for providing “incorrect or misleading” information to the regulators that had assessed its 2014 acquisition of WhatsApp (not a privacy fine, btw, but a penalty purely for a process failing).

At the time Facebook had claimed it could not automatically match user accounts between the two platforms — before going on to do just that two years later.

The company also only gave WhatsApp users a time-limited, partial opt-out for the data-sharing. Again, an approach that just wouldn’t wash under GDPR.

EU citizens who consent to their personal data being processed will also have a suite of associated rights — such as being able to ask for the data to be deleted, and the ability to withdraw their consent at any time. (Read our GDPR primer for a full overview of the changes fast incoming.)

While the full impact of the regulation will take time to shake out — the exact shape and tone of Facebook’s new global privacy settings center remains to be seen, for example — European Union lawmakers are already rightly celebrating a long overdue shift in the balance of power between platforms and consumers.

Featured Image: Bryce Durbin/TechCrunch


European Union lawmakers proposed a comprehensive update to the bloc’s data protection and privacy rules in 2012.

Their aim: To take account of seismic shifts in the handling of information wrought by the rise of the digital economy in the years since the prior regime was penned — all the way back in 1995 when Yahoo was the cutting edge of online cool and cookies were still just tasty biscuits.

Here’s the EU’s executive body, the Commission, summing up the goal:

The objective of this new set of rules is to give citizens back control over of their personal data, and to simplify the regulatory environment for business. The data protection reform is a key enabler of the Digital Single Market which the Commission has prioritised. The reform will allow European citizens and businesses to fully benefit from the digital economy.

For an even shorter tl;dr the EC’s theory is that consumer trust is essential to fostering growth in the digital economy. And it thinks trust can be won by giving users of digital services more information and greater control over how their data is used. Which is — frankly speaking — a pretty refreshing idea when you consider the clandestine data brokering that pervades the tech industry. Mass surveillance isn’t just something governments do.

The General Data Protection Regulation (aka GDPR) was agreed after more than three years of negotiations between the EU’s various institutions.

It’s set to apply across the 28-Member State bloc as of May 25, 2018. That means EU countries are busy transposing it into national law via their own legislative updates (such as the UK’s new Data Protection Bill — yes, despite the fact the country is currently in the process of (br)exiting the EU, the government has nonetheless committed to implementing the regulation because it needs to keep EU-UK data flowing freely in the post-Brexit future). Which gives an early indication of the pulling power of GDPR.

Meanwhile businesses operating in the EU are being bombarded with ads from a freshly energized cottage industry of ‘privacy consultants’ offering to help them get ready for the new regs — in exchange for a service fee. It’s definitely a good time to be a law firm specializing in data protection.

GDPR is a significant piece of legislation whose full impact will clearly take some time to shake out. In the meanwhile, here’s our guide to the major changes incoming and some potential impacts.

Data protection + teeth

A major point of note right off the bat is that GDPR does not merely apply to EU businesses; any entities processing the personal data of EU citizens need to comply. Facebook, for example — a US company that handles massive amounts of Europeans’ personal data — is going to have to rework multiple business processes to comply with the new rules. Indeed, it’s been working on this for a long time already.

Last year the company told us it had assembled “the largest cross functional team” in the history of its family of companies to support GDPR compliance — specifying this included “senior executives from all product teams, designers and user experience/testing executives, policy executives, legal executives and executives from each of the Facebook family of companies”.

“Dozens of people at Facebook Ireland are working full time on this effort,” it said, noting too that the data protection team at its European HQ (in Dublin, Ireland) would be growing by 250% in 2017. It also said it was in the process of hiring a “top quality data protection officer” — a position the company appears to still be taking applications for.

The new EU rules require organizations to appoint a data protection officer if they process sensitive data on a large scale (which Facebook very clearly does). Or are collecting info on many consumers — such as by performing online behavioral tracking. But, really, which online businesses aren’t doing that these days?

The extra-territorial scope of GDPR casts the European Union as a global pioneer in data protection — and some legal experts suggest the regulation will force privacy standards to rise outside the EU too.

Sure, some US companies might prefer to swallow the hassle and expense of fragmenting their data handling processes, and treating personal data obtained from different geographies differently, rather than streamlining everything under a GDPR-compliant process. But doing so means managing multiple data regimes. And at the very least it runs the risk of bad PR if you’re outed as deliberately offering a lower privacy standard to your home users vs customers abroad.

Ultimately, it may be easier (and less risky) for businesses to treat GDPR as the new ‘gold standard’ for how they handle all personal data, regardless of where it comes from.

And while not every company harvests Facebook levels of personal data, almost every company harvests some personal data. So for those with customers in the EU, GDPR cannot be ignored. At the very least, businesses will need to carry out a data audit to understand their risks and liabilities.

Privacy experts suggest that the really big change here is around enforcement. Because while the EU has had long established data protection standards and rules — and treats privacy as a fundamental right — its regulators have lacked the teeth to command compliance.

But now, under GDPR, financial penalties for data protection violations step up massively.

The maximum fine that organizations can be hit with for the most serious infringements of the regulation is 4% of their global annual turnover (or €20M, whichever is greater). Though data protection agencies will of course be able to impose smaller fines too. And, indeed, there’s a tiered system of fines — with a lower level of penalties of up to 2% of global turnover (or €10M).

This really is a massive change. Because while data protection agencies (DPAs) in different EU Member States can impose financial penalties for breaches of existing data laws these fines are relatively small — especially set against the revenues of the private sector entities that are getting sanctioned.

In the UK, for example, the Information Commissioner’s Office (ICO) can currently impose a maximum fine of just £500,000. Compare that to the annual revenue of tech giant Google (~$90BN) and you can see why a much larger stick is needed to police data processors.
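
To make the gap concrete, here is a minimal sketch of how GDPR’s two-tier ceiling is calculated (the greater of the flat cap or the percentage of global annual turnover), set against the ICO’s current fixed maximum. The example turnover is a hypothetical figure for illustration:

```python
def gdpr_max_fine(global_annual_turnover_eur: float, top_tier: bool = True) -> float:
    """Ceiling on a GDPR fine: the greater of the flat cap and the percentage
    of global annual turnover, per the regulation's two-tier structure."""
    if top_tier:
        return max(20_000_000, 0.04 * global_annual_turnover_eur)  # most serious infringements
    return max(10_000_000, 0.02 * global_annual_turnover_eur)      # lower tier

# Hypothetical company turning over the equivalent of 90BN euros a year
print(gdpr_max_fine(90e9))                   # 3.6BN euro ceiling at the top tier
print(gdpr_max_fine(90e9, top_tier=False))   # 1.8BN euro ceiling at the lower tier
# versus the ICO's current maximum of 500,000 pounds, regardless of turnover
```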

It’s not necessarily the case that individual EU Member States are getting stronger privacy laws as a consequence of GDPR (in some instances countries have arguably had higher standards in their domestic law). But the beefing up of enforcement that’s baked into the new regime means there’s a better opportunity for DPAs to start to bark and bite like proper watchdogs.

GDPR inflating the financial risks around handling personal data should naturally drive up standards — because privacy laws are suddenly a whole lot more costly to ignore.

More types of personal data that are hot to handle

So what is personal data under GDPR? It’s any information relating to an identified or identifiable person (in regulatorspeak people are known as ‘data subjects’).

While ‘processing’ can mean any operation performed on personal data — from storing it to structuring it to feeding it to your AI models. (GDPR also includes some provisions specifically related to decisions generated as a result of automated data processing but more on that below).

A new provision concerns children’s personal data — with the regulation setting a 16-year-old age limit on kids’ ability to consent to their data being processed. However individual Member States can choose (and some have) to derogate from this by writing a lower age limit into their laws.

However GDPR sets a hard floor at 13 years old — Member States cannot set the limit any lower — making 13 the de facto minimum age at which children can sign up to digital services themselves. So the impact on teens’ social media habits seems likely to be relatively limited.

The new rules generally expand the definition of personal data — so it can include information such as location data, online identifiers (such as IP addresses) and other metadata. So again, this means businesses really need to conduct an audit to identify all the types of personal data they hold. Ignorance is not compliance.

GDPR also encourages the use of pseudonymization (such as encrypting personal data and storing the encryption key separately and securely) — as a pro-privacy, pro-security technique that can help minimize the risks of processing personal data. Although pseudonymized data is likely to still be considered personal data, certainly where a risk of re-identification remains. So it does not get a general pass from requirements under the regulation.

Data has to be rendered truly anonymous to be outside the scope of the regulation. (And given how often ‘anonymized’ data-sets have been shown to be re-identifiable, relying on any anonymizing process to be robust enough to have zero risk of re-identification seems, well, risky.)
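
For a concrete, deliberately toy illustration of the sort of pseudonymization GDPR encourages (encrypting a direct identifier and keeping the key somewhere separate and more tightly controlled), a minimal sketch using the Python cryptography library might look like this. It is not a compliance recipe; the encrypted value is still likely to be personal data wherever re-identification remains possible:

```python
# Minimal pseudonymization sketch: encrypt a direct identifier and store the
# key separately from the data store holding the pseudonymized records.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # keep this in a separate key vault, not alongside the data
fernet = Fernet(key)

email = b"jane.doe@example.com"
pseudonym = fernet.encrypt(email)  # this token goes into the main data store / analytics pipeline

# Re-identification is only possible for whoever holds the key
assert fernet.decrypt(pseudonym) == email
```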

The incoming data protection rules apply to both data controllers (i.e. entities that determine the purpose and means of processing personal data) and data processors (entities that are responsible for processing data on behalf of a data controller — aka subcontractors).

Indeed, data processors have some direct compliance obligations under GDPR, and can also be held equally responsible for data violations, with individuals able to bring compensation claims directly against them, and DPAs able to hand them fines or other sanctions.

So the intent of the regulation is that there be no diminishing of responsibility down the chain of data handling subcontractors. GDPR aims to make every link in the processing chain a robust one.

For companies that rely on a lot of subcontractors to handle data operations on their behalf there’s clearly a lot of risk assessment work to be done.

As noted above, there is a degree of leeway for EU Member States in how they implement some parts of the regulation (such as with the age of data consent for kids).

Consumer protection groups are calling for the UK government to include an optional GDPR provision on collective data redress in its DP bill, for example — a call the government has so far rebuffed.

But the wider aim is for the regulation to harmonize as much as possible data protection rules across all Member States to reduce the regulatory burden on digital businesses trading around the bloc.

On data redress, European privacy campaigner Max Schrems — most famous for his legal challenge to US government mass surveillance practices that resulted in a 15-year-old data transfer arrangement between the EU and US being struck down in 2015 — is currently running a crowdfunding campaign to set up a not-for-profit privacy enforcement organization to take advantage of the new rules and pursue strategic litigation on commercial privacy issues.

Schrems argues it’s simply not viable for individuals to take big tech giants to court to try to enforce their privacy rights, so thinks there’s a gap in the regulatory landscape for an expert organization to work on EU citizens’ behalf. Not just pursuing strategic litigation in the public interest but also promoting industry best practice.

The proposed data redress body — called noyb; short for: ‘none of your business’ — is being made possible because GDPR allows for collective enforcement of individuals’ data rights. And that provision could be crucial in spinning up a centre of enforcement gravity around the law. Because despite the position and role of DPAs being strengthened by GDPR, these bodies will still inevitably have limited resources vs the scope of the oversight task at hand.

Some may also lack the appetite to take on a fully fanged watchdog role. So campaigning consumer and privacy groups could certainly help pick up any slack.

Privacy by design and privacy by default

Another major change incoming via GDPR is ‘privacy by design’ no longer being just a nice idea; privacy by design and privacy by default become firm legal requirements.

This means there’s a requirement on data controllers to minimize processing of personal data — limiting activity to only what’s necessary for a specific purpose — as well as to carry out privacy impact assessments and maintain up-to-date records to demonstrate their compliance.

Consent requirements for processing personal data are also considerably strengthened under GDPR — meaning lengthy, inscrutable, pre-ticked T&Cs are likely to be unworkable. (And we’ve sure seen a whole lot of those hellish things in tech.) The core idea is that consent should be an ongoing, actively managed process; not a one-off rights grab.

As the UK’s ICO tells it, consent under GDPR for processing personal data means offering individuals “genuine choice and control” (for sensitive personal data the law requires a higher standard still — of explicit consent).

There are other legal bases for processing personal data under GDPR — such as contractual necessity; or compliance with a legal obligation under EU or Member State law; or for tasks carried out in the public interest — so it is not always necessary to obtain consent in order to process someone’s personal data. But there must always be an appropriate legal basis for each processing operation.
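
To make the ‘ongoing, actively managed’ idea concrete, here is a minimal sketch of how a service might record consent per user and per purpose, so it can demonstrate its legal basis and honour withdrawal at any time. The field names are our own illustrative assumptions, not anything prescribed by the regulation:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """One consent decision, recorded per user and per specific purpose."""
    user_id: str
    purpose: str                 # e.g. "email newsletter"; one record per purpose
    granted_at: datetime
    policy_version: str          # which wording the user actually saw and agreed to
    withdrawn_at: Optional[datetime] = None

    def is_valid(self) -> bool:
        # Consent only counts as a legal basis while it has not been withdrawn
        return self.withdrawn_at is None

    def withdraw(self) -> None:
        # Withdrawal must be possible at any time, and as easy as granting was
        self.withdrawn_at = datetime.now(timezone.utc)
```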

Transparency is another major obligation under GDPR, which expands the notion that personal data must be lawfully and fairly processed to include a third principle: that it must also be processed transparently. Hence the emphasis on data controllers needing to clearly communicate with data subjects — such as by informing them of the specific purpose of the data processing.

The obligation on data handlers to maintain scrupulous records of what information they hold, what they are doing with it, and how they are legally processing it, is also about being able to demonstrate compliance with GDPR’s data processing principles — the regulation’s accountability principle in action.

But — on the plus side for data controllers — GDPR removes the requirement to submit notifications to local DPAs about data processing activities. Instead, organizations must maintain detailed internal records — which a supervisory authority can always ask to see.

It’s also worth noting that companies processing data across borders in the EU may face scrutiny from DPAs in different Member States if they have users there (and are processing their personal data). Although the GDPR sets out a so-called ‘one-stop-shop’ principle — that there should be a “lead” DPA to co-ordinate supervision between any “concerned” DPAs — this does not mean that once it applies a cross-EU-border operator like Facebook is only going to be answerable to the concerns of the Irish DPA.

Indeed, Facebook’s tactic of only claiming to be under the jurisdiction of a single EU DPA looks to be on borrowed time. And the one-stop-shop provision in the GDPR seems more about creating a co-operation mechanism to allow multiple DPAs to work together in instances where they have joint concerns. Rather than offering a way for multinationals to go ‘forum shopping’ — which the regulation does not permit (per WP29 guidance).

Another change: Privacy policies that contain vague phrases like ‘We may use your personal data to develop new services’ or ‘We may use your personal data for research purposes’ will not pass muster under the new regime. So a wholesale rewriting of vague and/or confusingly worded T&Cs is something Europeans can look forward to this year.

Add to that, any changes to privacy policies must be clearly communicated to the user on an ongoing basis. Which means no more references in the privacy statement telling users to ‘regularly check for changes or updates’ — that just won’t be workable.

The onus is firmly on the data controller to keep the data subject fully informed of what is being done with their information. (Which almost implies that good data protection practice could end up tasting a bit like spam, from a user PoV.)

The overall intent behind GDPR is to inculcate an industry-wide shift in perspective regarding who ‘owns’ user data — disabusing companies of the notion that other people’s personal information belongs to them just because it happens to be sitting on their servers.

“Organizations should acknowledge they don’t exist to process personal data but they process personal data to do business,” is how Gartner research director Bart Willemsen sums this up. “Where there is a reason to process the data, there is no problem. Where the reason ends, the processing should, too.”

The data protection officer (DPO) role that GDPR brings in as a requirement for many data handlers is intended to help them ensure compliance.

This officer, who must report to the highest level of management, is intended to operate independently within the organization, with warnings to avoid an internal appointment that could generate a conflict of interests.

Which types of organizations face the greatest liability risks under GDPR? “Those who deliberately seem to think privacy protection rights is inferior to business interest,” says Willemsen, adding: “A recent example would be Uber, regulated by the FTC and sanctioned to undergo 20 years of auditing. That may hurt perhaps similar, or even more, than a one-time financial sanction.”

“Eventually, the GDPR is like a speed limit: There not to make money off of those who speed, but to prevent people from speeding excessively as that prevents (privacy) accidents from happening,” he adds.

Another right to be forgotten

Under GDPR, people who have consented to their personal data being processed also have a suite of associated rights, including:

  • the right to access data held about them (a copy of the data must be provided to them free of charge, typically within a month of a request)
  • the right to request rectification of incomplete or inaccurate personal data
  • the right to have their data deleted (another so-called ‘right to be forgotten’ — with some exemptions, such as for exercising freedom of expression and freedom of information)
  • the right to restrict processing
  • the right to data portability (where relevant, a data subject’s personal data must be provided free of charge and in a structured, commonly used and machine readable form)
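
On the portability point, “structured, commonly used and machine readable” tends in practice to mean something like JSON or CSV. Here is a minimal sketch of such an export; the record fields are invented for illustration:

```python
import json
from datetime import date

def export_user_data(user_record: dict) -> str:
    """Serialize everything held about a data subject into a structured,
    commonly used, machine-readable format (JSON, in this sketch)."""
    return json.dumps(user_record, indent=2, default=str)

# Hypothetical example record
print(export_user_data({
    "name": "Jane Doe",
    "email": "jane.doe@example.com",
    "signed_up": date(2017, 3, 14),
    "consents": [{"purpose": "newsletter", "granted": True}],
}))
```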

UK’s Carphone Warehouse fined nearly $540k for 2015 hack

The UK’s data watchdog has handed mobile phone retailer Carphone Warehouse a £400,000 fine — just shy of the £500k maximum the regulator can currently issue — for security failings attached to a 2015 hack that compromised the personal data of some three million customers and 1,000 employees.

Compromised customer data included names, addresses, phone numbers, dates of birth, marital status and, for more than 18,000 customers, historical payment card details. Exposed records for some Carphone Warehouse employees included names, phone numbers, postcodes and car registration details.

Commenting on the penalty in a statement, the UK’s information commissioner Elizabeth Denham said: “A company as large, well-resourced, and established as Carphone Warehouse, should have been actively assessing its data security systems, and ensuring systems were robust and not vulnerable to such attacks.

“Carphone Warehouse should be at the top of its game when it comes to cyber-security, and it is concerning that the systemic failures we found related to rudimentary, commonplace measures.”

The Information Commissioner’s Office (ICO) said it identified “multiple inadequacies” in the company’s approach to data security during its investigation, and determined the company had failed to take adequate steps to protect people’s personal information.

Intruders had been able to use valid login credentials to access Carphone Warehouse’s system via out-of-date WordPress software, the ICO said.

Inadequacies in the organisation’s technical security measures were also exposed by the incident, with important elements of the software in use on the affected systems being out of date and the company failing to carry out routine security testing.

There were also inadequate measures in place to identify and purge historic data, it added.

“There will always be attempts to breach organisations’ systems and cyber-attacks are becoming more frequent as adversaries become more determined. But companies and public bodies need to take serious steps to protect systems, and most importantly, customers and employees,” said Denham.

“The law says it is the company’s responsibility to protect customer and employee personal information. Outsiders should not be getting to such systems in the first place. Having an effective layered security system will help to mitigate any attack — systems can’t be exploited if intruders can’t get in.”

A Carphone Warehouse spokesman provided the following response statement on the fine:

We accept today’s decision by the ICO and have co-operated fully throughout its investigation into the illegal cyberattack on a specific system within one of Carphone Warehouse’s UK divisions in 2015. 

As the ICO notes in its report, we moved quickly at the time to secure our systems, to put in place additional security measures and to inform the ICO and potentially affected customers and colleagues. The ICO noted that there was no evidence of any individual data having been used by third parties.

Since the attack in 2015 we have worked extensively with cyber security experts to improve and upgrade our security systems and processes.

We are very sorry for any distress or inconvenience the incident may have caused.

In October 2016 the ICO issued a £400k penalty to UK ISP TalkTalk also for a 2015 data breach — though in that instance only around 157,000 customer accounts were affected.

The maximum fine that data protection regulators in the European Union will be able to hand out will step up significantly in a matter of months — to £17M or 4 per cent of a company’s annual turnover — as the EU’s General Data Protection Regulation comes into force in May.

As well as inflating the maximum penalties for data protection failures, the GDPR imposes an obligation on companies processing EU citizens’ data to bake in data protection by design.

Featured Image: Chris Ratcliffe/Getty Images

The light and dark of AI-powered smartphones

Analyst Gartner put out a 10-strong listicle this week identifying what it dubbed “high-impact” uses for AI-powered features on smartphones that it suggests will enable device vendors to provide “more value” to customers via the medium of “more advanced” user experiences.

It’s also predicting that, by 2022, a full 80 per cent of smartphones shipped will have on-device AI capabilities, up from just 10 per cent in 2017.

More on-device AI could result in better data protection and improved battery performance, in its view — as a consequence of data being processed and stored locally. At least that’s the top-line takeout.

Its full list of apparently enticing AI uses is presented (verbatim) below.

But in the interests of presenting a more balanced narrative around automation-powered UXes we’ve included some alternative thoughts after each listed item which consider the nature of the value exchange being required for smartphone users to tap into these touted ‘AI smarts’ — and thus some potential drawbacks too.

Uses and abuses of on-device AI

1)   “Digital Me” Sitting on the Device

“Smartphones will be an extension of the user, capable of recognising them and predicting their next move. They will understand who you are, what you want, when you want it, how you want it done and execute tasks upon your authority.”

“Your smartphone will track you throughout the day to learn, plan and solve problems for you,” said Angie Wang, principle research analyst at Gartner. “It will leverage its sensors, cameras and data to accomplish these tasks automatically. For example, in the connected home, it could order a vacuum bot to clean when the house is empty, or turn a rice cooker on 20 minutes before you arrive.”

Hello stalking-as-a-service. Is this ‘digital me’ also going to whisper sweetly that it’s my ‘number one fan’ as it pervasively surveils my every move in order to fashion a digital body-double that ensnares my free will within its algorithmic black box… 


Or is it just going to be really annoyingly bad at trying to predict exactly what I want at any given moment, because, y’know, I’m a human not a digital paperclip (no, I am not writing a fucking letter).  

Oh and who’s to blame when the AI’s choices not only aren’t to my liking but are much worse? Say the AI sent the robo vacuum cleaner over the kids’ ant farm when they were away at school… is the AI also going to explain to them the reason for their pets’ demise? Or what if it turns on my empty rice cooker (after I forgot to top it up) — at best pointlessly expending energy, at worst enthusiastically burning down the house.

We’ve been told that AI assistants are going to get really good at knowing and helping us real soon for a long time now. But unless you want to do something simple like play some music, or something narrow like find a new piece of similar music to listen to, or something basic like order a staple item from the Internet, they’re still far more idiot than savant. 

2)   User Authentication

“Password-based, simple authentication is becoming too complex and less effective, resulting in weak security, poor user experience, and a high cost of ownership. Security technology combined with machine learning, biometrics and user behaviour will improve usability and self-service capabilities. For example, smartphones can capture and learn a user’s behaviour, such as patterns when they walk, swipe, apply pressure to the phone, scroll and type, without the need for passwords or active authentications.”

More stalking-as-a-service. No security without total privacy surrender, eh? But will I get locked out of my own devices if I’m panicking and not behaving like I ‘normally’ do — say, for example, because the AI turned on the rice cooker when I was away and I arrived home to find the kitchen in flames. And will I be unable to prevent my device from being unlocked on account of it happening to be held in my hands — even though I might actually want it to remain locked in any particular given moment because devices are personal and situations aren’t always predictable. 

And what if I want to share access to my mobile device with my family? Will they also have to strip naked in front of its all-seeing digital eye just to be granted access? Or will this AI-enhanced multi-layered biometric system end up making it harder to share devices between loved ones? As has indeed been the case with Apple’s shift from a fingerprint biometric (which allows multiple fingerprints to be registered) to a facial biometric authentication system, on the iPhone X (which doesn’t support multiple faces being registered)? Are we just supposed to chalk up the gradual goodnighting of device communality as another notch in ‘the price of progress’?

3)   Emotion Recognition

“Emotion sensing systems and affective computing allow smartphones to detect, analyse, process and respond to people’s emotional states and moods. The proliferation of virtual personal assistants and other AI-based technology for conversational systems is driving the need to add emotional intelligence for better context and an enhanced service experience. Car manufacturers, for example, can use a smartphone’s front camera to understand a driver’s physical condition or gauge fatigue levels to increase safety.”

No honest discussion of emotion sensing systems is possible without also considering what advertisers could do if they gained access to such hyper-sensitive mood data. On that topic Facebook gives us a clear steer on the potential risks — last year leaked internal documents suggested the social media giant was touting its ability to crunch usage data to identify feelings of teenage insecurity as a selling point in its ad sales pitches. So while sensing emotional context might suggest some practical utility that smartphone users may welcome and enjoy, it’s also potentially highly exploitable and could easily feel horribly invasive — opening the door to, say, a teenager’s smartphone knowing exactly when to hit them with an ad because they’re feeling low.

If on-device AI means locally processed emotion sensing systems could guarantee they would never leak mood data, there may be less cause for concern. But normalizing emotion-tracking by baking it into the smartphone UI would surely drive a wider push for similarly “enhanced” services elsewhere — and then it would be down to the individual app developer (and their attitude to privacy and security) to determine how your moods get used. 

As for cars, aren’t we also being told that AI is going to do away with the need for human drivers? Why should we need AI watchdogs surveilling our emotional state inside vehicles (which will really just be nap and entertainment pods at that point, much like airplanes)? A major consumer-focused safety argument for emotion sensing systems seems unconvincing. Whereas government agencies and businesses would surely love to get dynamic access to our mood data for all sorts of reasons…

4)   Natural-Language Understanding

“Continuous training and deep learning on smartphones will improve the accuracy of speech recognition, while better understanding the user’s specific intentions. For instance, when a user says “the weather is cold,” depending on the context, his or her real intention could be “please order a jacket online” or “please turn up the heat.” As an example, natural-language understanding could be used as a near real-time voice translator on smartphones when traveling abroad.”

While we can all surely still dream of having our own personal babelfish — even given the cautionary warning against human hubris embedded in the biblical allegory to which the concept alludes — it would be a very impressive AI assistant that could automagically select the perfect jacket to buy its owner after they had casually opined that “the weather is cold”.

I mean, no one would mind a gift surprise coat. But, clearly, the AI being inextricably deeplinked to your credit card means it would be you forking out for, and having to wear, that bright red Columbia Lay D Down Jacket that arrived (via Amazon Prime) within hours of your climatic observation, and which the AI had algorithmically determined would be robust enough to ward off some “cold”, while having also data-mined your prior outerwear purchases to whittle down its style choice. Oh, you still don’t like how it looks? Too bad.  

The marketing ‘dream’ pushed at consumers of the perfect AI-powered personal assistant involves an awful lot of suspension of disbelief around how much actual utility the technology is credibly going to provide — i.e. unless you’re the kind of person who wants to reorder the same brand of jacket every year and also finds it horribly inconvenient to manually seek out a new coat online and click the ‘buy’ button yourself. Or else who feels there’s a life-enhancing difference between having to directly ask an Internet connected robot assistant to “please turn up the heat” vs having a robot assistant 24/7 spying on you so it can autonomously apply calculated agency to choose to turn up the heat when it overheard you talking about the cold weather — even though you were actually just talking about the weather, not secretly asking the house to be magically willed warmer. Maybe you’re going to have to start being a bit more careful about the things you say out loud when your AI is nearby (i.e. everywhere, all the time). 

Humans have enough trouble understanding each other; expecting our machines to be better at this than we are ourselves seems fanciful — at least unless you take the view that the makers of these data-constrained, imperfect systems are hoping to patch AI’s limitations and comprehension deficiencies by socially re-engineering their devices’ erratic biological users by restructuring and reducing our behavioral choices to make our lives more predictable (and thus easier to systemize). Call it an AI-enhanced life more ordinary, less lived.

5)   Augmented Reality (AR) and AI Vision

“With the release of iOS 11, Apple included an ARKit feature that provides new tools to developers to make adding AR to apps easier. Similarly, Google announced its ARCore AR developer tool for Android and plans to enable AR on about 100 million Android devices by the end of next year. Google expects almost every new Android phone will be AR-ready out of the box next year. One example of how AR can be used is in apps that help to collect user data and detect illnesses such as skin cancer or pancreatic cancer.”

While most AR apps are inevitably going to be a lot more frivolous than the cancer detecting examples being cited here, no one’s going to neg the ‘might ward off a serious disease’ card. That said, a system that’s harvesting personal data for medical diagnostic purposes amplifies questions about how sensitive health data will be securely stored, managed and safeguarded by smartphone vendors. Apple has been pro-active on the health data front — but, unlike Google, its business model is not dependent on profiling users to sell targeted advertising so there are competing types of commercial interests at play.

And indeed, regardless of on-device AI, it seems inevitable that users’ health data is going to be taken off local devices for processing by third party diagnostic apps (which will want the data to help improve their own AI models) — so data protection considerations ramp up accordingly. Meanwhile powerful AI apps that could suddenly diagnose very serious illnesses also raise wider issues around how an app could responsibly and sensitively inform a person it believes they have a major health problem. ‘Do no harm’ starts to look a whole lot more complex when the consultant is a robot.  

6) Device Management

“Machine learning will improve device performance and standby time. For example, with many sensors, smartphones can better understand and learn user’s behaviour, such as when to use which app. The smartphone will be able to keep frequently used apps running in the background for quick re-launch, or to shut down unused apps to save memory and battery.”

Another AI promise that’s predicated on pervasive surveillance coupled with reduced user agency — what if I actually want to keep open an app that I normally close directly, or vice versa? The AI’s template won’t always predict dynamic usage perfectly. Criticism directed at Apple after the recent revelation that iOS slows the performance of older iPhones as a technique for managing the limitations of aging batteries should be a warning flag that consumers can react in unexpected ways to a perceived loss of control over their devices by the manufacturing entity.

7) Personal Profiling

“Smartphones are able to collect data for behavioural and personal profiling. Users can receive protection and assistance dynamically, depending on the activity that is being carried out and the environments they are in (e.g., home, vehicle, office, or leisure activities). Service providers such as insurance companies can now focus on users, rather than the assets. For example, they will be able to adjust the car insurance rate based on driving behaviour.”

Insurance premiums based on pervasive behavioral analysis — in this case powered by smartphone sensor data (location, speed, locomotion etc) — could also of course be adjusted in ways that end up penalizing the device owner. Say if a person’s phone indicated they brake harshly quite often. Or regularly exceed the speed limit in certain zones. And again, isn’t AI supposed to be replacing drivers behind the wheel? Will a self-driving car require its rider to have driving insurance? Or aren’t traditional car insurance premiums on the road to zero anyway — so where exactly is the consumer benefit from being pervasively personally profiled? 

Meanwhile discriminatory pricing is another clear risk with profiling. And for what other purposes might a smartphone be utilized to perform behavioral analysis of its owner? Time spent hitting the keys of an office computer? Hours spent lounged out in front of the TV? Quantification of almost every quotidian thing might become possible as a consequence of always-on AI — and given the ubiquity of the smartphone (aka the ‘non-wearable wearable’) — but is that actually desirable? Could it not induce feelings of discomfort, stress and demotivation by making ‘users’ (i.e. people) feel they are being microscopically and continuously judged just for how they live? 

The risks around pervasive profiling appear even more crazily dystopian when you look at China’s plan to give every citizen a ‘character score’ — and consider the sorts of intended (and unintended) consequences that could flow from state level control infrastructures powered by the sensor-packed devices in our pockets. 

8)   Content Censorship/Detection

“Restricted content can be automatically detected. Objectionable images, videos or text can be flagged and various notification alarms can be enabled. Computer recognition software can detect any content that violates any laws or policies. For example, taking photos in high security facilities or storing highly classified data on company-paid smartphones will notify IT.”

Personal smartphones that snitch on their users for breaking corporate IT policies sound like something straight out of a sci-fi dystopia. Ditto AI-powered content censorship. There’s a rich and varied (and ever-expanding) tapestry of examples of AI failing to correctly identify, or entirely misclassifying, images — including being fooled by deliberately adulterated graphics — as well as a long history of tech companies misapplying their own policies to disappear from view (or otherwise suppress) certain pieces and categories of content (including really iconic and really natural stuff). So freely handing control over what we can and cannot see (or do) with our own devices, at the UI level, to a machine agency ultimately controlled by a commercial entity subject to its own agendas and political pressures would seem ill-advised to say the least. It would also represent a seismic shift in the power dynamic between users and connected devices. 

9) Personal Photographing

“Personal photographing includes smartphones that are able to automatically produce beautified photos based on a user’s individual aesthetic preferences. For example, there are different aesthetic preferences between the East and West — most Chinese people prefer a pale complexion, whereas consumers in the West tend to prefer tan skin tones.”

AI already has a patchy history when it comes to racially offensive ‘beautification’ filters. So any kind of automatic adjustment of skin tones seems equally ill-advised.  Zooming out, this kind of subjective automation is also hideously reductive — fixing users more firmly inside AI-generated filter bubbles by eroding their agency to discover alternative perspectives and aesthetics. What happens to ‘beauty is in the eye of the beholder’ if human eyes are being unwittingly rendered algorithmically color-blind? 

10)    Audio Analytic

“The smartphone’s microphone is able to continuously listen to real-world sounds. AI capability on device is able to tell those sounds, and instruct users or trigger events. For example, a smartphone hears a user snoring, then triggers the user’s wristband to encourage a change in sleeping positions.”

What else might a smartphone microphone that’s continuously listening to the sounds in your bedroom, bathroom, living room, kitchen, car, workplace, garage, hotel room and so on be able to discern and infer about you and your life? And do you really want an external commercial agency determining how best to systemize your existence to such an intimate degree that it has the power to disrupt your sleep? The discrepancy between the ‘problem’ being suggested here (snoring) and the intrusive ‘fix’ (wiretapping coupled with a shock-generating wearable) very firmly underlines the lack of ‘automagic’ involved in AI. On the contrary, the artificial intelligence systems we are currently capable of building require near totalitarian levels of data and/or access to data and yet consumer propositions are only really offering narrow, trivial or incidental utility.

This discrepancy does not trouble the big data-mining businesses that have made it their mission to amass massive data-sets so they can fuel business-critical AI efforts behind the scenes. But for smartphone users asked to sleep beside a personal device that’s actively eavesdropping on bedroom activity, for example, the equation starts to look rather more unbalanced. And even if YOU personally don’t mind, what about everyone else around you whose “real-world sounds” will also be snooped on by your phone, whether they like it or not? Have you asked them if they want an AI quantifying the noises they make? Are you going to inform everyone you meet that you’re packing a wiretap? 

Featured Image: Erikona/Getty Images