All posts in “identity management”

Apple ad focuses on iPhone’s most marketable feature — privacy

Apple is airing a new ad spot in primetime today. Focused on privacy, the spot is visually driven, with no dialogue and a simple tagline: Privacy. That’s iPhone.

In a series of humorous vignettes, the message is driven home that sometimes you just want a little privacy. The spot has only one other line of text, and it’s in keeping with Apple’s long-running messaging on privacy: “If privacy matters in your life, it should matter to the phone your life is on.”

The spot will air tonight in primetime in the U.S. and extend through March Madness. It will then air in select other countries.


You’d have to be hiding under a rock not to have noticed Apple positioning privacy as a differentiating factor between itself and other companies. A few years ago, CEO Tim Cook began taking more and more public stances on what the company felt to be your “rights” to privacy on its platform and how that differed from other companies. The undercurrent is that Apple can take this stance because its first-party business relies on a relatively direct relationship with customers who purchase its hardware and, increasingly, its services.

This stands in contrast to the model of other tech giants like Google or Facebook, which insert an interstitial layer of monetization on top of that relationship: applying personal information about you (in somewhat anonymized fashion) to sell their platforms to advertisers, which in turn can sell to you more effectively.

Turning the ethical high ground into a marketing strategy is not without its pitfalls, though, as Apple has discovered recently with a (now patched) high-profile FaceTime bug that allowed people to turn your phone into a listening device, Facebook’s manipulation of App Store permissions and the revelation that some long-overdue housecleaning was needed in its Enterprise Certificate program.

I did find it interesting that the iconography of the “Private Side” spot very closely associates the concepts of privacy and security. They are separate but interrelated, obviously; this spot treats them as one and the same. It’s hard to enforce privacy without security, of course, but in the mind of the public I think there is very little difference between the two.

The App Store itself, of course, still hosts apps from Google and Facebook among thousands of others that use personal data of yours in one form or another. Apple’s argument is that it protects the data you give to your phone aggressively by processing on the device, collecting minimal data, disconnecting that data from the user as much as possible and giving users as transparent a control interface as possible. All true. All far, far better efforts than the competition.

Still, there is room to run, I feel, when it comes to Apple adjudicating what should be considered a societal norm when it comes to the use of personal data on its platform. If it’s going to be the absolute arbiter of what flies on the world’s most profitable application marketplace, it might as well use that power to get a little more feisty with the bigcos (and littlecos) that make their living on our data.

I mention the issues Apple has had above not as a dig, though some might be inclined to view Apple integrating privacy with marketing as boldness bordering on hubris. I, personally, think there’s still a major difference between a company that has situational loss of privacy while having a systemic dedication to privacy and, well, most of the rest of the ecosystem which exists because they operate an “invasion of privacy as a service” business.

Basically, I think stating that privacy is your mission is still supportable, even if you have bugs. But attempting to ignore that you host the data platforms that thrive on invading it is a tasty bit of prestidigitation.

But that might be a little too verbose as a tagline.

Okta to acquire workflow automation startup Azuqua for $52.5M

During its earnings report yesterday afternoon, Okta announced it intends to acquire Azuqua, a Bellevue, Washington-based workflow automation startup, for $52.5 million.

In a blog post announcing the news, Okta co-founder and COO Frederic Kerrest described the combination as a way to move smoothly between applications in a complex workflow without having to constantly present your credentials.

“With Okta and Azuqua, IT teams will be able to use pre-built connectors and logic to create streamlined identity processes and increase operational speed. And, product teams will be able to embed this technology in their own applications alongside Okta’s core authentication and user management technology to build…integrated customer experiences,” Kerrest wrote.

In a modern enterprise, people and work are constantly shifting between applications and services, and combining automation software with identity and access management could offer a seamless way to move between them.
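The pattern described above can be sketched in miniature: a single identity token, issued once, is carried through every step of an automated workflow so the user never re-presents credentials. This is an illustrative toy, assuming hypothetical `IdentityProvider` and `Workflow` classes; it is not the Okta or Azuqua API.

```python
import time
import uuid


class IdentityProvider:
    """Issues short-lived tokens and validates them on each use."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._tokens = {}  # token -> (user, expiry timestamp)

    def issue(self, user):
        token = str(uuid.uuid4())
        self._tokens[token] = (user, time.time() + self.ttl)
        return token

    def validate(self, token):
        """Return the user behind a token, or None if unknown/expired."""
        user, expiry = self._tokens.get(token, (None, 0))
        return user if time.time() < expiry else None


class Workflow:
    """Runs a chain of connector steps under one validated identity."""

    def __init__(self, idp, steps):
        self.idp = idp
        self.steps = steps  # list of (app_name, action) pairs

    def run(self, token, payload):
        results = []
        for app_name, action in self.steps:
            # Identity is re-checked at every hop, but the user is
            # never re-prompted for credentials.
            user = self.idp.validate(token)
            if user is None:
                raise PermissionError(f"token rejected at step {app_name}")
            payload = action(user, payload)
            results.append((app_name, payload))
        return results


# Usage: one login, three applications, zero credential prompts.
idp = IdentityProvider()
token = idp.issue("alice")
flow = Workflow(idp, [
    ("crm",     lambda user, p: p + [f"{user} opened ticket"]),
    ("chat",    lambda user, p: p + [f"{user} notified team"]),
    ("billing", lambda user, p: p + [f"{user} logged time"]),
])
trace = flow.run(token, [])
```

The point of the sketch is the shape, not the security: in a real deployment the token would be a signed, scoped credential (e.g. OAuth/OIDC), not an in-memory UUID.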

This represents Okta’s largest acquisition to date and follows its purchases of Stormpath almost exactly two years ago and ScaleFT last July. Taken together, they sketch a company that is trying to become a more comprehensive identity platform.

Azuqua, which had raised $16 million since it launched in 2013, appears to have given investors a pretty decent return. When the deal closes, Okta intends to bring the Azuqua team on board, leaving it in place in its Bellevue offices and creating a Northwest presence for the San Francisco company. Azuqua customers include Airbnb, McDonald’s, VMware and HubSpot.

Okta was founded in 2009 and raised over $229 million before going public in April 2017.

Children are being “datafied” before we’ve understood the risks, report warns

A report by England’s children’s commissioner has raised concerns about how kids’ data is being collected and shared across the board, in both the private and public sectors.

In the report, entitled Who knows what about me?, Anne Longfield urges society to “stop and think” about what big data means for children’s lives.

Big data practices could result in a data-disadvantaged generation whose life chances are shaped by their childhood data footprint, her report warns.

The long-term impacts of profiling minors, once these children become adults, are simply not known, she writes.

“Children are being ‘datafied’ – not just via social media, but in many aspects of their lives,” says Longfield.

“For children growing up today, and the generations that follow them, the impact of profiling will be even greater – simply because there is more data available about them.”

By the time a child is 13, their parents will have posted an average of 1,300 photos and videos of them on social media, according to the report. After that, the data mountain “explodes”: children themselves start engaging on the platforms — posting to social media 26 times per day, on average, and amassing a total of nearly 70,000 posts by age 18.

“We need to stop and think about what this means for children’s lives now and how it may impact on their future lives as adults,” warns Longfield. “We simply do not know what the consequences of all this information about our children will be. In the light of this uncertainty, should we be happy to continue forever collecting and sharing children’s data?

“Children and parents need to be much more aware of what they share and consider the consequences. Companies that make apps, toys and other products used by children need to stop filling them with trackers, and put their terms and conditions in language that children understand. And crucially, the Government needs to monitor the situation and refine data protection legislation if needed, so that children are genuinely protected – especially as technology develops,” she adds.

The report looks at what types of data are being collected on kids; where and by whom; and how that data might be used in the short and long term — both for the benefit of children but also considering potential risks.

On the benefits side, the report cites a variety of still fairly experimental ideas that might make positive use of children’s data — such as for targeted inspections of services for kids to focus on areas where data suggests there are problems; NLP technology to speed up analysis of large data-sets (such as the NSPCC’s national case review repository) to find common themes and understand “how to prevent harm and promote positive outcomes”; predictive analytics using data from children and adults to more cost-effectively flag “potential child safeguarding risks to social workers”; and digitizing children’s Personal Child Health Record to make the current paper-based record more widely accessible to professionals working with children.

But while Longfield describes the increasing availability of data as offering “enormous advantages”, she is also very clear on the major risks unfolding — be they to safety and well-being; child development and social dynamics; identity theft and fraud; or the longer-term impact on children’s opportunities and life chances.

“In effect [children] are the ‘canary in the coal mine’ for wider society, encountering the risks before many adults become aware of them or are able to develop strategies to mitigate them,” she warns. “It is crucial that we are mindful of the risks and mitigate them.”

Transparency is lacking

One clear takeaway from the report is there is still a lack of transparency about how children’s data is being collected and processed — which in itself acts as a barrier to better understanding the risks.

“If we better understood what happens to children’s data after it is given – who collects it, who it is shared with and how it is aggregated – then we would have a better understanding of what the likely implications might be in the future, but this transparency is lacking,” Longfield writes — noting that this is true despite ‘transparency’ being the first key principle set out in the EU’s tough new privacy framework, GDPR.

The updated data protection framework did beef up protections for children’s personal data in Europe — introducing a new provision setting a 16-year-old age limit on kids’ ability to consent to their data being processed when it came into force on May 25, for example. (Although EU member states can choose to write a lower age limit into their laws, with a hard floor set at 13.)
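The consent-age rule just described reduces to a simple check: the GDPR default is 16, and a member state may lower it, but never below 13. A minimal sketch, assuming an illustrative (not exhaustive or authoritative) table of per-state overrides:

```python
# GDPR digital age of consent: default 16, member states may
# lower it, but not below the hard floor of 13.
GDPR_DEFAULT_AGE = 16
GDPR_MINIMUM_AGE = 13

# Example per-state choices for illustration only; not a complete list.
MEMBER_STATE_AGE = {"UK": 13, "DE": 16, "FR": 15}


def consent_age(country: str) -> int:
    """Effective age of consent for a country, clamped to the floor."""
    age = MEMBER_STATE_AGE.get(country, GDPR_DEFAULT_AGE)
    return max(age, GDPR_MINIMUM_AGE)


def child_can_consent(age: int, country: str) -> bool:
    """Can a child of this age consent to data processing themselves?
    Below the threshold, parental consent is required instead."""
    return age >= consent_age(country)
```

So a 14-year-old could consent on their own in a state that chose 13, but not in one that kept the default of 16 — which is exactly the fragmentation the report has to reason around.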

And mainstream social media apps, such as Facebook and Snapchat, responded by tweaking their T&Cs and/or products in the region. (Although some of the parental consent systems that were introduced to claim compliance with GDPR appear trivially easy for kids to bypass, as we’ve pointed out before.)

But, as Longfield points out, Article 5 of the GDPR states that data must be “processed lawfully, fairly and in a transparent manner in relation to individuals”.

Yet when it comes to children’s data the children’s commissioner says transparency is simply not there.

She also sees limitations with GDPR, from a children’s data protection perspective — pointing out that, for example, it does not prohibit the profiling of children entirely (stating only that it “should not be the norm”).

Another provision, Article 22 — which states that children have the right not to be subject to decisions based solely on automated processing (including profiling) if these have legal or similarly significant effects on them — also appears to be circumventable.

“They do not apply to decision-making where humans play some role, however minimal that role is,” she warns, which suggests another workaround for companies to exploit children’s data.

“Determining whether an automated decision-making process will have “similarly significant effects” is difficult to gauge given that we do not yet understand the full implications of these processes – and perhaps even more difficult to judge in the case of children,” Longfield also argues.

“There is still much uncertainty around how Article 22 will work in respect of children,” she adds. “The key area of concern will be in respect of any limitations in relation to advertising products and services and associated data protection practices.”

Recommendations

The report makes a series of recommendations for policymakers, with Longfield calling for schools to “teach children about how their data is collected and used, and what they can do to take control of their data footprints”.

She also presses the government to consider introducing an obligation on platforms that use “automated decision-making to be more transparent about the algorithms they use and the data fed into these algorithms” — where data collected from under 18s is used.

Which would essentially place additional requirements on all mainstream social media platforms to be far less opaque about the AI machinery they use to shape and distribute content on their platforms at vast scale. Few — if any — could claim to have no under-18s using their platforms.

She also argues that companies targeting products at children have far more explaining to do, writing: 

Companies producing apps, toys and other products aimed at children should be more transparent about any trackers capturing information about children. In particular where a toy collects any video or audio generated by a child this should be made explicit in a prominent part of the packaging or its accompanying information. It should be clearly stated if any video or audio content is stored on the toy or elsewhere and whether or not it is transmitted over the internet. If it is transmitted, parents should also be told whether or not it will be encrypted during transmission or when stored, who might analyse or process it and for what purposes. Parents should ask if information is not given or unclear.

Another recommendation for companies is that terms and conditions should be written in a language children can understand.

(Albeit, as it stands, tech industry T&Cs can be hard enough for adults to scratch the surface of — let alone have enough hours in the day to actually read.)


A recent U.S. study of kids’ apps, covered by BuzzFeed News, highlighted that mobile games aimed at kids can be highly manipulative, describing instances of apps making their cartoon characters cry if a child does not click on an in-app purchase, for example.

A key and contrasting problem with data processing is that it’s so murky: applied in the background, its harms are far less immediately visible, because only the data processor truly knows what’s being done with people’s — and indeed children’s — information.

Yet concerns about the exploitation of personal data are stepping up across the board, touching essentially all sectors and segments of society now, even as the risks where kids are concerned may look the most stark.

This summer the UK’s privacy watchdog called for an ethical pause on the use by political campaigns of online ad targeting tools, for example, citing a range of concerns that data practices have got ahead of what the public knows and would accept.

It also called for the government to come up with a Code of Practice for digital campaigning to ensure that long-standing democratic norms are not being undermined.

So the children’s commissioner’s appeal for a collective ‘stop and think’ where the use of data is concerned is just one of a growing number of raised voices policymakers are hearing.

One thing is clear: Calls to quantify what big data means for society — to ensure powerful data-mining technologies are being applied in ways that are ethical and fair for everyone — aren’t going anywhere.

Myki raises $4M Series A to decentralize identity management for enterprises

Myki, a startup based between Beirut and New York, which offers both a consumer and an enterprise identity management solution to store sensitive information offline, today announced at TechCrunch Disrupt in San Francisco that it’s raised a $4 million Series A to scale its operations.

The round was led by Dubai-based VC BECO Capital with participation from Beirut-based LEAP Ventures and B&Y Venture Partners, all of which are returning investors. Myki plans to expand its U.S. operations with its “decentralised Identity Management” solution for enterprise.

Priscilla Elora Sharuk, who co-founded the startup with Antoine Vincent Jabberer in 2015, said: “Online security and data privacy is not a privilege, it is a right, and that is why at Myki we empower our users with the tools to securely manage their digital identity.”

Myki launched on the TechCrunch Disrupt Battlefield stage in September 2016, and has since won plaudits from tech industry outlets for its free and powerful password management, amassing more than 250,000 users worldwide.

Back in May, on the TechCrunch Disrupt Berlin stage, Myki announced a partnership with self-sovereign identity application Blockpass to combine self-sovereign identity and offline password security.

Myki is going after the consumer password space, with biometric authentication such as Touch ID and Face ID; the enterprise, with “Myki for Teams”; and managed service providers, with a dedicated solution.

Okta nabs ScaleFT to build out ‘Zero Trust’ security framework

Okta, the cloud identity management company, announced today it has purchased a startup called ScaleFT to bring the Zero Trust concept to the Okta platform. Terms of the deal were not disclosed.

While Zero Trust isn’t exactly new to a cloud identity management company like Okta, acquiring ScaleFT gives it a solid cloud-based Zero Trust foundation on which to continue developing the concept internally.

“To help our customers increase security while also meeting the demands of the modern workforce, we’re acquiring ScaleFT to further our contextual access management vision — and ensure the right people get access to the right resources for the shortest amount of time,” Okta co-founder and COO Frederic Kerrest said in a statement.

Zero Trust is a security framework that acknowledges work no longer happens behind the friendly confines of a firewall. In the old days before mobile and cloud, you could be pretty certain that anyone on your corporate network had the authority to be there, but in a mobile world there is effectively no perimeter left to defend. Zero Trust means what it says: you can’t trust anyone on your systems by default, and every access request has to be verified on its own merits.
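The contrast between the old perimeter model and Zero Trust can be sketched in a few lines. This is a deliberately minimal illustration, not ScaleFT’s or Okta’s implementation; the `Request` fields and policy functions are assumptions for the example.

```python
from dataclasses import dataclass


@dataclass
class Request:
    user: str
    device_managed: bool   # is the device enrolled and healthy?
    mfa_passed: bool       # did the user complete a second factor?
    on_corp_network: bool  # deliberately ignored by Zero Trust below


def perimeter_allows(req: Request) -> bool:
    """Old model: being on the corporate network IS the credential."""
    return req.on_corp_network


def zero_trust_allows(req: Request) -> bool:
    """Zero Trust: network location carries no weight; every request
    must prove identity (MFA) and device posture instead."""
    return req.device_managed and req.mfa_passed


# An attacker who has breached the perimeter but can't prove identity:
intruder = Request(user="?", device_managed=False,
                   mfa_passed=False, on_corp_network=True)

# A legitimate employee working remotely, off the corporate network:
remote_employee = Request(user="alice", device_managed=True,
                          mfa_passed=True, on_corp_network=False)
```

The perimeter model waves the intruder through and blocks the remote employee; the Zero Trust check does the opposite — which is precisely why the model suits a workforce that is rarely inside the firewall anymore.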

The idea was pioneered by Google’s “BeyondCorp” principles, and the founders of ScaleFT are adherents of that approach. According to Okta, “ScaleFT developed a cloud-native Zero Trust access management solution that makes it easier to secure access to company resources without the need for a traditional VPN.”

Okta wants to incorporate the ScaleFT team and, well, scale their solution for large enterprise customers interested in developing this concept, according to a company blog post by Kerrest.

“Together, we’ll work to bring Zero Trust to the enterprise by providing organizations with a framework to protect sensitive data, without compromising on experience. Okta and ScaleFT will deliver next-generation continuous authentication capabilities to secure server access — from cloud to ground,” Kerrest wrote in the blog post.

ScaleFT CEO and co-founder Jason Luce will manage the transition between the two companies, while CTO and co-founder Paul Querna will lead strategy and execution of Okta’s Zero Trust architecture. CSO Marc Rogers will take on the role of Okta’s Executive Director, Cybersecurity Strategy.

The acquisition allows Okta to move beyond purely managing identity into broader cybersecurity, at least conceptually. Certainly Rogers’ new role suggests the company could have other ideas to expand further into general cybersecurity beyond Zero Trust.

ScaleFT was founded in 2015 and has raised $2.8 million over two seed rounds, according to Crunchbase data.