All posts in “identity management”

Why identity startup Auth0’s founder still codes: It makes him a better boss

If you ask Eugenio Pace to describe himself, “engineer” would be fairly high on the list.

“Being a CEO is pretty busy,” he told TechCrunch in a call last week. “But I’m an engineer in my heart — I am a problem solver,” he said.

Pace, an Argentinian immigrant to the U.S., founded identity management company Auth0 in 2013 after more than a decade at Microsoft. Auth0, pronounced “auth-zero,” has been described as doing for identity what Stripe does for payments or Twilio does for messaging. App developers can add a few lines of code that immediately give their users access to the company’s identity management service.

That means users can securely log in to the app without the developer having to build a homebrew username and password system that’s invariably going to break. Any enterprise paying for Auth0 can also use its service to securely log on to the company’s internal network.
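
To make the “few lines of code” claim concrete, here is a minimal sketch of what such an integration typically looks like. The AuthClient interface and its method names below are illustrative placeholders standing in for a hosted identity provider’s SDK, not Auth0’s actual API surface.

```typescript
// Illustrative sketch only: AuthClient is a placeholder for a hosted identity
// provider's SDK; Auth0's real SDK differs in names and options.
interface AuthClient {
  isAuthenticated(): Promise<boolean>;
  loginWithRedirect(): Promise<void>; // send the user to the provider's hosted login page
  getUser(): Promise<{ sub: string; email?: string } | undefined>;
}

// The "few lines of code" a developer adds: delegate login to the provider
// instead of building a homebrew username/password system.
async function ensureLoggedIn(auth: AuthClient): Promise<void> {
  if (!(await auth.isAuthenticated())) {
    await auth.loginWithRedirect(); // the hosted page handles credentials, resets, MFA, etc.
    return;
  }
  const user = await auth.getUser();
  console.log("Logged in as", user?.email ?? user?.sub);
}
```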

“Nobody cares about authentication, but everybody needs it,” he said.

Pace said Auth0 works to answer two simple questions. “Who are you, and what can you do?” he said.

“Those two questions are the same regardless of the device, the app, or whether I’m an employee of somebody or if I am an individual using an app, or if I am using a device where there’s no human attached to it,” he said.

Whoever the user is, the app needs to know whether the person using the app or service is allowed to, and what level of access or functionality they get. “Can you transfer these funds? Can you approve these expense reports? Can you open the door of my house?” he said.
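
In code, those two questions map onto authentication and authorization. The sketch below separates the two checks; the token shape and the permission string are invented for illustration and are not Auth0’s actual claim format.

```typescript
// Illustrative only: the token shape and permission strings are invented for
// this example, not Auth0's actual claim format.
interface AccessToken {
  sub: string;            // "Who are you?" -- the authenticated subject
  permissions: string[];  // "What can you do?" -- granted capabilities
}

// Authentication: establish who is making the request.
function authenticate(token: AccessToken | null): AccessToken {
  if (!token) {
    throw new Error("Not authenticated");
  }
  return token;
}

// Authorization: decide whether that identity may perform a given action.
function authorize(token: AccessToken, permission: string): void {
  if (!token.permissions.includes(permission)) {
    throw new Error(`Not allowed to: ${permission}`);
  }
}

// The same two checks apply whether the caller is an employee, a consumer,
// or a device with no human attached to it.
function transferFunds(token: AccessToken | null, amount: number): void {
  const identity = authenticate(token);
  authorize(identity, "transfer:funds");
  console.log(`${identity.sub} transferred ${amount}`);
}
```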

Pace left Microsoft in 2012 and founded Auth0 during the emergence of Azure, which transformed Microsoft from a software giant into a cloud company. It was at Microsoft that he found identity management was one of the biggest headaches for developers moving their apps to the cloud. He wrote book after book, and edition after edition. “I felt like I could keep writing books about the problem — or I can just solve the problem,” he said.

So he did.

Instead of teaching developers how to become experts in identity management, he wanted to give them the tools to employ a sign-on solution without ever having to read a book.

Okta unveils $50M in-house venture capital fund

Identity management software provider Okta, which went public two years ago in what was one of the first pure-cloud subscription-based company IPOs, wants to fund the next generation of identity, security and privacy startups.

At its big customer conference Oktane, where the company has also announced a new level of identity protection at the server level, chief operating officer Frederic Kerrest (pictured above, right, with chief executive officer Todd McKinnon) will unveil a $50 million investment fund meant to back early-stage startups leveraging artificial intelligence, machine learning and blockchain technology.

“We view this as a natural extension of what we are doing today,” Okta senior vice president Monty Gray told TechCrunch. Gray was hired last year to oversee corporate development, i.e. beef up Okta’s M&A strategy.

Gray and Kerrest tell TechCrunch that Okta Ventures will invest capital in existing Okta partners, as well as other companies in the burgeoning identity management ecosystem. The team managing the fund will look to Okta’s former backers, Sequoia, Andreessen Horowitz and Greylock, for support in the deal sourcing process.

Okta Ventures will write checks sized between $250,000 and $2 million to eight to 10 early-stage businesses per year.

“It’s just a way of making sure we are aligning all our work and support with the right companies who have the right vision and values because there’s a lot of noise around identity, ML and AI,” Kerrest said. “It’s about formalizing the support strategy we’ve had for years and making sure people are clear of the fact we are helping these organizations build because it’s helpful to our customers.”

Okta Ventures’ first bet is Trusted Key, a blockchain-based digital identity platform that previously raised $3 million from Founders Co-Op. Okta’s investment in the startup, founded by former Microsoft, Oracle and Symantec executives, represents its expanding interest in the blockchain.

“Blockchain as a backdrop for identity is cutting edge if not bleeding edge,” Gray said.

Okta, founded in 2009, had raised precisely $231 million from Sequoia, Andreessen Horowitz, Greylock, Khosla Ventures, Floodgate and others prior to its exit. The company’s stock has fared well since its IPO, debuting at $17 per share in 2017 and climbing to more than $85 apiece with a market cap of $9.6 billion as of Tuesday’s close.

Apple ad focuses on iPhone’s most marketable feature — privacy

Apple is airing a new ad spot in primetime today. Focused on privacy, the spot is visually driven, with no dialog and a simple tagline: Privacy. That’s iPhone.

In a series of humorous vignettes, the message is driven home that sometimes you just want a little privacy. The spot has only one line of text otherwise, and it’s in keeping with Apple’s messaging on privacy over the long and short term. “If privacy matters in your life, it should matter to the phone your life is on.”

The spot will air tonight in primetime in the U.S. and extend through March Madness. It will then air in select other countries.

You’d have to be hiding under a rock not to have noticed Apple positioning privacy as a differentiating factor between itself and other companies. Beginning a few years ago, CEO Tim Cook began taking more and more public stances on what the company felt to be your “rights” to privacy on its platform and how that differed from other companies. The undercurrent is that Apple was able to take this stance because its first-party business relies on a relatively direct relationship with customers who purchase its hardware and, increasingly, its services.

This stands in contrast to the model of other tech giants, like Google or Facebook, which insert an interstitial layer of monetization strategy on top of that relationship, applying personal information about you (in somewhat anonymized fashion) to sell their platforms to advertisers, which in turn can sell to you better.

Turning the ethical high ground into a marketing strategy is not without its pitfalls, though, as Apple has discovered recently with a (now patched) high-profile FaceTime bug that allowed people to turn your phone into a listening device, Facebook’s manipulation of App Store permissions and the revelation that there was some long overdue house cleaning needed in its Enterprise Certificate program.

I did find it interesting that the iconography of the “Private Side” spot very, very closely associates the concepts of privacy and security. They are separate, but interrelated, obviously. This spot says these are one and the same. It’s hard to enforce privacy without security, of course, but in the mind of the public I think there is very little difference between the two.

The App Store itself, of course, still hosts apps from Google and Facebook among thousands of others that use personal data of yours in one form or another. Apple’s argument is that it protects the data you give to your phone aggressively by processing on the device, collecting minimal data, disconnecting that data from the user as much as possible and giving users as transparent a control interface as possible. All true. All far, far better efforts than the competition.

Still, there is room to run, I feel, when it comes to Apple adjudicating what should be considered a societal norm for the use of personal data on its platform. If it’s going to be the absolute arbiter of what flies on the world’s most profitable application marketplace, it might as well use that power to get a little more feisty with the bigcos (and littlecos) that make their living on our data.

I mention the issues Apple has had above not as a dig, though some might be inclined to view Apple integrating privacy with marketing as boldness bordering on hubris. I, personally, think there’s still a major difference between a company that has situational loss of privacy while having a systemic dedication to privacy and, well, most of the rest of the ecosystem which exists because they operate an “invasion of privacy as a service” business.

Basically, I think stating privacy is your mission is still supportable, even if you have bugs. But attempting to ignore that you host the data platforms that thrive on it is a tasty bit of prestidigitation.

But that might be a little too verbose as a tagline.

Okta to acquire workflow automation startup Azuqua for $52.5M

During its earnings report yesterday afternoon, Okta announced it intends to acquire Azuqua, a Bellevue, Washington-based workflow automation startup, for $52.5 million.

In a blog post announcing the news, Okta co-founder and COO Frederic Kerrest described the combination of the two companies as a way to move smoothly between applications in a complex workflow without having to constantly present your credentials.

“With Okta and Azuqua, IT teams will be able to use pre-built connectors and logic to create streamlined identity processes and increase operational speed. And, product teams will be able to embed this technology in their own applications alongside Okta’s core authentication and user management technology to build…integrated customer experiences,” Kerrest wrote.

In a modern enterprise, people and work are constantly shifting between applications and services, and combining automation software with identity and access management could offer a seamless way to move between them.
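
As a rough illustration of that idea, the sketch below shows a workflow runner that obtains a single access token up front and hands it to each connector, so the user is never re-prompted for credentials between steps. The connector type, token helper and example steps are all hypothetical, not Okta’s or Azuqua’s actual APIs.

```typescript
// Illustrative sketch: the connector type and token helper are invented for
// this example; they are not Okta's or Azuqua's actual APIs.
type Connector = (token: string, input: unknown) => Promise<unknown>;

// Hypothetical single sign-on step: obtain one token for the user up front...
async function getAccessToken(user: string): Promise<string> {
  return `token-for-${user}`; // placeholder; a real system would call the identity provider
}

// ...then let every step of the workflow reuse it, so the user is never
// re-prompted for credentials as work moves between applications.
async function runWorkflow(user: string, steps: Connector[], input: unknown): Promise<unknown> {
  const token = await getAccessToken(user);
  let result = input;
  for (const step of steps) {
    result = await step(token, result);
  }
  return result;
}

// Example connectors standing in for pre-built integrations.
const fileTicket: Connector = async (_token, issue) => ({ ticketId: 42, issue });
const notifyTeam: Connector = async (_token, ticket) => ({ notified: true, ticket });

runWorkflow("alice@example.com", [fileTicket, notifyTeam], { summary: "VPN access request" })
  .then((out) => console.log(out));
```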

This represents Okta’s largest acquisition to date, following Stormpath almost exactly two years ago and ScaleFT last July. Taken together, the deals show a company that is trying to become a more comprehensive identity platform.

Azuqua, which had raised $16 million since it launched in 2013, appears to have given investors a pretty decent return. When the deal closes, Okta intends to bring its team on board and leave them in place in their Bellevue offices, creating a Northwest presence for the San Francisco company. Azuqua customers include Airbnb, McDonald’s, VMware and HubSpot.

Okta was founded in 2009 and raised over $229 million before going public in April 2017.

Children are being “datafied” before we’ve understood the risks, report warns

A report by England’s children’s commissioner has raised concerns about how kids’ data is being collected and shared across the board, in both the private and public sectors.

In the report, entitled Who knows what about me?, Anne Longfield urges society to “stop and think” about what big data means for children’s lives.

Big data practices could result in a data-disadvantaged generation whose life chances are shaped by their childhood data footprint, her report warns.

The long term impacts of profiling minors when these children become adults is simply not known, she writes.

“Children are being ‘datafied’ – not just via social media, but in many aspects of their lives,” says Longfield.

“For children growing up today, and the generations that follow them, the impact of profiling will be even greater – simply because there is more data available about them.”

By the time a child is 13 their parents will have posted an average of 1,300 photos and videos of them on social media, according to the report. After which this data mountain “explodes” as children themselves start engaging on the platforms — posting to social media 26 times per day, on average, and amassing a total of nearly 70,000 posts by age 18.

“We need to stop and think about what this means for children’s lives now and how it may impact on their future lives as adults,” warns Longfield. “We simply do not know what the consequences of all this information about our children will be. In the light of this uncertainty, should we be happy to continue forever collecting and sharing children’s data?

“Children and parents need to be much more aware of what they share and consider the consequences. Companies that make apps, toys and other products used by children need to stop filling them with trackers, and put their terms and conditions in language that children understand. And crucially, the Government needs to monitor the situation and refine data protection legislation if needed, so that children are genuinely protected – especially as technology develops,” she adds.

The report looks at what types of data are being collected on kids; where and by whom; and how they might be used in the short and long term — both for the benefit of children but also considering potential risks.

On the benefits side, the report cites a variety of still fairly experimental ideas that might make positive use of children’s data — such as for targeted inspections of services for kids to focus on areas where data suggests there are problems; NLP technology to speed up analysis of large data-sets (such as the NSPCC’s national case review repository) to find common themes and understand “how to prevent harm and promote positive outcomes”; predictive analytics using data from children and adults to more cost-effectively flag “potential child safeguarding risks to social workers”; and digitizing children’s Personal Child Health Record to make the current paper-based record more widely accessible to professionals working with children.

But while Longfield describes the increasing availability of data as offering “enormous advantages”, she is also very clear on major risks unfolding — be it to safety and well-being; child development and social dynamics; identity theft and fraud; and the longer term impact on children’s opportunity and life chances.

“In effect [children] are the ‘canary in the coal mine’ for wider society, encountering the risks before many adults become aware of them or are able to develop strategies to mitigate them,” she warns. “It is crucial that we are mindful of the risks and mitigate them.”

Transparency is lacking

One clear takeaway from the report is there is still a lack of transparency about how children’s data is being collected and processed — which in itself acts as a barrier to better understanding the risks.

“If we better understood what happens to children’s data after it is given – who collects it, who it is shared with and how it is aggregated – then we would have a better understanding of what the likely implications might be in the future, but this transparency is lacking,” Longfield writes — noting that this is true despite ‘transparency’ being the first key principle set out in the EU’s tough new privacy framework, GDPR.

The updated data protection framework did beef up protections for children’s personal data in Europe — introducing a new provision setting a 16-year-old age limit on kids’ ability to consent to their data being processed when it came into force on May 25, for example. (Although EU Member States can choose to write a lower age limit into their laws, with a hard cap set at 13.)

And mainstream social media apps, such as Facebook and Snapchat, responded by tweaking their T&Cs and/or products in the region. (Although some of the parental consent systems that were introduced to claim compliance with GDPR appear trivially easy for kids to bypass, as we’ve pointed out before.)

But, as Longfield points out, Article 5 of the GDPR states that data must be “processed lawfully, fairly and in a transparent manner in relation to individuals”.

Yet when it comes to children’s data the children’s commissioner says transparency is simply not there.

She also sees limitations with GDPR, from a children’s data protection perspective — pointing out that, for example, it does not prohibit the profiling of children entirely (stating only that it “should not be the norm”).

Another provision, Article 22 — which states that children have the right not to be subject to decisions based solely on automated processing (including profiling) if they have legal or similarly significant effects on them — also appears to be circumventable.

“They do not apply to decision-making where humans play some role, however minimal that role is,” she warns, which suggests another workaround for companies to exploit children’s data.

“Determining whether an automated decision-making process will have “similarly significant effects” is difficult to gauge given that we do not yet understand the full implications of these processes – and perhaps even more difficult to judge in the case of children,” Longfield also argues.

“There is still much uncertainty around how Article 22 will work in respect of children,” she adds. “The key area of concern will be in respect of any limitations in relation to advertising products and services and associated data protection practices.”

Recommendations

The report makes a series of recommendations for policymakers, with Longfield calling for schools to “teach children about how their data is collected and used, and what they can do to take control of their data footprints”.

She also presses the government to consider introducing an obligation on platforms that use “automated decision-making to be more transparent about the algorithms they use and the data fed into these algorithms” — where data collected from under 18s is used.

Which would essentially place additional requirements on all mainstream social media platforms to be far less opaque about the AI machinery they use to shape and distribute content on their platforms at vast scale. Given that few — if any — could claim to have no under-18s using their platforms.

She also argues that companies targeting products at children have far more explaining to do, writing: 

Companies producing apps, toys and other products aimed at children should be more transparent about any trackers capturing information about children. In particular where a toy collects any video or audio generated by a child this should be made explicit in a prominent part of the packaging or its accompanying information. It should be clearly stated if any video or audio content is stored on the toy or elsewhere and whether or not it is transmitted over the internet. If it is transmitted, parents should also be told whether or not it will be encrypted during transmission or when stored, who might analyse or process it and for what purposes. Parents should ask if information is not given or unclear.

Another recommendation for companies is that terms and conditions should be written in a language children can understand.

(Albeit, as it stands, tech industry T&Cs can be hard enough for adults to scratch the surface of — let alone have enough hours in the day to actually read.)

A recent U.S. study of kids’ apps, covered by BuzzFeed News, highlighted that mobile games aimed at kids can be highly manipulative, describing instances of apps making their cartoon characters cry if a child does not click on an in-app purchase, for example.

A key and contrasting problem with data processing is that it’s so murky: applied in the background, any harms are far less immediately visible, because only the data processor truly knows what’s being done with people’s — and indeed children’s — information.

Yet concerns about exploitation of personal data are stepping up across the board, and essentially touch all sectors and segments of society now, even as risks where kids are concerned may look the most stark.

This summer the UK’s privacy watchdog called for an ethical pause on the use by political campaigns of online ad targeting tools, for example, citing a range of concerns that data practices have got ahead of what the public knows and would accept.

It also called for the government to come up with a Code of Practice for digital campaigning to ensure that long-standing democratic norms are not being undermined.

So the children’s commissioner’s appeal for a collective ‘stop and think’ where the use of data is concerned is just one of a growing number of raised voices policymakers are hearing.

One thing is clear: Calls to quantify what big data means for society — to ensure powerful data-mining technologies are being applied in ways that are ethical and fair for everyone — aren’t going anywhere.