All posts in “AI”

Former Facebook engineer picks up $15M for AI platform Spell

In 2016, Serkan Piantino packed up his desk at Facebook, hoping to move on to something new. The former Director of Engineering for Facebook AI Research had every intention of continuing to work on AI, but quickly ran into a huge issue.

Unless you’re under the umbrella of a big tech company like Facebook, it can be very difficult and incredibly expensive to get your hands on the hardware necessary to run machine learning experiments.

So he built Spell, which today received $15 million in Series A funding led by Eclipse Ventures and Two Sigma Ventures.

Spell is a collaborative platform that lets anyone run machine learning experiments. The company connects clients with the best, newest hardware hosted by Google, AWS and Microsoft Azure and gives them the software interface they need to run, collaborate, and build with AI.

“We spent decades getting to a laptop powerful enough to develop a mobile app or a website, but we’re struggling with things we develop in AI that we haven’t struggled with since the 70s,” said Piantino. “Before PCs existed, the computers filled the whole room at a university or NASA and people used terminals to log into a single mainframe. It’s why Unix was invented, and that’s kind of what AI needs right now.”

In a meeting with Piantino this week, TechCrunch got a peek at the product. First, Piantino pulled out his MacBook and opened up Terminal. He began to run his own code against MNIST, a database of handwritten digits commonly used to train and benchmark image-recognition models.

He started the program and then moved over to the Spell platform. While the original program was just getting started, Spell’s cloud computing platform had completed the test in under a minute.

The advantage here is obvious. Engineers who want to work on AI, either on their own or for a company, have a huge task in front of them. They essentially have to build their own computer, complete with the high-powered GPUs necessary to run their tests.

With Spell, the newest GPUs from NVIDIA and Google are available virtually, so anyone can run their tests on them.

Individual users can get on for free, specify the type of GPU they need to compute their experiment, and simply let it run. Corporate users, on the other hand, are able to view the runs taking place on Spell and compare experiments, allowing users to collaborate on their projects from within the platform.

Enterprise clients can set up their own cluster, and keep all of their programs private on the Spell platform, rather than running tests on the public cluster.

Spell also offers enterprise customers a ‘spell hyper’ command with built-in support for hyperparameter optimization. Folks can track their models and results and deploy them to Kubernetes/Kubeflow in a single click.
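To give a sense of what hyperparameter optimization does, here is a minimal random-search sketch in plain Python. This is purely illustrative of the general technique, not Spell's actual implementation; the `train` function is a stand-in for a real training run.

```python
import random

def train(learning_rate, batch_size):
    """Stand-in for a real training run: returns a mock validation
    loss that is lowest near lr=0.01, batch_size=64 (purely illustrative)."""
    return abs(learning_rate - 0.01) * 100 + abs(batch_size - 64) / 64

def random_search(n_trials, seed=0):
    """Try n_trials random hyperparameter combinations; keep the best."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        params = {
            "learning_rate": 10 ** rng.uniform(-4, -1),  # log-uniform draw
            "batch_size": rng.choice([16, 32, 64, 128]),
        }
        loss = train(**params)
        if best is None or loss < best[0]:
            best = (loss, params)
    return best

best_loss, best_params = random_search(50)
```

In a real sweep, each trial would be a full training run dispatched to its own GPU, which is exactly the kind of fan-out that benefits from on-demand cloud hardware.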

But, perhaps most importantly, Spell allows an organization to instantly transform their model into an API that can be used more broadly throughout the organization, or used directly within an app or website.
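Conceptually, "model as an API" just means wrapping a model's prediction function in an HTTP endpoint. The sketch below shows the idea using only Python's standard library; the toy linear `predict` function and the endpoint shape are hypothetical, not Spell's actual serving layer.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    """Stand-in model: a toy linear scorer (illustrative, not a real model)."""
    weights = [0.5, -0.25, 1.0]
    score = sum(w * x for w, x in zip(weights, features))
    return {"score": score, "label": "positive" if score > 0 else "negative"}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read a JSON body like {"features": [1, 0, 0]} and return a prediction.
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length))
        payload = json.dumps(predict(body["features"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

def serve(port=8080):
    """Run the prediction endpoint (blocks until interrupted)."""
    HTTPServer(("localhost", port), PredictHandler).serve_forever()
```

Once a model is behind an endpoint like this, any app or internal service can call it without knowing anything about GPUs or training frameworks.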

The implications here are huge. Small companies and startups looking to get into AI now have a much lower barrier to entry, whereas large traditional companies can build out their own proprietary machine learning algorithms for use within the organization without an outrageous upfront investment.

Individual users can get on the platform for free, while enterprise clients pay $99 per month for each host used over the course of that month. Piantino explains that Spell charges based on concurrent usage: if a customer has 10 concurrent runs going, the company considers that the ‘size’ of the Spell cluster and charges accordingly.

Piantino sees Spell’s model as the key to defensibility. Whereas many cloud platforms try to lock customers into their entire suite of products, Spell works with any language framework and lets users plug and play on the platforms of their choice, simply commodifying the hardware. In fact, Spell doesn’t even tell clients which cloud (Microsoft Azure, Google, or AWS) they’re running on.

So, on the one hand, the tests themselves run faster thanks to access to new hardware; on the other, because Spell is an agnostic platform, there is also a huge advantage in how quickly one can get set up and start working.

The company plans to use the funding to further grow the team and the product, and Piantino says he has his eye out for top-tier engineering talent as well as a designer.

Harrowing video shows what the future of work might look like

Afraid a machine will take over your job? Ever thought it might take over your boss’ job instead, turning you into a servant of an AI whose inner workings you cannot comprehend?

A new video short by designer and film-maker Keiichi Matsuda shows what that might look like — and how it might end. 

[embedded content]

In the video, titled “Merger,” Matsuda envisions a futuristic work environment that might feel alien to us now, but much of it is actually grounded in today’s reality. In an email, Matsuda told me the interface was built around principles he’d developed in his concept UX design work for commercial clients.

“It kind of works,” he wrote. “The script for the video was built around real advice I found in productivity blogs.”

The four-minute video asks an important question: When (if) artificial intelligence starts giving us tasks instead of the other way around, will we be able to cope with the demands? And, if not, how much will we have to change to keep up?

Matsuda’s work went viral in 2016, when he published a video called Hyper-Reality, imagining a near-unbearable, AR/VR-infested future. Even though it’s now nearly three years old, that video still looks incredibly fresh and hits home better than any movie I’ve seen — and that includes expensive Hollywood productions.

[embedded content]

Check out more of Matsuda’s visual work on Instagram and Facebook.


Computer vision startup AnyVision pulls in new funding from Lightspeed

While a few massive surveillance startups in China have raised funds on the back of computer vision advances, there has seemed to be less fervor outside that market. Tel Aviv-based AnyVision is aiming to leverage its computer vision chops in tracking people and objects to create some pretty clear utility for the enterprise world.

After announcing a $27 million Series A in mid-2018, the computer vision startup is bringing Lightspeed Venture Partners into the raise, closing out the round at $43 million.

“When you have a company with the technology AnyVision has, and the market need that I’m hearing from across industries, what you need to do is push the gas pedal and build an organization which can monetize and take on this opportunity to grow massively,” Lightspeed partner Raviraj Jain told TechCrunch.

Right now the 200-person company has its eyes on the security and identity markets as it aims to bring its computer vision technology into more industry-tailored solutions.

The company’s “Better Tomorrow” product delivers camera-agnostic surveillance insights from its object and human-tracking tech. “Sesame” is the company’s consumer-facing play for bringing mobile banking authentication to hundreds of millions of phones. The company is still looking to release a retail analytics platform to customers as well.

These people aren’t real. Can you tell?

The image above looks like a collage of photographs, but in fact, it’s been generated by an artificial intelligence. And as real as they may look, the people in the image aren’t actual humans. 

In a new paper (via The Verge), a group of Nvidia’s researchers explain how they’ve created these images by employing a type of AI, called generative adversarial network (GAN), in novel ways. And their results are truly mind-boggling. 
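For context, a GAN trains two networks against each other: a generator G that maps random noise z to images, and a discriminator D that tries to tell generated images from real ones. The standard objective from the original GAN formulation is the minimax game

```latex
\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\!\left[\log\left(1 - D(G(z))\right)\right]
```

so the generator improves precisely by learning to fool an ever-improving discriminator.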

The paper is titled “A Style-Based Generator Architecture for Generative Adversarial Networks” and is authored by Tero Karras, Samuli Laine and Timo Aila, all from Nvidia. In it, the researchers show how they’ve redesigned the GAN’s generator architecture with a new “style-based” approach.


“The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair),” the paper says. 

In layman’s terms, after being trained, the GAN produced images that are pretty much indistinguishable from photographs of real people, completely on its own. 

“Our generator thinks of an image as a collection of ‘styles,’ where each style controls the effects at a particular scale,” the researchers explain in a video accompanying the paper. These styles are attributes such as pose, hair, face shape, eyes and facial features. And researchers can play with these styles and get different results, as seen in the video, below.
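The idea of per-scale styles can be sketched in a few lines of toy Python. This is a conceptual illustration only, not Nvidia's StyleGAN: the `mapping_network` and `generator` below are hypothetical stand-ins showing how coarse styles could control global attributes (pose) while fine styles control detail, and how "style mixing" swaps styles between two latent codes at a chosen scale.

```python
SCALES = ["coarse", "middle", "fine"]

def mapping_network(latent):
    """Map one latent code to a separate style vector per scale (toy rule)."""
    return {scale: [v * (i + 1) for v in latent] for i, scale in enumerate(SCALES)}

def generator(styles):
    """'Render' an image summary: each scale contributes a different attribute."""
    return {
        "pose": sum(styles["coarse"]),        # global structure
        "face_shape": sum(styles["middle"]),  # mid-level features
        "detail": sum(styles["fine"]),        # fine texture
    }

def mix_styles(latent_a, latent_b, crossover="fine"):
    """Style mixing: take all styles from latent_a except the crossover
    scale, which comes from latent_b."""
    sa, sb = mapping_network(latent_a), mapping_network(latent_b)
    mixed = {s: (sb[s] if s == crossover else sa[s]) for s in SCALES}
    return generator(mixed)

# Pose and face shape come from the first latent; fine detail from the second.
img = mix_styles([1, 2], [9, 8])
```

In the real architecture the styles modulate convolutional layers at different resolutions, which is why swapping coarse styles changes pose and identity while swapping fine styles only changes texture-level detail like freckles and hair.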

[embedded content]

It’s not just people that the GAN can create in this way.

In the paper, the researchers use the GAN to create images of bedrooms, cars and cats. 

Image: Nvidia/Arxiv.org

Amazingly, the concept of GANs was introduced just four years ago by researchers from the University of Montreal. 

Check the image from that paper below to see how much progress has been made since then. 

Image: Université de Montréal/Arxiv.org

It’s easy to see this technology used in the creation of realistic-looking images for marketing or advertising purposes, for example. But it’s just as easy to imagine someone using it to create fake “evidence” of events that never happened in order to promote some agenda. 

At the speed this tech is progressing, it soon might be impossible to tell whether you’re looking at a real photograph or a computer-generated image.
