All posts in “AI”

Lily raises $2M from NEA and others for a personal stylist service that considers feelings, not just fit


One of the reasons recently IPO’d Stitch Fix became so popular among female shoppers is because of how it pairs the convenience of home try-on for clothing and accessories with a personal styling service that adapts to your tastes over time. But often, personal stylists bring their own subjective takes on fashion to their customers. A new startup called Lily aims to offer a more personalized service that takes into account not just what’s on trend or what looks good, but also how women feel about their bodies and how the right clothing can impact those perceptions.

The company has now closed on $2 million in seed funding from NEA and other investors to further develop its technology, which today involves an iOS application, web app and API platform that retailers can integrate with their own catalogs and digital storefronts.

To better understand a woman’s personal preferences around fashion, Lily uses a combination of algorithms and machine learning techniques to recommend clothing that fits, flatters and makes a woman feel good.

At the start, Lily asks the user a few basic questions about body type and style preferences, but it also asks women how they perceive their bodies.

For example, when Lily asks about bra size, it doesn’t just ask for the size a woman wears, but also how she thinks of that part of her body.

“I’m well-endowed,” a woman might respond, even if she’s only a full B or smaller C – which is not necessarily the reality. This sort of response teaches Lily how the woman thinks of her body and its various parts, which informs its recommendations. That same woman may then say she wants to minimize her chest, or that she likes to show off her cleavage.

But as she shops Lily’s recommendations in this area, the service learns what sorts of items the woman actually chooses and then adapts accordingly.
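
To make that “adapts accordingly” idea concrete, here is one way a service could nudge a shopper’s preference profile toward the items she actually buys. This is a minimal sketch of generic online preference learning in Python; the attribute names, weights and learning rate are invented for illustration and are not taken from Lily’s actual system.

    # A minimal sketch of online preference learning, not Lily's algorithm.
    # A user's profile maps clothing attributes to preference weights; each
    # purchase nudges those weights toward the attributes of the chosen item.

    def update_preferences(prefs: dict, chosen_attrs: dict, lr: float = 0.1) -> dict:
        """Move each preference weight a small step toward the chosen item."""
        updated = dict(prefs)
        for attr, strength in chosen_attrs.items():
            old = updated.get(attr, 0.0)
            updated[attr] = old + lr * (strength - old)
        return updated

    # Example: a shopper who said she prefers high necklines keeps buying v-necks.
    prefs = {"high_neckline": 0.8, "v_neck": -0.2}
    prefs = update_preferences(prefs, {"v_neck": 1.0, "wrap_dress": 0.9})
    print(prefs)  # "v_neck" drifts upward: behavior outweighs the questionnaire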

This focus on understanding women’s feelings about clothing is something that sets Lily apart.

“Women are looking for clothes to spotlight the parts of their body they feel most comfortable with and hide the ones that make them feel insecure,” explains Lily co-founder and CEO, Purva Gupta. “A customer makes a decision based on whether a specific cut will hide her belly or downplay a feature they don’t like. Yet stores do nothing to guide women toward these preferences or take the time to understand the reasons behind their selections,” she says.

Gupta came up with the idea for Lily after moving to New York from India, where she felt overwhelmed by the foreign shopping culture. She was surrounded by so much choice, but didn’t know how to find the clothing that would fit her well, or those items that would make her feel good when wearing them.

She wondered if her intimidation was something American women – not just immigrants like herself – also felt. For a year, Gupta interviewed others, asking them one question: what prompted them to buy the last item of clothing they purchased, either online or offline? She learned that those choices were often prompted by emotions.

Being able to create a service that could match up the right clothing based on those feelings was a huge challenge, however.

“I knew that this was a very hard problem, and this was a technology problem,” says Gupta. “There’s only one way to solve this at scale – to use technology, especially artificial intelligence, deep learning and machine learning. That’s going to help me do this at scale at any store.”

To train Lily’s algorithms, the company spent two-and-a-half years building out its collection of more than 50 million data points and analyzing over a million product recommendations for users. The end result is that an individual item of clothing may have over 1,000 attributes assigned to it, which are then matched against the thousands of attributes associated with the user in question.

“This level of detail is not available anywhere,” notes Gupta.
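
Gupta isn’t sharing how that matching works under the hood, but the general shape of the problem – scoring catalog items whose attributes line up with a user’s weighted preferences – can be sketched in a few lines of Python. Again, the scoring function and data layout here are assumptions for illustration, building on the preference profile above, not a description of Lily’s system.

    # Hypothetical attribute-matching sketch: rank items by the weighted
    # overlap between each item's attributes and the user's preferences.

    def score(item_attrs: dict, prefs: dict) -> float:
        """Dot product of item attribute strengths and preference weights."""
        return sum(s * prefs.get(attr, 0.0) for attr, s in item_attrs.items())

    def recommend(catalog: list, prefs: dict, top_n: int = 10) -> list:
        """Top catalog items by score; a real system would be far richer."""
        ranked = sorted(catalog, key=lambda item: score(item["attrs"], prefs),
                        reverse=True)
        return ranked[:top_n]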

In Lily’s app, which works as something of a demo of the technology at hand, users can shop recommendations from 60 stores that range in price point from Forever 21 to Nordstrom. (Lily today makes affiliate revenue from sales.)

In addition, the company is now beginning to pilot its technology with a handful of retailers on their own sites – details it plans to announce in a few months’ time. This will allow shoppers to get unique, personalized recommendations online that could also be translated to the offline store, in the form of reserved items awaiting them when they arrive to shop in person.

Though it’s early days for Lily, its hypothesis is proving correct, says Gupta.

“We’ve seen between 10x to 20x conversion rates,” she claims. “That’s what’s very exciting and promising, and why these big retailers are talking to us.”

The pilot tests are paid, but the pricing details for Lily’s service for retailers are not yet set in stone, so the company declined to speak about them.

The startup was also co-founded by CTO Sowmiya Chocka Narayanan, previously of Box and Pocket Gems. It now has a team of 16 full-time employees in Palo Alto.

In addition to NEA, other backers include Global Founders Capital, Triplepoint Capital, Think + Ventures, Varsha Rao (ex-COO of Airbnb, COO of Clover Health), Geoff Donaker (ex-COO of Yelp), Jed Nachman (COO, Yelp), Unshackled Ventures and others.

Nvidia just released the most insanely powerful graphics processor ever made

Nvidia just announced the Titan V, the most powerful graphics processing unit (GPU) of all time. 

This isn’t the type of graphics card your gamer friends all bought on Black Friday, either. This is a graphics card powerful enough for use on research in artificial intelligence and machine learning. 

The GPU contains 21.1 billion (yes, billion) transistors and delivers 110 teraflops of horsepower, nine times that of its predecessor. For perspective, the Xbox One X delivers only six teraflops (roughly one-eighteenth of the Titan V’s figure), and that was really impressive when it was announced earlier this year.


Don’t get any ideas, though — the Titan V is $3,000. Good gaming GPUs are usually $500-$600. 

But it could mean huge things for research into and development of artificial intelligence. In October, Google’s AI AlphaGo Zero became the best Go player in the world, in what was widely considered to be a massive leap forward in deep learning, using $25 million worth of computing hardware. 

With such a powerful and efficient GPU freeing up resources for other innovations, it’s exciting (and terrifying) to imagine what the next big AI will be able to do. 


Amazon’s new cloud-based IDE could shake up the industry

More than a year after acquiring Cloud9, Amazon has launched its cloud-based integrated development environment (IDE), AWS Cloud9. The product is getting attention because it makes it easier than ever for a company of any size to access machine learning and optimize its software development.

Now a competitor to similar Microsoft and Google products, AWS Cloud9 improves collaboration among developers. The IDE offers real-time coding across different languages and remote locations. It comes with built-in support for JavaScript, Python and PHP, so developers won’t have to install language tooling before starting work.

Early buzz surrounds the impact it will have on expanding AI’s influence over software and, subsequently, app and website development. MIT Technology Review calls out its ability to connect directly with AWS’ cloud AI, giving developers the opportunity to build and think in ways not previously possible. 

With a pay-as-you-go model, AWS Cloud9 could be a game-changer for startups and ultimately change the nature of user interaction with web-based products. 

Human pilot beats AI in drone race


Anything you can do, AI can do better. Eventually.

On October 12, NASA put on its own demonstration, pitting an AI-piloted racing drone against world-renowned drone pilot Ken Loo.

Researchers at NASA’s Jet Propulsion Laboratory, who have spent the last two years working on drone autonomy in research funded by Google, built three custom drones equipped with cameras for vision and algorithms to help them fly at high speeds while still avoiding obstacles.

The drones, named Batman, Joker, and Nightwing, used algorithms that were integrated with Google’s Tango technology, which helps AI map out 3D spaces.

These drones could fly up to 80 mph in a straight line, but on this particularly cramped course they were only able to hit 40 mph.

In a press release, NASA explained the pros and cons of both the autonomous drones and the human pilot. While the AI-powered drones were able to fly more consistently, they were also more cautious and, at times, ran into problems with motion blur at higher speeds. On the other hand, Loo was able to learn the course after a few laps and fly with much more agility than the autonomous drones, but is susceptible to fatigue.

“This is definitely the densest track I’ve ever flown,” Loo said in the release. “One of my faults as a pilot is I get tired easily. When I get mentally fatigued, I start to get lost, even if I’ve flown the course 10 times.”


Long story short, the AI and the human started out with similar lap times, but Loo eventually won out and ended up with a faster average lap time than the AI.

The implications here are big: autonomous drones may eventually be used for surveillance, emergency response, or inventory in warehouses.

Putting the “AI” in ThAInksgiving

Your holiday dinner table is set. Your guests are ready to gab. And, then, in between bites, someone mentions Alexa and AI. “What’s this stuff I’m hearing about killer AI? Cars that decide who to run over? This is crazy!”

Welcome to Thanksgiving table talk circa 2017.

It’s true that AI and machine learning are changing the world, and in a few years they will be embedded in all of the technology in our lives.

So maybe it makes sense to help folks at home better understand machine learning. After all, without deep knowledge of current tech, autonomous vehicles seem dangerous, Skynet is coming, and the (spoiler warning!) AI-controlled human heat farms of The Matrix are a real possibility.

This stems from a conflation of the very real and exciting concept of machine learning and the very not real concept of “general artificial intelligence,” which is basically as far off today as it was when science fiction writers first explored the idea a hundred years ago.

That said, you may find yourself in a discussion on this topic during the holidays this year, either of your own volition or by accident. And you’ll want to be prepared to argue for AI, against it, or simply inject facts as you moderate the inevitably heated conversation.

But before you dive headlong into argument mode, it’s important that you both know what AI is (which, of course, you do!) and that you know how to explain it.

Starters

Might I recommend a brush-up with this post by our own Devin Coldewey, which does a good job of explaining what AI is, the difference between weak and strong AI, and the fundamental problems of trying to define it.

This post also provides an oft-used analogy for AI: The Chinese Room.

Picture a locked room. Inside the room sit many people at desks. At one end of the room, a slip of paper is put through a slot, covered in strange marks and symbols. The people in the room do what they’ve been trained to: divide that paper into pieces, and check boxes on slips of paper describing what they see — diagonal line at the top right, check box 2-B, cross shape at the bottom, check 17-Y, and so on. When they’re done, they pass their papers to the other side of the room. These people look at the checked boxes and, having been trained differently, make marks on a third sheet of paper: if box 2-B checked, make a horizontal line, if box 17-Y checked, a circle on the right. They all give their pieces to a final person who sticks them together and puts the final product through another slot.

The paper at one end was written in Chinese, and the paper at the other end is a perfect translation in English. Yet no one in the room speaks either language.

The analogy, created decades ago, has its shortcomings when you really get into it, but it’s actually a pretty accurate way of describing machine learning systems, which are composed of many, many tiny processes, each unaware of its significance in the greater system, that together accomplish complex tasks.
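
For the programmers at the table, the room translates to code fairly directly: a pipeline of stages, each applying a rote rule to symbols it doesn’t understand, with useful behavior emerging only from the composition. The tiny rule tables in this toy Python example are invented purely for illustration and bear no resemblance to a real translation model.

    # Toy "Chinese Room" pipeline: each stage applies a narrow rule to
    # symbols it does not understand; translation emerges from composition.
    # The rule tables are invented purely for illustration.

    MARKS_TO_BOXES = {"山": "2-B", "水": "17-Y"}           # stage 1 rules
    BOXES_TO_WORDS = {"2-B": "mountain", "17-Y": "water"}  # stage 2 rules

    def stage_one(symbols: str) -> list:
        """Check a box for each mark seen, by rote."""
        return [MARKS_TO_BOXES[s] for s in symbols if s in MARKS_TO_BOXES]

    def stage_two(boxes: list) -> str:
        """Turn checked boxes into output words, again by rote."""
        return " ".join(BOXES_TO_WORDS[b] for b in boxes)

    print(stage_two(stage_one("山水")))  # -> "mountain water"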

Within that frame of reference, AI seems rather benign. And when AI is given ultra-specific tasks, it is benign. And it’s already all around us right now.

Machine learning systems help identify the words you speak to Siri or Alexa, and help make the voice the assistant responds with sound more natural. An AI agent learns to recognize faces and objects, allowing your pictures to be categorized and friends tagged without any extra work on your part. Cities and companies use machine learning to dive deep into huge piles of data like energy usage in order to find patterns and streamline the systems.

But there are instances in which this could spin out of control. Imagine an AI that was tasked with efficiently manufacturing postcards. After a lot of trial and error, the AI would learn how to format a postcard, and what types of pictures work well on postcards. It might then learn the process for manufacturing these postcards, trying to eliminate inefficiencies and errors in the process. And then, set to perform this task to the best of its ability, it might try to understand how to increase production. It might even decide it needs to cut down more trees in order to create more paper. And since people tend to get in the way of tree-cutting, it might decide to eliminate people.

This is of course a classic slippery slope argument, with enough flaws to seem implausible. But because AI is often a black box — we put data in and data comes out, but we don’t know how the machine got from A to B — it’s hard to say what the long-term outcome of AI might be. Perhaps Skynet was originally started by Hallmark.

Entree

Here’s what we do know:

Right now, there is no “real” AI out there. But that doesn’t mean smart machines can’t help us in many circumstances.

On a practical level, consider self-driving cars. It’s not just about being able to read or watch TV during your commute; think about how much it would benefit the blind and disabled, reduce traffic and improve the efficiency of entire cities, and save millions of lives that would have been lost in accidents. The benefits are incalculable.

At the same time, think of those who work as drivers in one capacity or another: truckers, cabbies, bus drivers and others may soon be replaced by AIs, putting millions worldwide out of work permanently.