All posts in “machine learning”

Amazon’s next conquest will be apparel

Late last year, after Amazon announced it had acquired the rights to J.R.R. Tolkien’s epic “Lord of the Rings” saga for $250 million, I wrote about how the move underscored Amazon’s relentless drive to build one platform to “rule them all.” Now that Amazon is investing half a billion dollars in developing a Middle-earth show – making it the most expensive TV series ever made – it won’t be a surprise to see Jeff Bezos front and center at the Emmys soon.

But Hollywood isn’t the only industry Amazon wants to upend. Based on the company’s great ambitions in apparel, it may not be long before we also see Bezos at New York Fashion Week next to Anna Wintour.

The 800-Pound Gorilla in the Fashion World

As traditional retail continues to recede, direct-to-consumer (D2C) fashion brands continue to emerge. I’ve previously shared how Stitch Fix, Warby Parker, Everlane and Allbirds are just a few innovative companies proving the success of this model. As the master of D2C commerce, Amazon has been fine-tuning its fashion operation for over 15 years.

Amazon originally got into apparel all the way back in 2002, and acquired online shoe retailer Zappos for $1.2 billion in 2009 – at the time the largest purchase in its history. But the company’s quest to dominate fashion has faced several historical obstacles, chief among them that shoppers, wanting to try items on first, haven’t trusted buying apparel online, and that Amazon was not perceived as a “cool” brand.

Headwinds are now tailwinds. Online shopping for apparel has taken off: it is now the consumer category with the highest online penetration, and the majority of women have shopped for clothing online. E-commerce accounts for nearly twice as large a share of total clothing sales as it does of retail more broadly (17 percent vs. 10 percent). Amazon, meanwhile, has honed its apparel strategy, providing free returns, better photography and greater selection. Today, the company is the largest apparel retailer by gross merchandise volume. Mission accomplished? Not quite.

Building A Private-Label ‘Fashion House’

An actual Amazon fashion shoot

Bonobos CEO Andy Dunn once said, “Selling a bunch of other people’s stuff is a low margin game that requires a lot of capital and, ultimately, it’s hard to beat Jeff Bezos at that.” This is true, but when it comes to apparel, Bezos has greater ambitions than selling other people’s stuff. Currently, though, that’s mostly what Amazon does.

According to analysis from Coresight Research, nearly 14 percent of listings on the U.S. Amazon Fashion site are from Amazon itself, while third-party sellers account for the remaining 86 percent. Amazon is highly incentivized to increase its share of that pie. Apparel is a highly profitable category for the company, with 40 percent peak gross margins in the last 10 years. Additionally, Prime members heavily overindex for buying apparel on Amazon – nearly two-thirds have done so in the past year.

As it ramps up its private-label offerings, Amazon is clearly keen to move beyond selling the apparel equivalent of batteries and diapers through its Amazon Essentials brand. It started selling thigh-high velvet boots in September, and Coresight’s analysis indicates that the company is focusing on higher-value categories.

If its recent Lord of the Rings rights acquisition was an attempt to further capture young affluent consumers’ eyeballs, and Whole Foods an attempt to lock down their stomachs, it follows that Amazon would want to ensnare their wardrobes as well. Acquiring a hot digitally native vertical brand – or brands – would be a speedy way to accomplish that. Walmart has already pursued this strategy by buying Bonobos, Modcloth and others; Amazon could take a similar path and seek to bring buzzy brands like Everlane into the everything store. This could also go a long way in helping Amazon shed its “uncool” label.

Becoming A Fashion (Power)House

The Echo Look is just one sign Amazon is serious about dominating fashion

Last year, Amazon introduced a number of innovations designed to turbocharge its apparel business and make the online shopping experience as frictionless as possible. It launched Prime Wardrobe, a Stitch Fix-style service that allows you to try on three or more items at home, then send back the ones you don’t want for free in a resealable box with a prepaid label.

It also debuted the Echo Look, a new Alexa-powered device that the company dubs a “hands-free camera and style assistant.” The addition of a camera enables the device to record and comment on its owner’s clothing choices, using a combination of machine learning and human stylist feedback. That advice also takes the form of recommendations, which can drive revenue to Amazon Fashion – and specifically to its private-label brands.

Amazon is iterating on and rolling out more features for the Echo Look, including curated content and even crowdsourced (human!) style feedback. It also created an AI algorithm for designing clothes and patented an AR mirror that lets you virtually try on clothes. The value of such a mirror was validated recently by L’Oreal’s acquisition of ModiFace, a company that produces technology that powers similar applications in beauty AR.

Analyzing all these moves together, Amazon’s apparel strategy begins to crystallize. First it sells tons of clothes to learn how clothes are sold. Then it starts selling its own clothes to capture higher gross margins. Now it has Prime Wardrobe to increase lock-in and reduce the points at which customers can choose not to buy Amazon’s own clothing (all while gathering more data about individual preferences), and the Echo Look to serve as its data collection and voice-commerce portal (as an added bonus, it can route ambiguous purchase requests to its growing inventory of private-label items). If this strategy succeeds, it will give Amazon an enormous data moat to drive high-margin sales – a competitive advantage that will be extremely difficult for fashion retailers and brands to replicate.

Bezos doesn’t even need to ask.

Amazon has become increasingly dominant in several increasingly important arenas: cloud services, voice assistants, self-service brick-and-mortar stores with Amazon Go, and of course its now-traditional role as the online everything store. The company is poised to add apparel to this growing list as it changes the way people shop for clothing (again) and entices more of its customers to buy Amazon’s own threads. And it bears mentioning that Amazon Fashion will get a helpful hand from Amazon Studios as well. Bezos once shared: “When we win a Golden Globe, it helps us sell more shoes.” If he has his way, Amazon will be doing a lot more of both in the coming years.

Spectral Edge’s image enhancing tech pulls in $5.3M

Cambridge, U.K.-based startup Spectral Edge has closed a $5.3M Series A funding round from existing investors Parkwalk Advisors and IQ Capital.

The team, which spun the business out of academic research at the University of East Anglia in 2014, has developed a mathematical technique, augmented with machine learning, for improving photographic imagery in real time.

As we’ve reported previously, their technology — which can be embedded in software or in silicon — is designed to enhance pictures and videos on mass-market devices. Mooted use cases include enhancing low-light smartphone images, improving security camera footage and even processing drone camera feeds.

This month Spectral Edge announced its first customer, IT services provider NTT Data, which said it would be incorporating the technology into its broadcast infrastructure offering — to give its customers an “HDR-like experience”, via improved image quality, without the need for them to upgrade their hardware.

“We are in advanced trials with a number of global tech companies — household names — and hope to be able to announce more deals later this year,” CEO Rhodri Thomas tells us, adding that he expects 2-3 more deals in the broadcast space to follow “soon”, and enhance viewing experiences “in a variety of ways”.

On the smartphone front, Thomas says the company is waiting for consumer hardware to catch up — noting that RGB-IR sensors “haven’t yet begun to deploy on smartphones on a great scale”.

Once the smartphone hardware is there he reckons its technology will be able to help with various issues such as white balancing and bokeh processing.

“Right now there is no real solution for white balancing across the whole image [on smartphones] — so you’ll get areas of the image with excessive blues or yellows, perhaps, because the balance is out — but our tech allows this to be solved elegantly and with great results,” he suggests. “We also can support bokeh processing by eliminating artifacts that are common in these images.”
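Thomas doesn’t detail how Spectral Edge’s correction works, but the failure mode he describes — a frame where some regions come out too blue or too yellow — is easy to see with a deliberately simple global baseline like the classic gray-world method, sketched below. The function name and the sample values are illustrative, not Spectral Edge’s; a global scale like this is exactly what can’t fix mixed lighting across one image.

```python
import numpy as np

def gray_world_white_balance(image):
    """Gray-world white balance: scale each channel so its mean
    matches the overall mean. Being a single global correction,
    it cannot handle mixed lighting within one frame."""
    image = image.astype(np.float64)
    channel_means = image.reshape(-1, 3).mean(axis=0)
    gray = channel_means.mean()
    balanced = image * (gray / channel_means)
    return np.clip(np.rint(balanced), 0, 255).astype(np.uint8)

# A uniformly bluish image: the blue channel runs hot.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[..., 0] = 80   # R
img[..., 1] = 100  # G
img[..., 2] = 180  # B

out = gray_world_white_balance(img)
print(out[0, 0])  # [120 120 120] – all channels pulled to the common mean
```

On a uniform cast this works; on a frame lit half by daylight and half by tungsten, one global scale necessarily leaves one region off-balance, which is the per-region problem Thomas says their technique solves.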

The new funding is going towards ramping up Spectral Edge’s efforts to commercialize its tech, including by growing the R&D team to 12 — with hires planned for specialists in image processing, machine learning and embedded software development.

The startup will also focus on developing real-world applications for smartphones, webcams and security systems alongside its existing products for the TV and display industries.

“The company is already very IP strong, with 10 patent families in the world (some granted, some filed and a couple about to be filed),” says Thomas. “The focus now is productizing and commercializing.”

“In a year, I expect our technology to be launched or launching on major flagship [smartphone] devices,” he adds. “We also believe that by then our CVD (color vision deficiency) product, Eyeteq, is helping millions of people suffering from color blindness to enjoy significantly better video experiences.”

Arm chips with Nvidia AI could change the Internet of Things

Nvidia and Arm today announced a partnership aimed at making it easier for chip makers to incorporate deep learning capabilities into next-generation consumer gadgets, mobile devices and Internet of Things objects. Thanks to this partnership, artificial intelligence could soon come to devices like doorbell cams and smart speakers.

Arm intends to integrate Nvidia’s open-source Deep Learning Accelerator (NVDLA) architecture into its just-announced Project Trillium platform. Nvidia says this should help IoT chip makers incorporate AI into their products.

“Accelerating AI at the edge is critical in enabling Arm’s vision of connecting a trillion IoT devices,” said Rene Haas, EVP and president of the IP Group at Arm. “Today we are one step closer to that vision by incorporating NVDLA into the Arm Project Trillium platform, as our entire ecosystem will immediately benefit from the expertise and capabilities our two companies bring in AI and IoT.”

Announced last month, Arm’s Project Trillium is a series of scalable processors designed for machine learning and neural networks. NVDLA’s open-source nature allows Arm to offer a suite of developer tools on its new platform. Together, the two companies believe, Arm’s scalable chip platforms and Nvidia’s developer tools offer a solution that could result in billions of IoT, mobile and consumer electronics devices gaining access to deep learning.

Deepu Talla, VP and GM of Autonomous Machines at Nvidia, explained it best with this analogy: “NVDLA is like providing all the ingredients for somebody to make a dish, including the instructions. With Arm [this partnership] is basically like a microwave dish.”

3D printed rocket maker Relativity raises $35M to simplify satellite launches

LA-based space startup Relativity has raised $35 million in Series B funding, in a new round led by Playground Global, and including existing investors Social Capital, Y Combinator Continuity and Mark Cuban. The funding will help the startup expand its automated, 3D-printing process for manufacturing rockets quickly and with greatly reduced complexity, with the ultimate aim of making it easier and cheaper to send satellites into space.

Relativity’s goal is to introduce a highly automated rocket construction process that relies on nearly 100 percent 3D-printed rocket parts to create custom, mission-specific rockets that can launch payloads the size of small cars – much larger than those of some of its cubesat-targeting competitors. It boasts a process that has reduced rocket part count from around 100,000 to just 1,000, while also cutting labor and build time, using machine learning and even proprietary base materials to achieve these drastic reductions.

Basically, Relativity wants to play in the same ballpark as SpaceX for some prospective missions, and it’s getting closer to being able to do that. It has a 20-year test site partnership with NASA Stennis for use of its E4 Test Complex, which will allow the would-be launcher to develop and qualify as many as 36 complete rockets per year on the 25-acre site, with an option to grow its footprint to as many as 250 acres for launches.

Relativity’s 3D metal printer, aptly named ‘Stargate,’ is the largest of its kind in the world, and the company aims to be able to go from raw materials to a flight-ready vehicle in just 60 days. The process should save between two and four years per launch overall, a drastic improvement in the time from mission conception to execution for commercial clients.

Rainforest Connection enlists machine learning to listen for loggers and jaguars in the Amazon

The vastness that makes the Amazon rainforest so diverse and fertile also makes it extremely difficult to protect. Rainforest Connection is a project started back in 2014 that uses solar-powered second-hand phones as listening stations to alert authorities to the sounds of illegal logging. Applying machine learning has supercharged the network’s capabilities.

The original idea is still in play: modern smartphones are powerful and versatile tools, and work well as wireless sound detectors. But as founder Topher White explained in an interview, the approach is limited to what you can get the phones to detect.

Originally, he said, the phones just listened for certain harmonics indicating, for example, a chainsaw. But bringing machine learning into the mix wrings much more out of the audio stream.

“Now we’re talking about detecting species, gunshots, voices, things that are more subtle,” he said. “And these models can improve over time. We can go back into years of recordings to figure out what patterns we can pull out of this. We’re turning this into a big data problem.”

White said he realized early on that the phones couldn’t do that kind of calculation, though — even if their efficiency-focused CPUs could do it, the effort would probably drain the battery. So he began working with Google’s TensorFlow platform to perform the training and integration of new data in the cloud.
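Rainforest Connection hasn’t published its detectors, but the original harmonics approach White describes can be sketched with a plain FFT: find the strongest frequency in a clip and check whether it sits in an engine-like band. The 50–250 Hz band, threshold logic and function names below are illustrative guesses, not the project’s real parameters.

```python
import numpy as np

SAMPLE_RATE = 8000  # Hz; modest rate in keeping with low-bandwidth phone audio

def dominant_frequency(signal, sample_rate=SAMPLE_RATE):
    """Return the strongest frequency component (Hz) in a mono signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

def looks_like_chainsaw(signal, band=(50.0, 250.0)):
    """Crude harmonic check: a two-stroke engine idles with a strong
    fundamental roughly in this band. Purely illustrative."""
    f0 = dominant_frequency(signal)
    return bool(band[0] <= f0 <= band[1])

# Synthetic one-second clip: a 120 Hz "engine" tone buried in noise.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
rng = np.random.default_rng(0)
clip = np.sin(2 * np.pi * 120 * t) + 0.2 * rng.standard_normal(len(t))

print(looks_like_chainsaw(clip))  # True
```

A fixed-band heuristic like this is cheap enough for a phone’s CPU, which is the point: the subtler targets White mentions – species, gunshots, voices – don’t reduce to one dominant frequency, which is why the trained models run in the cloud instead.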

Google also helped produce a nice little documentary about one situation where Guardians could help native populations deter loggers and poachers.

That’s in the Amazon, obviously, but Rainforest Connection has also set up stations in Cameroon and Sumatra, with others on the way.

Machine learning models are particularly good at finding patterns in noisy data that make sense intuitively but defy easy identification through other means.

For instance, White said, “We should be able to detect animals that don’t make sounds. Jaguars might not always be vocalizing, but the animals around them are, birds and things.” The presence of a big cat then, might be easier to detect by listening for alarmed bird calls than for its near-silent movement through the forest.

The listening stations can be placed as far as 25 kilometers (about 15 miles) from the nearest cell tower. And since a device can detect chainsaws a kilometer away and some species half a kilometer away, it’s not like they need to be on every tree.

But, as you may know, the Amazon is a rather big forest, and White wants more people to get involved, especially students. He partnered with Google to launch a pilot program in which kids can build their own “Guardian,” as the augmented phone kits are called. When I talked with him, it was moments before one such workshop in LA.

Topher White and students at one of the Guardian building workshops.

“We’ve already done three schools and I think a couple hundred students, plus three more in about half an hour,” he told me. “And all these devices will be deployed in the Amazon over the next three weeks. On Earth Day they’ll be able to see them, and download the app to stream the sounds. It’s to show these kids that what they do can have an immediate effect.”

“An important part is making it inclusive, proving these things can be built by anyone in the world, and showing how anyone can access the data and do something cool with it. You don’t need to be a data scientist to do it,” he continued.

Getting more people involved is the key to the project, and to that end Rainforest Connection is working on a few new tricks. One is an app you’ll be able to download this summer “where people can put their phone on their windowsill and get alerts when there’s a species in the back yard.”

The other is a more public API; currently only partners like companies and researchers can access it, but with a little help, all the streams from the many online Guardians will be available for anyone to listen to, monitor and analyze. All of that, though, is contingent on funding.

“If we want to keep this program going, we need to find some funding,” White said. “We’re looking at grants and at corporate sponsorship — it’s a great way to get kids involved too, in both technology and ecology.”

Donations help, but partnerships with hardware makers and local businesses are more valuable. Want to join up? You can get at Rainforest Connection here.