All posts in “Art”

Y Combinator-backed VIDA turns artwork into fashion, accessories and more


VIDA, an e-commerce startup that lets artists upload their designs to be printed on real-world materials – fabric, leather, metal and more – and sold as unique products, has grown its community to over 100,000 artists since launching a few years ago. The company is now participating in the startup accelerator Y Combinator, following recent collaborations with big names like Cher, Steve Madden and Warner Bros.

The idea for VIDA comes from founder Umaimah Mendhro, a Harvard Business School grad originally from Pakistan who previously worked at Microsoft and the San Francisco-based market accelerator West.

Mendhro had once wanted to be an artist, having taught herself to cut, sketch, sew, stitch, screen print, paint, and more. But she worried she couldn’t make a living from art alone, which eventually led her to take another path.

With VIDA, Mendhro merges her interests in art and technology by offering a platform where artists can submit their designs, which then become clothing through VIDA’s use of direct-to-fabric digital printing and, more recently, other methods to expand printing to harder materials.

With the digital printing technique, the process of transferring a design to fabric is quicker than traditional methods. This allows VIDA to print items on demand at scale, instead of holding inventory. It’s also now using 3D printing to design the molds for its jewelry collections, and plans to soon move into other areas, like 3D knitting and laser cutting.

Once the products are printed, VIDA creates a branded page for each artist, which they can promote however they see fit. Artists recoup 10 percent of net sales on their products, while VIDA handles everything beyond the design, including the manufacturing and sale of the items.

When it first launched, VIDA had only a couple of types of products available – silk tops and a few styles of scarves.

Today, the company has branched out into numerous areas: tops, bottoms, wraps, bags, scarves, pocket squares, jewelry, items for the home like pillows and tapestries, and more. It has also grown its community to over 100,000 artists and creatives from more than 150 countries. The site hosts over 2 million individual SKUs and is adding around 5,000 more daily.

VIDA isn’t sharing customer numbers or sales figures, but it worked with Cher this year on a collaboration with HSN, and with Warner Bros. on a collection of Wonder Woman-inspired items, also for HSN.

While VIDA’s larger vision is about making a platform where any idea can become a product, Mendhro says it also appeals to a new kind of consumer.

“We’re rejecting the standardized, mass-produced goods that have been dominating in the retail industry. We want something that’s unique, that tells a story, that has a part of us in there, and something that feels authentic and genuine,” she says.

Despite the custom-made nature of the products, many are surprisingly affordable. The custom bags, for example, run $40 to $50, less than the price of a new Nine West purse or comparable mass-market bags.

The company also appeals to the socially conscious shopper: it gives back to the people manufacturing its goods through initiatives such as literacy and women’s empowerment programs in Pakistan, India and Turkey.

The team of just over a dozen is based in San Francisco, and plans to raise additional funds following Y Combinator’s Demo Day to expand beyond fabrics and further scale the business.

The startup is backed by $5.5 million in funding from Google Ventures, Azure Capital, and Slow Ventures, following the $1.3 million seed round TechCrunch previously reported in 2014, when VIDA was at an earlier stage.

Artist ironically uses AI to make portraits of people with jobs likely displaced by AI

The Most Famous Artist is all about reverse-engineering art to find what works on social media. In his latest project, he’s using artificial intelligence to create likeable and sellable work that also comments on AI’s potential to kill jobs and industries.

Earlier this month the artist, known as Matty Mo, painted three soon-to-be-demolished houses in Los Angeles bright pink. They became a hit for photoshoots and selfies, or in Mo’s parlance, an “Instagram honeypot.” He has since moved on to his next project, which looks at AI and tech. The show kicked off Tuesday with a one-day gallery pop-up in downtown San Francisco.

Mo said his big, public stunts like the pink houses are what he considers “interrogations.” For that project he was looking into gentrification and community. With his latest project, “Artificial Intelligence: The End of Art As We Know It,” he’s starting a conversation about big data, robots, and AI in everyday life.

He worked with anonymous hackers to create large portraits of digitized and filtered faces of factory workers, art dealers, pilots, artists, taxi drivers — all professions he believes won’t exist once machines can do a better job.

To make the portraits, he built his own proprietary AI-assisted computer program that combines images with online filters to create stylized prints of everyday people and celebrities, like Tesla CEO Elon Musk, Facebook CEO Mark Zuckerberg, performer Kanye West, and reality show star Kim Kardashian West. Mo says all of these people will either be impacted by an AI takeover or are helping propel the technology.

For his gallery show he used only filters based on the artist Chuck Close, but his program can take in any style and photo (he looked for iconic images online) and create large pieces, which he tests on Instagram to see how many likes and purchase clicks they get.
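Mo’s program is proprietary and its internals aren’t public, but the general technique of rendering a photo in an artist’s style is well documented as neural style transfer. Below is a minimal sketch of that generic approach (in the spirit of Gatys et al., using PyTorch); the image file names are hypothetical, and none of this should be read as Mo’s actual pipeline.

```python
# Generic neural style transfer sketch: optimize an image so its VGG
# features match a content photo while its Gram matrices match a style
# reference. Illustrative only; NOT Mo's proprietary program.
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(pretrained=True).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

prep = transforms.Compose([transforms.Resize(512), transforms.ToTensor()])

def load(path):  # hypothetical file paths
    return prep(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

content, style = load("portrait.jpg"), load("style_reference.jpg")

STYLE_LAYERS = {0, 5, 10, 19, 28}  # conv1_1 .. conv5_1 in VGG-19
CONTENT_LAYER = 21                 # conv4_2

def extract(x):
    style_feats, content_feat = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style_feats.append(x)
        if i == CONTENT_LAYER:
            content_feat = x
    return style_feats, content_feat

def gram(f):  # channel-by-channel feature correlations = "style"
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

target_grams = [gram(f).detach() for f in extract(style)[0]]
target_content = extract(content)[1].detach()

img = content.clone().requires_grad_(True)  # start from the photo itself
opt = torch.optim.Adam([img], lr=0.02)
for _ in range(300):
    opt.zero_grad()
    sf, cf = extract(img)
    loss = F.mse_loss(cf, target_content)
    loss = loss + 1e5 * sum(F.mse_loss(gram(f), g) for f, g in zip(sf, target_grams))
    loss.backward()
    opt.step()
```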

Elon Musk with a Chuck Close-style filter.

At the gallery Tuesday afternoon, Mo said “great artists use the tools of our time to tell the story of our time.” He wanted to present the work in a traditional art space to show how something can be perceived as beautiful art without the viewer knowing that a robot or computer program made it. He believes knowing how it’s made can change its perception.

Pilots, factory workers, and taxi drivers are getting pushed out by AI.

After Tuesday’s showing, the work lives on online, where the portraits are going for about $500. His computer program is still being shaped and learning his preferences as he trains it to eventually create stylized prints that are optimized to do well on a platform like Instagram. 

As Mo said, “It’s AI assisting artists or artists assisting AI.”

Welcome to the future.

Watch your raw memories become mind-blowing abstract art

Imagine if you could turn your memories and emotions into compelling, abstract paintings. It’s basically every artist’s dream. 

A London-based creative technology studio, random quark, has found a way to visually represent emotions by scanning people’s brain activity to create awe-inspiring paintings. 

The company equips individuals with commercial EEG headsets in a dimly lit, noise-free room, then asks them to close their eyes and think of an emotionally charged memory, happy or sad.

As the device scans the brain’s electrical activity from left to right, it creates a dataset that offers a unique insight into the individual’s memory and mood at that moment.

But this deluge of information needs to be translated onto the canvas as a visually unique piece of art.

Bird swarms

Random quark’s Theodoros Papatheodorou and Tom Chambers adopted a technique that draws inspiration from generative art: a flocking algorithm modeled on the swarming of birds or the movement of schools of fish.

“Some rules of the swarm rely on stochastic/random decisions and therefore are unique,” Papatheodorou said. “At the same time, keeping some rules the same we managed to create a uniform visual output. Basically, all look like they were made by the same painter, on a different day.”

Flocking algorithms also make it possible to use the massive amount of data that comes out of the brain during the EEG test without reducing it to a few inputs.
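As a rough illustration of the kind of algorithm being described, here is a minimal boids-style flocking sketch in Python with NumPy. The rule weights, speed caps and jitter term are illustrative assumptions, not random quark’s actual parameters.

```python
# Minimal boids-style flocking: cohesion, alignment, separation, plus a
# random jitter (the stochastic element Papatheodorou mentions).
import numpy as np

rng = np.random.default_rng(0)
N = 1_000                          # agents (the artwork uses ~100,000, which
pos = rng.uniform(0, 1, (N, 2))    # would call for spatial partitioning)
vel = rng.normal(0, 0.01, (N, 2))

def step(pos, vel, jitter=0.002):
    cohesion = (pos.mean(axis=0) - pos) * 0.01   # steer toward flock centre
    alignment = (vel.mean(axis=0) - vel) * 0.05  # match the average heading
    # separation: push away from the single nearest neighbour
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    away = pos - pos[d.argmin(axis=1)]
    away /= np.linalg.norm(away, axis=1, keepdims=True) + 1e-9
    vel = vel + cohesion + alignment + 0.002 * away
    vel += rng.normal(0, jitter, vel.shape)      # stochastic/random decisions
    vel = np.clip(vel, -0.02, 0.02)              # cap the speed
    return (pos + vel) % 1.0, vel                # wrap around the unit canvas

for _ in range(500):
    pos, vel = step(pos, vel)
```

Keeping the rule weights fixed while letting only the jitter vary is what makes every run unique yet recognizably “by the same painter,” as Papatheodorou puts it.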

To determine what people are feeling, random quark relies on a theory called lateralization of emotion, which basically says that activity in the left side of the brain is associated with positive feelings, while increased activity on the right is linked to negative feelings.

“We measure the asymmetry between left/right hemispheres as well as the overall activation of the brain (alpha/beta/gamma waves) and we plot this data in a 2D valence-activation graph which is known as the Geneva Emotions wheel where all the human emotions are plotted,” Papatheodorou said. 
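A hedged sketch of that measurement: compute alpha-band power on left and right frontal channels, use the log asymmetry as a valence proxy, and use the ratio of fast (beta/gamma) to slow (alpha) power as an activation proxy. The sampling rate, band edges and channel choices below are assumptions, not random quark’s pipeline.

```python
# Frontal alpha asymmetry as a valence proxy; band-power ratio as arousal.
# All constants are illustrative assumptions.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed headset sampling rate, in Hz

def band_power(x, lo, hi):
    freqs, psd = welch(x, fs=FS, nperseg=FS * 2)
    return psd[(freqs >= lo) & (freqs < hi)].mean()

def valence_arousal(left, right):
    """left/right: 1-D EEG traces from left/right frontal electrodes."""
    alpha_l, alpha_r = band_power(left, 8, 13), band_power(right, 8, 13)
    # Alpha power is inversely related to cortical activity, so higher
    # right-alpha implies relatively more LEFT activity: positive valence
    # under the lateralization hypothesis.
    valence = np.log(alpha_r) - np.log(alpha_l)
    fast = (band_power(left, 13, 45) + band_power(right, 13, 45)) / 2
    arousal = np.log(fast / ((alpha_l + alpha_r) / 2))
    return valence, arousal  # one point on the 2-D valence-activation plane
```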

Raw emotions

For the purposes of the experiment, random quark filtered and reduced the emotions to seven major ones (joy, sadness, anger, love, disgust, fear, surprise) and measured only those, giving each a score.

The feelings are ranked by intensity, with a confidence level for each. The system then picks only the top two emotions and proportionally assigns a unique shade of colour to each particle in the system.
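That selection-and-colouring step could look something like the sketch below; the palette follows the common colour symbolism mentioned later in the piece, and the scores are made-up inputs.

```python
# Pick the top two emotions by score and split the particles between their
# colours in proportion, jittering each particle's shade so no two match.
import numpy as np

PALETTE = {  # common symbolic colours; an assumption, not the studio's palette
    "joy": (255, 215, 0), "sadness": (30, 90, 200), "anger": (200, 30, 30),
    "love": (230, 80, 150), "disgust": (90, 160, 60), "fear": (90, 40, 120),
    "surprise": (255, 140, 0),
}

def particle_colours(scores, n_particles, rng):
    top2 = sorted(scores, key=scores.get, reverse=True)[:2]
    w = np.array([scores[e] for e in top2], dtype=float)
    w /= w.sum()                                  # proportional split
    pick = rng.choice(2, size=n_particles, p=w)   # assign each particle
    base = np.array([PALETTE[e] for e in top2], dtype=float)[pick]
    shade = base + rng.normal(0, 12, base.shape)  # unique shade per particle
    return np.clip(shade, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
cols = particle_colours({"joy": 0.7, "surprise": 0.4, "fear": 0.1}, 100_000, rng)
```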

“Around 100,000 agents are released on the canvas and their movements are guided by a swarm algorithm that we have written which is partly affected by the raw EEG data,” Chambers said. 

“One particle leaves a random trail, but when you have 100, 1000, 10,000, 100,000 particles you start to have a painting that looks like brush strokes, because these particles kind of coalesce together, they break apart and they interact with each other to make the painting.”
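The coalescing effect Chambers describes can be approximated by having every particle deposit a small amount of paint into an image buffer at each step; where many trails overlap, the density builds up into stroke-like marks. A sketch, reusing the positions and colours from the earlier snippets:

```python
# Accumulate particle trails into an image buffer; overlapping trails
# build up into brush-stroke-like marks.
import numpy as np

H = W = 512
canvas = np.zeros((H, W, 3), dtype=np.float32)

def deposit(canvas, pos, colours, opacity=0.05):
    """pos: (N, 2) in [0, 1); colours: (N, 3) uint8."""
    xs = (pos[:, 0] * W).astype(int) % W
    ys = (pos[:, 1] * H).astype(int) % H
    # np.add.at handles particles that land on the same pixel correctly
    np.add.at(canvas, (ys, xs), colours.astype(np.float32) * opacity)
    return np.clip(canvas, 0, 255)

# e.g. inside the simulation loop:
#   pos, vel = step(pos, vel)
#   canvas = deposit(canvas, pos, cols)
```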

True colours 

How the swarm travels across the canvas is directly linked to the raw brainwaves as they are read, in effect shaping the patterns on the “paper,” Papatheodorou says.

The colour associations are arbitrary, although based on common symbols for emotions (red for anger, blue for sadness and so on), and the authors stress the artistic, not scientific, nature of their work.

The project started when Saatchi Wellness, a creative agency, asked random quark to find a way to represent emotions in an intuitive and accessible way. 

The paintings have been exhibited in a gallery in London. Papatheodorou and Chambers also applied their emotion-reading techniques to the Saatchi Wellness website, where the duo scans the “twitter-sphere” and extracts the emotional state of the world using machine learning.

“We process random tweets in real-time and we measure their emotional state using IBM’s Watson to measure basic feelings. We then use these parameters about how the world feels to guide the swarm you see on their website.”
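A hedged sketch of that website pipeline, with a placeholder standing in for the IBM Watson call (the real service’s interface isn’t reproduced here) and an assumed mapping from aggregate mood to swarm parameters:

```python
# Score incoming tweets for emotion, keep a rolling average, and map the
# aggregate mood onto swarm parameters. `score_emotions` is a hypothetical
# stand-in for the Watson call the team describes.
from collections import deque

recent = deque(maxlen=500)  # rolling window of per-tweet emotion scores

def score_emotions(text: str) -> dict:
    """Stand-in: would return e.g. {'joy': 0.8, 'sadness': 0.1, 'anger': 0.0}."""
    raise NotImplementedError

def on_tweet(text: str) -> dict:
    recent.append(score_emotions(text))
    avg = {k: sum(s.get(k, 0.0) for s in recent) / len(recent)
           for k in ("joy", "sadness", "anger", "fear")}
    # Assumed mapping: fearful mood adds jitter, angry mood adds speed,
    # joyful mood tightens the flock.
    return {
        "jitter": 0.001 + 0.01 * avg["fear"],
        "max_speed": 0.01 + 0.03 * avg["anger"],
        "cohesion_gain": 0.005 + 0.02 * avg["joy"],
    }
```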

Possible applications

Random quark sees the brainwave paintings as an experiment in human-computer interaction. “What we see as the future of computing is a way of giving computers context so they can make better decisions. When you talk to Alexa or Google Home, if they can understand how you’re feeling, they can engage with you in a more intelligent way,” Chambers said.

In the future, computers will be able to understand more context, like emotions, which will let them become more adaptive, with applications such as adjusting your home environment to suit your mood when you come home from work. It could also make storytelling more interactive, letting a story adapt to your reactions as it is told.

“Rather than writing a book or a song, artists in the future will likely write programs that generate them uniquely for each read or listen based on all the cues it takes in,” Papatheodorou said.

Wow!

WATCH: Memories are colour and brain waves are brush strokes in these mesmerizing paintings

Look into these AI-generated people’s eyes and let the nightmares wash over you

When technology is used to make art, sometimes it produces beautiful results. And other times things get a little … strange.

A new video going around Twitter shows an eerie girl moving her eyes as a cursor moves around on the screen. The thing is … it’s not a real person in the video. 

These beautiful and unsettling videos (okay, they’re more than a little creepy) were made by artist and researcher Branislav Ulicny, who created this AI-generated art using neural networks combined with two other existing technology-based art projects.

“Virtual humans are kinda my obsession, so whenever I stumble upon some interesting data, I try to see what I can make out of it,” Ulicny said in an email. He was inspired by projects like Pickle Cat to work on a similar interactive experience. 

Image: Michael Tyka

To create these new, unsettling videos, Ulicny used the work of Michael Tyka, an artist who works with neural networks and created a series of AI-generated portraits, as the base portraits.

“It uses a technique called ‘generative adversarial networks’ (GANs), where two artificial neural networks play an adversarial game: one (the ‘Generator’) tries to generate increasingly convincing output, while the second (the ‘Critic’) tries to learn to distinguish real photos from generated ones. With time, the generated output becomes increasingly realistic, as both adversaries try to outwit each other,” Tyka explained via email.

In other words: the two networks compete, each forcing the other to improve, until the generator produces the most realistic images possible.
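Tyka’s description maps directly onto a small training loop. Below is a minimal, hedged sketch of a GAN in PyTorch on toy 2-D data; the tiny networks, toy distribution and hyperparameters are illustrative assumptions and bear no resemblance to the large portrait models Tyka actually trains.

```python
# Minimal GAN loop: the Generator learns to fool the Critic, the Critic
# learns to tell real samples from generated ones. Toy data only.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))  # Generator
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # Critic
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=128):
    # "real" data: points on a circle (a stand-in for real photos)
    theta = torch.rand(n) * 6.2832
    return torch.stack([theta.cos(), theta.sin()], dim=1)

for _ in range(5_000):
    # Critic step: label real samples 1, generated samples 0
    real, fake = real_batch(), G(torch.randn(128, 16)).detach()
    loss_d = bce(D(real), torch.ones(128, 1)) + bce(D(fake), torch.zeros(128, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator step: try to make the Critic call its output "real"
    loss_g = bce(D(G(torch.randn(128, 16))), torch.ones(128, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```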

“Using machine learning as an artistic tool is a fascinating and nascent field with many opportunities for experimentation,” says Tyka.

Ulicny then combined the portraits with Yaroslav Ganin’s DeepWarp, a project that takes images and produces “gaze manipulation,” or eye movement. Here is an example of DeepWarp in action on a photo of Chris Pine:

Image: DeepWarp

Put together, the two result in an unsettling mix of art and terror:

Image: Branislav Ulicny

The gaze follows the movement of your mouse on desktop or touch on mobile and is creepily accurate. You can try it out for yourself here.
