All posts in “Virtual Reality”

VR helps us remember

Researchers at the University of Maryland have found that people remember information better when it is presented in VR than on a two-dimensional personal computer, which suggests VR education could be an improvement over tablet- or desktop-based learning.

“This data is exciting in that it suggests that immersive environments could offer new pathways for improved outcomes in education and high-proficiency training,” said Amitabh Varshney, dean of the College of Computer, Mathematical, and Natural Sciences at UMD.

The study looked at recall in forty subjects who were comfortable with both computers and VR; the researchers found an 8.8 percent improvement in recall for the VR group.

To test the system they created a “memory palace” where they placed various images. This sort of “spatial mnemonic encoding” is a common memory trick that allows for better recall.

“Humans have always used visual-based methods to help them remember information, whether it’s cave drawings, clay tablets, printed text and images, or video,” said lead researcher Eric Krokos. “We wanted to see if virtual reality might be the next logical step in this progression.”

From the study:

Both groups received printouts of well-known faces–including Abraham Lincoln, the Dalai Lama, Arnold Schwarzenegger and Marilyn Monroe–and familiarized themselves with the images. Next, the researchers showed the participants the faces using the memory palace format with two imaginary locations: an interior room of an ornate palace and an external view of a medieval town. Both of the study groups navigated each memory palace for five minutes. Desktop participants used a mouse to change their viewpoint, while VR users turned their heads from side to side and looked up and down.

Next, Krokos asked the users to memorize the location of each of the faces shown. Half the faces were positioned in different locations within the interior setting–Oprah Winfrey appeared at the top of a grand staircase; Stephen Hawking was a few steps down, followed by Shrek. On the ground floor, Napoleon Bonaparte’s face sat above a majestic wooden table, while The Rev. Martin Luther King Jr. was positioned in the center of the room.

Similarly, for the medieval town setting, users viewed images that included Hillary Clinton’s face on the left side of a building, with Mickey Mouse and Batman placed at varying heights on nearby structures.

Then, the scene went blank, and after a two-minute break, each memory palace reappeared with numbered boxes where the faces had been. The research participants were then asked to recall which face had been in each location where a number was now displayed.

The key, say the researchers, was for participants to identify each face by its physical location and its relation to surrounding structures and faces–and also the location of the image relative to the user’s own body.

Desktop users could perform the feat, but VR users performed it better by a statistically significant margin, a fascinating twist on the traditional role of VR in education. The researchers believe that VR adds a layer of reality to the experience that lets the brain build a true “memory palace” in 3D space.

“Many of the participants said the immersive ‘presence’ while using VR allowed them to focus better. This was reflected in the research results: 40 percent of the participants scored at least 10 percent higher in recall ability using VR over the desktop display,” wrote the researchers.

“This leads to the possibility that a spatial virtual memory palace–experienced in an immersive virtual environment–could enhance learning and recall by leveraging a person’s overall sense of body position, movement and acceleration,” said researcher Catherine Plaisant.

How Facebook’s new 3D photos work

In May, Facebook teased a new feature called 3D photos, and it’s just what it sounds like. Beyond a short video and the name, though, little was said about it. Now the company’s computational photography team has published the research behind how the feature works and, having tried it myself, I can attest that the results are really quite compelling.

In case you missed the teaser, 3D photos will live in your news feed just like any other photos, except when you scroll by them, touch or click them, or tilt your phone, they respond as if the photo is actually a window into a tiny diorama, with corresponding changes in perspective. It works not just for ordinary pictures of people and dogs, but also for landscapes and panoramas.

It sounds a little hokey, and I’m about as skeptical as they come, but the effect won me over quite quickly. The illusion of depth is very convincing, and it does feel like a little magic window looking into a time and place rather than some 3D model — which, of course, it is. Here’s what it looks like in action:

I talked about the method of creating these little experiences with Johannes Kopf, a research scientist at Facebook’s Seattle office, where its Camera and computational photography departments are based. Kopf is co-author (with University College London’s Peter Hedman) of the paper describing the methods by which the depth-enhanced imagery is created; they will present it at SIGGRAPH in August.

Interestingly, the origin of 3D photos wasn’t an idea for how to enhance snapshots, but rather how to democratize the creation of VR content. It’s all synthetic, Kopf pointed out. And no casual Facebook user has the tools or inclination to build 3D models and populate a virtual space.

One exception to that is panoramic and 360 imagery, which is usually wide enough that it can be effectively explored via VR. But the experience is little better than looking at the picture printed on butcher paper floating a few feet away. Not exactly transformative. What’s lacking is any sense of depth — so Kopf decided to add it.

The first version I saw had users moving their ordinary cameras in a pattern capturing a whole scene; by careful analysis of parallax (essentially how objects at different distances shift different amounts when the camera moves) and phone motion, that scene could be reconstructed very nicely in 3D (complete with normal maps, if you know what those are).

But inferring depth data from a single camera’s rapid-fire images is a CPU-hungry process and, though effective in a way, also rather dated as a technique. Especially when many modern phones actually have two cameras, like a tiny pair of eyes. And it is dual-camera phones that will be able to create 3D photos (though there are plans to bring the feature downmarket).

By capturing images with both cameras at the same time, parallax differences can be observed even for objects in motion. And because the device is in the exact same position for both shots, the depth data is far less noisy, involving less number-crunching to get into usable shape.

Here’s how it works. The phone’s two cameras take a pair of images, and immediately the device does its own work to calculate a “depth map” from them: an image encoding the calculated distance of everything in the frame. Visualized, it looks something like a heat map of the scene, with color standing in for distance.
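Facebook hasn’t published the on-device code, but the underlying idea is standard stereo vision: measure how far each pixel shifts between the two lenses (the disparity), then convert that shift into distance. Here’s a minimal sketch of that idea using OpenCV’s block matcher; the file names, focal length and baseline below are made-up placeholders, and real phone pipelines are considerably more sophisticated.

```python
import cv2
import numpy as np

# Hypothetical inputs: rectified grayscale frames from the phone's two
# cameras, plus rough calibration values (all placeholder numbers).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
focal_px = 1500.0    # focal length in pixels (assumed)
baseline_m = 0.012   # distance between the two lenses in meters (assumed)

# Block matching estimates, for each pixel, how far it shifts between the
# two views -- the disparity. OpenCV returns fixed-point values scaled by
# 16, hence the division.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Depth is inversely proportional to disparity: depth = f * B / d.
# Pixels with no valid match are left at zero.
depth = np.zeros_like(disparity)
valid = disparity > 0
depth[valid] = focal_px * baseline_m / disparity[valid]
```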

Apple, Samsung, Huawei, Google — they all have their own methods for doing this baked into their phones, though so far it’s mainly been used to create artificial background blur.

The problem is that the depth map produced doesn’t have any kind of absolute scale: light yellow doesn’t always mean 10 feet, nor dark red 100 feet. An image taken a few feet to the left with a person in it might have yellow indicating 1 foot and red meaning 10. The scale is different for every photo, which means if you take more than one, let alone dozens or a hundred, there’s little consistent indication of how far away a given object actually is, and that makes stitching them together realistically a pain.

That’s the problem Kopf and Hedman and their colleagues took on. In their system, the user takes multiple images of their surroundings by moving their phone around; it captures an image (technically two images and a resulting depth map) every second and starts adding it to its collection.

In the background, an algorithm looks at both the depth maps and the tiny movements of the camera captured by the phone’s motion-detection systems. Then the depth maps are essentially massaged into the correct shape to line up with their neighbors. This part is impossible for me to explain because it’s the secret mathematical sauce that the researchers cooked up. If you’re curious and like Greek, the details are in the paper.

Not only does this create a smooth and accurate depth map across multiple exposures, but it does so really quickly: about a second per image, which is why the tool they created shoots at that rate, and why they call the paper “Instant 3D Photography.”
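Since the optimization itself is the part I can’t explain, here is only a toy illustration of the shape of the problem: each capture’s depth values are consistent within that capture but off by some unknown per-capture scale (and possibly offset), and wherever two captures overlap you can solve for those factors by least squares. The function name and the single-pair setup below are my own simplifications, not the paper’s method, which aligns many depth maps jointly and folds in the phone’s motion data.

```python
import numpy as np

def align_depth(depth_a, depth_b, overlap_mask):
    """Estimate scale s and offset t so that s * depth_b + t best matches
    depth_a over the pixels where the two captures overlap.

    Toy stand-in only: solves a single pair with ordinary least squares.
    """
    a = depth_a[overlap_mask].ravel()
    b = depth_b[overlap_mask].ravel()
    # Solve [b 1] @ [s, t]^T ~= a in the least-squares sense.
    design = np.stack([b, np.ones_like(b)], axis=1)
    (s, t), *_ = np.linalg.lstsq(design, a, rcond=None)
    return s, t

# Hypothetical usage: bring capture B into capture A's depth units.
# s, t = align_depth(depth_a, depth_b, overlap_mask)
# depth_b_in_a_units = s * depth_b + t
```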

Next the actual images are stitched together, the way a panorama normally would be. But by utilizing the new and improved depth map, this process can be expedited and reduced in difficulty by, they claim, around an order of magnitude.

Because different images captured depth differently, aligning them can be difficult, as the left and center examples show — many parts will be excluded or produce incorrect depth data. The one on the right is Facebook’s method.

Then the depth maps are turned into 3D meshes (a sort of two-dimensional model or shell) — think of it like a papier-mache version of the landscape. But then the mesh is examined for obvious edges, such as a railing in the foreground occluding the landscape in the background, and “torn” along these edges. This spaces out the various objects so they appear to be at their various depths, and move with changes in perspective as if they are.
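To get a feel for that papier-mache-plus-tearing step, here’s a simplified sketch of the general idea, not Facebook’s code: lift each pixel into 3D using its depth and assumed camera intrinsics, connect neighboring pixels into triangles, and skip any triangle that straddles a big depth jump. The skipped triangles are the “tears” along foreground edges like that railing. The intrinsics and the jump threshold are placeholders.

```python
import numpy as np

def depth_to_torn_mesh(depth, fx, fy, cx, cy, max_depth_jump=0.3):
    """Convert a depth map into a triangle mesh, tearing it wherever
    adjacent pixels differ in depth by more than max_depth_jump (meters)."""
    h, w = depth.shape

    # Back-project every pixel into camera space:
    #   X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth.
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    X = (us - cx) * depth / fx
    Y = (vs - cy) * depth / fy
    vertices = np.stack([X, Y, depth], axis=-1).reshape(-1, 3)

    triangles = []
    for v in range(h - 1):
        for u in range(w - 1):
            quad = depth[v:v + 2, u:u + 2]
            # A large depth jump inside this 2x2 block means a foreground
            # edge (e.g. a railing over a distant landscape): tear here.
            if quad.max() - quad.min() > max_depth_jump:
                continue
            i = v * w + u
            triangles.append((i, i + 1, i + w))          # upper-left triangle
            triangles.append((i + 1, i + w + 1, i + w))  # lower-right triangle
    return vertices, np.asarray(triangles)
```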

Although this effectively creates the diorama effect I described at first, you may have guessed that the foreground would appear to be little more than a paper cutout, since, if it were a person’s face captured from straight on, there would be no information about the sides or back of their head.

This is where the final step comes in: “hallucinating” the remainder of the image via a convolutional neural network. It’s a bit like content-aware fill, guessing at what goes where based on what’s nearby. If there’s hair, well, that hair probably continues along. And if it’s a skin tone, it probably continues too. So it convincingly recreates those textures along an estimation of how the object might be shaped, closing the gap so that when you change perspective slightly, it appears that you’re really looking “around” the object.
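The trained network itself isn’t something I can show you, but classical inpainting is a crude stand-in for the same job: fill the pixels the torn mesh exposes using whatever surrounds them, so hair continues as hair and skin as skin. Here’s a sketch using OpenCV’s built-in inpainting; the file names are placeholders, and the real system uses a learned model rather than this hand-crafted one.

```python
import cv2

# Placeholder inputs: the color photo, and a mask that is white (255)
# wherever the torn mesh exposed pixels the camera never saw.
photo = cv2.imread("photo.png")
hole_mask = cv2.imread("hole_mask.png", cv2.IMREAD_GRAYSCALE)

# Telea's method propagates colors and gradients inward from the hole's
# border -- a non-learned version of "guess what goes where from nearby."
filled = cv2.inpaint(photo, hole_mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
cv2.imwrite("filled.png", filled)
```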

The end result is an image that responds realistically to changes in perspective, making it viewable in VR or as a diorama-type 3D photo in the news feed.

In practice it doesn’t require anyone to do anything different, like download a plug-in or learn a new gesture. Scrolling past these photos changes the perspective slightly, alerting people to their presence, and from there all the interactions feel natural. It isn’t perfect — there are artifacts and weirdness in the stitched images if you look closely and of course mileage varies on the hallucinated content — but it is fun and engaging, which is much more important.

The plan is to roll the feature out mid-summer. For now the creation of 3D photos will be limited to devices with two cameras — that’s a limitation of the technique — but anyone will be able to view them.

But the paper does also address the possibility of single-camera creation by way of another convolutional neural network. The results, only briefly touched on, are not as good as the dual-camera systems, but still respectable and better and faster than some other methods currently in use. So those of us still living in the dark age of single cameras have something to hope for.

Jumping out of a virtual plane is perfect for those who can’t or won’t skydive IRL

I’ve never quite got round to throwing myself out of a plane. Somehow, I wake up every day and find something marginally less terrifying to do with my waking hours and here we are. Jumping out of a virtual plane while floating three feet from the ground and being held by someone standing up, however? That I can get behind.

Which is how I ended up at iFly’s new VR experience near Universal Studios in LA this week, dressed in a jump suit, strapped into a Samsung Gear, and floating over Hawaii while bemused tourists wandering the retail center outside the theme park gawped and ‘grammed.

iFly has wind tunnels across the U.S. (and indeed much of Europe) and says it’s flown 9 million people since 1998. The addition of VR headsets, though, is a very new thing, and for $20 more than the standard package price it’s an insanely addictive add-on.

You begin your virtual journey with the standard iFly experience, getting kitted up and briefed on the flying rules. Essentially it comes down to keeping your legs straight, your head facing forward and your mind chilled. Fall into the wind and let the force (and an instructor) do the rest. There are a few hand signals but that’s about it.

My first flight went … OK. I spent some of the time flailing and falling down to the grille, much to the amusement of the guy operating the wind machine, but by the second practice run I had it mostly down. At Universal CityWalk, the tunnel is in the middle of the lively shopping center. In terms of entertainment value for passersby, it’s between a branch of Margaritaville and a band playing covers of The Killers — literally and otherwise.

Gracefulness personified.

Image: iFly

Once I’d, ahem, mastered the art of free-falling, my instructor Joe strapped on the Gear. At first you can see through it to the real world, albeit with hardly any sense of distance or depth. Once you’re at the tunnel entrance it switches to virtual mode and you’re in the plane, watching someone count you down, and then you’re off. 

It’s pretty stunning. While the wind whipped up to 120-mph-plus and my body hit terminal velocity, I watched Hawaii’s scenery hurtle towards me, safe in the knowledge that sudden death was an unlikely ending.

The films do a great job of replicating the thrill of skydiving, with fellow divers performing tricks, clouds whizzing by to give a sense of speed and the all-important parachute opening above before you flop back out of the tunnel to safety. It goes by so fast you’ll want to line up for a second trip immediately.

The camera operator stayed pretty much static while filming the flights, so your body tends to mirror theirs, which avoids the usual motion sickness issues associated with VR. In fact, iFly’s Director of Product Development Mason Barrett insists no one has yet had an issue with queasiness. There are no inner ear issues either, as you’re not actually experiencing any pressure, although you do wear earplugs under the helmet.

iFly currently offers four destinations to virtually experience — Hawaii, Dubai, the Swiss Alps and Southern California — with more planned. The company is focusing on “locations on every sky diver’s bucket list,” Barrett says, with some ambitious plans for future films.

How does BASE jumping in a virtual wingsuit sound? Or barreling through a fantasy world, perhaps joining a Quidditch game with Harry Potter or flying parallel to Iron Man? Those are the kind of dreams iFly is hoping to realize if they can get a major studio on board.

Virtual skydiving has been a dream of the company’s since its inception two decades ago, but the technology has only recently caught up. In the past, the experience would have involved a white screen next to the tunnel, Barrett says, but consumer-grade tech offers a much more immersive experience. And it could be great for those who can’t fly IRL, whether due to fear or disability. Kids can take a virtual dive from 8, Barrett says, while you can’t legally leap from a plane until you’re 18.

You can fling yourself out of a virtual plane at any one of 28 places offering the iFly Virtual Reality experience across the country. 

There are dozens of sites offering iFly across the U.S.

Image: ifly

You just need to be over eight years old and weigh less than 260 pounds. Those aged between 8 and 12 can only do it once per day.

With Nickelodeon’s new VR experience, walk a mile in Spongebob’s shoes—er, pants

Playgrounds aren’t what they used to be. 

If your kid is getting bored of his school’s old, rusty jungle gym, he may want to check out SlimeZone, Nickelodeon’s virtual playground. 

It’s no secret that VR is, for the most part, an enthusiast’s world. Oculus and HTC Vive continue to release high-quality headsets, but they’re expensive, and the games aren’t compelling enough for the general public to catch on. 

But SlimeZone has, perhaps wisely, taken a different approach. Nobody buys SlimeZone. Instead, it’s a set of HTC Vive headsets in the lobbies of IMAX theaters in Los Angeles, New York, and Toronto. Kids (or adults) pay $15 for thirty minutes of play.

The Setup

Image: monica chin/mashable

Released in March 2018, SlimeZone is a collaboration between Nickelodeon and IMAX. 

It’s set up, along with a number of other VR experiences, in the lobbies of IMAX theaters in Los Angeles, New York City, and Toronto, and it will soon roll out in Shanghai, Bangkok, and Manchester, according to Nickelodeon. 

I entered SlimeZone in the lobby of the Kips Bay AMC. After a brief tour of the center, I put on my Vive headset and a harness (to keep me from walking into the wall, which I absolutely would have done otherwise), took my controllers, and entered an adorably bright, loud, colorful Nickelodeon world. 

Image: nickelodeon

Nickelodeon is very adamant that SlimeZone is not a game. “It’s an opportunity to connect kids to our brand,” Nickelodeon SVP of Entertainment Lab Chris Young told me. “It’s another chance to connect with our audience outside of this linear channel.”

The Play

While Mr. Young may not have intended for SlimeZone to be a game, that’s certainly what it feels like. 

Users choose a Nickelodeon character to play. After selecting some variety of Teenage Mutant Ninja Turtle, I found myself in a large, colorful arena, holding a squirt gun. 

The first thing I saw when I appeared in SlimeZone was a massive inflatable Spongebob looming over me. Startled, I shot it immediately. Slime erupted from my gun, knocking Spongebob over. He reset himself soon after, but a number in the sky indicated that my ambush had earned me points of some sort. 

Image: nickelodeon

You move around by selecting an area ahead of you and teleporting there with a click of the controller. You can make yourself much bigger or much smaller, changing the sizes of the various characters and other props around you in turn. 

The arena is large, full of nooks and crannies, and various items litter the floor. One room contained a basketball and hoop, which I dribbled aimlessly and tried (and failed) to dunk. Another was full of small tubes of paint, which you can use to create art if you’re so inclined. 

Random objects were scattered about, including balls, fish, and pencils, which could be picked up and put down at will, but it was unclear what I was supposed to do with them. Would they get me points? Did I want points?

Fun, but what’s the point? 

A Teenage Mutant Ninja Turtle shoots…Hey Arnold? I think? It’s been a while.

Image: nickelodeon

As I barreled through SlimeZone, shooting down my inflatable nemeses, I noticed that I continued to accumulate points, but the scoring seemed somewhat random. Hitting a smaller target didn’t correlate with a higher point return, and I was never sure how exactly to get myself higher on the scoreboard. 

Neither, it appears, are SlimeZone’s developers.

“It’s more of a sandbox,” Young told me, emphasizing that it’s not supposed to have an objective. “It doesn’t really take a level of skill.”

Fair enough. At the same time, there’s an aimlessness to SlimeZone play, to the point where I felt like I was doing a lot of wandering and not a lot of anything exciting. That might be okay on the school swing set, but I’d expect more stimulation from a $15 playground. 

Image: nickelodeon

At the end of the day, SlimeZone was a cute experience. But I’m still not quite sure what kids are supposed to do.

Young says it’s up to the players. “You could pick up a paint tube and start to play your own game, or draw a heart in one of the very far corners of the space,” he told me. “Some people use bananas as shields when other people are sliming at them. Other people start throwing bananas. Here are a bunch of objects. Do what you want.” 

Image: nickelodeon

Again, fair enough. But at that point, I wonder what’s unique. Painting, dribbling basketballs, and shooting squirt guns are all things you can do for free at home — so why pay $15 to do it for 15 minutes in VR? 

But more importantly, the beauty of a digital, interactive medium seems to me to lie, at least in part, in organization. 

What games, from League of Legends to Fortnite to Final Fantasy, have in common is that they guide your action towards an objective. Yes, that eliminates some freedom. But it also ensures that your kids are getting their money’s worth out of their experience, seeing and doing the best of what developers intended, and emerging from the experience feeling some sense of accomplishment. With young kids, who could easily spend all 30 minutes trying to figure out how to use the squirt gun, or wandering aimlessly around the main hall, this could be a real concern. 

I love SlimeZone. But it would need a bit more structure before I’d pay $15 for my kid to play. 
