A PhD candidate in Computer Science at Columbia Engineering, Brian A. Smith, created a new system for blind gamers who want to get a little racing in. The system, called the racing auditory display, or RAD, lets the visually impaired play racing games without “seeing” the screen. Instead, the audio output tells the player when they’re getting close to an edge and can even enable them to cut corners in tight turns.
“The RAD is the first system to make it possible for people who are blind to play a ‘real’ 3D racing game–with full 3D graphics, realistic vehicle physics, complex racetracks, and a standard PlayStation 4 controller,” said Smith, who worked on the project with Shree Nayar, T.C. Chang Professor of Computer Science. “It’s not a dumbed-down version of a racing game tailored specifically to people who are blind.”
The audio changes as players approach turns and tells them where they are on the road. Notably, the RAD allows blind players to play as well as, or better than, sighted players. The system is universal, so game makers can embed it into upcoming racing titles. The research paper is available here.
“The RAD’s sound slider and turn indicator system work together to help players know the car’s current speed; align the car with the track’s heading; learn the track’s layout; profile the direction, sharpness, timing, and length of upcoming turns; cut corners; choose an early or late apex; position the car for optimal turning paths; and know when to brake to complete a turn,” said Smith. Smith built a racing game in Unity and added the RAD to the prototype. He then worked with 15 participants from Helen Keller Services for the Blind and volunteers at Columbia.
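Smith's description of the sound slider and turn indicator suggests a mapping from car state to continuous audio cues. As a rough, hypothetical sketch of that idea (the function names, thresholds, and cue names here are illustrative, not taken from the RAD's actual implementation):

```python
def sound_slider(lateral_offset, half_track_width):
    """Map the car's lateral position to a stereo pan for the slider tone.

    lateral_offset: metres from the track centreline (negative = left).
    Returns a pan in [-1.0, 1.0]: the slider sounds centred when the car
    is centred on the track, and hard left/right as it nears an edge.
    """
    pan = lateral_offset / half_track_width
    return max(-1.0, min(1.0, pan))


def turn_indicator(distance_to_turn, direction, sharpness, lead_distance=60.0):
    """Announce an upcoming turn once it comes within the lead distance.

    direction: 'left' or 'right'; sharpness: 0.0 (gentle) to 1.0 (hairpin).
    Returns the name of a cue to play, or None while the turn is far off,
    giving the player time to position the car and choose an apex.
    """
    if distance_to_turn > lead_distance:
        return None
    return f"turn_{direction}_{'sharp' if sharpness > 0.5 else 'gentle'}"
```

In a Unity prototype like Smith's, values such as these would drive an audio source's pan and the playback of pre-recorded directional cues each frame.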
One player, Edis Adilovic, loved the freedom it gave him.
“With the RAD, Edis could not only play our prototype racing game, but do so with the same lap times and driving paths as sighted players,” Smith said.
“After the training was done, I had the possibility of doing whatever I wanted to,” he said.
March 8, 2018 / Comments Off on RAD is a new system to help the visually impaired play racing games
Researchers have successfully etched edible circuits onto the surface of food, paving the way for RFID-tagged edibles that can help us track food from farm to tummy. The project uses laser-induced graphene (LIG), a process that creates a “foam made out of tiny cross-linked graphene flakes” that can carry electricity through carbon-rich products like bread, potatoes, and cardboard.
“Overall, the process demonstrated that LIG can be burned into paper, cardboard, cloth, coal, potatoes, coconuts, toasted bread and other foods,” wrote the researchers who hail from Rice University’s Smalley-Curl Institute and Ben-Gurion University of the Negev.
The process can embed or burn patterns that could be used as supercapacitors, radio frequency identification (RFID) antennas or biological sensors. Based on these results, the researchers theorized that any substance with a reasonable amount of carbon can be turned into graphene. To test this theory, the team, led by Rice chemist James Tour, sought to burn LIG into food, cardboard and several other everyday, carbon-based materials.
Presumably you won’t be etching power sources and other components onto your Pop-Tarts, but this early research into edible circuits could pave the way for smarter food.
March 1, 2018 / Comments Off on Graphene-based edible electronics will let you make cereal circuits
Voice control is everywhere these days, but to hear people effectively from every direction, devices like Amazon’s Echo need a whole collection of microphones, which take up space, require extra processing power, and restrict industrial design. Soundskrit is a brand-new company that aims to replace those many mics with a single one that can clearly hear and separate sound from multiple sources and directions.
The Soundskrit device easily tracked two separate voices speaking from different directions simultaneously, quickly providing independent recordings and transcripts of each one; you can see a video demo here. It could just as easily ignore one voice in order to hear another better, or focus on accepting sound from a single direction, such as in front of a camera.
I’m sure you can imagine half a dozen situations off the top of your head where such a capability might be useful: smart home devices, video recording, meetings and conference calls… anywhere you or a device wants to hear one sound but not another.
This can already be done, of course, but in a rather clumsy way: taking the signals from multiple microphones and applying algorithms to sort out the noise, comparing waveforms to work out which signal arrives stronger and sooner at which mic to estimate direction, and so on. It’s complex and ends up reducing the quality of the recorded sound, especially in the lower registers.
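The multi-mic approach described above boils down to estimating arrival-time differences between channels. A minimal, brute-force sketch of that idea for two mics (illustrative only; real arrays use optimized correlation, calibration, and more than two channels):

```python
def estimate_lag(chan_a, chan_b):
    """Return the sample lag at which chan_a best matches chan_b.

    A positive lag means the sound reached mic B earlier, implying the
    source sits closer to B's side. The lag, the mic spacing, and the
    speed of sound together determine the arrival angle.
    """
    best_lag, best_score = 0, float("-inf")
    n = len(chan_b)
    # Slide chan_b across chan_a and keep the offset with the highest
    # correlation score.
    for lag in range(-(n - 1), len(chan_a)):
        score = sum(chan_a[i + lag] * chan_b[i]
                    for i in range(n)
                    if 0 <= i + lag < len(chan_a))
        if score > best_score:
            best_score, best_lag = score, lag
    return best_lag
```

For mic spacing d and sample rate fs, the angle then follows from arcsin(lag · c / (fs · d)), with c the speed of sound; doing this robustly across noise and reverberation is exactly where the complexity and the quality loss creep in.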
The design of the company’s prototype microphone was inspired by insects, many of which have highly acute “hearing” that’s accomplished by sensing the way air moves across tiny hairs or other pressure-sensitive structures.
Soundskrit’s tech does something similar with a special membrane on a custom chip. Sound enters from all sides, and based on how it vibrates the membrane, its origin can be determined with decent accuracy, and without compromising the quality or tone of the sound.
Research on this topic goes back many years, but these students decided to make a go of turning it into a product after encountering it in their studies. They’ve raised an $800,000 (Canadian) seed round from TandemLaunch, a Montreal-based incubator that focuses on commercializing research.
For now the team is just getting started and the device is far from being in a state that could be included in a real product. But once it’s miniaturized, tested and so on, it’s a fair bet that companies like Google and Amazon, with their focus on accurate voice recognition, will be sniffing around the tech.
January 10, 2018 / Comments Off on Inspired by insect ears, Soundskrit wants to make microphones magically directional
Jason Barnes has used robotic prosthetics before, but this is the first time he has been able to move each finger independently and even play the piano again.
Researchers at Georgia Tech’s Center for Music Technology developed the unique robotic arm using ultrasound sensors that allow amputees to control each prosthetic finger using muscles in their residual limbs.
December 25, 2017 / Comments Off on Cyborg strums ‘Star Wars’ theme song with bionic hand
Virtual reality training is being developed as a method to equip troops with resilience training before deployment — something a Black Mirror episode toyed with in season three.
Australia’s Minister for Defence Industry, Christopher Pyne, has announced $2.2 million for a University of Newcastle project that aims to develop enhanced resilience training for military personnel using VR and biometrics.
The program, which will work in conjunction with the ADF’s existing Battle SMART stress resilience training program, will see neuroscientists designing simulated environments to replicate real-world combat scenarios in VR.
Military personnel will use the program to train at problem-solving in unpredictable situations and to build up psychological resilience to pressure. Theoretically, their superiors can use the cognitive data collected on soldiers to “objectively” measure whether a person is ready to be deployed.
Funded by the Australian government, the Defence Science Technology Group (DSTG) and the Australian Army, the project is the work of associate professors Rohan Walker and Eugene Nalivaiko, affiliates of the Hunter Medical Research Institute (HMRI), alongside Dr. Albert “Skip” Rizzo’s team at the University of Southern California.
“We said, is there potentially something where we can bring VR together with an objective assessment of stress and put them together to come up with an improved, immersive, engaging way of training to get better control of your stress levels when you are in demanding workplace situations?” Walker tells Mashable.
“Imagine a helicopter is coming in with casualties on it … It’s a very life threatening situation,” Walker explains. “So, not only would emergency responders or paramedics have to deal with the fact that they may be in a conflict zone, they have to deal with a helicopter landing, but they also have to rapidly triage what might be very significant injuries.
“Now, that would become a very stressful process. In those situations, even without obvious things you think about with combat like bullets and guns, just that immediate pressure of there being a huge number of things that you have to do, and really, the consequences … of making good decisions in those circumstances.”
“The idea will be that trainees can master the skill in a measurable situation where we can control the difficulty of the task to ensure they’re prepared before moving to real-world conflict situations,” Nalivaiko said in a press statement.
It makes us think immediately of Black Mirror episode “Men Against Fire,” in which soldiers are equipped with a neural implant called MASS that provides instant data via augmented reality, both in training and in the field — and blocks any emotional reaction to killing enemies.
This project isn’t exactly Black Mirror‘s proposal, but it is a project aimed at using simulation to manage psychological stress as an occupational hazard in the military — a hazard that can affect performance in the field.
“What tends to be challenging is where difficult experiences are beyond the individual’s ability to control them,” said Nalivaiko in a press statement.
“It’s imperative our troops are forearmed with strategies to ensure they remain in control of the situation and are equipped with the skills to make a level-headed decision.”
How soldiers cope with high-pressure situations
There are two main factors at play when considering performance under pressure, according to Walker.
“Firstly, cognitive reframing, which involves identifying and then disputing irrational thoughts. Reframing is taking a step back and objectively looking at the scenario to find positive alternatives.
“The second is tactical breathing. Although it may sound simple, breathing is key as it is the only thing we can regulate under pressure.
“When you’re breathing properly, respiration and heart rate are controlled and you have high levels of cognitive flexibility to make better decisions.”
Once the trainees have completed the VR exercises, biometrics can theoretically be used to analyse how ready they are to be deployed for combat.
The VR program already has a prototype, and launch is planned for roughly six months from now. But it’s not just the military that could benefit from VR pressure training.
“One of the things that we can do better is the way that we train people to deal with pressure across all workplaces,” Walker told Mashable. “High levels of stress are inherent in nearly every profession.”
Maybe Michelin-starred restaurants could consider VR training programs for their high-intensity kitchens.
October 20, 2017 / Comments Off on Australian military developing VR programs to train soldiers to be more resilient to pressure