All posts in “Autonomous Vehicles”

Here’s how Uber’s self-driving cars are supposed to detect pedestrians

A self-driving vehicle operated by Uber has struck and killed a pedestrian. It’s the first such incident and will certainly be scrutinized like no other autonomous vehicle incident before it. But on the face of it, it’s hard to understand how, short of a total system failure, this could happen when the entire car has essentially been designed around preventing exactly this situation.

Something unexpectedly entering the vehicle’s path is pretty much the first emergency event that autonomous car engineers look at. The situation could be many things — a stopped car, a deer, a pedestrian — and the systems are one and all designed to detect them as early as possible, identify them, and take appropriate action. That could be slowing, stopping, swerving, anything.

Uber’s vehicles are equipped with several different imaging systems that perform both ordinary duty (monitoring nearby cars, signs, and lane markings) and extraordinary duty like that just described. No fewer than four of them should have picked up the victim in this case.

Top-mounted lidar. The bucket-shaped item on top of these cars is a lidar, or light detection and ranging, system that produces a 3D image of the car’s surroundings multiple times per second. Using infrared laser pulses that bounce off objects and return to the sensor, lidar can detect static and moving objects in considerable detail, day or night.
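To make that ranging principle concrete, here is a minimal sketch of the time-of-flight arithmetic a lidar performs; it illustrates the physics only, not Uber’s actual software, and the example pulse timing is a made-up value.

```python
# Illustrative time-of-flight ranging, the principle behind lidar:
# distance = (speed of light * round-trip time) / 2.
C = 299_792_458  # speed of light, m/s

def distance_from_echo(round_trip_seconds: float) -> float:
    """Distance to the reflecting object, in meters."""
    return C * round_trip_seconds / 2

# A pulse returning after ~200 nanoseconds bounced off something ~30 m away.
print(distance_from_echo(200e-9))  # ~29.98
```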

This is an example of lidar-created imagery, though not specifically what the Uber vehicle would have seen.

Heavy snow and fog can obscure a lidar’s lasers, and its accuracy decreases with range, but for anything from a few feet to a few hundred feet, it’s an invaluable imaging tool and one that is found on practically every self-driving car.

The lidar unit, if operating correctly, should have been able to make out the person in question, provided they were not totally obscured, while they were still more than a hundred feet away, and to pass on their presence to the “brain” that collates the imagery.
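To put that hundred-foot figure in context, here is a back-of-the-envelope stopping-distance calculation; the speed, deceleration, and latency numbers are illustrative assumptions, not details from the incident.

```python
# Rough stopping distance: reaction distance plus v^2 / (2a).
# All numbers below are assumptions for illustration.
v = 17.9        # ~40 mph, in m/s
a = 7.0         # ~0.7 g of braking deceleration on dry pavement
reaction = 0.5  # assumed detection-to-braking latency, in seconds

stopping_m = v * reaction + v**2 / (2 * a)
print(f"{stopping_m:.1f} m (~{stopping_m * 3.28:.0f} ft)")
# ~31.8 m (~104 ft): at 40 mph, detection at 100+ feet is roughly
# the minimum needed to stop, which is why early detection matters.
```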

Front-mounted radar. Radar, like lidar, sends out a signal and waits for it to bounce back, but it uses radio waves instead of light. This makes it more resistant to interference, since radio can pass through snow and fog, but also lowers its resolution and changes its range profile.
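As a sketch of how a radar return becomes useful information, here is the standard Doppler relation for estimating closing speed; the 77 GHz carrier below is a typical automotive radar frequency, assumed for illustration rather than confirmed for Uber’s hardware.

```python
# Estimating relative speed from a radar's Doppler shift:
# v = delta_f * c / (2 * f0), for a monostatic radar.
C = 299_792_458  # speed of light, m/s
F0 = 77e9        # assumed carrier frequency, Hz (typical automotive radar)

def closing_speed(doppler_shift_hz: float) -> float:
    """Relative speed toward the radar, in m/s."""
    return doppler_shift_hz * C / (2 * F0)

# A ~770 Hz shift corresponds to roughly 1.5 m/s -- walking pace.
print(closing_speed(770))  # ~1.5
```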

Tesla’s Autopilot relies mostly on radar.

Depending on the radar unit Uber employed — likely multiple in both front and back to provide 360 degrees of coverage — the range could differ considerably. If it’s meant to complement the lidar, chances are it overlaps considerably, but is built more to identify other cars and larger obstacles.

The radar signature of a person is not nearly so recognizable, but it’s very likely they would have at least shown up, confirming what the lidar detected.

Short and long-range optical cameras. Lidar and radar are great for locating shapes, but they’re no good for reading signs, figuring out what color something is, and so on. That’s a job for visible-light cameras with sophisticated computer vision algorithms running in real time on their imagery.

The cameras on the Uber vehicle watch for telltale patterns that indicate braking vehicles (sudden red lights), traffic lights, crossing pedestrians, and so on. Especially on the front end of the car, multiple angles and types of camera would be used, so as to get a complete picture of the scene into which the car is driving.

Detecting people is one of the most commonly attempted computer vision problems, and the algorithms that do it have gotten quite good. “Segmenting” an image, as it’s often called, generally also involves identifying things like signs, trees, sidewalks and more.
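For a flavor of what basic pedestrian detection looks like in code, here is a minimal example using OpenCV’s stock HOG-plus-SVM people detector, a classical baseline far simpler than what a production AV stack runs; the image filename is a placeholder.

```python
# Minimal pedestrian detection with OpenCV's built-in HOG descriptor
# and its default people-detecting linear SVM (a classical baseline).
import cv2

hog = cv2.HOGDescriptor()

frame = cv2.imread("street_scene.jpg")  # placeholder input image
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))

# Draw a box around each detection with a reasonable confidence score.
for (x, y, w, h), score in zip(boxes, weights):
    if float(score) > 0.5:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", frame)
```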

That said, it can be hard at night. But that’s an obvious problem, the answer to which is the previous two systems, which work night and day. Even in pitch darkness, a person wearing all black would show up on lidar and radar, warning the car that it should perhaps slow and be ready to see that person in the headlights. That’s probably why a night-vision system isn’t commonly found in self-driving vehicles (I can’t be sure there isn’t one on the Uber car, but it seems unlikely).

Safety driver. It may sound cynical to refer to a person as a system, but the safety drivers in these cars are very much acting in the capacity of an all-purpose failsafe. People are very good at detecting things, even though we don’t have lasers coming out of our eyes. And our reaction times aren’t the best, but if it’s clear that the car isn’t going to respond, or has responded wrongly, a trained safety driver will react correctly.

Worth mentioning is that there is also a central computing unit that takes the input from these sources and creates its own more complete representation of the world around the car. A person may disappear behind a car in front of the system’s sensors, for instance, and no longer be visible for a second or two, but that doesn’t mean they ceased existing. This goes beyond simple object recognition and begins to bring in broader concepts of intelligence such as object permanence, predicting actions, and the like.
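To illustrate that object-permanence idea in miniature, here is a toy track-persistence sketch: a tracked object coasts on its last velocity estimate and survives a short grace period after the sensors lose it, rather than vanishing instantly. The names and thresholds are illustrative assumptions, not any company’s fusion code.

```python
# Toy track persistence: the fused world model keeps an object alive for
# a grace period after detections stop, instead of forgetting it at once.
from dataclasses import dataclass

MAX_MISSED_FRAMES = 20  # assumed grace period (~2 s at 10 Hz)

@dataclass
class Track:
    track_id: int
    x: float         # position along the road, in meters
    vx: float        # estimated velocity, m/s
    missed: int = 0  # consecutive frames with no matching detection

def update(track, detection_x, dt):
    """Advance one frame; returns the track, or None once it expires."""
    if detection_x is None:
        # No detection: coast on the last velocity (object permanence).
        track.x += track.vx * dt
        track.missed += 1
        return None if track.missed > MAX_MISSED_FRAMES else track
    # Detection: update velocity and position, reset the miss counter.
    track.vx = (detection_x - track.x) / dt
    track.x = detection_x
    track.missed = 0
    return track
```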

It’s also arguably the most advanced and closely guarded part of any self-driving car system, and so is kept well under wraps.

It isn’t clear what the circumstances were under which this tragedy played out, but the car was certainly equipped with technology that was intended to, and should have, detected the person and caused the car to react appropriately. Furthermore, if one system didn’t work, another should have sufficed; multiple fallbacks are basic prudence in high-stakes matters like driving on public roads.

We’ll know more as Uber, local law enforcement, federal authorities, and others investigate the accident.

Can self-driving cars even honk their own horns?

Autonomous vehicles (AVs) will likely change the way we get around forever, but the AI that controls them might not be able to tell other cars on the road when they’re driving like assholes. 

At least not at first. 

Case in point: The City of Las Vegas and AAA’s self-driving shuttle, one of the most advanced public autonomous trials in the U.S., was hit by a semi-truck within hours of its maiden trip last month. The Navya Arma bus was stuck between a car behind it and the slowly advancing truck, which backed its way into the shuttle.

The shuttle behaved exactly as it was designed to in the situation, according to a AAA rep — but it didn’t move or, more importantly for the truck driver who might not have seen the vehicle behind it, honk a horn to make its presence known. One of the most essential tools for interpersonal communication between drivers wasn’t even in the AI’s protocol, which made us wonder: Can self-driving cars even beep?

The answer in this particular instance is yes, but the feature is limited to certain situations. “The shuttle does have the capacity to automatically honk its horn, but that function wasn’t designed for defensive driving techniques — such as honking when an object is actively backing toward the vehicle,” AAA’s rep told Mashable in an email. Instead, the shuttle beeps “offensively” when objects in its path, like pedestrians or idling vehicles, don’t move after several seconds.
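In rough pseudologic, the behavior the rep describes might look like the sketch below; the five-second threshold and all names here are assumptions for illustration, not Navya’s actual code.

```python
# Sketch of the "offensive" honk AAA describes: beep when something
# blocking the path hasn't moved for several seconds. Illustrative only.
import time

HONK_AFTER_SECONDS = 5.0  # assumed patience threshold

class HonkTimer:
    def __init__(self):
        self.blocked_since = None

    def tick(self, path_is_blocked: bool, obstacle_moving: bool) -> bool:
        """Call once per control cycle; True means the shuttle should honk."""
        if not path_is_blocked or obstacle_moving:
            self.blocked_since = None  # path clear, or obstacle yielding
            return False
        if self.blocked_since is None:
            self.blocked_since = time.monotonic()
        return time.monotonic() - self.blocked_since >= HONK_AFTER_SECONDS
```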

Building in beeping

The question of whether AVs can honk has wider ramifications than just the trial in Las Vegas. Robocars will share the roads with human drivers for a protracted period, since it will probably take decades for human-operated cars to be eliminated from the roads completely. There will be a learning curve for everyone, and mixing people and robots together will require increasing levels of communication between humans and machines to replace the system of beeps, waves, and middle fingers that rules the road today.

Fostering these systems of communication will be one of the biggest jobs for AV developers, since honking and other audio and visual alerts are essential to the task of driving. Industry leader Waymo has worked to create the proper protocols for its self-driving vehicles to honk their horns, with different types of beeps to signal intention to other drivers on the road. 

Silicon Valley AV startup has also emphasized machine-human interaction in its platform since launching out of stealth mode last year, and it has shared images of concept designs equipped with obvious visual cues for pedestrians and human drivers on the road. The company’s most current platform can automatically beep in response to on-road stimuli, for the record — but’s CEO, Sameep Tandon, has wider concerns than just giving the vehicles a horn to honk.

“We’re really interested in the question of how do vehicles communicate with the outside world and how we can use that to enhance trust,” he said in a phone interview. “How do we know that the vehicle is going to be transparent about what it’s doing? If it does sense danger, it should be able to communicate.”

Signaling intention is a two-way street, however, and Tandon says people need to adjust their expectations to help bring AVs along.

“Generally speaking, [building AV communication] is something where it’s going to be a combination of working with people and society at large so that people understand that hey, these are robots — maybe I should behave slightly different around them,” he said. “It’s not the type of thing where there’ll be one solution and it’ll solve every single thing, there’s going to be a need to meet in the middle and work with society to get there.”

Emilio Frazzoli, CTO and chief scientist of nuTonomy, a recent Delphi acquisition and fellow Lyft partner, agrees that collaboration in communication will be key to a future filled with self-driving cars — but nuTonomy’s vehicles don’t have protocols to honk their horns at the moment. He said in a phone interview that the company is instead focused on perfecting the systems that control blinkers and active signaling, which he believes are used more often than the horn, and are therefore more important.

That’s not to say that audio cues are excluded from nuTonomy’s platform entirely. “Personally I hate beeping and people who do that,” said Frazzoli. “One of the ways we had our cars signal to human-driven vehicles nearby that the car is ‘annoyed’ with their behavior was to actually play the sound of an angry R2-D2 from Star Wars. That was kind of cool because it wasn’t perceived as rude, but it made people consider what they might be doing wrong. It worked pretty well.”

Making robots more human

The main goal of AV technology is to create safer roads, whether the cars can beep or not. The systems aim to make a fundamentally imperfect task perfect by removing the flaws flesh-and-blood drivers bring to the steering wheel, but the only way they’ll be able to coexist with people on the road will be to embrace the ways humans interact.

That’s a fine line for the teams developing these systems to walk. The AAA rep told us the shuttle scenario happened in part because the AV didn’t behave like a human driver, and Waymo’s engineers approached their beeping training in a similar manner, with an explicit goal of giving the platform the same type of capabilities as a “patient, seasoned driver.”

The key for successful self-driving cars will be to adopt all of the positive attributes of the best human drivers without any of the distractions or physical shortcomings. Managing the horn is one piece of the puzzle. 

As more self-driving cars hit the streets, like nuTonomy’s recently launched trial with Lyft in Boston, fellow motorists will need to practice patience. Maybe we should be prepared to treat an encounter with vehicles laden with sensors and cameras the same way we handle intersections with clearly marked student driver cars: Give them a wide berth and some extra attention when they’re nearby. Just don’t expect the robots to thank you with a wave before speeding away.

Lyft’s self-driving cars are now on the road in Boston

Bostonians have a new way to get around the city’s famously contentious streets: robotaxis.

Lyft and autonomous driving company nuTonomy announced their joint pilot program has been cleared by the city’s authorities to begin picking up passengers. The two companies first disclosed their partnership back in June, but had to wait until the city’s regulatory bodies gave it the green light to actually offer Lyft users driverless rides. 

The program will begin in Boston’s Seaport district, matching riders looking to travel on routes within the area with driverless cabs. The cars will have human safety operators, like other trials, and the program will emphasize rider education about self-driving cars as one of its major points of focus.   

Image: nutonomy/lyft

The Boston program is the first time a self-driving company and ride-hailing company have teamed to put robotaxis on city streets in the U.S., since Lyft’s other self-driving pilot, in the Bay Area, still hasn’t commenced. 

These Lyft trips won’t be the first self-driving rides in Boston, however; nuTonomy launched its own public passenger pilot in the city last month. The company, which was recently acquired by Delphi, also previously partnered with ride-hailing app Grab for a similar program in Singapore last year.

Boston presents a particularly challenging landscape for autonomous vehicles, combining harsh weather with streets full of terrible drivers. This type of expansion is becoming more common in the industry; autonomous programs are moving beyond comfortable West Coast climes to difficult test sites in Michigan and New York City.

Autonomous vehicles need to be equipped to handle just about every environment — so Lyft and nuTonomy will give them a chance to prove themselves by sharing the roads with hot-blooded Bostonians to start.

Intel and Warner Bros. are teaming up to build in-cabin entertainment for autonomous cars

When fully autonomous vehicles finally hit the road, they’ll turn everyone into a passenger. As such, we’ll need something to do in order to pass the time while we’re riding in our vehicles.

Luckily, Intel and Warner Bros. are here to help.

Intel and Warner Bros. announced today at the L.A. Auto Show that they’re teaming up to develop in-cabin entertainment experiences for cars of the future.

The two companies will essentially work together to create a new concept vehicle dubbed the AV Entertainment Experience. The car will be one of a kind: a specially tailored vehicle that will join the Intel/Mobileye 100-vehicle testing fleet announced back in August.

The AV Entertainment Experience will use Warner Bros. properties like Batman and Harry Potter along with cutting-edge technology to keep passengers occupied during drives.

Intel CEO Brian Krzanich touted the potential for VR and AR experiences in the vehicle to go along with more traditional entertainment options like TV and movies. AR could specifically be used to open up passengers to advertising opportunities, he wrote, echoing Intel’s June study that outlined a potential new $7 trillion passenger economy once self-driving vehicles become the norm. 

Intel isn’t the only company interested in what passengers will do to pass the time in the autonomous vehicles of the future. Audi’s 25th Hour project studied test subjects in a mockup AV interior to determine how they’ll be most likely to spend their self-driven rides. 

Audi’s researchers found that people were likely to use the extra time for work, not play, so WB and Intel might have a steep road to convincing future AV riders to put their laptops down to enter a virtual playground. If the car can become a Batmobile, though, that might be a different story.

Apple might test self-driving cars at this track

Apple’s self-driving car project might have a new test site. 

The company is leasing an Arizona proving ground to experiment with its nascent autonomous platform, according to a Jalopnik report citing a source familiar with the project. The facility in Surprise, Arizona, was previously owned by Fiat Chrysler, which took advantage of the area’s brutal climate to conduct tests on how heat affected its vehicles.

Apple’s designs for the now-empty property would be focused squarely on testing its autonomous platform, which is thought to be the centerpiece of the company’s automotive development after the rumored effort to actually produce a car, “Project Titan,” was reportedly shuttered last year. Tim Cook confirmed Apple’s self-driving car platform earlier this year, and the company’s test vehicles have reportedly been spotted on California’s public roads.

The proving ground would give Apple an opportunity to test out specific driving scenarios its platform might not encounter in its other pilots and a dedicated space to put test vehicles through their paces free of state regulations and prying eyes.

An Apple spokesperson declined to comment on the Arizona lease report when reached by Mashable via email. 

Apple has been linked with an automotive testing ground before, when people believed the company would build its own vehicles and Project Titan rumors were prevalent. Apple was rumored to have tabbed California’s GoMentum track to test its vehicle prototypes back in 2015, but nothing was ever publicly confirmed. Toyota recently signed an agreement to test its driverless cars on the site. 

Apple computer scientists published new research about their tests with self-driving car software last week, a rare move from employees of the ultra-secretive company. The system they described eliminates the need to use cameras to detect objects around a self-driving car, and could be a key candidate for closed-course testing if the Arizona site really is Apple’s latest move to become a player in the world of self-driving car development.
