All posts in “Artificial Intelligence”

Hold this beam for me, friend robot, and let us construct a house together

Being a neophyte in the world of woodworking — I’ve made a shabby but sturdy shed — I can appreciate the value of a good partner who can help measure, cut, hold stuff and generally be a second pair of hands. The usual drawback with humans is that you have to pay or feed them in return for this duty. So imagine my delight in finding that ETH Zürich is pioneering the art of robot-assisted woodworking!

The multi-institutional Spatial Timber Assemblies DFAB House project is an effort to increase the efficiency not just of the process of framing a home, but also of the design itself.

The robot part is as you might expect, though more easily described than built. A pair of ceiling-mounted robot arms in the work area pluck and cut beams to length, put them in position and drill holes where they will later be attached.

Most of this can be accomplished without any human intervention, and what’s more, without reinforcement plates or scaffolding. The designs of these modules (room-size variations that can be mixed and matched) are generated to be essentially freestanding: load and rigidity are handled by the arrangement of the beams themselves.

The CAD work is done ahead of time and the robots follow the blueprint, carefully avoiding one another and working slowly but efficiently.

“If any change is made to the project overall, the computer model can be constantly adjusted to meet the new requirements,” explained Matthias Kohler, who heads the project, in an ETHZ news release. “This kind of integrated digital architecture is closing the gap between design, planning and execution.”


Human workers still have to do the bolting, though that step too seems like it could be automated; the robots presumably lack the sensors or tools to undertake it at present.

Eventually the beams will also be reinforced by similarly prefabbed concrete posts and slot into a “smart slab,” optimized for exactly these layouts and created by sand-based 3D printing. The full three-story structure should be complete and open to explore this fall. You can learn more at the project’s website.

Our 8 favorite startups from Y Combinator W18 Demo Day 2

Microbiome pills, gambling for one-on-one video games and potential cancer cures were the highlights from legendary startup accelerator Y Combinator’s Winter 2018 Demo Day 2. You can read about all 64 startups that launched on Day 1 in verticals like biotech and robotics, our picks for the top 7 companies from Day 1 and our full coverage of another 64 startups from Day 2. TechCrunch’s writers huddled and took feedback from investors to create this list, so read on for our 8 picks for the top startups from Day 2.

Additional reporting by Greg Kumparak, Lucas Matney and Katie Roof

Here’s how Uber’s self-driving cars are supposed to detect pedestrians

A self-driving vehicle operated by Uber has struck and killed a pedestrian. It’s the first known pedestrian fatality caused by an autonomous vehicle, and it will certainly be scrutinized like no other autonomous vehicle incident before it. But on the face of it, it’s hard to understand how, short of a total system failure, this could happen when the entire car has essentially been designed around preventing exactly this situation from occurring.

Something unexpectedly entering the vehicle’s path is pretty much the first emergency event that autonomous car engineers look at. The situation could be many things — a stopped car, a deer, a pedestrian — and the systems are one and all designed to detect them as early as possible, identify them, and take appropriate action. That could be slowing, stopping, swerving, anything.
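To make that loop concrete, here is a minimal sketch of the kind of decision logic such a pipeline might end in, keyed to time-to-collision. The function, thresholds and maneuver names are illustrative assumptions, not any vendor’s actual planner.

```python
# Hedged sketch of the final step of a "detect, identify, act" loop: pick
# a maneuver from an estimated time-to-collision (TTC). All thresholds and
# action names here are illustrative assumptions.
def plan_response(obstacle_distance_m: float, closing_speed_mps: float) -> str:
    """Choose a response based on how soon the obstacle would be hit."""
    if closing_speed_mps <= 0:
        return "monitor"            # not closing on the obstacle
    ttc = obstacle_distance_m / closing_speed_mps
    if ttc < 1.5:
        return "emergency_brake"    # too little time for anything subtler
    if ttc < 4.0:
        return "slow_and_prepare"   # shed speed, keep options open
    return "monitor"                # keep tracking, no action needed yet

# Example: a pedestrian 30 m ahead while closing at 15 m/s gives a TTC of
# 2 seconds, well inside the braking-decision window.
print(plan_response(30.0, 15.0))  # -> "slow_and_prepare"
```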

Uber’s vehicles are equipped with several different imaging systems that handle both ordinary duty (monitoring nearby cars, signs and lane markings) and extraordinary duty like the scenario just described. No fewer than four different ones should have picked up the victim in this case.

Top-mounted lidar. The bucket-shaped item on top of these cars is a lidar, or light detection and ranging, system that produces a 3D image of the car’s surroundings multiple times per second. Using infrared laser pulses that bounce off objects and return to the sensor, lidar can detect static and moving objects in considerable detail, day or night.

An example of lidar-created imagery, though not specifically what the Uber vehicle would have seen.

Heavy snow and fog can obscure a lidar’s lasers, and its accuracy decreases with range, but for anything from a few feet to a few hundred feet, it’s an invaluable imaging tool and one that is found on practically every self-driving car.

The lidar unit, if operating correctly, should have made out the person in question, provided they were not totally obscured, while they were still more than a hundred feet away, and passed their presence on to the “brain” that collates the imagery.
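The ranging part of “light detection and ranging” is simple time-of-flight arithmetic: distance is half the pulse’s round-trip time multiplied by the speed of light. A minimal sketch (the numbers are purely illustrative):

```python
# Time-of-flight ranging, the principle behind lidar: a pulse's round-trip
# time gives the distance to whatever reflected it.
C = 299_792_458.0  # speed of light, m/s

def range_from_pulse(round_trip_s: float) -> float:
    """Distance in meters to the object that reflected the pulse."""
    return C * round_trip_s / 2.0

# A return after about 200 nanoseconds corresponds to roughly 30 m,
# on the order of the hundred-plus feet mentioned above.
print(range_from_pulse(200e-9))  # ~29.98
```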

Front-mounted radar. Radar, like lidar, sends out a signal and waits for it to bounce back, but it uses radio waves instead of light. This makes it more resistant to interference, since radio can pass through snow and fog, but also lowers its resolution and changes its range profile.

Tesla’s Autopilot relies mostly on radar.

Depending on the radar units Uber employed — likely several, front and rear, to provide 360 degrees of coverage — the range could differ considerably. If they’re meant to complement the lidar, chances are their coverage overlaps considerably, but they’re built more to identify other cars and larger obstacles.

The radar signature of a person is not nearly so recognizable, but it’s very likely they would have at least shown up, confirming what the lidar detected.

Short and long-range optical cameras. Lidar and radar are great for locating shapes, but they’re no good for reading signs, figuring out what color something is, and so on. That’s a job for visible-light cameras with sophisticated computer vision algorithms running in real time on their imagery.

The cameras on the Uber vehicle watch for telltale patterns that indicate braking vehicles (sudden red lights), traffic lights, crossing pedestrians, and so on. Especially on the front end of the car, multiple angles and types of camera would be used, so as to get a complete picture of the scene into which the car is driving.

Detecting people is one of the most commonly attempted computer vision problems, and the algorithms that do it have gotten quite good. “Segmenting” an image, as it’s often called, generally also involves identifying things like signs, trees, sidewalks and more.
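As a rough illustration of how accessible person detection has become, here is a sketch using an off-the-shelf pretrained detector from torchvision. The model choice and confidence threshold are my assumptions, and a production car would run something far faster and more specialized.

```python
# Minimal person-detection sketch with a stock pretrained model. This is
# nothing like a production self-driving stack; it just shows how routine
# the task has become. Model and threshold are illustrative choices.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

def detect_people(image, score_threshold: float = 0.8):
    """Return bounding boxes for detections labeled 'person' (COCO class 1)."""
    with torch.no_grad():
        pred = model([to_tensor(image)])[0]  # image is a PIL image
    return [
        box.tolist()
        for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"])
        if label.item() == 1 and score.item() >= score_threshold
    ]
```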

That said, it can be hard at night. But that’s an obvious problem, the answer to which is the previous two systems, which work night and day. Even in pitch darkness, a person wearing all black would show up on lidar and radar, warning the car that it should perhaps slow and be ready to see that person in the headlights. That’s probably why a night-vision system isn’t commonly found in self-driving vehicles (I can’t be sure there isn’t one on the Uber car, but it seems unlikely).

Safety driver. It may sound cynical to refer to a person as a system, but the safety drivers in these cars are very much acting as an all-purpose failsafe. People are very good at detecting things, even though we don’t have lasers coming out of our eyes. And while our reaction times aren’t the best, if it’s clear that the car isn’t going to respond, or has responded wrongly, a trained safety driver is there to react correctly.

Worth mentioning is that there is also a central computing unit that takes the input from these sources and creates its own more complete representation of the world around the car. A person may disappear behind a car in front of the system’s sensors, for instance, and no longer be visible for a second or two, but that doesn’t mean they ceased existing. This goes beyond simple object recognition and begins to bring in broader concepts of intelligence such as object permanence, predicting actions, and the like.
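A toy sketch of that object-permanence idea: a track that coasts on its last velocity estimate for a short grace period when detections drop out, rather than forgetting the object. The constants and the crude velocity update are my assumptions, not how any production tracker works.

```python
# Toy "object permanence" in tracking: when detections drop out (say, the
# person passes behind another car), the track coasts on its last estimated
# velocity for a grace period instead of vanishing. All constants here are
# illustrative assumptions.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Track:
    x: float               # position, meters
    y: float
    vx: float = 0.0        # estimated velocity, m/s
    vy: float = 0.0
    missed_frames: int = 0

MAX_MISSED = 20  # coast up to 20 frames (~2 s at 10 Hz) before dropping

def step(track: Track, detection: Optional[Tuple[float, float]],
         dt: float) -> Optional[Track]:
    """Advance one frame; detection is (x, y) or None if nothing was seen."""
    if detection is not None:
        dx, dy = detection[0] - track.x, detection[1] - track.y
        track.vx, track.vy = dx / dt, dy / dt  # crude velocity estimate
        track.x, track.y = detection
        track.missed_frames = 0
    else:
        # No measurement this frame: predict forward instead of forgetting.
        track.x += track.vx * dt
        track.y += track.vy * dt
        track.missed_frames += 1
        if track.missed_frames > MAX_MISSED:
            return None  # gone too long; drop the track
    return track
```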

This central brain is also arguably the most advanced and most closely guarded part of any self-driving car system, and so is kept well under wraps.

It isn’t clear what the circumstances were under which this tragedy played out, but the car was certainly equipped with technology that was intended to, and should have, detected the person and caused the car to react appropriately. Furthermore, if one system didn’t work, another should have sufficed; multiple fallbacks are simply prudent in high-stakes matters like driving on public roads.

We’ll know more as Uber, local law enforcement, federal authorities, and others investigate the accident.

SwiftKey gets stickers

Back in 2016, Microsoft bought the popular SwiftKey keyboard for Android and iOS for $250 million. It’s still one of the most popular third-party keyboards on both platforms and today, the company is launching one of its biggest updates since the acquisition. With SwiftKey 7.0, which is out now, the company is adding stickers — because who doesn’t like stickers?

Going forward, the service will offer a number of sticker packs, including some that can be edited and some that are exclusive to Microsoft.

That by itself wouldn’t be all that interesting, of course (and I can already see you rolling your eyes), but the real change here is under the hood, and it sets SwiftKey up for adding more interesting features soon. That’s because the stickers will live in the new SwiftKey toolbar, which will replace the current ‘hub,’ the menu where you can change your keyboard’s layout, size, etc. Right now, what you’ll find there are stickers and collections, that is, a library of stickers, images and other media you like to torture your friends with.

In the near future, SwiftKey will use this toolbar to enable a number of other new features like location sharing (though only in the U.S. and India for now) and calendar sharing.

“We remain committed to making regular typing as fast and easy as possible,” writes Chris Wolfe, Principal Product Manager at SwiftKey, in today’s announcement. “Today’s release of Toolbar, Stickers and Collections, as well as the announcement of Location and Calendar, also shows our ambition to improve users’ experience of rich media. With the support of Microsoft, you can expect to see more innovations in both regular and rich media typing coming soon.”

Little Caesars patents a pizza-making robot

A robotic waitress delivers a pizza at a restaurant in Pakistan.

Image: SS Mizra/AFP/Getty Images

Robots can already complete a wide variety of tasks for their human overlords, but they may soon be about to conquer the final frontier: making pizzas.

As first reported by ZDNet, Little Caesars has received a new patent for an “automated pizza assembly system,” or what is essentially a robot that makes pizza.

The patent describes it as “a robot including a stationary base and an articulating arm having a gripper attached to the end is operable to grip a pizza pan having pizza dough therein.”

Little Caesars’ patented robot from the side.

Image: Screenshot: Monica Chin/Little Caesars

The robot will then rotate the pizza pan through “the cheese spreading station” and the “pepperoni applying station.” The patent claims that the robot and its stations will “properly distribute the cheese and pepperoni on the pizza.” 

This patent isn’t all that surprising when you consider how quickly the entire fast-food industry has moved toward automation. Establishments like McDonald’s and Wal-Mart already have robots heavily involved in their most basic procedures. Even the smaller burger chain CaliBurger has a burger-flipping robot of its own, though it’s currently on unpaid leave. It’s worth noting that CaliBurger’s robot worker also requires humans to prepare buns and place patties on its grill.

This new Little Caesars patent doesn’t necessarily mean a pizza-making robot is coming to your neighborhood anytime soon, or even that it will come at all. Still, it’s an exciting sign for anyone who hates to cook but loves to eat pizza. Widespread use could mean a more efficient kitchen and free up time for employees to focus on customer service — plus maybe it will lower the cost of an already dirt-cheap $5 Hot-N-Ready pizza.
