
Hold this beam for me, friend robot, and let us construct a house together

Being a neophyte in the world of woodworking — I’ve made a shabby but sturdy shed — I can appreciate the value of a good partner who can help measure, cut, hold stuff and generally be a second pair of hands. The usual drawback with humans is you have to pay them or feed them in return for this duty. So imagine my delight in finding that ETH Zürich is pioneering the art of robot-assisted woodworking!

The multi-institutional Spatial Timber Assemblies DFAB House project is an effort to increase the efficiency not just of the process of framing a home, but also of the design itself.

The robot part is as you might expect, though more easily said than created. A pair of ceiling-mounted robot arms in the work area pluck and cut beams to length, put them in position and drill holes where they will later be attached.

Most of this can be accomplished without any human intervention, and what’s more, without reinforcement plates or scaffolding. The designs of these modules (room-size variations that can be mixed and matched) are generated specifically to be essentially freestanding; load and rigidity are handled by the arrangement of beams.

The CAD work is done ahead of time and the robots follow the blueprint, carefully avoiding one another and working slowly but efficiently.
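To give a flavor of the coordination problem, here is a toy scheduler of my own invention (not ETH Zürich's actual control software): it hands beams from a shared plan to two arms while keeping their work zones a minimum distance apart. All names and numbers are illustrative.

```python
# Toy scheduler: two arms place beams from a shared plan without entering
# each other's work zone. Illustrative only, not the DFAB House control code.
from dataclasses import dataclass

@dataclass
class Beam:
    beam_id: int
    x: float  # position along the work area, in meters (hypothetical)

MIN_SEPARATION = 1.5  # keep the arms' work zones at least this far apart

def schedule(beams: list[Beam]) -> list[tuple[str, int]]:
    """Assign beams to arm A and arm B, only pairing them up when the two
    work sites are far enough apart to avoid a collision."""
    plan, pending = [], sorted(beams, key=lambda b: b.x)
    while pending:
        a = pending.pop(0)
        plan.append(("arm_A", a.beam_id))
        # Give arm B the next beam only if it is far enough from arm A's.
        if pending and abs(pending[0].x - a.x) >= MIN_SEPARATION:
            plan.append(("arm_B", pending.pop(0).beam_id))
    return plan

print(schedule([Beam(1, 0.0), Beam(2, 0.8), Beam(3, 2.4), Beam(4, 4.0)]))
```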

“If any change is made to the project overall, the computer model can be constantly adjusted to meet the new requirements,” explained Matthias Kohler, who heads the project, in an ETHZ news release. “This kind of integrated digital architecture is closing the gap between design, planning and execution.”


Human workers have to do the bolting step, but that step too seems like it could be automated; the robots may not have the sensors or tools available to undertake it at present.

Eventually the beams will also be reinforced by similarly prefabbed concrete posts and slot into a “smart slab,” optimized for exactly these layouts and created by sand-based 3D printing. The full three-story structure should be complete and open to explore this fall. You can learn more at the project’s website.

Water Abundance XPRIZE finalists compete in gathering water from thin air

Despite being a necessity for life, clean, drinkable water can be extremely hard to come by in some places where war has destroyed infrastructure or climate change has dried up rivers and aquifers. The Water Abundance XPRIZE is up for grabs to teams that can suck fresh water straight out of the air, and it just announced its five finalists.

The requirements for the program are steep enough to sound almost like science fiction: the device must extract “a minimum of 2,000 liters of water per day from the atmosphere using 100 percent renewable energy, at a cost of no more than 2 cents per liter.” Is that even possible?!
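To see why that target reads like science fiction, here is a back-of-the-envelope calculation of my own (the physics figure is not from the XPRIZE; roughly 2.26 MJ of heat must be rejected to condense each kilogram of water vapor, before any losses):

```python
# Back-of-the-envelope numbers behind the XPRIZE target. Illustrative only.
LITERS_PER_DAY = 2_000          # required output
COST_PER_LITER = 0.02           # dollars
LATENT_HEAT_MJ_PER_KG = 2.26    # approx. heat released when water vapor condenses

daily_budget = LITERS_PER_DAY * COST_PER_LITER                # all-in dollars per day
heat_to_remove_mj = LITERS_PER_DAY * LATENT_HEAT_MJ_PER_KG    # 1 L of water is about 1 kg
heat_to_remove_kwh = heat_to_remove_mj / 3.6                  # 1 kWh = 3.6 MJ

print(f"Daily cost ceiling: ${daily_budget:.0f}")
print(f"Minimum condensation heat to reject: ~{heat_to_remove_kwh:,.0f} kWh/day")
```

That works out to a roughly $40-per-day budget against well over a thousand kilowatt-hours of heat that has to be moved, which is why the teams are leaning on clever passive designs rather than brute-force refrigeration.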

For a million bucks, people will try anything. But only five teams have made it to the finals, taking equal shares of a $250,000 “milestone prize” to further their work. There isn’t a lot of technical info on them yet, but here they are, in alphabetical order:

Hydro Harvest: This Australian team based out of the University of Newcastle is “going back to basics,” which is probably smart if you want to keep costs down. The team has worked together before on an emission-free engine that turns waste heat into electricity.

JMCC Wing: This Hawaiian team’s leader has been working on solar and wind power for many years, so it’s no surprise their solution involves the “marriage” of a super-high-efficiency, scalable wind energy harvester with a commercial water condenser. The bigger the generator, the cheaper the energy.

Skydra: Very little information is available for this Chicago team, except that they have created “a hybrid solution that utilizes both natural and engineered systems.”

The Veragon & Thinair: Alphabetically this collaboration falls on both sides of U, but I’m putting it here. This U.K. collaboration has developed a material that “rapidly enhances the process of water condensation,” and it plans not only to produce fresh water but also to pack it with minerals.

Uravu: Out of Hyderabad in India, this team is also going back to basics with a solar-powered solution that doesn’t appear to actually use solar cells — the rays of the sun and design of the device do it all. The water probably comes out pretty warm, though.

The first round of testing took place in January, and round 2 comes in July, at which point the teams’ business plans are also due. In August there should be an announcement of the $1 million grand prize winner. Good luck to all involved, and regardless of who takes home the prize, here’s hoping this tech gets deployed to good purpose where it’s needed.

Here’s how Uber’s self-driving cars are supposed to detect pedestrians

A self-driving vehicle made by Uber has struck and killed a pedestrian. It’s the first such incident and will certainly be scrutinized like no other autonomous vehicle interaction in the past. But on the face of it, it’s hard to understand how, short of a total system failure, this could happen when the entire car has essentially been designed around preventing exactly this situation from occurring.

Something unexpectedly entering the vehicle’s path is pretty much the first emergency event that autonomous car engineers look at. The situation could be many things — a stopped car, a deer, a pedestrian — and the systems are one and all designed to detect them as early as possible, identify them, and take appropriate action. That could be slowing, stopping, swerving, anything.
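As a rough illustration of that detect-identify-act logic, here is my own simplification (not any vendor's actual planner): at its core, the decision comes down to something like a time-to-collision check, with extra margin for people. Function names and thresholds are invented for the example.

```python
# Deliberately simplified detect -> identify -> act decision. Real planners
# weigh far more factors; names and thresholds here are illustrative.
def choose_action(obstacle: str, distance_m: float, closing_speed_ms: float) -> str:
    """Pick a response based on how soon the car would reach the obstacle."""
    if closing_speed_ms <= 0:
        return "maintain"                                   # not on a collision course
    time_to_collision = distance_m / closing_speed_ms       # seconds
    margin = 2.0 if obstacle == "pedestrian" else 1.5       # extra caution around people
    if time_to_collision < margin:
        return "emergency_brake"
    if time_to_collision < 2 * margin:
        return "slow_and_prepare_to_stop"
    return "monitor"

print(choose_action("pedestrian", distance_m=30.0, closing_speed_ms=15.0))
```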

Uber’s vehicles are equipped with several different imaging systems that handle both ordinary duty (monitoring nearby cars, signs, and lane markings) and extraordinary duty like that just described. No fewer than four of them should have picked up the victim in this case.

Top-mounted lidar. The bucket-shaped item on top of these cars is a lidar, or light detection and ranging, system that produces a 3D image of the car’s surroundings multiple times per second. Using infrared laser pulses that bounce off objects and return to the sensor, lidar can detect static and moving objects in considerable detail, day or night.

This is an example of lidar-created imagery, though not specifically what the Uber vehicle would have seen.

Heavy snow and fog can obscure a lidar’s lasers, and its accuracy decreases with range, but for anything from a few feet to a few hundred feet, it’s an invaluable imaging tool and one that is found on practically every self-driving car.

The lidar unit, if operating correctly, should have been able to make out the person in question, provided they were not totally obscured, while they were still more than a hundred feet away, and to pass their presence on to the “brain” that collates the imagery.
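The ranging principle itself is straightforward time-of-flight math: a pulse goes out, bounces back, and half the round trip at the speed of light is the distance. A minimal sketch of that arithmetic (my own illustration):

```python
# Lidar ranging as time-of-flight: distance = (speed of light * round trip) / 2.
SPEED_OF_LIGHT = 299_792_458  # meters per second

def range_from_pulse(round_trip_seconds: float) -> float:
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A return after ~200 nanoseconds corresponds to an object about 30 m away.
print(f"{range_from_pulse(200e-9):.1f} m")
```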

Front-mounted radar. Radar, like lidar, sends out a signal and waits for it to bounce back, but it uses radio waves instead of light. This makes it more resistant to interference, since radio can pass through snow and fog, but also lowers its resolution and changes its range profile.

Tesla’s Autopilot relies mostly on radar.

Depending on the radar unit Uber employed — likely multiple in both front and back to provide 360 degrees of coverage — the range could differ considerably. If it’s meant to complement the lidar, chances are it overlaps considerably, but is built more to identify other cars and larger obstacles.

The radar signature of a person is not nearly so recognizable, but it’s very likely they would have at least shown up, confirming what the lidar detected.
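Conceptually, that cross-check amounts to asking whether two sensors agree on an object's rough position. Here's a toy version of the idea (my own illustration, not Uber's fusion code; the tolerances are made up):

```python
# Toy cross-confirmation: treat a detection as corroborated when lidar and
# radar both report an object at roughly the same range and bearing.
def confirmed(lidar_hits, radar_hits, max_range_gap=2.0, max_bearing_gap=3.0):
    """Each hit is a (range_m, bearing_deg) tuple. Return the lidar hits that
    some radar hit corroborates within the given tolerances."""
    out = []
    for lr, lb in lidar_hits:
        if any(abs(lr - rr) <= max_range_gap and abs(lb - rb) <= max_bearing_gap
               for rr, rb in radar_hits):
            out.append((lr, lb))
    return out

print(confirmed([(31.0, -2.0), (80.0, 15.0)], [(30.2, -1.5)]))  # -> [(31.0, -2.0)]
```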

Short- and long-range optical cameras. Lidar and radar are great for locating shapes, but they’re no good for reading signs, figuring out what color something is, and so on. That’s a job for visible-light cameras with sophisticated computer vision algorithms running in real time on their imagery.

The cameras on the Uber vehicle watch for telltale patterns that indicate braking vehicles (sudden red lights), traffic lights, crossing pedestrians, and so on. Especially on the front end of the car, multiple angles and types of camera would be used, so as to get a complete picture of the scene into which the car is driving.

Detecting people is one of the most commonly attempted computer vision problems, and the algorithms that do it have gotten quite good. “Segmenting” an image, as it’s often called, generally also involves identifying things like signs, trees, sidewalks and more.
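For a sense of what an off-the-shelf pedestrian detector looks like, here is OpenCV's stock HOG-plus-SVM people detector. This is a generic example of the problem class, not the proprietary vision stack Uber runs, and the image filenames are hypothetical.

```python
# Classic HOG + linear SVM pedestrian detection with OpenCV (illustrative).
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("street_scene.jpg")   # hypothetical camera frame
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)

# Draw a box around each candidate pedestrian.
for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

print(f"{len(boxes)} candidate pedestrians found")
cv2.imwrite("street_scene_annotated.jpg", frame)
```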

That said, it can be hard at night. But that’s an obvious problem, the answer to which is the previous two systems, which work night and day. Even in pitch darkness, a person wearing all black would show up on lidar and radar, warning the car that it should perhaps slow and be ready to see that person in the headlights. That’s probably why a night-vision system isn’t commonly found in self-driving vehicles (I can’t be sure there isn’t one on the Uber car, but it seems unlikely).

Safety driver. It may sound cynical to refer to a person as a system, but the safety drivers in these cars are very much acting in the capacity of an all-purpose failsafe. People are very good at detecting things, even though we don’t have lasers coming out of our eyes. And our reaction times aren’t the best, but if it’s clear that the car isn’t going to respond, or has responded wrongly, a trained safety driver will react correctly.

Worth mentioning is that there is also a central computing unit that takes the input from these sources and creates its own more complete representation of the world around the car. A person may disappear behind a car in front of the system’s sensors, for instance, and no longer be visible for a second or two, but that doesn’t mean they ceased existing. This goes beyond simple object recognition and begins to bring in broader concepts of intelligence such as object permanence, predicting actions, and the like.

It’s also arguably the most advanced and closely guarded part of any self-driving car system, and so it is kept well under wraps.
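Here's a stripped-down sketch of the object-permanence idea described above: a tracked object "coasts" on its last estimated velocity for a few frames after the sensors lose sight of it, rather than being forgotten instantly. All names and numbers are my own illustration, not anyone's production tracker.

```python
# Minimal track that survives a brief occlusion by coasting on its last velocity.
from dataclasses import dataclass

@dataclass
class Track:
    x: float              # position along the road, meters (toy 1-D world)
    vx: float             # estimated velocity, m/s
    missed_frames: int = 0

MAX_COAST_FRAMES = 10     # how long to keep predicting without a fresh detection

def update(track: Track, detection_x: float | None, dt: float = 0.1) -> Track | None:
    if detection_x is not None:
        track.vx = (detection_x - track.x) / dt   # crude velocity estimate
        track.x = detection_x
        track.missed_frames = 0
    else:
        track.x += track.vx * dt                  # coast: predict where it should be
        track.missed_frames += 1
        if track.missed_frames > MAX_COAST_FRAMES:
            return None                           # only now is the track dropped
    return track

t = Track(x=20.0, vx=-1.5)
for obs in [19.8, None, None, 19.2]:              # two frames of occlusion
    t = update(t, obs)
print(t)
```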

It isn’t clear what the circumstances were under which this tragedy played out, but the car was certainly equipped with technology that was intended to, and should have, detected the person and caused the car to react appropriately. Furthermore, if one system didn’t work, another should have sufficed; multiple fallbacks are only prudent in high-stakes matters like driving on public roads.

We’ll know more as Uber, local law enforcement, federal authorities, and others investigate the accident.

IBM working on ‘world’s smallest computer’ to attach to just about everything

IBM is hard at work on the problem of ubiquitous computing, and its approach, understandably enough, is to make a computer small enough that you might mistake it for a grain of sand. Eventually these omnipresent tiny computers could help authenticate products, track medications and more.

Look closely at the image above and you’ll see the device both on that pile of salt and on the person’s finger. No, not that big one. Look closer:

It’s an evolution of IBM’s “crypto anchor” program, which uses a variety of methods to create what amount to high-tech watermarks for products, verifying, for example, that they came from the factory the distributor claims they did and aren’t counterfeits mixed in with genuine items.

The “world’s smallest computer,” as IBM continually refers to it, is meant to bring blockchain capability into this; the security advantages of blockchain-based logistics and tracking could be brought to something as benign as a bottle of wine or box of cereal.
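IBM hasn't published implementation details, but the underlying idea is familiar hash-chaining: each event recorded for a tagged product commits to the previous record, so tampering anywhere breaks the chain. A toy sketch of my own, with made-up product IDs and events:

```python
# Toy hash-chained product ledger. Illustrative of the general idea only,
# not IBM's crypto anchor design.
import hashlib
import json

def add_record(chain: list[dict], product_id: str, event: str) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"product_id": product_id, "event": event, "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

def verify(chain: list[dict]) -> bool:
    for i, record in enumerate(chain):
        body = {k: v for k, v in record.items() if k != "hash"}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != record["hash"]:
            return False
        if i and record["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

ledger: list[dict] = []
add_record(ledger, "wine-bottle-0042", "left factory")
add_record(ledger, "wine-bottle-0042", "arrived at distributor")
print(verify(ledger))                               # True
ledger[0]["event"] = "left counterfeit workshop"
print(verify(ledger))                               # False: the tampering is detectable
```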

A schematic shows the parts (you’ll want to view full size).

In addition to getting the computers extra-tiny, IBM intends to make them extra-cheap, perhaps 10 cents apiece. So there’s not much of a lower limit on what types of products could be equipped with the tech.

Not only that, but the usual promises of ubiquitous computing also apply: this smart dust could be all over the place, doing little calculations, sensing conditions, connecting with other motes and the internet to allow… well, use your imagination.

It’s small (about 1mm x 1mm), but it still has the power of a complete computer, albeit not a hot new one. With a few hundred thousand transistors, a bit of RAM, a solar cell and a communications module, it has about the power of a chip from 1990. And we got a lot done on those, right?

Of course at this point it’s very much still a research project in IBM’s labs, not quite a reality; the project is being promoted as part of the company’s “five in five” predictions of turns technology will take in the next five years.

Meet the man whose voice became Stephen Hawking’s

A man and a voice who will be missed.

Image: Karwai Tang/Getty Images

Stephen Hawking’s computer-generated voice is so iconic that it’s trademarked; the filmmakers behind The Theory of Everything had to get Hawking’s personal permission to use the voice in his biopic.

But that voice has an interesting origin story of its own.

Back in the ’80s, when Hawking was first exploring text-to-speech communication options after he lost the power of speech, a pioneer in computer-generated speech algorithms was working at MIT on that very thing. His name was Dennis Klatt.

As Wired uncovered, Klatt’s work was incorporated into one of the first devices that translated text into speech: the DECtalk. The company that made the speech synthesizer for Hawking’s very first computer used the voice Klatt had recorded for computer synthesis. The voice was called ‘Perfect Paul,’ and it was based on recordings of Klatt himself.

In essence, Klatt lent his voice to the program that would become known the world over as the voice of Stephen Hawking.

Hawking passed away on Wednesday at the age of 76. The renowned cosmologist lived with amyotrophic lateral sclerosis, or ALS, for 55 years. His death has prompted an outpouring of love, support, and admiration for his work and his inspirational outlook on life. It’s also prompted reflection on how he managed to have such an enormous impact on science and the world, when his primary mode of communication for the last four decades was a nerve sensor in his cheek that allowed him to type, and a text-to-speech computer. 

Though Hawking had only had the voice for a short time, it quickly became his own. According to Wired, when the company that produced the synthesizer offered Hawking an upgrade in 1988, he refused it. Even recently, as Intel worked on software upgrades for Hawking over the last decade, they searched through the dusty archives of a long-since-acquired company so they could use the original Klatt-recorded voice, at Hawking’s request.

Klatt was an American engineer who passed away in 1989, just a year after Hawking insisted on keeping ‘Perfect Paul’ as his own. He was a member of MIT’s Speech Communication Group, and according to his obituary, had a special interest in applying his research in computational linguistics to assist people with disabilities.

Hawking has been known to defend and champion his voice. During a 2014 meeting, the Queen jokingly asked the British Hawking, “Have you still got that American voice?” Hawking, like the sass machine that he is, replied, “Yes, it is copyrighted actually.”

Hawking doesn’t actually consider his voice fully “American.” In a section on his website entitled “The Computer,” Hawking explains his voice technology:

“I use a separate hardware synthesizer, made by Speech Plus,” he writes. “It is the best I have heard, although it gives me an accent that has been described variously as Scandinavian, American or Scottish.”

It’s an accent, and a voice, that will be missed.

You can find Hawking’s last lecture, which he gave in Japan earlier this month, on his website. It’s called ‘The Beginning of Time.’
