All posts in “Science”

Gates-backed Lumotive upends lidar conventions using metamaterials

Pretty much every self-driving car on the road, not to mention many a robot and drone, uses lidar to sense its surroundings. But useful as lidar is, it also involves physical compromises that limit its capabilities. Lumotive is a new company with funding from Bill Gates and Intellectual Ventures that uses metamaterials to exceed those limits, perhaps setting a new standard for the industry.

The company is just now coming out of stealth, but it’s been in the works for a long time. I actually met with them back in 2017 when the project was very hush-hush and operating under a different name at IV’s startup incubator. If the terms “metamaterials” and “Intellectual Ventures” tickle something in your brain, it’s because IV has spawned several startups that use intellectual property developed there, building on the work of materials scientist David Smith.

Metamaterials are essentially specially engineered surfaces with microscopic structures — in this case, tunable antennas — embedded in them, working as a single device.

Echodyne is another company that used metamaterials to great effect, shrinking radar arrays to pocket size by engineering a radar transceiver that’s essentially 2D and can have its beam steered electronically rather than mechanically.

The principle works for pretty much any wavelength of electromagnetic radiation — i.e. you could use X-rays instead of radio waves — but until now no one has made it work with visible light. That’s Lumotive’s advance, and the reason it works so well.

Flash, 2D, and 1D lidar

Lidar basically works by bouncing light off the environment and measuring how and when it returns; this can be accomplished in several ways.

Flash lidar basically sends out a pulse that illuminates the whole scene with near-infrared light (905 nanometers, most likely) at once. This provides a quick measurement of the whole scene, but only limited range, since the power of the light being emitted is itself limited.

2D or raster scan lidar takes a NIR laser and plays it over the scene incredibly quickly, left to right, down a bit, then again, and again, and again… scores or hundreds of times. Focusing the power into a beam gives these systems excellent range, but much like a CRT TV tracing out an image with its electron beam, it takes rather a long time to complete the whole scene. Turnaround time is naturally of major importance in driving situations.

1D or line scan lidar strikes a balance between the two, using a vertical line of laser light that only has to go from one side to the other to complete the scene. This sacrifices some range and resolution but significantly improves responsiveness.

Lumotive offered the following diagram, which helps visualize the systems, although obviously “suitability” and “too short” and “too slow” are somewhat subjective:

The main problem with the latter two is that they rely on a mechanical platform to actually move the laser emitter or mirror from place to place. It works fine for the most part, but there are inherent limitations. For instance, it’s difficult to stop, slow, or reverse a beam that’s being moved by a high speed mechanism. If your 2D lidar system sweeps over something that could be worth further inspection, it has to go through the rest of its motions before coming back to it… over and over.

This is the primary advantage offered by a metamaterial system over existing ones: electronic beam steering. In Echodyne’s case the radar could quickly sweep over its whole range like normal, and upon detecting an object could immediately switch over and focus 90 percent of its cycles tracking it in higher spatial and temporal resolution. The same thing is now possible with lidar.

Imagine a deer jumping out around a blind curve. Every millisecond counts, because the earlier a self-driving system knows the situation, the more options it has to accommodate it. All other things being equal, an electronically steered lidar system would detect the deer at the same time as the mechanically steered ones, or perhaps a bit sooner. But upon noticing this movement, it could not just make more time for evaluating it on the next “pass,” but a microsecond later be backing the beam up and specifically targeting just the deer with the majority of its resolution.

Just for illustration. The beam isn’t some big red thing that comes out.

Targeted illumination would also improve the estimation of direction and speed, further improving the driving system’s knowledge and options — and the beam can still dedicate a portion of its cycles to watching the road, requiring no complicated mechanical hijinks to do so. Meanwhile the system has an enormous aperture, allowing high sensitivity.

In terms of specs, it depends on many things, but if the beam is just sweeping normally across its 120×25 degree field of view, the standard unit will have about a 20Hz frame rate, with a 1000×256 resolution. That’s comparable to competitors, but keep in mind that the advantage is in the ability to change that field of view and frame rate on the fly. In the example of the deer, it may maintain a 20Hz refresh for the scene at large but concentrate more beam time on a 5×5 degree area, giving it a much faster rate.
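To put that flexibility in rough numbers, here’s a back-of-the-envelope sketch in Python. The figures are the ones quoted above; the assumption that the entire point budget could be retargeted at the 5×5 degree region is mine, an idealized upper bound rather than anything Lumotive has specified.

```python
# Back-of-the-envelope only; real behavior depends on the scan pattern.
FOV_H_DEG, FOV_V_DEG = 120, 25   # quoted field of view, degrees
FRAME_RATE_HZ = 20               # quoted full-scene frame rate
RES_H, RES_V = 1000, 256         # quoted points per frame

points_per_second = FRAME_RATE_HZ * RES_H * RES_V
print(f"Full-scene budget: {points_per_second / 1e6:.1f} million points/s")

# Idealized case: the whole budget aimed at a 5x5 degree region of interest
# at the same angular point density.
roi_fraction = (5 * 5) / (FOV_H_DEG * FOV_V_DEG)
print(f"Region of interest: {roi_fraction:.1%} of the scene")
print(f"Idealized revisit rate for that region: {FRAME_RATE_HZ / roi_fraction:,.0f} Hz")
```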

Meta doesn’t mean mega-expensive

Naturally one would assume that such a system would be considerably more expensive than existing ones. Pricing is still a ways out — Lumotive just wanted to show that its tech exists for now — but this is far from exotic tech.

CG render of a lidar metamaterial chip.

The team told me in an interview that their engineering process was tricky specifically because they designed it for fabrication using existing methods. It’s silicon-based, meaning it can use cheap and ubiquitous 905nm lasers rather than the rarer 1550nm, and its fabrication isn’t much more complex than making an ordinary display panel.

CTO and co-founder Gleb Akselrod explained: “Essentially it’s a reflective semiconductor chip, and on the surface we fabricate these tiny antennas to manipulate the light. It’s made using a standard semiconductor process, then we add liquid crystal, then the coating. It’s a lot like an LCD.”

An additional bonus of the metamaterial basis is that it works the same regardless of the size or shape of the chip. While an inch-wide rectangular chip is best for automotive purposes, Akselrod said, they could just as easily make one a quarter the size for robots that don’t need the wider field of view, or a larger or custom-shaped one for a specialty vehicle or aircraft.

The details, as I said, are still being worked out. Lumotive has been working on this for years and decided it was time to just get the basic information out there. “We spend an inordinate amount of time explaining the technology to investors,” noted CEO and co-founder Bill Colleran. He, it should be noted, is a veteran innovator in this field, having headed Impinj most recently, and before that worked at Broadcom, but he is perhaps best known for being CEO of Innovent when it created the first CMOS Bluetooth chip.

Right now the company is seeking investment after running on a 2017 seed round funded by Bill Gates and IV, which (as with other metamaterial-based startups it has spun out) is granting Lumotive an exclusive license to the tech. There are partnerships and other things in the offing but the company wasn’t ready to talk about them; the product is currently in prototype but very showable form for the inevitable meetings with automotive and tech firms.

Intel and Cray are building a $500 million ‘exascale’ supercomputer for Argonne National Lab

In a way, I have the equivalent of a supercomputer in my pocket. But in another, more important way, that pocket computer is a joke compared with real supercomputers — and Intel and Cray are putting together one of the biggest ever with a half-billion-dollar contract from the Department of Energy. It’s going to do exaflops!

The “Aurora” program aims to put together an “exascale” computing system for Argonne National Laboratory by 2021. The “exa” is a prefix indicating bigness, in this case 1 quintillion floating point operations per second, or FLOPS. They’re kind of the horsepower rating of supercomputers.

For comparison, your average modern CPU does maybe a hundred or more gigaflops. A thousand gigaflops makes a teraflop, a thousand teraflops makes a petaflop, and a thousand petaflops makes an exaflop. So despite major advances in computing efficiency going into making super powerful smartphones and desktops, we’re talking several orders of magnitude difference. (Let’s not get into GPUs, it’s complicated.)
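If it helps to see those prefixes as plain numbers, here’s a quick sketch; the CPU figure is the same rough ballpark mentioned above, and Summit’s 200 petaflops comes up in the next paragraph.

```python
# The prefix ladder as powers of ten (FLOPS = floating point operations per second).
GIGA, TERA, PETA, EXA = 1e9, 1e12, 1e15, 1e18

desktop_cpu = 100 * GIGA   # "maybe a hundred or more gigaflops," per above
summit      = 200 * PETA   # IBM Summit, roughly 200 petaflops
aurora      = 1 * EXA      # Aurora's exascale target

print(f"Aurora vs. a ~100-gigaflop CPU: {aurora / desktop_cpu:,.0f}x")  # 10,000,000x
print(f"Aurora vs. Summit:              {aurora / summit:.0f}x")        # 5x
```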

And even when compared with the biggest supercomputers and clusters out there, you’re still looking at a max of 200 petaflops (that would be IBM’s Summit, over at Oak Ridge National Lab) or thereabouts.

Just what do you need that kind of computing power for? Petaflops wouldn’t do it? Well, no, actually. One very recent example of computing limitations in real-world research was this study of how climate change could affect cloud formation in certain regions, reinforcing the trend and leading to a vicious cycle.

This kind of thing could only be estimated with much coarser models before; computing resources were too tight to allow for the extremely large number of variables involved. Imagine simulating a ball bouncing on the ground — easy — now imagine simulating every molecule in that ball, their relationships to each other, gravity, air pressure, other forces — hard. Now imagine simulating two stars colliding.

The more computing resources we have, the more can be dedicated to, as the Intel press release offers as examples, “developing extreme-scale cosmological simulations, discovering new approaches for drug response prediction and discovering materials for the creation of more efficient organic solar cells.”

Intel says that Aurora will be the first exaflop system in the U.S. — an important caveat, since China is aiming to accomplish the task a year earlier. There’s no reason to think they won’t achieve it, either, since Chinese supercomputers have reliably been among the fastest in the world.

If you’re curious what ANL may be putting its soon-to-be-built computers to work for, feel free to browse its research index. The short answer is “just about everything.”

Tiny claws let drones perch like birds and bats

Drones are useful in countless ways, but that usefulness is often limited by the time they can stay in the air. Shouldn’t drones be able to take a load off too? With these special claws attached, they can perch or hang with ease, conserving battery power and vastly extending their flight time.

The claws, created by a highly multinational team of researchers I’ll list at the end, are inspired by birds and bats. The team noted that many flying animals have specially adapted feet or claws suited to attaching the creature to its favored surface. Sometimes they sit, sometimes they hang, sometimes they just kind of lean on it and don’t have to flap as hard.

As the researchers write:

In all of these cases, some suitably shaped part of the animal’s foot interacts with a structure in the environment and facilitates that less lift needs to be generated or that power flight can be completely suspended. Our goal is to use the same concept, which is commonly referred to as “perching,” for UAVs [unmanned aerial vehicles].

“Perching,” you say? Go on…

We designed a modularized and actuated landing gear framework for rotary-wing UAVs consisting of an actuated gripper module and a set of contact modules that are mounted on the gripper’s fingers.

This modularization substantially increased the range of possible structures that can be exploited for perching and resting as compared with avian-inspired grippers.

Instead of trying to build one complex mechanism, like a pair of articulating feet, the team gave the drones a set of specially shaped 3D-printed static modules and one big gripper.

The drone surveys its surroundings using lidar or some other depth-aware sensor. This lets it characterize surfaces nearby and match those to a library of examples that it knows it can rest on.
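To give a sense of what that matching step might look like, here’s a minimal sketch. The structure, names and thresholds are my own illustration of the idea, not the authors’ code.

```python
# Hypothetical sketch of matching a sensed surface to a library of perchable
# geometries; not the authors' implementation.
from dataclasses import dataclass

@dataclass
class SurfaceEstimate:
    kind: str        # e.g. "pole", "horizontal_bar", "ledge", from the depth sensor
    width_m: float   # characteristic dimension of the structure

# Which contact module handles which kind of structure, and at what sizes.
PERCH_LIBRARY = {
    "pole":           {"module": "top_grip",    "min_w": 0.02, "max_w": 0.08},
    "horizontal_bar": {"module": "hang_grip",   "min_w": 0.02, "max_w": 0.08},
    "ledge":          {"module": "edge_cutout", "min_w": 0.05, "max_w": 1.00},
}

def choose_perch(surface: SurfaceEstimate):
    """Return the contact module to use, or None if nothing in the library fits."""
    entry = PERCH_LIBRARY.get(surface.kind)
    if entry and entry["min_w"] <= surface.width_m <= entry["max_w"]:
        return entry["module"]
    return None

print(choose_perch(SurfaceEstimate("horizontal_bar", 0.04)))  # -> hang_grip
```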

Squared-off edges like those on the top right can be rested on as in A, while a pole can be balanced on as in B.

If the drone sees a pole it can rest on, it can grab it from above. If it’s a horizontal bar, it can grip it and hang below, flipping up again when necessary. If it’s a ledge, it can use a little cutout to steady itself against the corner, letting it shut off some or all of its motors. These modules can easily be swapped out or modified depending on the mission.

I have to say the whole thing actually seems to work remarkably well for a prototype. The hard part appears to be the recognition of useful surfaces and the precise positioning required to land on them properly. But it’s useful enough — in professional and military applications especially, one suspects — that it seems likely to be a common feature in a few years.

The paper describing this system was published in the journal Science Robotics. I don’t want to leave anyone out, so it’s by: Kaiyu Hang, Ximin Lyu, Haoran Song, Johannes A. Stork, Aaron M. Dollar, Danica Kragic and Fu Zhang, from Yale, the Hong Kong University of Science and Technology, the University of Hong Kong, and the KTH Royal Institute of Technology.

Opportunity’s last Mars panorama is a showstopper

The Opportunity Mars Rover may be officially offline for good, but its legacy of science and imagery is ongoing — and NASA just shared the last (nearly) complete panorama the robot sent back before it was blanketed in dust.

After more than 5,000 days (or rather sols) on the Martian surface, Opportunity found itself in Endeavour Crater, specifically in Perseverance Valley on the western rim. For the last month of its active life, it systematically imaged its surroundings to create another of its many impressive panoramas.

Using the Pancam, which shoots sequentially through blue, green, and deep red (near-infrared) filters, it snapped 354 images of the area, capturing a broad variety of terrain as well as bits of itself and its tracks into the valley. You can click the image below for the full annotated version.
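For the curious, assembling a false-color frame from those sequential filter exposures is conceptually simple; here’s a simplified sketch that maps the near-infrared exposure to the red channel. Real Pancam processing involves calibration and registration steps I’m glossing over.

```python
import numpy as np

def false_color(near_ir, green, blue):
    """Stack three single-filter exposures into one RGB frame,
    with the near-infrared exposure standing in for red."""
    stack = np.dstack([near_ir, green, blue]).astype(float)
    stack -= stack.min()
    if stack.max() > 0:
        stack /= stack.max()   # normalize to [0, 1] for display
    return stack

# Stand-in 4x4 "exposures" in place of real Pancam frames.
rng = np.random.default_rng(0)
frame = false_color(*(rng.random((4, 4)) for _ in range(3)))
print(frame.shape)  # (4, 4, 3)
```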

It’s as perfect and diverse an example of the Martian landscape as one could hope for, and the false-color image (the flatter true-color version is here) has a special otherworldly beauty to it, one only deepened by the poignancy of this being the rover’s last shot. In fact, it didn’t even finish — a monochrome region in the lower left shows where it needed to add color next.

This isn’t technically the last image the rover sent, though. As the fatal dust storm closed in, Opportunity sent one last thumbnail for an image that never went out: its last glimpse of the sun.

After this the dust cloud so completely covered the sun that Opportunity was enveloped in pitch darkness, as its true last transmission showed:

All the sparkles and dots are just noise from the image sensor. It would have been completely dark — and for weeks on end, considering the planetary scale of the storm.

Opportunity had a hell of a good run, lasting and traveling many times what it was expected to and exceeding even the wildest hopes of the team. That right up until its final day it was capturing beautiful and valuable data is testament to the robustness and care with which it was engineered.

Koala-sensing drone helps keep tabs on drop bear numbers

It’s obviously important to Australians to make sure their koala population is closely tracked — but how can you do so when the suckers live in forests and climb trees all the time? With drones and AI, of course.

A new project from Queensland University of Technology combines some well-known techniques in a new way to help keep an eye on wild populations of the famous and soft marsupials. They used a drone equipped with a heat-sensing camera, then ran the footage through a deep learning model trained to look for koala-like heat signatures.

It’s similar in some ways to an earlier project from QUT in which dugongs — endangered sea cows — were counted along the shore via aerial imagery and machine learning. But this is considerably harder.

A koala.

“A seal on a beach is a very different thing to a koala in a tree,” said study co-author Grant Hamilton in a news release, perhaps choosing not to use dugongs as an example because comparatively few know what one is.

“The complexity is part of the science here, which is really exciting,” he continued. “This is not just somebody counting animals with a drone, we’ve managed to do it in a very complex environment.”

The team sent their drone out in the early morning, when they expected to see the greatest contrast between the temperature of the air (cool) and tree-bound koalas (warm and furry). It traveled as if it were a lawnmower trimming the tops of the trees, collecting data from a large area.

Infrared image, left, and output of the neural network highlighting areas of interest.

This footage was then put through a deep learning system trained to recognize the size and intensity of the heat put out by a koala, while ignoring other objects and animals like cars and kangaroos.

For these initial tests, the accuracy of the system was checked by comparing the inferred koala locations with ground truth measurements provided by GPS units on some animals and radio tags on others. Turns out the system found about 86 percent of the koalas in a given area, considerably better than an “expert koala spotter,” who rates about a 70. Not only that, but it’s a whole lot quicker.
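The scoring itself is simple enough to sketch; the numbers below are toy values chosen to land near the reported figure, not the study’s data.

```python
def recall(detected, tagged):
    """Fraction of the known (GPS- or radio-tagged) koalas that the detector found."""
    return len(set(detected) & set(tagged)) / len(tagged)

tagged   = {f"koala_{i}" for i in range(50)}   # hypothetical tagged animals
detected = {f"koala_{i}" for i in range(43)}   # detections matched back to tags

print(f"Recall: {recall(detected, tagged):.0%}")  # 86%
```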

“We cover in a couple of hours what it would take a human all day to do,” Hamilton said. But it won’t replace human spotters or ground teams. “There are places that people can’t go and there are places that drones can’t go. There are advantages and downsides to each one of these techniques, and we need to figure out the best way to put them all together. Koalas are facing extinction in large areas, and so are many other species, and there is no silver bullet.”

Having tested the system in one area of Queensland, the team is now going to head out and try it in other areas of the coast. They also plan to add other classifiers, so other endangered or invasive species can be identified with similar ease.

Their paper was published today in the Nature journal Scientific Reports.