
AI consulting startup Hypergiant brings on Bill Nye as an advisor

Hypergiant, a startup launched last year to help large companies close the execution gap in applying AI and machine learning technologies, has signed on a high-profile new advisor to help out with the new ‘Galactic Systems’ division of its services lineup.

Hypergiant founder and CEO Ben Lamm also serves as an Advisory Council member for The Planetary Society, the nonprofit dedicated to space science and exploration advocacy, which Nye leads as its CEO. Through that connection, Nye did some voiceover work for Hypergiant on the video at the bottom of this post, and then decided to come on in a more formal capacity as an official advisor, joining Hypergiant’s Advisory Board.

Nye was specifically interested in helping Hypergiant work on AI tech that touches on a couple of areas he’s most passionate about.

“Hypergiant has an ambitious mission to address some big problems using artificial intelligence systems,” Nye explained via email. “I’m looking forward to working with Hypergiant to develop artificially intelligent systems in two areas I care about a great deal: climate change and space exploration. We need to think big, and I’m very optimistic about what AI can do to make the world quite a bit better.”

Through its work, Hypergiant already has projects in flight with high-profile customers including Apple, GE, Starbucks and the Department of Homeland Security, to name just a few. Earlier this year, the Austin-based company announced it was launching a dedicated space division through the acquisition of Satellite & Extraterrestrial Operations & Procedures (SEOPS), a Texas company that offered deployment services for small satellites.


Hypergiant founder and CEO Ben Lamm along with members of the Hypergiant team at NASA. Credit: Hypergiant.

Nye’s role will focus on this division, advising on space, but he will also advise clients on climate change, in order to ensure that Hypergiant can “make the most of AI systems to help provide a high quality of life for people everywhere,” Nye wrote.

“Climate change is the biggest issue we face, and we need to get serious about new ways to fight it,” he explained in an email, noting that the potential impact his work with Hypergiant will have in this area specifically is a key reason he’s excited to undertake the new role.


A Better World from HYPERGIANT on Vimeo.

This robot crawls along wind turbine blades looking for invisible flaws

Wind turbines are a great source of clean power, but their apparent simplicity — just a big thing that spins — belies complex systems that wear down like any other, and can fail with disastrous consequences. Sandia National Labs researchers have created a robot that can inspect the enormous blades of turbines autonomously, helping keep our green power infrastructure in good shape.

The enormous towers that collect energy from wind currents are often only in our view for a few minutes as we drive past. But they must stand for years through inclement weather, temperature extremes, and naturally — being the tallest things around — lightning strikes. Combine that with normal wear and tear and it’s clear these things need to be inspected regularly.

But such inspections can be both difficult and superficial. The blades themselves are among the largest single objects manufactured on the planet, and they’re often installed in distant or inaccessible areas, like the many you see offshore.

“A blade is subject to lightning, hail, rain, humidity and other forces while running through a billion load cycles during its lifetime, but you can’t just land it in a hangar for maintenance,” explained Sandia’s Joshua Paquette in a news release. In other words, not only do crews have to go to the turbines to inspect them, but they often have to do those inspections in place — on structures hundreds of feet tall and potentially in dangerous locations.

Using a crane is one option, but the blade can also be oriented downwards so an inspector can rappel along its length. Even then, the inspection may be no more than eyeballing the surface.

“In these visual inspections, you only see surface damage. Often though, by the time you can see a crack on the outside of a blade, the damage is already quite severe,” said Paquette.

Obviously better and deeper inspections are needed, and that’s what the team decided to work on, with partners International Climbing Machines and Dophitech. The result is this crawling robot, which can move along a blade slowly but surely, documenting it both visually and using ultrasonic imaging.

A visual inspection will see cracks or scuffs on the surface, but the ultrasonics penetrate deep into the blades, making them capable of detecting damage to interior layers well before it’s visible outside. And it can do it largely autonomously, moving a bit like a lawnmower: side to side, bottom to top.
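The lawnmower-style coverage pattern described above amounts to a serpentine (boustrophedon) sweep across the blade surface. Sandia's actual control software isn't public, so the function below is just an illustrative sketch of that kind of path, with made-up names and step sizes:

```python
# Hypothetical sketch of a lawnmower-style coverage path: sweep side to
# side, then step upward, reversing direction each row so the whole
# surface is visited without backtracking.

def serpentine_path(width, height, step):
    """Return (x, y) waypoints sweeping side to side, bottom to top."""
    waypoints = []
    y = 0
    left_to_right = True
    while y <= height:
        row = [(x, y) for x in range(0, width + 1, step)]
        if not left_to_right:
            row.reverse()          # alternate rows run right to left
        waypoints.extend(row)
        left_to_right = not left_to_right
        y += step                  # step up to the next row
    return waypoints

print(serpentine_path(width=4, height=2, step=2))
# [(0, 0), (2, 0), (4, 0), (4, 2), (2, 2), (0, 2)]
```

In practice the crawler would pause at each waypoint to take its visual and ultrasonic readings before moving on.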

Of course at this point it does it quite slowly and requires human oversight, but that’s because it’s fresh out of the lab. In the near future teams could carry around a few of these things, attach one to each blade, and come back a few hours or days later to find problem areas marked for closer inspection or scanning. Perhaps a crawler robot could even live onboard the turbine and scurry out to check each blade on a regular basis.

Another approach the researchers took was drones — a natural enough solution, since the versatile fliers have been pressed into service for inspection of many other structures that are dangerous for humans to get around: bridges, monuments, and so on.

These drones would be equipped with high-resolution cameras and infrared sensors that detect the heat signatures in the blade. The idea is that as warmth from sunlight diffuses through the material of the blade, it will do so irregularly in spots where damage below the surface has changed its thermal properties.

As automation of these systems improves, the opportunities open up: A quick pass by a drone could let crews know whether any particular tower needs closer inspection, then trigger the live-aboard crawler to take a closer look. Meanwhile the humans are on their way, arriving to a better picture of what needs to be done, and no need to risk life and limb just to take a look.

Crowdfunded spacecraft LightSail 2 prepares to go sailing on sunlight

Among the many spacecraft and satellites ascending to space on Monday’s Falcon Heavy launch, the Planetary Society’s LightSail 2 may be the most interesting. If all goes well, a week from launch it will be moving through space — slowly, but surely — on nothing more than the force exerted on it by sunlight.

LightSail 2 doesn’t have solar-powered engines, or use solar energy or heat for some secondary purpose; it will literally be propelled by the physical force of photons hitting its immense shiny sail. Not solar wind, mind you — that’s a different thing altogether.

It’s an idea, explained Planetary Society CEO and acknowledged Science Guy Bill Nye in a press call ahead of the launch, that goes back centuries.

“It really goes back to the 1600s,” he said; Kepler deduced that a force from the sun must cause comet tails and other effects, and “he speculated that brave people would one day sail the void.”

So they might, since more recent astronomers and engineers have pondered the possibility more seriously.

“I was introduced to this in the 1970s, in the disco era. I was in Carl Sagan’s astronomy class… wow, 42 years ago, and he talked about solar sailing,” Nye recalled. “I joined the Planetary Society when it was formed in 1980, and we’ve been talking about solar sails around here ever since then. It’s really a romantic notion that has tremendous practical applications; there are just a few missions that solar sails are absolutely ideal for.”

Those would primarily be long-term, medium-orbit missions where a craft needs to stay in an Earth-like orbit, but still get a little distance away from the home planet — or, in the future, long-distance missions where slow and steady acceleration from the sun or a laser would be more practical than another propulsion method.

Mission profile

The eagle-eyed among you may have spotted the “2” in the name of the mission. LightSail 2 is indeed the second of its type; the first launched in 2015, but was not planned to be anything more than a test deployment that would burn up after a week or so.

That mission had some hiccups, with the sail not deploying to its full extent and a computer glitch compromising communications with the craft. It was not meant to fly via solar sailing, and did not.

“We sent the CubeSat up, we checked out the radio, the communications, the overall electronics, and we deployed the sail and we got a picture of that deployed sail in space,” said COO Jennifer Vaughn. “That was purely a deployment test; no solar sailing took place.”

The spacecraft itself, minus the sail, of course.

But it paved the way for its successor, which will attempt this fantastical form of transportation. Other craft have done so, most notably JAXA’s IKAROS mission to Venus, which was quite a bit larger, though as LightSail 2’s creators pointed out, not nearly as efficient as their craft, and had a very different mission.

The brand new spacecraft, loaded into a 3U CubeSat enclosure — that’s about the size of a loaf of bread — is piggybacking on an Air Force payload going up to an altitude of about 720 kilometers. There it will detach and float freely for a week to get away from the rest of the payloads being released.

Once it’s safely on its own, it will fire out from its carrier craft and begin to unfurl the sail. From that loaf-sized package will emerge an expanse of reflective mylar with an area of 32 square meters — about the size of a boxing ring.

Inside the spacecraft’s body is also what’s called a reaction wheel, which can be spun up or slowed down in order to impart the opposite force on the craft, causing it to change its attitude in space. By this method LightSail 2 will continually orient itself so that the photons striking it propel it in the desired direction, nudging it into the desired orbit.
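The reaction wheel works by conservation of angular momentum: spinning the wheel one way makes the spacecraft body rotate the other way. The inertia values below are invented for illustration (LightSail 2's real parameters aren't given here), but the back-of-envelope relationship is the same:

```python
# Reaction-wheel attitude control via conservation of angular momentum.
# Both moments of inertia below are assumed values, for illustration only.

I_wheel = 1e-4   # wheel moment of inertia, kg*m^2 (assumed)
I_body = 0.05    # spacecraft body moment of inertia, kg*m^2 (assumed)

# Total angular momentum starts at zero and must stay zero, so spinning
# the wheel up imparts an equal and opposite momentum to the body:
omega_wheel = 100.0                            # commanded wheel speed, rad/s
omega_body = -I_wheel * omega_wheel / I_body   # resulting body rate, rad/s

print(omega_body)  # -0.2 rad/s: the body slowly turns opposite the wheel
```

Slowing the wheel back down reverses the rotation, which is how the craft can repeatedly re-orient itself relative to the sun each orbit.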

1 HP (housefly power) engine

The thrust produced, the team explained, is very small — as you might expect. Photons have no mass, but they do carry momentum. Not a lot, to be sure, but it’s greater than zero, and that’s what counts.

“In terms of the amount of force that solar pressure is going to exert on us, it’s on the micronewton level,” said LightSail project manager Dave Spencer. “It’s very tiny compared to chemical propulsion, very small even compared to electric propulsion. But the key for solar sailing is that it’s always there.”

“I have many numbers that I love,” cut in Nye, and detailed one of them: “It’s nine micronewtons per square meter. So if you have 32 square meters you get about a hundred micronewtons. It doesn’t sound like much, but as Dave points out, it’s continuous. Once a rocket engine stops, when it runs out of fuel, it’s done. But a solar sail gets a continuous push day and night. Wait…” (He then argued with himself about whether it would experience night — it will, as you see in the image below.)

Bruce Betts, chief scientist for LightSail, chimed in as well to make the numbers a bit more relatable: “The total force on the sail is approximately equal to the weight of a house fly on your hand on Earth.”

Yet if you add another fly every second for hours at a time, pretty soon you’ve built up a considerable amount of velocity. This mission is meant to find out whether we can capture that force.
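Those figures can be sanity-checked with the textbook radiation-pressure formula P = 2I/c for an ideal, perfectly reflective sail facing the Sun at Earth's distance. The 5 kg spacecraft mass below is an assumption for illustration, and a real sail's thrust is lower than this ideal face-on number, since the sail is neither perfectly flat nor always square to the Sun:

```python
# Back-of-envelope solar-sail numbers for an ideal reflective sail at 1 AU.
# The spacecraft mass is an assumed figure, used only for illustration.

SOLAR_CONSTANT = 1361.0   # sunlight intensity at 1 AU, W/m^2
C = 299_792_458.0         # speed of light, m/s
AREA = 32.0               # sail area, m^2
MASS = 5.0                # assumed spacecraft mass, kg

pressure = 2 * SOLAR_CONSTANT / C   # ~9.1e-6 N/m^2: Nye's "nine micronewtons per square meter"
force = pressure * AREA             # total ideal thrust on the sail, N
accel = force / MASS                # resulting acceleration, m/s^2
delta_v_per_day = accel * 86_400    # velocity gained per day of continuous thrust, m/s

print(f"{pressure * 1e6:.1f} uN/m^2, {force * 1e6:.0f} uN, {delta_v_per_day:.1f} m/s per day")
```

Even at micronewton scale, the "always there" nature of the push means the craft picks up several meters per second of velocity per day in this idealized case, which is the whole point of solar sailing.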

“We’re very excited about this launch,” said Nye, “because we’re going to get to a high enough altitude to get away from the atmosphere, far enough that we’re really gonna be able to build orbital energy and take some, I hope, inspiring pictures.”

Second craft, same (mostly) as the last

The LightSail going up this week has some improvements over the last one, though overall it’s largely the same — and a relatively simple, inexpensive craft at that, the team noted. Crowdfunding and donations over the last decade have provided quite a bit of cash to pursue this project, but it still is only a small fraction of what NASA might have spent on a similar mission, Spencer pointed out.

“This mission is going to be much more robust than the previous LightSail 1, but as we said previously, it’s done by a small team,” he said. “We’ve had a very small budget relative to our NASA counterparts, probably 1/20th of the budget that a similar NASA mission would have. It’s a low-cost spacecraft.”

Annotated image of LightSail 2 courtesy of Planetary Society.

But the improvements are specifically meant to address the main problems encountered by LightSail 2’s predecessor.

Firstly, the computer inside has been upgraded to be more robust (though not radiation-hardened) and given the ability to sense faults and reboot if necessary — they won’t have to wait, as they did for LightSail 1, for a random cosmic ray to strike the computer and cause a “natural reboot.” (Yes, really.)

The deployment of the sail itself has also been improved. The previous sail only extended to about 90 percent of its full width and couldn’t be adjusted after the fact. Tests have since been done, Betts told me, to determine exactly how many revolutions the motor must make to extend the sail to 100 percent. Not only that, but they have put markings on the extending booms, or rods, that will help double-check how deployment has gone.

“We also have the capability on orbit, if it looks like it’s not fully extended, we can extend it a little bit more,” he said.

Once it’s all out there, it’s uncharted territory. No one has attempted quite this kind of mission; even IKAROS had a totally different flight profile. The team is hoping their sensors and software are up to the task — and it should be clear whether that’s the case within a few hours of unfurling the sail.

It’s still mainly an experiment, of course, and whatever the team learns will feed into any future LightSail missions they attempt, and will also be shared with the spaceflight community and others attempting to sail on sunlight.

“We all know each other and we all share information,” said Nye. “And it really is — I’ve said it as much as I can — it’s really exciting to be flying this thing at last. It’s almost 2020 and we’ve been talking about it for, well, for 40 years. It’s very, very cool.”

LightSail 2 will launch aboard a SpaceX Falcon Heavy no earlier than June 24th. Keep an eye on the site for the latest news and a link to the livestream when it’s almost time for takeoff.

Space startup Wyvern wants to make data about Earth’s health much more accessible

The private space industry is seeing a revolution driven by CubeSats: affordable, lightweight satellites that are much easier than traditional satellites to design, build and launch. That’s paving the way for new businesses like Wyvern, an Alberta-based startup offering a very specific service that wouldn’t even have been possible a decade ago: relatively low-cost access to hyperspectral imaging taken from low-Earth orbit. Hyperspectral imaging is a method for capturing image data of Earth across many more bands than we’re able to see with our eyes or traditional optics.

Wyvern’s founding team, including CEO Chris Robson, CTO Kristen Cote, CFO Callie Lissinna and VP of Engineering/COO Kurtis Broda, gained experience building satellites through their schooling, including working on Ex-Alta 1, the first satellite designed and built in Alberta to reach space. They’ve also developed their own proprietary optical technology to produce the kind of imagery that will best serve the needs of the clients they’re pursuing. Their first target market, for instance, is farmers, who will be able to log into the commercial version of the product and get up-to-date hyperspectral imaging of their fields, which can help them optimize yield, detect changes in soil makeup that will tell them if they have too little nitrogen, or even help them spot invasive plants and insects.

“We’re doing all sorts of things that directly affect the bottom line for farmers,” explained Robson in an interview. “If you can detect them and you can quantify them, then the farmers can make decisions on how to act and ultimately how to increase the bottom line. A lot of those things you can’t do with multi-spectral [imaging] right now; for example, you can’t speciate with multi-spectral, so you can’t detect invasive species.”

Multi-spectral imaging, in contrast to hyperspectral imaging, typically measures light in somewhere between 3 and 15 bands, while hyperspectral can capture as many as hundreds of adjoining narrow bands, which is why it can do more specialized things like identifying the species of animals on the ground in an observed area from a satellite’s perspective.
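To see why band count matters, consider a narrow spectral feature like vegetation's "red edge," the sharp rise in reflectance around 700-750 nm. The sketch below uses entirely synthetic numbers, but it illustrates the principle: fine hyperspectral sampling can locate such a feature, while a handful of broad multispectral bands averages it away:

```python
# Synthetic illustration of hyperspectral vs. multispectral sampling.
# All wavelengths and reflectance values here are made up for the demo.
import numpy as np

wavelengths = np.arange(400, 1000, 5)     # 120 narrow 5 nm bands (hyperspectral)
multi_edges = np.linspace(400, 1000, 7)   # edges of 6 broad bands (multispectral)

# Fake reflectance spectrum with a sharp "red edge" near 720 nm
reflectance = 0.05 + 0.45 / (1 + np.exp(-(wavelengths - 720) / 10))

# Hyperspectral view: locate the red edge from adjacent narrow bands
red_edge = wavelengths[np.argmax(np.diff(reflectance))]
print(red_edge)   # nm where reflectance climbs fastest, near 720

# Multispectral view: the same spectrum collapsed into 6 broad averages,
# within which the edge's exact position can no longer be pinpointed
broad = [reflectance[(wavelengths >= lo) & (wavelengths < hi)].mean()
         for lo, hi in zip(multi_edges[:-1], multi_edges[1:])]
print(len(broad))  # 6
```

The position and shape of features like this edge are what let hyperspectral systems distinguish one species or material from another.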

Hyperspectral imaging is already a proven technology in use around the world for exactly these purposes, but the main way it’s captured is via drone airplanes, which Robson says is much more costly and less efficient than using CubeSats in orbit.

“Drone airplanes are really expensive, and with us, we’re able to provide it for 10 times less than a lot of these drones [currently in use],” he said.

Wyvern’s business model will focus on owning and operating the satellites, and providing the data it captures to customers in a way that’s easy for anyone to access and use.

“Our key differentiator is the fact that we allow access to actual actionable information,” Robson said. “Which means that if you want to order imagery, you do it through a web browser, instead of calling somebody up and waiting one to three days to get a price on it, and to find out whether they could even do what you’re asking.”

Robson says that it’s only even become possible and affordable to do this because of advances in optics (“Our optical system allows us to basically put what should be a big satellite into the form factor of a small one without breaking the laws of physics,” Robson told me), small satellites, data storage and monitoring stations, and privatized launches that make space accessible by hitching a ride alongside other clients’ payloads.

Wyvern will also occupy its own, underserved niche providing this highly specialized info, first to agricultural clients, and then expanding to five other verticals including forestry, water quality monitoring, environmental monitoring and defense. This isn’t something other more generalist satellite imaging providers like Planet Labs will likely be interested in pursuing, Robson said, because it’s an entirely different kind of business with entirely different equipment, clientele and needs. Eventually, Wyvern hopes to be able to open up access to the data it’s gathering even more broadly.

“You have the right to access [information regarding] the health of the Earth regardless of who you are, what government you’re under, what country you’re a part of or where you are in the world,” he said. “You have the right to see how other humans are treating the Earth, and to see how you’re treating the Earth and how your country is behaving. But you also have the right to take care of the Earth, because we’re super predators. We’re the most intelligent species. We have the responsibility of being stewards of the Earth. And part of that is being almost omniscient about what’s going on with the Earth, in the same way that we understand what’s going on in our bodies. That’s what we want for people.”

Right now, Wyvern is very early on the trajectory of making this happen – they’re working on their first round of funding, and have been speaking to potential customers and getting their initial product validation work finalized. But with actual experience building and launching satellites, and a demonstrated appetite for what they want to build, it seems like they’re off to a promising start.

Tripping grad students over and over for science (and better prosthetic limbs)

Prosthetic limbs are getting better, but not as quickly as you’d think. They’re not as smart as our real limbs, which (directed by the brain) do things like automatically stretch out to catch ourselves when we fall. This particular “stumble reflex” was the subject of an interesting study at Vanderbilt that required its subjects to fall down… a lot.

The problem the team is aiming to alleviate is simply that users of prosthetic limbs fall, as you might guess, more often than most people, and when they do fall, it can be very difficult to recover, since an artificial leg — especially for above-the-knee amputations — doesn’t react the same way a natural leg would.

The idea, explained lead researcher and mechanical engineering professor Michael Goldfarb, is to determine what exactly goes into a stumble response and how to recreate it artificially.

“An individual who stumbles will perform different actions depending on various factors, not all of which are well known. The response changes, because the strategy that is most likely to prevent a fall is highly dependent on the ‘initial conditions’ at the time of stumble,” he told TechCrunch in an email. “We are hoping to construct a model of which factors determine the nature of the stumble response, so when a stumble occurs, we can use the various sensors on a robotic prosthetic leg to artificially reconstruct the reflex in order to provide a response that is effective and consistent with the biological reflex loop.”

The experimental setup looked like this: subjects were put on a treadmill and told to walk forward normally; a special pair of goggles prevented them from looking down, arrows on a display kept them going straight, and a simple mental task (counting backwards by sevens) kept their brains occupied.

Meanwhile an “obstacle delivery apparatus” bided its time, waiting for the best opportunity to slip a literal stumbling block onto the treadmill for the person to trip over.

When this happened, the person inevitably stumbled, though a harness prevented them from actually falling and hurting themselves. But as they stumbled, their movements were captured minutely by a motion capture rig.

After 196 stumbling blocks and 190 stumbles, the researchers had collected a great deal of data on how exactly people move to recover from a stumble. Where do their knees go relative to their ankles? How do they angle their feet? How much force is taken up by the other foot?

Exactly how this data would be integrated with a prosthesis is highly dependent on the nature of the artificial limb and the conditions of the person using it. But having this data, and perhaps feeding it to a machine learning model, will help expose patterns that can be used to inform emergency prosthetic movements.

It could also be used for robotics: “The model could be used directly to program reflexes in a biped,” said Goldfarb. Those human-like motions we see robots undertaking could be even more human when directly based on the original. There’s no rush there — they might be a little too human already.

The research describing the system and the dataset, which they’re releasing for free to anyone who’d like to use it, appeared in the Journal of NeuroEngineering and Rehabilitation.