All posts in “Science”

‘Star Wars’ returns: Trump calls for space-based missile defense

The President has announced that the Defense Department will pursue a space-based missile defense system reminiscent of the one proposed by Reagan in 1983. As with Reagan’s ultimately abortive effort, the technology doesn’t actually exist yet and may not for years to come — but it certainly holds more promise now than 30 years ago.

In a speech at the Pentagon reported by the Associated Press, Trump explained that a new missile defense system would “detect and destroy any missile launched against the United States anywhere, any time, any place.”

“My upcoming budget will invest in a space-based missile defense layer. It’s new technology. It’s ultimately going to be a very, very big part of our defense, and obviously our offense,” he said. The nature of this “new technology” is not entirely clear, as none was named or ordered to be tested or deployed.

Lest anyone think that this is merely one of the President’s flights of fancy, he is in fact simply voicing the conclusions of the Defense Department’s 2019 Missile Defense Review, a major report that examines the state of the missile threat against the U.S. and what countermeasures might be taken.

It reads in part:

As rogue state missile arsenals develop, space will play a particularly important role in support of missile defense.

Russia and China are developing advanced cruise missiles and hypersonic missile capabilities that can travel at exceptional speeds with unpredictable flight paths that challenge existing defensive systems.

The exploitation of space provides a missile defense posture that is more effective, resilient and adaptable to known and unanticipated threats… DoD will undertake a new and near-term examination of the concepts and technology for space-based defenses to assess the technological and operational potential of space-basing in the evolving security environment.

The President’s contribution seems largely to have been to eliminate the mention of the nation-states directly referenced (and independently assessed at length) in the report, and to suggest the technology is ready to deploy. In fact, all the Pentagon is ready to do is begin research into the feasibility of such a system or systems.

No doubt space-based sensors are well on their way; we already have near-constant imaging of the globe (companies like Planet have made it their mission), and the number and capabilities of such satellites are only increasing.

Space-based tech has evolved considerably in the decades since the much-derided “Star Wars” proposals, but some of those ideas are still as unrealistic as they were then. However, as the Pentagon report points out, the only way to know for sure is to conduct a serious study of the possibilities, and that’s what this plan calls for. All the same, it may be best for Trump not to repeat Reagan’s mistake of making promises he can’t keep.

Robots learn to grab and scramble with new levels of agility

Robots are amazing things, but outside of their specific domains they are incredibly limited. So flexibility — not physical, but mental — is a constant area of research. A trio of new robotic setups demonstrate ways they can evolve to accommodate novel situations: using both “hands,” getting up after a fall, and understanding visual instructions they’ve never seen before.

The robots, all developed independently, are gathered together today in a special issue of the journal Science Robotics dedicated to learning. Each shows an interesting new way in which robots can improve their interactions with the real world.

On the other hand…

First there is the question of using the right tool for a job. As humans with multi-purpose grippers on the ends of our arms, we’re pretty experienced with this. We understand from a lifetime of touching stuff that we need to use this grip to pick this up, we need to use tools for that, this will be light, that heavy, and so on.

Robots, of course, have no inherent knowledge of this, which can make things difficult; a robot may not understand that it can’t pick up something of a given size, shape, or texture. A new system from Berkeley roboticists, called Dex-Net 4.0, acts as a rudimentary decision-making process, classifying objects as able to be grabbed either by an ordinary pincer grip or with a suction cup grip.

A robot, wielding both simultaneously, decides on the fly (using depth-based imagery) what items to grab and with which tool; the result is extremely high reliability even on piles of objects it’s never seen before.

It’s done with a neural network that consumed millions of data points on items, arrangements, and attempts to grab them. If you attempted to pick up a teddy bear with a suction cup and it didn’t work the first ten thousand times, would you keep on trying? This system learned to make that kind of determination, and as you can imagine such a thing is potentially very important for tasks like warehouse picking for which robots are being groomed.

Interestingly, because of the “black box” nature of complex neural networks, it’s difficult to tell what exactly Dex-Net 4.0 is basing its choices on, although there are some obvious preferences, explained Berkeley’s Ken Goldberg in an email.

“We can try to infer some intuition but the two networks are inscrutable in that we can’t extract understandable ‘policies,’ ” he wrote. “We empirically find that smooth planar surfaces away from edges generally score well on the suction model and pairs of antipodal points generally score well for the gripper.”
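
If you’re curious how that two-tool decision might look in code, here is a loose sketch of the idea (emphatically not Dex-Net 4.0’s actual code: the `suction_net` and `gripper_net` objects and their scoring interface are hypothetical stand-ins for the learned quality networks):

```python
# Loose sketch of the two-tool grasp selection idea described above.
# Not the actual Dex-Net 4.0 code: `suction_net` and `gripper_net` are
# hypothetical stand-ins for the learned grasp-quality networks.
def choose_grasp(depth_image, suction_net, gripper_net, candidates):
    """Score every candidate grasp with both tools; return the best."""
    best_tool, best_grasp, best_score = None, None, -1.0
    for grasp in candidates:
        # Each network maps (depth image, grasp pose) to a quality in [0, 1].
        scores = {
            "suction": suction_net.score(depth_image, grasp),
            "pincer": gripper_net.score(depth_image, grasp),
        }
        tool = max(scores, key=scores.get)
        if scores[tool] > best_score:
            best_tool, best_grasp, best_score = tool, grasp, scores[tool]
    return best_tool, best_grasp, best_score
```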

Now that reliability and versatility are high, the next step is speed; Goldberg said that the team is “working on an exciting new approach” to reduce computation time for the network, to be documented, no doubt, in a future paper.

ANYmal’s new tricks

Quadrupedal robots are already flexible in that they can handle all kinds of terrain confidently, even recovering from slips (and of course cruel kicks). But when they fall, they fall hard. And generally speaking they don’t get up.

The way these robots have their legs configured makes it difficult to do things in anything other than an upright position. But ANYmal, a robot developed by ETH Zurich (and which you may recall from its little trip to the sewer recently), has a more versatile setup that gives its legs extra degrees of freedom.

What could you do with that extra movement? All kinds of things. But it’s incredibly difficult to figure out the exact best way for the robot to move in order to maximize speed or stability. So why not use a simulation to test thousands of ANYmals trying different things at once, and use the results from that in the real world?

This simulation-based learning doesn’t always work, because it isn’t possible right now to accurately simulate all the physics involved. But it can produce extremely novel behaviors or streamline ones humans thought were already optimal.

At any rate, that’s what the researchers did here, and not only did they arrive at a faster trot for the bot, but they also taught it an amazing new trick: getting up from a fall. Any fall.

It’s extraordinary that the robot has come up with essentially a single technique to get on its feet from nearly any likely fall position, as long as it has room and the use of all its legs. Remember, people didn’t design this — the simulation and evolutionary algorithms came up with it by trying thousands of different behaviors over and over and keeping the ones that worked.
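
As a rough illustration of that keep-what-works process (an illustration only, not the ETH Zurich team’s training code; the `simulate` function is a hypothetical stand-in for their physics simulator), an evolutionary search over policy parameters can be surprisingly simple:

```python
import random

def evolve_policy(simulate, init_params, population=1000, generations=200,
                  keep_frac=0.1, noise=0.02):
    """Toy evolutionary search: mutate policy parameters, keep what works.

    `simulate(params)` is assumed to roll out the policy in a physics
    simulator and return a fitness score, e.g. how quickly the simulated
    robot gets back on its feet after a fall.
    """
    pool = [list(init_params) for _ in range(population)]
    for _ in range(generations):
        ranked = sorted(pool, key=simulate, reverse=True)
        elites = ranked[: max(1, int(population * keep_frac))]
        # Refill the population with noisy copies of the best performers.
        pool = [[p + random.gauss(0, noise) for p in random.choice(elites)]
                for _ in range(population)]
    return max(pool, key=simulate)
```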

Ikea assembly is the killer app

Let’s say you were given three bowls, with red and green balls in the center one. Then you’re handed a sheet of paper bearing a simple diagram: red and green circles with arrows pointing left and right.

As a human with a brain, you take this paper to be instructions, and you understand that the green and red circles represent balls of those colors, and that red ones need to go to the left, while green ones go to the right.

This is one of those things where humans apply vast amounts of knowledge and intuitive understanding without even realizing it. How did you decide the circles represent the balls? Because of the shape? Then why don’t the arrows correspond to “real” arrows? How do you know how far to go to the right or left? How do you know the paper even refers to these items at all? All questions you would resolve in a fraction of a second, and any of which might stump a robot.

Researchers have taken some baby steps towards being able to connect abstract representations like the above with the real world, a task that involves a significant amount of what amounts to a sort of machine creativity or imagination.

Making the connection between a green dot on a white background in a diagram and a greenish roundish thing on a black background in the real world isn’t obvious, but the “visual cognitive computer” created by Miguel Lázaro-Gredilla and his colleagues at Vicarious AI seems to be doing pretty well at it.

It’s still very primitive, of course, but in theory it’s the same toolset that one uses to, for example, assemble a piece of Ikea furniture: look at an abstract representation, connect it to real-world objects, then manipulate those objects according to the instructions. We’re years away from that, but it wasn’t long ago that we were years away from a robot getting up from a fall or deciding a suction cup or pincer would work better to pick something up.

The papers and videos demonstrating all the concepts above should be available at the Science Robotics site.

Driving down the cost of preserving genetic material, Acorn Biolabs raises $3.3 million

Acorn Biolabs wants consumers to pay it to store their genetic material, a bet that advances in targeted genetic therapies will yield better healthcare results down the line.

The company’s pitch is to “Save young cells today, live a longer, better, tomorrow.” It’s a gamble on the frontiers of healthcare technology that has managed to net the company $3.3 million in seed financing from some of Canada’s busiest investors.

For the Toronto-based company, the pitch isn’t just around banking genetic material — a practice that’s been around for years — it’s about making that process cheaper and easier.

Acorn has come up with a way to collect and preserve the genetic material contained in hair follicles, giving its customers a way to collect full-genome information at home rather than having to visit a facility and have bone marrow drawn (the practice at one of its competitors, Forever Labs).

“We have developed a proprietary media that cells are submerged in that maintains the viability of those cells as they’re being transported to our labs for processing,” says Acorn Biolabs chief executive Dr. Drew Taylor.

“Rapid advancements in the therapeutic use of cells, including the ability to grow human tissue sections, cartilage, artificial skin and stem cells, are already being delivered. Entire heart, liver and kidneys are really just around the corner. The urgency around collecting, preserving and banking youthful cells for future use is real and freezing the clock on your cells will ensure you can leverage them later when you need them,” Taylor said in a statement.

Typically, banking a full genome costs roughly $2,000 to $3,000; Acorn says it can drop that cost to less than $1,000. Beyond the upfront cost of taking and processing the sample, Acorn says the fees for storing the genetic material will run roughly $100 a year.

It’s important to note that health insurance doesn’t cover any of this. It’s a voluntary service for those neurotic enough, or concerned enough about the future of healthcare and their own future health, to pay for it themselves.

There are also no services that Acorn will provide on the back end of the storage… yet.

“What people do need to realize is that there is power with that data that can improve healthcare. Down the road we will be able to use that data to help people collect that data and power studies,” says Taylor.

The $3.3 million the company raised came from Real Ventures, Globalive Technology, Pool Global Partners and Epic Capital Management, along with other undisclosed investors.

“Until now, any live cell collection solutions have been highly expensive, invasive and often painful, as well as being geographically limited to specialized clinics,” said Anthony Lacavera, founder and chairman at Globalive. “Acorn is an industry-leading example of how technology can bring real innovation to enable future healthcare solutions that will have meaningful impact on people’s wellbeing and longevity, while at the same time — make it easy, affordable and frictionless for everyone.”

CERN’s plan for 100-km collider makes the LHC look like a hula hoop

The Large Hadron Collider has produced a great deal of incredible science, most famously the Higgs boson — but physicists at CERN, the international organization behind the LHC, are already looking forward to the next model. And the proposed Future Circular Collider, at 100 kilometers or 62 miles around, would be quite an upgrade.

The idea isn’t new; CERN has had people looking into it for years. But the conceptual design report issued today shows that all that consulting hasn’t been idle: there’s a relatively cohesive and practical plan — as practical as a particle collider can be — and a decent case for spending the $21 billion or so that would be needed.

“These kind of largest scale efforts and projects are huge starters for networking, connecting institutes across borders, countries,” CERN’s Michael Benedikt, who led the report, told Nature. “All these things together make up a very good argument for pushing such unique science projects.”

On the other hand, while the LHC has been a great success, it hasn’t exactly given physicists an unambiguous signpost as to what they should pursue next. The lack of new cosmic mysteries — for example, a truly anomalous result or a mysterious gap where a particle is expected — has convinced some that physicists must simply turn up the heat, and others that bigger isn’t necessarily better.

The design document outlines several possible colliders, of which the 100-km ring is the largest and would produce the highest-energy collisions. Sure, you could smash protons together at 100,000 gigaelectronvolts rather than 16,000 — but what exactly will that help explain? We have left my areas of expertise, such as they are, well behind at this point, so I won’t speculate, but the question at least is one being raised by those in the know.

It’s worth noting that Chinese physicists are planning something similar, so there’s the aspect of international competition as well. How should that affect plans? Should we just ask China if we can use theirs? The academic world is much less affected by global strife and politics than, say, the tech world, but it’s still not ideal.

There are plenty of options to consider and time is not of the essence; it would take a decade or more to get even the simplest and cheapest of these proposals up and running.

Turns out the science saying screen time is bad isn’t science

A new study is making waves in the worlds of tech and psychology by questioning the basis of thousands of papers and analyses with conflicting conclusions on the effect of screen time on well-being. The researchers’ claim is that the science doesn’t agree because it’s bad science. So is screen time good or bad? It’s not that simple.

The study’s conclusions make only the mildest of claims about screen time: essentially that, as defined, it has about as much effect on well-being as potato consumption. Instinctively we may feel that not to be true; surely technology has a greater effect than that — but if it does, we haven’t yet found a way to measure it accurately.

The paper, by Oxford scientists Amy Orben and Andrew Przybylski, amounts to a sort of king-sized meta-analysis of studies that come to some conclusion about the relationship between technology and well-being among young people.

Their concern was that the large datasets and statistical methods employed by researchers looking into the question — for example, thousands and thousands of survey responses interacting with weeks of tracking data for each respondent — allowed for anomalies or false positives to be claimed as significant conclusions. It’s not that people are doing this on purpose necessarily, only that it’s a natural result of the approach many are taking.

“Unfortunately,” write the researchers in the paper, “the large number of participants in these designs means that small effects are easily publishable and, if positive, garner outsized press and policy attention.” (We’re a part of that equation, of course, but speaking for myself at least I try to include a grain of salt with such studies, indeed with this one as well.)

In order to show this, the researchers essentially redid the statistical analysis for several of these large datasets (Orben has documented the process in detail), but instead of choosing only one result to present, they collected all the plausible ones they could find.

For example, imagine a study where the app use of a group of kids was tracked, and they were surveyed regularly on a variety of measures. The resulting (fictitious, I hasten to add) paper might say it found kids who use Instagram for more than two hours a day are three times as likely to suffer depressive episodes or suicidal ideations. What the paper doesn’t say, and which this new analysis could show, is that the bottom quartile is far more likely to suffer from ADHD, or the top five percent reported feeling they had a strong support network.

In the new study, any and all statistically significant results like those I just made up are detected and compared with one another. Maybe a study came out six months later that found the exact opposite in terms of ADHD but also didn’t state it as a conclusion.
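
The approach, a form of specification curve analysis, amounts to running every defensible version of the analysis and reporting them all, rather than just one. A minimal sketch of the idea in Python, with made-up column names and assuming the survey data sit in a pandas DataFrame (this is not the authors’ actual code), might look like:

```python
from itertools import product

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical column names; a real dataset offers many more ways to
# define "screen time" and "well-being," which is exactly the problem.
screen_vars = ["tv_hours", "social_media_hours", "gaming_hours"]
wellbeing_vars = ["life_satisfaction", "self_esteem", "mood_score"]

def specification_curve(df: pd.DataFrame) -> pd.DataFrame:
    """Fit every plausible specification and report them all."""
    results = []
    for x, y in product(screen_vars, wellbeing_vars):
        fit = smf.ols(f"{y} ~ {x}", data=df).fit()
        results.append({"predictor": x, "outcome": y,
                        "effect": fit.params[x], "p_value": fit.pvalues[x]})
    # Reporting the whole curve (not just the flashiest row) is the point.
    return pd.DataFrame(results).sort_values("effect")
```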

A figure in the paper compares example behaviors that have more or less of an effect on well-being.

Ultimately what the Oxford study found was that there is no consistent good or bad effect, and although a very slight negative effect was noted, it was small enough that factors like having a single parent or needing to wear glasses were far more important.

Yet, and this is important to understand, the study does not conclude that technology has no negative or positive effect; such a broad conclusion would be untenable on its face. The data it rounds up are (as some experts point out, with no ill will toward the paper) simply inadequate to the task, and technology use is too variable to reduce to a single factor. Its conclusion is that studies so far have been inconclusive and that we need to go back to the drawing board.

“The nuanced picture provided by these results is in line with previous psychological and epidemiological research suggesting that the associations between digital screen-time and child outcomes are not as simple as many might think,” the researchers write.

Could, for example, social media use affect self-worth, either positively or negatively? Could be! But the ways that scientists have gone about trying to find out have, it seems, been inadequate.

In the future, the authors suggest, researchers should not only design their experiments more carefully, but be more transparent about their analysis. By committing to document all significant links in the dataset they create, whether they fit the narrative or hypothesis or go against it, researchers show that they have not rigged the study from the start. Designing and iterating with this responsibility in mind will produce better studies and perhaps even some real conclusions.

What should parents, teachers, siblings, and others take away from this? Not anything about screen time or whether tech is good or bad, certainly. Rather let it be another instance of the frequently learned lesson that science is a work in progress and must be considered very critically before application.

Your kid is an individual and things like social media and technology affect them differently from other kids; it may very well be that your informed opinion of their character and habits, tempered with that of a teacher or psychologist, is far more accurate than the “latest study.”

Orben and Przybylski’s study, “The association between adolescent well-being and digital technology use,” appears in today’s issue of the journal Nature Human Behaviour.