All posts in “Security”

Protecting 3D printers from cyberattacks could be as simple as listening carefully

As 3D printers grow smarter and continue to embed themselves in manufacturing and product creation processes, they are exposed to online malefactors just like every other device and network. Security researchers suggest a way to prevent hackers from sabotaging the outputs of 3D printers: listen very, very carefully.

Now, you’re forgiven if someone hacking a 3D printer doesn’t strike you as a particularly egregious threat. But 3D printers really are starting to be used for more than hobbyist and prototyping purposes: prosthetics are one common use, and improved materials have made automotive and aerospace applications possible.

The problem, as some security researchers have already demonstrated, is that a hacker could take over the machine and not merely shut it down but introduce flaws into the printed objects themselves. All it takes is a few small air gaps, a misalignment of internal struts or some such tweak, and all of a sudden the part rated to hold 75 pounds only holds 20. That could be catastrophic in many circumstances.

And of course the sabotaged parts may look identical to ordinary ones to the naked eye. What to do?

A team from Rutgers and Georgia Tech suggests three methods, one of which is easy and clever enough to integrate widely — a bit like Shazam for 3D printing. (The other two are still cool.)

I don’t know if you’ve ever been next to a 3D printer while it works, but it makes a racket. That’s because many 3D printers use a moving print head and various other mechanical parts, all of which produce the usual whines, clicks and other noises.

The researchers recorded those noises while a reference print was being made, then fed that audio, piece by piece, to an algorithm that classifies sound so it can be recognized again.

When a new print is done, the sound is recorded again and submitted for inspection by the algorithm. If it’s the same all the way through, chances are the print hasn’t been tampered with. Any significant variation from the original sound, such as certain operations ending too fast or anomalous peaks in the middle of normally flat sections, will be picked up by the system and the print flagged.
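As a rough illustration of that comparison step, here is a minimal sketch in Python. It is not the researchers’ actual classifier: the `fingerprint` and `prints_match` helpers, the frame sizes and the threshold are all invented for the example.

```python
import numpy as np

def fingerprint(audio, frame_len=2048, hop=1024, n_bands=16):
    """Reduce a mono recording to a coarse time-frequency profile:
    one vector of log band energies per frame."""
    frames = []
    for start in range(0, len(audio) - frame_len + 1, hop):
        spectrum = np.abs(np.fft.rfft(audio[start:start + frame_len]))
        bands = np.array_split(spectrum, n_bands)  # pool into coarse bands
        frames.append([np.log1p(band.sum()) for band in bands])
    return np.array(frames)

def prints_match(reference, candidate, threshold=1.0):
    """Flag a print whose sound deviates from the reference recording.
    The threshold is arbitrary here; it would be tuned on real data."""
    ref, cand = fingerprint(reference), fingerprint(candidate)
    n = min(len(ref), len(cand))
    # Mean per-frame distance: large values suggest operations ending
    # early, anomalous peaks, or other deviations worth flagging.
    distance = np.linalg.norm(ref[:n] - cand[:n], axis=1).mean()
    return distance < threshold
```

A real system would also need to align the recordings in time and tolerate ambient noise, which is exactly where the researchers say refinement is still needed.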

It’s just a proof of concept, so there’s still room for improvement, such as lowering the false-positive rate and improving resistance to ambient noise.

Or the acoustic verification could be combined with other measures the team suggested. One requires the print head to be equipped with a sensor that records all its movements. If these differ from a reference motion path, boom, flagged.

The third method impregnates the extrusion material with nanoparticles that give it a very specific spectroscopic signature. If other materials are used instead, or air gaps left in the print, the signature will change and, you guessed it, the object flagged.

As with the DNA-based malware vector, the hacks and countermeasures proposed here are speculative right now, but it’s never too early to start thinking about them.

“You’ll see more types of attacks as well as proposed defenses in the 3D printing industry within about five years,” said Saman Aliari Zonouz, co-author of the study (PDF), in a Rutgers news release.

And like the DNA research, this paper was presented at the USENIX Security Symposium.

Malicious code written into DNA infects the computer that reads it

In a mind-boggling world first, a team of biologists and security researchers has successfully infected a computer with a malicious program coded into a strand of DNA.

It sounds like science fiction, but I assure you it’s quite real — although you probably don’t have to worry about this particular threat vector any time soon. That said, the possibilities suggested by this project are equally fascinating and terrifying to contemplate.

The multidisciplinary team at the University of Washington isn’t out to make outlandish headlines, although it’s certainly done that. They were concerned that the security infrastructure around DNA sequencing and analysis was inadequate, having found elementary vulnerabilities in open-source software used in labs around the world. Given the nature of the data usually being handled, this could be a serious problem going forward.

Sure, they could demonstrate the weakness of the systems with the usual malware and remote access tools. That’s how any competent attacker would come at such a system. But the discriminating security professional prefers to stay ahead of the game.

“One of the big things we try to do in the computer security community is to avoid a situation where we say, ‘Oh shoot, adversaries are here and knocking on our door and we’re not prepared,’” said professor Tadayoshi Kohno, who has a history of pursuing unusual attack vectors for embedded and niche electronics like pacemakers.

From left, Lee Organick, Karl Koscher, and Peter Ney from the UW’s Molecular Information Systems Lab and the Security and Privacy Research Lab prepare the DNA exploit for sequencing

“As these molecular and electronic worlds get closer together, there are potential interactions that we haven’t really had to contemplate before,” added Luis Ceze, one co-author of the study.

Accordingly, they made the leap plenty of sci-fi writers have made in the past, and that we are currently exploring via tools like CRISPR: DNA is basically life’s file system. The analysis programs read a DNA strand’s bases (adenine, thymine, guanine and cytosine: the A, T, G and C we all know) and turn them into binary data. What if those nucleotides encoded binary data in the first place? After all, it’s been done before — right down the hall.

Here comes the mad science

Here’s how they did it. All you really need to know about the sequencing application is that it reads the raw data coming from the sequencing process and sorts through it, looking for patterns and converting the base sequences it finds into binary code.

“The conversion from ASCII As, Ts, Gs, and Cs into a stream of bits is done in a fixed-size buffer that assumes a reasonable maximum read length,” explained co-author Karl Koscher in response to my requests for more technical information.

That makes it ripe for a basic buffer overflow attack, in which programs execute arbitrary code because it falls outside expected parameters. (They cheated a little by introducing a particular vulnerability into the software themselves, but they also point out that similar ones are present elsewhere, just not as conveniently for purposes of demonstration.)

After developing a way to include executable code in the base sequence, they set about making the exploit itself. Ironically, it’s inaccurate to call it a virus, although it’s closer to a “real” virus than perhaps any malicious code ever written.

“The exploit was 176 bases long,” Koscher wrote. “The compression program translates each base into two bits, which are packed together, resulting in a 44 byte exploit when translated.”

Given that there are 4 bases, it would make sense to have each represent a binary pair. Koscher confirmed this was the case. (If you’re curious, as I was: A=00, C=01, G=10, T=11.)
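With that mapping, the packing step is easy to sketch. This is an illustration rather than the team’s actual code; the function name and the bit order within each byte are assumptions.

```python
# The two-bit encoding Koscher confirmed: A=00, C=01, G=10, T=11.
BASE_BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def bases_to_bytes(sequence):
    """Pack a base string into bytes, two bits per base, four bases
    per byte (most significant bits first, an assumed convention)."""
    if len(sequence) % 4:
        raise ValueError("sequence length must be a multiple of 4")
    out = bytearray()
    for i in range(0, len(sequence), 4):
        byte = 0
        for base in sequence[i:i + 4]:
            byte = (byte << 2) | BASE_BITS[base]
        out.append(byte)
    return bytes(out)

# "ACGT" packs to the bits 00 01 10 11, i.e. the single byte 0x1b,
# and a 176-base strand packs to exactly 44 bytes.
```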

“Most of these bytes are used to encode an ASCII shell command,” he continued. “Four bytes are used to make the conversion function return to the system() function in the C standard library, which executes shell commands, and four more bytes were used to tell system() where the command is in memory.”
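Those numbers add up: 44 bytes minus the two 4-byte values leaves 36 bytes for the command itself. A hypothetical payload builder consistent with that description might look like the sketch below; the addresses and the exact layout are placeholders, not the real exploit’s.

```python
import struct

SYSTEM_ADDR = 0x08048460   # placeholder address of system() in libc
COMMAND_ADDR = 0xBFFFF000  # placeholder address of the command in memory

def build_payload(command):
    """Assemble a 44-byte payload: a shell command, then a return
    address pointing at system(), then a pointer telling system()
    where the command lives. Little-endian 32-bit values assumed."""
    if len(command) > 36:
        raise ValueError("command must fit in 36 bytes")
    payload = command.ljust(36, b"\x00")        # 36 bytes of shell command
    payload += struct.pack("<I", SYSTEM_ADDR)   # 4 bytes: return into system()
    payload += struct.pack("<I", COMMAND_ADDR)  # 4 bytes: argument pointer
    return payload
```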

Essentially the code in the DNA escapes the program as soon as it is converted from ACGTs to 00011011s, and executes some commands in the system — a sufficient demonstration of the existence of the threat vector. And there’s plenty of room for more code if you wanted to do more than break out of the app.

At 176 bases, the DNA strand comprising the exploit is “by almost any biological standard, very small,” said Lee Organick, a research scientist who worked on the project.

Biopunk future confirmed

In pursuance of every science journalist’s prime directive, which is to take interesting news and turn it into an existential threat to humanity, I had more questions for the team.

“CONCEIVABLY,” I asked, in all caps to emphasize that we were entering speculative territory, “could such a payload be delivered via, for example, a doctored blood sample or even directly from a person’s body? One can imagine a person whose DNA is essentially deadly to poorly secured computers.”

Irresponsibly, Organick stoked the fires of my fearmongering.

“A doctored biological sample could indeed be used as a vector for malicious DNA to get processed downstream after sequencing and be executed,” he wrote.

“However, getting the malicious DNA strand from a doctored sample into the sequencer is very difficult with many technical challenges,” he continued. “Even if you were successfully able to get it into the sequencer for sequencing, it might not be in any usable shape (it might be too fragmented to be read usefully, for example).”

It’s not quite the biopunk apocalypse I envisioned, but the researchers do want people thinking along these lines at least as potential avenues of attack.

“We do want scientists thinking about this so they can hold the DNA analysis software they write to the appropriate security standards so that this never makes sense to become a potential attack vector in the first place,” said Organick.

“I would treat any input as untrusted and potentially able to compromise these applications,” added Koscher. “It would be wise to run these applications with some sort of isolation (in containers, VMs, etc.) to contain the damage an exploit could do. Many of these applications are also run as publicly-available cloud services, and I would make isolating these instances a high priority.”

The likelihood of an attack like this actually being pulled off is minuscule, but it’s a symbolic milestone in the increasing overlap between the digital and the biological.

The researchers will present their findings and process (PDF) next week at the USENIX Security conference in Vancouver.

Featured Image: Dennis Wise / UW

Logitech Circle 2 is a great surveillance system, but for a price

Accessible home monitoring should be more than just being able to buy a security camera. It means having a packaged software experience: cameras you can link over a secure cloud connection and mount on walls, on glass or outdoors.

Because we live in the age of connected devices, interfacing with Alexa and HomeKit should be a given, not just a bonus. Luckily for you, the Circle 2 does all of it.

The only nagging requirement is setting aside a personal surveillance budget, but otherwise the Circle 2 is a great monitoring device.

Now you can spy… or catch the package thief

If you wanted to catch a thief at the door before they run away with your package — I may be projecting here, but it happens — then a single Circle 2 is great for that. However, if your goal is to catch a break-in, you would at least need a camera at the entrance and another in the living room or kitchen, with accessories to match.

The camera houses a speaker and microphone, allowing you to communicate briefly with whomever you see via the Logi Circle iOS/Android app. It works as a push-to-talk feature within the app or from a signed-in desktop browser.

Video quality is solid, outputting up to 1080p HD video through a 180-degree wide-angle lens. Though maybe not as impressive, the automatic night vision offers visibility up to 15 feet and lets me see moving objects (a stray cat excepted).

Regarding software, there’s a neat feature that saves you from sitting and watching a whole day’s worth of footage to find something of interest. Within the Logi Circle app you can scroll through time in jumps of two to six minutes or so, or have the footage compressed into a “day brief.”

The next time someone says they knocked on your door, you can hold them (or yourself) accountable thanks to the app.

Getting more of the Circle 2… requires more money

So, you have your first Circle 2 camera, which is great! Now, what if you’d like to, say, mount it to a glass panel, mount it outdoors or, better yet, make it wireless and keep it anywhere there’s a Wi-Fi connection? You’re going to need accessories — a lot of them.

The window mount ($39), battery pack ($49) and outdoor mount ($29) are just a few offerings from Logitech that extend the Circle 2’s functionality. If you buy three cameras, plus the mentioned accessories for a two-bedroom apartment, your total comes out to $657 (excluding taxes).

But wait, there’s more! Circle Safe, the service Logitech built with AES 256-bit dual-layer encryption, only enables person detection and motion-zone awareness (say you wanted to monitor movement at a front door) with a premium subscription plan. This service costs $10/month or $99/year, per camera. For comparison’s sake, Canary’s connected cameras cost $10/month to run, but that plan supports up to three cameras.

Assuming you went with the monthly plan, that’s $30 every month to maintain three cameras. It’s true, the Circle 2 platform works well and has great software, but keeping the show running can be pricey.
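To make the recurring math concrete, here is a quick sketch using the quoted per-camera prices ($10/month or $99/year); the helper function is purely illustrative.

```python
def circle_safe_cost(cameras, months, plan="monthly"):
    """Recurring Circle Safe cost in dollars for a number of cameras,
    using the quoted prices: $10/month or $99/year per camera."""
    if plan == "monthly":
        return cameras * 10 * months
    years = -(-months // 12)  # annual plan bills per started year
    return cameras * 99 * years

# Three cameras for a year: $360 on the monthly plan, $297 annually.
```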

Thankfully, if you’re going for the wireless Circle 2 ($199), it comes bundled with the $49 battery pack, so you do save a little on accessories there. It’s something to be mindful of.

Bottom line

Costly or not, Logitech has a great home security tool here. If you pay the premium for software and accessories, there’s not much you won’t be able to keep a visual on. Honestly, a lone wired Circle 2 unit is enough for basic monitoring.

However, it would benefit the consumer if Logitech created a starter bundle, with more than one camera unit and a few accessories to share between them.

Because as it stands, if you’re trying to see what goes on in every direction from the palm of your hand, you may find yourself running in circles due to the upkeep.

Price as reviewed: $179 at Logitech 

Wire launches e2e encrypted team messaging in beta

End-to-end encrypted messaging platform Wire is targeting Slack’s territory with a new messaging-for-teams product, called Teams.

It announced a beta launch yesterday, and is offering teams a 30-day free trial — with pricing starting at €5 per user per month thereafter, or custom pricing for enterprise installations offering extras such as self-hosted servers and an integration API.

Co-founder Alan Duric tells TechCrunch that demand for the team messaging launch is being driven “primarily” by Wire’s existing user base.

Alex, a TC reader and Wire user who tipped us to the beta launch, is one of those existing users with an interest in the new team messaging feature — although he says his team won’t be signing up until the product exits beta.

Explaining how his team originally started using Wire, Alex says: “One of the team was traveling and visited China where we found the firewall was blocking basically everything. Skype would randomly keep crashing / lagging under a VPN, though Wire simply worked there. We decided just to stick with it.”

The Wire Teams product supports logging in with multiple accounts, so users can maintain a personal Wire messaging account separate from a Wire work account, for example.

There’s also support for adding guests to projects to allow for collaboration with outsiders who don’t have full Wire accounts of their own.

And, in future, Teams users will be able to switch off notifications for different accounts — so they could turn off work alerts for the weekend, for example.

“More and more businesses and international organizations have started using Wire for work since we launched end-to-end encryption. Teams make it easy to organize work groups and related conversations,” it writes in a blog post announcing the beta.

While the company started by offering a more general comms app, launched in late 2014 and backed by Skype co-founder Janus Friis, in recent years it’s shifted emphasis to focus on privacy — rolling out end-to-end encryption in March last year — perhaps calculating this makes for a better differentiator in the crowded messaging platform space.

When it comes to team messaging, services offering end-to-end encryption are certainly a relative rarity. Slack’s data request policy, for example, notes that it will turn over customer data “in response to valid and binding compulsory legal process”.

In its blog about Teams, Wire includes a comparison graphic across a range of team comms products and messaging apps, such as Slack, Skype for business, WhatsApp and Signal, which shows its commercial positioning and marketing at work.

As well as flagging as a plus its use of e2e encryption — which extends to securing features such as group calls, screen-sharing and file sharing — other differentiating advantages it’s claiming include its business having a European base (specifically it’s based in Switzerland, which has a legal regime that’s generally perceived as offering some of the most robust data protection and privacy laws in Europe); and its code being open sourced (unlike, for example, the Facebook-owned WhatsApp messaging platform).

Wire also suggests e2e encryption for team messaging could be a way for companies to ensure compliance with incoming European privacy legislation. The General Data Protection Regulation, which ramps up fines for data breaches, is due to come into force in May next year.

“Businesses affected by the EU’s upcoming GDPR rules benefit from end-to-end encryption, as it automatically protects the data they share with the team from third party access,” Wire claims.

Earlier this year the company published an external audit of its e2e encryption. The audit uncovered some flaws and issues but generally found the reviewed components to offer “high security.”

A third layer of security review — to consider Wire’s complete solution in the round — remained outstanding at that point, however.

At the time Wire published the audit it committed to ongoing security reviews of “every major development” of its product.

So — presumably — that should include one for the Teams addition when it launches.

Wire hosts its open sourced code on GitHub.

Luma launches a home tech support service for $5 a month

The Luma was a compelling product when it was announced back in late 2015 — we even went so far as to declare the WiFi-extending home mesh system “fun.” That descriptor doesn’t really apply to the startup’s new offering, and indeed, Luma Guardian feels a bit out of left field for the networking hardware-maker.

The system is a lot of things rolled into one: a VPN service, antivirus (through Webroot), internet speed monitor and a sort of catch-all tech support line for $5 a month — none of which sounds particularly fun, per se. CEO Paul Judge begs to differ, however, insisting that it’s all part of a natural progression for Luma — the company had apparently already been fielding a broad range of security questions from device owners.

It was also one of the earlier home networking devices to bake IoT security into its system, and as a result, the company spotted security problems in around two-thirds of the “thousands and thousands” of homes that currently sport a Luma.

“We’d been blocking them, and the next step was, how do we go to their devices and clean them up?” Judge tells TechCrunch. “How do we install antivirus and clean up the infections on those devices? For 15 years, we built networking and security equipment for companies. You can have the best equipment in the world, but at the end of the day, they had a team to manage it all. Having someone there who pays attention is key.”

The result is what Luma refers to as a sort of “IT team for the house,” a way to offer protection and peace of mind for users who aren’t savvy enough to pull together those internet defense systems on their own. And $5 a month actually sounds like a decent price — after all, many VPN providers charge right around that for a standalone service. The concierge service will most likely appeal to older users, and as such, the company has enlisted the one-time Dos Equis pitchman “Most Interesting Man in the World,” Jonathan Goldsmith.

Of course, the startup can’t actually refer to him as such (trademarks and all), but he’s essentially reprised the role, this time with what looks to be a glass of whiskey at his side. The 78-year-old was replaced as a beer spokesman by a younger actor late last year, but his age puts him firmly in one of the product’s key potential demographics. Of course, in order to take advantage of the service, users will need a Luma deployed in their home, so they’ll either have to have enough tech savvy to pick up the system — or at least have someone gift them one.

The Guardian service is available starting today.