All posts in “Security”

Intel announces hardware fixes for Spectre and Meltdown on upcoming chips

When the Spectre and Meltdown bugs hit, it became clear that they wouldn’t be fixed with a few quick patches — the problem runs deeper than that. Fortunately, Intel has had plenty of time to work on it, and new chips coming out later this year will include improvements at the hardware/architecture level that protect against the flaws. Well, two out of three, anyway.

CEO Brian Krzanich announced the news in a company blog post. After thanking a few partners, he notes that all affected products from the last five years have received software updates to protect them from the bugs. Of course, the efficacy of those updates is debatable, as are their performance costs — and that’s if your hardware vendor even gets a patch out. But at any rate, the fixes are available.

There are actually three semi-related bugs here: Spectre is variants 1 and 2; then there’s variant 3, which researchers dubbed Meltdown. Variant 1 is arguably the most difficult of them all to fix, and as such Intel doesn’t have a hardware solution for it yet — but variants 2 and 3 it has in the bag.
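
For context on why variant 1 is so stubborn: it’s a bounds check bypass, abusing speculative execution past a perfectly ordinary length check, so there’s no single bad instruction for hardware to simply refuse. Here’s a conceptual sketch of the vulnerable code pattern (purely illustrative, not a working exploit; real demonstrations require careful low-level control over caches and timers):

```typescript
// Conceptual sketch of a Spectre variant 1 "gadget" (bounds check bypass).
// Architecturally this code is correct: out-of-bounds indexes never run the body.
// The issue is that the CPU may speculatively execute the body before the
// length check resolves, leaving a cache footprint that depends on secret data.

const data = new Uint8Array(16);          // array indexed with an attacker-chosen value
const probe = new Uint8Array(256 * 4096); // array whose cache state encodes the leak
let sink = 0;                             // keeps the accesses from being optimized away

function gadget(index: number): void {
  if (index < data.length) {       // the check the CPU speculates past
    const value = data[index];     // speculatively read, possibly out of bounds
    sink += probe[value * 4096];   // which cache line gets touched depends on `value`
  }
}
```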

“We have redesigned parts of the processor to introduce new levels of protection through partitioning that will protect against both Variants 2 and 3,” Krzanich writes. Cascade Lake Xeon and 8th-gen Core processors should include these changes when they ship in the second half of 2018. Although that’s a bit vague, we can be certain that Intel will prominently advertise which new chips include the mitigations as we get closer to release.

Lastly, even older hardware will be getting the microcode updates — back to the 1st-gen Core processors. Remember Nehalem and Penryn? Those will be patched in time, as well. Anyone surprised that a Nehalem system is still in use anywhere probably hasn’t worked in IT at a big company or government agency. I bet there are 98SE systems running on Pentiums somewhere in the Department of Energy.

This announcement doesn’t require anything from users, but keep your computer up to date if you know how, and ask your device provider’s customer service if you’re not sure.

TypingDNA launches Chrome extension that verifies your identity based on typing

TypingDNA has a new approach to verifying your identity based on how you type.

The startup, which is part of the current class at Techstars NYC, is pitching this as an alternative to two-factor authentication — namely, the security feature that sends unique codes to a separate device (usually your phone) to make sure someone else isn’t logging in with your password.

The problem with two-factor? TypingDNA’s Raul Popa put it simply: “It’s a bad user experience … Nobody wants to use a different device.” (I know that TechCrunch writers have had two-factor issues of their own, like when they’re trying to log in on an airplane and can’t connect their phone.)

So TypingDNA allows users to verify their identity without having to whip out their phone. Instead, they just enter their name and password into a window, and TypingDNA analyzes their typing to confirm that it’s really them.


The startup’s business model revolves around working with partners to incorporate the technology, but it’s also launching a free Chrome extension that works as an alternative to two-factor authentication on a wide range of services, including Amazon Web Services, Coinbase and Gmail.

Popa said TypingDNA measures two key aspects of your typing: how long it takes you to reach a key and how long you keep it pressed down. Apparently these patterns are unique; Popa showed me that the system could tell the difference between his typing and mine, and you can test it out for yourself on the TypingDNA website.
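
To make that concrete, here is a minimal, hypothetical sketch of capturing those two timings (often called flight time and dwell time) from browser key events. It is not TypingDNA’s actual code or API, just an illustration of the raw signal such a system works from:

```typescript
// Hypothetical sketch: record the two timings described above for each keystroke.
// "Flight time": gap between releasing the previous key and pressing this one.
// "Dwell time": how long this key stays pressed.
interface KeyTiming {
  key: string;
  flightMs: number; // time taken to reach the key
  dwellMs: number;  // time the key is held down
}

const timings: KeyTiming[] = [];
let lastKeyUp = 0;
const pressedAt = new Map<string, number>();

document.addEventListener("keydown", (e) => {
  pressedAt.set(e.code, performance.now());
});

document.addEventListener("keyup", (e) => {
  const now = performance.now();
  const down = pressedAt.get(e.code);
  if (down === undefined) return;
  timings.push({
    key: e.code,
    flightMs: lastKeyUp ? down - lastKeyUp : 0,
    dwellMs: now - down,
  });
  lastKeyUp = now;
  pressedAt.delete(e.code);
});

// A typing profile is then some statistical summary of these timings,
// which can be compared against a stored profile for the same user.
```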

He also said that the company can adjust the strictness of the system, getting the rate of false positives as low as 0.1 percent. In the case of the Chrome authenticator, Popa said, “We minimize the false acceptance rate” — so you might get rejected if you’re typing in an unusual position, or if there’s some other reason you’re typing slower or faster than usual. But in that case, the authenticator will just ask you to try again.
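
Conceptually, that strictness knob is a threshold on a match score: raise it and fewer impostors slip through, but the legitimate user gets asked to retry more often. A toy sketch, with the actual scoring (the hard, proprietary part) stubbed out and the threshold values invented for illustration:

```typescript
// Toy illustration of the strictness trade-off described above.
// A real system would compute `matchScore` from the keystroke timings;
// here it's assumed to be a similarity value in [0, 1].
type Decision = "accept" | "retry";

function verify(matchScore: number, strict: boolean): Decision {
  const threshold = strict ? 0.9 : 0.7; // hypothetical values, not TypingDNA's
  // A higher threshold lowers the false acceptance rate at the cost of
  // rejecting the real user more often; they are simply asked to type again.
  return matchScore >= threshold ? "accept" : "retry";
}
```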


And again, you can use the Chrome extension on a variety of sites. Most two-factor setups involve scanning a QR code to register a device, and TypingDNA can capture that code instead. The two-factor codes then go to the TypingDNA extension (they’re stored locally on your computer, not on the company’s servers) and are revealed once you’ve verified your identity with the aforementioned typing.
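
The QR code that a site would normally show your phone app just encodes a shared secret, and the six-digit codes can be computed locally from that secret plus the current time. That’s standard TOTP (RFC 6238); the sketch below is a generic illustration of how such codes are derived, not TypingDNA’s implementation:

```typescript
// Generic TOTP sketch (RFC 6238): the shared secret captured from the QR code
// is enough to derive the current 6-digit code locally; no server is involved.
async function totpCode(secret: Uint8Array, nowMs = Date.now()): Promise<string> {
  // 30-second time step, encoded as an 8-byte big-endian counter.
  const counter = Math.floor(nowMs / 1000 / 30);
  const msg = new Uint8Array(8);
  new DataView(msg.buffer).setUint32(4, counter); // counter fits in 32 bits for now

  // HMAC-SHA1 over the counter, keyed with the shared secret.
  const key = await crypto.subtle.importKey(
    "raw", secret, { name: "HMAC", hash: "SHA-1" }, false, ["sign"]
  );
  const mac = new Uint8Array(await crypto.subtle.sign("HMAC", key, msg));

  // Dynamic truncation: pick 4 bytes at an offset given by the last nibble.
  const offset = mac[mac.length - 1] & 0x0f;
  const binary =
    ((mac[offset] & 0x7f) << 24) |
    (mac[offset + 1] << 16) |
    (mac[offset + 2] << 8) |
    mac[offset + 3];

  return (binary % 1_000_000).toString().padStart(6, "0");
}
```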

You can visit TypingDNA to learn more and download the extension.

Security researchers find flaws in AMD chips but raise eyebrows with rushed disclosure

A newly discovered set of vulnerabilities in AMD chips is making waves not because of the scale of the flaws, but rather the rushed, market-ready way in which they were disclosed by the researchers. When was the last time a bug had its own professionally shot video and PR rep, yet the company affected was only alerted 24 hours ahead of time? The flaws may be real, but the precedent set here is an unsavory one.

The flaws in question were discovered by CTS Labs, a cybersecurity research outfit in Israel, and given a set of catchy names: Ryzenfall, Masterkey, Fallout and Chimera, with associated logos, a dedicated website and a whitepaper describing them.

So far, so normal: major bugs like Heartbleed and of course Meltdown and Spectre got names and logos, too.

The difference is that in those cases the affected parties, such as Intel, the OpenSSL team and AMD, were quietly alerted well ahead of time. This is the concept of “responsible disclosure,” which gives developers first crack at fixing an issue before it becomes public.

There’s legitimate debate over just how much control big companies should exert over the publicity of their own shortcomings, but generally speaking, in the interest of protecting users, the convention tends to be adhered to. In this case, however, the CTS Labs team sprang their flaws on AMD fully formed and with little warning.

The flaws discovered by the team are real, though they require administrative privileges to execute a cascade of actions, meaning taking advantage of them requires considerable access to the target system. The research describes some as backdoors deliberately included in the chips by Taiwanese company ASMedia, which partners with many manufacturers to produce components.

The access requirement makes these much more limited than the likes of Meltdown and Spectre, which exploited problems at the memory handling and architecture level. They’re certainly serious, but the manner in which they have been publicized has aroused suspicion around the web.

Why the extremely non-technical video shot on green screen with stock backgrounds composited in? Why the scare tactics of calling out AMD’s use in the military? Why don’t the bugs have CVE numbers, the standard tracking method for nearly all serious issues? Why was AMD given so little time to respond? Why not, if, as the FAQ suggests, some fixes could be created in a matter of months, at least delay publication until they were available? And what’s with the disclosure that CTS “may have, either directly or indirectly, an economic interest in the performance” of AMD? That’s not a common disclosure in situations like this.

(I’ve contacted the PR representative listed for the flaws [!] for answers to some of these questions.)

It’s hard to shake the idea that there’s some kind of grudge against AMD at play. That doesn’t make the flaws any less serious, but it does leave a bad taste in the mouth.

AMD issued a statement saying that “We are investigating this report, which we just received, to understand the methodology and merit of the findings.” Hard to do much else in a day.

As always with these big bugs, the true extent of their reach, how serious they really are, whether users or businesses will be affected and what they can do to prevent it are all questions that will only be answered as experts pore over and verify the data.


Cellebrite may have found a way to unlock iPhones running iOS 11

According to a Forbes report, Israeli company Cellebrite is now able to unlock some very recent iPhones. Cellebrite is a well-known company that sells mobile forensics tools to extract data from locked devices.

While early versions of iOS weren’t really secure, this has changed quite a lot in recent years. All iOS devices now ship with a secure enclave, all data is encrypted if you use a passcode and there are multiple security checks when you boot and use your device.

In other words, if you don’t have the passcode, you’re going to have a hard time getting your hands on the data on the device. Many firms try to find vulnerabilities to unlock mobile devices. It has become a lucrative industry, as intelligence agencies often pay forensics companies to unlock mobile devices.

Those forensics methods often lag behind. For instance, it’s quite easy to find a device to unlock an iPhone 6 running iOS 8. But if Forbes’ report and Cellebrite’s website are right, governments can now pay Cellebrite to unlock an iPhone 8 running iOS 11.

It’s unclear if it works with the most recent version of iOS 11 (11.2.6) or just the operating system version that was available back in September (11.0). It’s also unclear if it works with all iOS devices or if it only works with some devices. Forbes found a warrant that mentions an unlocked iPhone X.

This is a cat-and-mouse game, and Apple engineers are now probably working hard to fix all the vulnerabilities they can find. As always, if you don’t want to let authorities read your personal data, you should keep your devices up-to-date.

In addition to bringing new features, updates include security patches that protect you against the most common attacks. Malicious hackers might otherwise use those same vulnerabilities against you.