Microsoft appeals to power users with the Surface Precision Mouse


Once upon a time, Microsoft had a whole line of gaming hardware under the Sidewinder brand, and while they never really put much of a dent in Razer or Logitech, they put out some decent mice (the X3 “might serve as a good introduction to multiple DPI settings for your mom or dad,” I wrote in 2010). I’m glad to see they’re returning to that advanced-mousing heritage with the Surface Precision Mouse.

Of course, you can never know if a mouse is for you until you test it, but the SPM looks like a lot of care was put into it: it’s totally different from pretty much every other mouse in the Microsoft Hardware lineup, and much better than the plain old Surface Mouse.

The main arch is aluminum, and the sides are “molded TPE,” by which they mean plastic, but the good kind. It weighs 135 grams, right between a Razer DeathAdder and my Logitech G500s. A good weight, though you can’t adjust it.

It’s clearly a right-handed mouse, molded to fit the hand and let your thumb rest on the little shelf there, something I’ve always appreciated. There are three customizable buttons that will fall under the thumb, but far enough away that you won’t hit them by accident.

It works on Mac or PC, via Bluetooth or USB, and the battery should last 3 months before needing a charge. (Things have come a long way since the X8, but I bet that charge cable is still the best of all time.)

There are three things that will make or break this mouse.

First, sensor quality and placement. The plain Surface Mouse has a laser-based sensor that’s somewhat forward of center, and reviews said it was pretty good (I haven’t used one). If they keep that pattern, the SPM will be golden.

Second, the software. Hopefully Microsoft doesn’t restrict what you can assign to the buttons. And quickly adjustable sensitivity controls are needed by any artist or gamer worth their salt (or one whose gaming is art, such as myself).

And third, the scroll wheel. It looks nice, but I’ve been spoiled for years by Logitech’s freewheel scrolling, and even the best scroll wheels with discrete steps feel clumsy to me now.

We’ll get one of these things to test and make sure you get this critical information. Man, it has been a while since I’ve gotten to nerd out on a mouse.

A dormant chip in the Pixel 2 will soon let developers write better camera and AI apps

Here’s a surprise: Google’s Pixel 2 phones include a custom system-on-a-chip (SoC) that’s optimized for image processing — but it currently just sits there, doing nothing.

Google says it’ll turn this chip on in the coming weeks as a developer option in the preview of Android Oreo 8.1. This will enable developers to include the same HDR+ image processing that allows Google’s camera app to produce great pictures with hardware that is, at least on the spec sheet, not up to par with that of its competitors.

The chip, dubbed the Pixel Visual Core, marks Google’s first foray into custom silicon for a consumer product. It features eight custom-designed image processing unit (IPU) cores, each with 512 arithmetic logic units. This allows the Pixel’s camera to shoot images that use the company’s HDR+ algorithm for a wider dynamic range (by quickly taking and combining multiple images at different exposure levels) with none of the delay you’d typically expect for HDR shots. Google says that using the Pixel Visual Core speeds up HDR+ processing by 5x, all while using only a tenth of the energy of running the same algorithm on a regular CPU.
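To make that merge step concrete, here’s a minimal, illustrative Python/NumPy sketch of combining frames shot at different exposures into one balanced image. It is emphatically not Google’s HDR+ pipeline, which aligns and merges raw bursts with far more sophisticated denoising and tone mapping; the function name and weighting scheme here are assumptions made purely for illustration.

```python
# Minimal sketch (not Google's actual HDR+ pipeline) of the idea described
# above: capture several frames at different exposures and merge them into one
# image with more usable dynamic range. Assumes the frames are already aligned.
import numpy as np

def merge_exposures(frames, exposures):
    """frames: list of HxWx3 float arrays in [0, 1]; exposures: relative exposure times."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    weight_sum = np.zeros_like(frames[0], dtype=np.float64)
    for frame, exposure in zip(frames, exposures):
        # Trust pixels that are neither crushed blacks nor blown highlights.
        weight = 1.0 - np.abs(frame - 0.5) * 2.0
        # Normalize each frame to a common radiance scale before averaging.
        acc += weight * (frame / exposure)
        weight_sum += weight
    radiance = acc / np.maximum(weight_sum, 1e-6)
    # Simple global tone map back to a displayable range.
    return radiance / (1.0 + radiance)

# Example: three synthetic frames of the same scene at 0.5x, 1x and 2x exposure.
rng = np.random.default_rng(0)
scene = rng.random((4, 4, 3))
frames = [np.clip(scene * e, 0, 1) for e in (0.5, 1.0, 2.0)]
merged = merge_exposures(frames, [0.5, 1.0, 2.0])
```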

Google notes that the IPU uses two domain-specific languages: Halide for image processing and — no surprise there — TensorFlow for machine learning applications.

It’s pretty curious, though, that right now, Google says this chip just quietly sits on the phone, doing nothing. It’s not even being used by Google’s own camera app. As a Google spokesperson told me, the company has “managed to achieve the HDR+ through tight coupling of hardware, software and ML.” There must be some speed and power advantages to using it even in Google’s own applications, though, so my best guess is that the chip was a very late addition to the phone’s hardware — so late that Google wasn’t even able to work with third-party developers to write public demo apps and tout it in its launch keynote.

For now, the focus here is squarely on allowing developers to use HDR+ in their applications, but given that this is a programmable chip, it’s no surprise that Google plans to open it up to other use cases over time, too. “HDR+ is the first application to run on Pixel Visual Core,” Google notes in today’s announcement. “As noted above, Pixel Visual Core is programmable and we are already preparing the next set of applications. The great thing is that as we follow up with more, new applications on Pixel Visual Core, Pixel 2 will continue to improve.”

Two sizes really do fit all with Google’s Pixel 2 and Pixel 2 XL


Google’s Pixel 2 and Pixel 2 XL have a number of strengths to recommend them, but what makes them really unique is that they offer two versions of a phone that finally, for Android, provide everything anyone really needs in a smartphone. Other device makers have realized that different sizes appeal to different customers, but Google’s Pixel lineup offers a true choice, instead of a selection of compromises.

It’s true that there are a few key differences between the Pixel 2 and Pixel 2 XL, including the AMOLED display in the 2 and the pOLED screen in the 2 XL (with 95 percent and 100 percent DCI-P3 color gamut coverage, respectively), but the phones are the same where it counts: the camera, the surprisingly convenient squeezable Assistant activation, the processor, the storage, the front-firing speakers, and the battery life.

All of that’s nice to have, but the camera is the key ingredient here: Google isn’t penalizing those who prefer smaller devices with any sacrifice in capabilities on that front. Most large devices that offer additional camera benefits do so, at least in part, because of the extra real estate for more camera hardware, but Google’s smartphones offer excellent camera capabilities despite having just one lens on both sizes.

The other area that might affect purchasing decisions is the bezel gulf: the smaller Pixel has noticeably larger top and bottom bezels around its display. In practice, though, it’s not something most users are going to care much about, and when using either phone you forget about the bezels pretty quickly thanks to the excellent OLED screens.

Both devices also feel great in the hand, but the smaller Pixel does a lot with a form factor that’s appreciably better for those with smaller grips, and single-handed use.

Other smartphone makers divide their lineups by size but also change the features available in each variant, adding a lot more to consider in the decision-making process. Google has successfully threaded the needle in balancing the two form factors, though design preference is still going to cause some head-scratching for buyers picking between one and the other.

The real strength of the Google Pixel 2 doesn’t come from its range of sizes, however, but from its ability to deliver an Android smartphone that doesn’t just exemplify how far the mobile OS has come over the years, but does so with style and a unique take on what it means to make a smartphone. The Nexus line always had a slightly forgettable quality, the ‘beigeness’ of a manufacturer reference device. The Pixel, and especially the Pixel 2, is a statement, and one that people can really identify with.

Google Pixel 2 review

Google wanted to announce more than just a boatload of products at its event the other week. The company hoped to foster a new conversation around consumer hardware, moving from a narrative about specs to one about artificial intelligence and machine learning.

The Pixel 2 is the centerpiece of that idea. The sequel to last year’s hit phone isn’t a radical upgrade. If it were an Apple product, the company would put a somewhat resigned “S” after the model number as an affirmation that this is one of those in-between years. It’s an evolution of a good phone that helps the device keep pace with the market, but lacks the sort of wow factor that drives early adopters to trade in last year’s model.

But while Google managed to wow many reviewers with its self-branded entry into the market, the Pixel line was arguably never really just about hardware to begin with. It’s about developing hardware and software together.

It’s a synergy few outside of Apple have been able to accomplish, but as Microsoft has done with its surprisingly successful Surface line, the phones are showcases for the power of pure, uncut Android. It’s a line developed with the Android experience at its core, a marked change from many of the company’s hardware partners, for whom the OS is more of an afterthought.

There’s little doubt that the company is doing some of the industry’s most compelling work in terms of consumer-facing AI and machine learning. Years of research and development on those fronts are beginning to bear fruit and have converged here in some very interesting ways. Taking a step back to examine Google’s long-term goals with software offerings like Assistant and Lens, it’s easy to envision a future where hardware becomes relatively incidental.

But that’s going to be a hard-fought, uphill battle, after a decade of tech companies bombarding us with tech specs, Clockwork Orange-style. And as the company happily admitted to us following the event, it still has some work to do on the hardware side, including the eventual addition of an edge-to-edge display.

Google is definitely doing some interesting work with existing hardware. You need look no further than the camera for evidence. Imaging was one of the highlights of last year’s model, and the company has stretched essentially the same camera even further, including the ability to shoot in portrait mode without the need for a second camera.

The Pixel 2 and Pixel 2 XL are good phones, mostly because they’re building on top of solid foundations and because of what they portend for the future of mobile handsets. But convincing consumers to rethink their mobile priorities is a larger, nuanced argument. It’s a lot to ask from a single handset.

Pixel by Pixel

Google made a less-than-subtle dig at Apple during the Pixel event, telling the crowd, “We don’t set aside better features for the larger device.” That’s not entirely true. The XL has a few standout features, most notably the lovely 6-inch pOLED display (versus the Pixel 2’s 5-inch AMOLED), which brings higher resolution, better color reproduction and more consistency. Otherwise, however, the insides are basically the same. Google sent us each phone for perusal, but for the sake of simplicity, we’re going to focus this review on the larger of the two devices.

The first Pixel marked a dramatic change for Google’s hardware approach. The company would no longer let its partners call the shots. Instead, it would lead development in-house, in an attempt to get as close to pure hardware/software synergy as possible — a feat few companies outside of Apple are able to accomplish. The result was a hardware product distinct enough to make Google an instant contender alongside the likes of Samsung and Apple.

The new devices don’t mark a major departure design-wise, but they do bring some welcome changes. That two-tone back is still in place, but this time the company has opted for a much sturdier aluminum unibody design that gives the phone some added heft without making it overly bulky. As with the previous models, the Pixel 2 XL isn’t flashy compared to premium devices like the Galaxy Note 8 and iPhone X, but it’s a sturdy device that feels comfortable in hand.

Google has chiseled away at the bezels up front as well, helped along by the subtle curvature of the front glass on the left and right sides. Unlike Samsung and Apple, however, the company wasn’t ready to pull the trigger on an edge-to-edge display. Pricing was likely a big factor in that decision; after all, the display is a major driver of the iPhone X’s astronomical price tag. At $849, the Pixel 2 XL isn’t exactly a steal, but it’s certainly not out of the standard six-inch premium smartphone range.

Of course, Google certainly sees things heading that way. As the company’s VP of product management, Brian Rakowski, told me the day of launch, “It’s a new technology, but we’re really excited about the possibility of being able to wrap the screen around the side.” That certainly points at a company waiting for the price on the technology to come down.

Sound and vision

Google also dropped the headphone jack from the bottom of the phone, after mocking Apple for doing the same last year. Back then, the company jokingly listed “3.5 mm headphone jack satisfyingly not new” as one of the first Pixel’s big features on the product page. It’s gone now, and in its place is a $20 adapter, included somewhat ironically in the name of making the whole thing more elegant.

There is, however, a marked upside to the decision. Dropping the jack clearly played a part in Google’s decision to invest more on the headphone front. There are those compelling Pixel Buds, which offer real-time language translation and which I personally can’t wait to take for a spin. The push toward Bluetooth was also no doubt a driving force behind the addition of “Fast Pair,” ostensibly the company’s take on Apple’s W1 pairing, which takes a lot of the pain out of Bluetooth syncing.

The feature isn’t quite as well-integrated as Apple’s AirPod connectivity yet, but it has some marked advantages. For one thing, it will work with select third parties; our review unit shipped with a pair of on-ear Libratone headphones, as the Pixel Buds aren’t ready for prime time as yet. For another, the company plans to offer it on all Android phones running Nougat or higher. That means a heck of a lot more opportunities to take advantage of the feature than Apple’s walled ecosystem.

As with the first Pixel, there’s no Home button on the front of the device. The top and bottom bezels have shrunk down a fair bit and are now home to a pair of front-facing speakers. It is, perhaps, some last vestige of companies willing to include those sorts of features up front, as the industry marches toward the inevitability of all-screen fronts. So enjoy it while it lasts. On-board audio has been mostly an afterthought for phone makers, and things will likely continue to stay that way as aesthetic decisions take precedence.

The speaker grilles are well-positioned for watching YouTube videos and the like — and they get pretty loud, as advertised. That said, I’ve yet to encounter a pair of phone speakers I would recommend for anything beyond watching a quick video, and the Pixel XL’s don’t really do much to buck that trend.

The screen, on the other hand, is lovely. That much is clear from the moment you fire up the phone and see the live wallpaper in action. As goofy a feature as it is, the default bird’s-eye view of waves crashing on a beach does a great job of demonstrating the color and detail of the pOLED screen (that’s LG’s OLED tech of choice). It’s the same one, or at least a really similar one, you’ll find on the LG V30.

That’s a good thing. LG’s offering is a top contender for the best screen on a smartphone right now, alongside Samsung’s new flagship and the iPhone X (which uses Samsung’s panels, incidentally).

Users may also notice a distinct change in the color gamut. Things appear darker at first; the reds are almost a muddy brown. That change is by design. Android Oreo brings color profile support to the operating system, and Google is taking full advantage of it by offering what it has determined is a truer-to-life display. It’s a bit of a jolt at first, with the saturation bumped down a fair bit, but you get used to it after using the phone for a day or two.

The new color profile support is open to hardware and software developers, so you may start seeing it become more widespread on OLED displays. Google says it’s also open to the possibility that the transition might be too much for some users, so it could loosen up on the decision or offer people more control over their own color gamut, depending on feedback.

The best of squeeze

No surprise, Google found another key hardware partner in the form of HTC. The Pixel 2 was well underway before the two companies sealed the deal, with Google buying up the phone maker’s assets, but HTC’s role in the success of the phone’s predecessor made the company a no-brainer for the sequel.

Nowhere are HTC’s fingerprints clearer than on Active Edge. Named Edge Sense when it launched with the U11 earlier this year, the side-squeezing gimmick has now been adopted by Google for its own flagship. In a conversation at the Pixel 2 launch event, the company told me it developed its own version of the feature from the ground up. It’s hard to say how much of that is true, and how much is simply the company’s reluctance to shout out hardware partners, but either way, the tech works the same in principle.

It’s still a silly gimmick, adding sensors to the device’s frame in lieu of an additional single-purpose button (something Samsung took a lot of flak for with Bixby), but it does make more sense on a device where Assistant is central to the product’s functionality. It’s certainly understandable if you’ve opted to disable the “Okay Google” wake word for battery reasons, or over rising privacy concerns around always-listening devices (the Google Home Mini story is only the latest to raise red flags).

A quick squeeze fires up Assistant from anywhere — that includes the lock screen, though you’ll have to actually unlock the phone to get your answer. The feature is quite responsive and customizable in settings. It worked just fine through the case the company shipped the Pixel 2 with, and offers a satisfying tactile buzz to let you know it’s picking up what you’re putting down.

The feature is also interesting from the standpoint of a company looking to move its assistant beyond just a voice interface. Amazon has stayed pretty firm in its commitment to Alexa as an almost exclusively voice-driven interface, but Google has looked to broaden its offering, using its proprietary system to unite all manner of different features across its devices.

A squeeze of the side and a tap of the keyboard icon inside the Assistant window offers a way to interface with it without using your voice at all. That could ultimately prove helpful in, say, a loud environment, or if you don’t want to be “that guy” (or lady) on a crowded public bus asking, “Okay Google, what’s that smell?”

The Pixel 2 doesn’t really raise the squeeze beyond novelty, but Google never really positioned it as much more; HTC, by contrast, sold it as downright revolutionary. As an added feature, it has some potentially interesting use cases, though in most cases your voice will probably get the job done even better.

Lens crafters

A big part of keeping the two devices on level footing from a hardware standpoint is the decision to include only a single camera. From a pure feature standpoint, that means the Pixel line is getting left in the dust by practically every flagship, as Apple and Samsung push their own dual-camera solutions and Qualcomm makes multi-camera implementations that much easier for the rest of the industry. The inclusion of multiple cameras has several benefits; a lot depends on the specific implementation, but they can include things like better picture quality, optical zoom and improved depth sensing.

But while the camera hardware isn’t much changed from last year’s model, Google has once again managed to do a lot of heavy lifting on the software side of things. In conversations with TechCrunch, the company has noted that the future may well bring more and more cameras (“maybe 40,” one executive joked during our meeting), but in the meantime, the company is determined to make the most of a single lens.

Depth sensing is going to keep getting more important with the proliferation of products like ARCore and ARKit, but Google has managed to get good results here without leaning on the parallax effect from two cameras. Instead, it uses the slight difference in viewpoint between the left and right halves of each dual pixel on the sensor behind its single lens. The most immediate result is the implementation of Google’s own version of portrait mode: the faked bokeh effect that blurs the background to make a subject pop.
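For intuition, here’s a rough Python/OpenCV sketch of the synthetic-bokeh step: once you have some per-pixel depth estimate (however it was obtained), keep pixels near the subject’s depth sharp and blur the rest. This is not Google’s actual portrait-mode implementation, which pairs dual-pixel disparity with learned subject segmentation; the function, its parameters and the input files are hypothetical.

```python
# Illustrative sketch of depth-based background blur, not Google's portrait code.
import cv2
import numpy as np

def fake_bokeh(image_bgr, depth_map, subject_depth, depth_tolerance=0.1, blur_ksize=21):
    """image_bgr: HxWx3 uint8; depth_map: HxW float in [0, 1], larger = farther."""
    blurred = cv2.GaussianBlur(image_bgr, (blur_ksize, blur_ksize), 0)
    # Pixels close to the subject's depth stay sharp; everything else gets blurred.
    subject_mask = (np.abs(depth_map - subject_depth) < depth_tolerance).astype(np.float32)
    subject_mask = cv2.GaussianBlur(subject_mask, (11, 11), 0)[..., None]  # feather the edge
    out = subject_mask * image_bgr.astype(np.float32) + (1 - subject_mask) * blurred.astype(np.float32)
    return out.astype(np.uint8)

# Hypothetical usage: img = cv2.imread("portrait.jpg"); depth = np.load("depth.npy")
# result = fake_bokeh(img, depth, subject_depth=0.2)
```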

The result is actually pretty impressive. Granted, I had a bit of an issue getting it to work perfectly in some low-light situations, but on the whole, the camera’s portrait mode is up there with many other flagships that use a pair of cameras to achieve the effect. It’s not able to perfectly capture, say, a messy hairline, but that’s fairly common on these devices. Like Samsung’s latest offering, Google Photos will save both the original and the bokeh version of the photo, though it doesn’t offer a slider that lets you adjust the blur to your liking.

Google also happily touted the Pixel 2’s DxOMark score of 98 at the event. The site’s not exactly a household name for phone buyers, but it’s an important benchmark. The 98 isn’t out of a possible 100, but it’s the highest score the site has given a smartphone camera, and doubly impressive given that the Pixel hit it (surpassing last year’s also impressive 90) with a single camera.

As advertised, the camera also performs admirably in low- and mixed-light settings, grabbing tough shots with minimal noise. The auto setting will work well for most users in most settings, but Google’s included some additional controls, like white balance and exposure compensation. It’s not quite the same level of control featured in other smartphone camera apps, but should be plenty for most people.

And then, of course, there’s Motion Photos. Google no doubt found a bit of inspiration in Apple’s similarly named offering. The principle is essentially the same: by default, the camera captures what’s essentially a proprietary version of an animated GIF. The animation is fairly smooth, even with a shaky hand, courtesy of the Pixel’s new video stabilization technology, as evidenced by the Motion Photo of my rabbit Lucy above, converted into a video and then into a GIF. That can then be set as a wallpaper, exported as a two-second video or shared via Google Photos, though I was only able to view it in motion on Chrome.

Lens, meanwhile, is probably the most meaningful software addition to the Pixel 2’s camera offering. It’s still in beta and won’t be coming to consumer units for a few more weeks, but it’s an impressive and compelling feature nonetheless, leveraging Google’s extensive search and knowledge base to offer context for the shots you take.

Like Samsung’s Bixby offering, it’s able to work with landmarks and buildings — an impressive feat, given the infinite number of ways it’s possible to shoot one of those objects. At the moment, monument recognition is a bit of a mixed bag. You’re going to want to make sure you’re close enough to get an unobstructed view, while making sure you’re far enough away to get the full thing in frame — it’s a tough task, as seen with the above attempted shot of the World Trade Center.

It works well with books and records, and I was able to get it to recognize the aforementioned Lucy as a “domestic rabbit” and identify a tree as a tree. The system then presents a dialog box from Google Search offering up additional context. I was pretty impressed with its capabilities at this early stage, drawing upon Google’s vast knowledge base, and it’s sure to only get better as more and more people use it.

It’s not a super useful tool at the moment. Detecting well-known monuments in perfect conditions from the right distance is a fairly narrow use case. Until it works most of the time, it will be a novel but slightly frustrating feature.

But it’s an important signpost for Google’s use of AI and machine learning to augment its offerings. It also points to the company’s ability to use cloud-based computing and on-board software to augment the capabilities of handheld devices. And then there’s that dormant chip, which could point a way forward for third-party developers.

You can check out an even more in-depth look at the camera features here.

Double stuf

Software is, of course, the key place Google looks to distinguish itself, a difficult task given that most of the competition will also receive many of these features via Android updates. The new Pixel isn’t the first device to ship with Android Oreo; that title belongs to the Sony Xperia XZ1, which doesn’t mean a heck of a lot to users here in the States.

When notification dots are the most notable feature in your major software update, it probably goes without saying that it’s not the most compelling new operating system update. In fact, when it arrived, Frederic called it “probably one of the least exciting operating system updates in recent memory.” Notification dots and app shortcuts (accessed by giving an app a long press) are both a bit of catch-up with offerings that have been present in iOS for a while. Picture-in-picture, meanwhile, is a nice addition to Nougat’s split-screen mode, taking advantage of additional screen real estate so you can watch a video while using another app.

But it wouldn’t be a proper piece of Google hardware if the company didn’t use it to launch a few compelling new features. Always On Display is the one you’ll notice first, for obvious reasons. It’s a handy little addition, showing the time and date and popping up notifications as they come in. Anything that helps keep our faces out of our phones for any period of time is probably a welcome addition. In Always On mode, the screen stays black with white text, really the only option that won’t drain the battery in the process.

Always On is also home to one of Android’s more fun new features, Now Playing. It’s a sort of built-in Shazam killer that automatically identifies songs as they play. The artist name and song title pop up at the bottom, and tapping through brings you to the song’s entry in, naturally, Google Play. It’s a great little feature and stupidly simple, though it was a bit of a mixed bag in my own testing.

It did a pretty solid job with the PA system in the coffee shop I was working in, recognizing songs by bigger-name artists like Kanye and Fleetwood Mac. It even got the occasional indie act, like Courtney Barnett (great album, listen if you haven’t already), but came up short with prominent indie rock artists like Built to Spill and Guided by Voices. And frustratingly, there’s no “no match found” message, so you find yourself waiting and wondering a lot longer than the 10 or so seconds it should take.

It turns out the system uses an on-phone database pulled from Google Play that contains somewhere in the tens of thousands of songs. This is done for privacy reasons, so the phone isn’t constantly sending information about your listening habits to Google. The downside is that it’s only tuned to “popular songs,” which is a bit ironic, given that there’s likely more of a need to hunt down obscure titles than, say, Ed Sheeran.
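As a sketch of what that on-device matching looks like in principle, here’s a generic Shazam-style fingerprinting approach in Python: hash pairs of spectrogram peaks and count collisions against a locally stored catalog, so nothing ever leaves the phone. The hashing scheme and database format are illustrative assumptions, not Google’s actual Now Playing implementation.

```python
# Generic audio-fingerprint sketch; the local_db stands in for the on-phone catalog.
import numpy as np
from scipy.signal import spectrogram

def fingerprint(samples, sample_rate, fan_out=5):
    """Return a set of (f1, f2, dt) hashes built from spectrogram peaks."""
    freqs, times, sxx = spectrogram(samples, fs=sample_rate, nperseg=1024)
    # One dominant frequency bin per time slice; real systems pick many peaks.
    peaks = [(t_idx, int(np.argmax(sxx[:, t_idx]))) for t_idx in range(sxx.shape[1])]
    hashes = set()
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1:i + 1 + fan_out]:
            hashes.add((f1, f2, t2 - t1))
    return hashes

def best_match(query_hashes, local_db):
    """local_db: {song_title: set_of_hashes}, shipped with the phone."""
    scores = {title: len(query_hashes & hashes) for title, hashes in local_db.items()}
    title, score = max(scores.items(), key=lambda kv: kv[1])
    return title if score > 0 else None  # report "no match" instead of guessing
```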

In the future, the company will be adding a more direct Shazam competitor to Google Assistant, so you can just ask, “What’s this song?” Another fun addition in the same vein lets you search for a song by mumbling a few of its lyrics. The search will generally pull up some answers via YouTube, so you can cross-reference your findings. The results, again, are a bit of a mixed bag, but it’s easy to see where Google’s going with Assistant: building an AI that can serve some useful function in every aspect of our day-to-day lives, using its robust search platform as an important stepping stone.

The Pixel’s big sell

The Pixel 2 doesn’t make a particularly compelling upgrade case for owners of last year’s model. The hardware isn’t a radical departure, and many of the new software features will be coming to the first-generation model; after all, Android support for older devices is one of the key tenets of Google’s first-party software approach. The device also doesn’t push the boundaries of what a mobile device is as much as other recent flagships.

Instead, it’s a good update built on a solid foundation that makes an interesting case for the importance of moving beyond a purely spec-based approach to devices. It’s true that Google will have an uphill battle convincing consumers to look beyond the pure numbers, but there are enough additions on-board to paint a picture of a compelling and well-rounded hardware product.

The Pixel 2 isn’t exactly future-proofed. Google told us that it’s looking toward an edge-to-edge display for future models, and it hasn’t ruled out joining the rest of the industry in embracing multiple cameras. Those hardware features will likely play a big role in the kinds of AI and ML experiences the company is currently building with Assistant and across Android in general.

The new phones offer a glimpse at that future and, in the case of the device’s camera, show what can be done without having to charge users $1,000 for a device.

What’s new in Windows 10 Fall Creators Update

Announced roughly this time last year, Creators Update was Microsoft’s attempt to capture the creative types who have long been considered a core part of the Mac ecosystem’s userbase. The update brought simple 3D content creation tools to Windows 10 and additional gaming functionality, among other things. The new Fall Creators Update, which is set to roll out to all users today, builds on top of many of those advances.

Like its predecessor, the new update brings more 3D content creation and helps ready Windows for Microsoft’s vision of a Mixed Reality future. There are also a number of other additions aimed at patching holes and addressing new input devices like the Surface Pen. Here’s a rundown of some of the biggies.

Pixel 2 and Pixel 2 XL reaffirm Google’s top spot among smartphone cameras

Google’s Pixel 2 and Pixel 2 XL smartphones are here, and they bring with them sequels to some of the best smartphone cameras available. They’re equal to the task, too; more than equal, in fact.

Google’s changes to the Pixel cameras are mostly on the software side, but they gain some excellent new abilities, including a new Portrait mode, as well as optical image stabilization to complement Google’s digital anti-shake for photos and video.

Google spent a lot of time during its presentation crowing about the Pixel 2 (and Pixel 2 XL, since their cameras are the same) earning the highest-ever DxO rating for a smartphone camera. And that’s not a bad thing, as far as accomplishments go, but for everyday use, a DxO score is about as useful as your GPA once you’ve entered the working world: maybe something to brag about, but no one else is going to care about anything except the results you produce.

Basics

Luckily for Google, the results from its Pixel 2 cameras (both front and back) are terrific, and among the industry’s best. Are they the best? That’s going to depend a bit on what you’re after, but you can definitely rest assured that you’ll never regret buying either the Pixel 2 or the Pixel 2 XL because of the quality of the photos they take.

In fact, the cameras are a highlight here and a great reason to consider the Pixel 2 as your next smartphone. They’re fast, responsive and highly detailed, with great color reproduction, and they also strike a good balance on the software side, offering a handful of great features without feeling overwhelming in terms of options and settings.

Most importantly, the Pixel 2 takes stunning photos basically whenever you pull it out of your pocket, double-tap the power button to quickly launch the camera, point and shoot. It’s hard to take a bad picture, or at least one that’s out of focus or poorly balanced in terms of exposure and lighting, and that’s the key to making a camera designed for everyone, as opposed to something honed for a specialist craft.

One of the Pixel 2 camera’s greatest strengths is its ability to exercise restraint despite doing a lot on the software side to clean up things like noise in low light images, and combining different exposures to generate HDR images that have balanced lighting across the scene. The images feel more true to the memory of the actual events, and true to what you see with your eye, than other options from top Android device makers that are intent on boosting saturation and contrast for artificial pop.

Portraits

Another big win for Google is Portrait mode. In some ways, it’s far less flexible than either the iPhone 8 Plus’s Portrait mode or the Galaxy Note 8’s Live Focus, since it uses only one lens to produce its depth effect. But in one key way, it’s more generally useful: it’s far less fiddly to use.

Basically, Pixel 2’s portrait feature works just by taking a picture as you normally would with the regular camera, after enabling Portrait from the capture mode menu. The software does its best to produce an image with a sense of depth of field after the fact, and it turns out pretty well – provided your subject is a person or a real animal, like my dog in the examples below.

[Gallery: four Portrait mode sample shots]

The Note 8’s after-the-fact adjustable blur is great, and the iPhone 8 Plus’s Portrait Lighting produces some terrific results when used properly, but Google’s solution is arguably the best one for the largest number of people, since it requires very little patience and produces pleasing results much of the time.

Video

Another area where the Pixel 2 builds on the success of its predecessor is video. The first time around, Google did some amazing things with digital stabilization to produce smooth footage, even when you’re filming while in motion. With the added optical image stabilization, you can pan, tilt and even walk and shoot without fear of producing something that will unsettle viewers with motion sensitivity.

[embedded content]

In side-by-side testing, the Pixel 2 XL’s video stabilization (embedded above) came out the winner against the iPhone 8 Plus (embedded below) and the Samsung Galaxy Note 8. It’s smooth enough that my girlfriend said it had a ‘filmic’ quality, which is high praise. You can see a bit of up-and-down motion in the example provided, but it’s actually not much worse than what you get with expensive hardware stabilizers like DJI’s Osmo or gimbal rigs designed for use with DSLRs.

[embedded content]

Google also offers an additional “stabilize” option in the video edit settings, which minimizes the up-and-down effect even more. All of this adds up to the ability to shoot clips on the go, with no additional hardware, that are suitable for amateur filmmaking at the very least, and certainly for editorial video, creative web content and reporting. Plus, those family videos are going to look positively ‘auteur.’
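For a sense of what the digital side of that stabilization involves, here’s a bare-bones Python/OpenCV sketch: estimate how the camera shifted between frames, smooth that trajectory, and warp each frame by the difference. The Pixel’s real pipeline also folds in gyroscope data, OIS and rolling-shutter correction, so treat this purely as an illustration.

```python
# Simplified digital stabilization: per-frame translation via phase correlation,
# smoothed with a moving average, then each frame is warped by the correction.
import cv2
import numpy as np

def stabilize(frames, smooth_radius=15):
    gray = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    shifts = [np.array([0.0, 0.0])]
    for prev, curr in zip(gray, gray[1:]):
        (dx, dy), _ = cv2.phaseCorrelate(np.float32(prev), np.float32(curr))
        shifts.append(np.array([dx, dy]))
    trajectory = np.cumsum(shifts, axis=0)
    # Smooth the camera path; the correction is the gap between smooth and actual.
    kernel = np.ones(2 * smooth_radius + 1) / (2 * smooth_radius + 1)
    smoothed = np.stack([np.convolve(trajectory[:, i], kernel, mode="same") for i in range(2)], axis=1)
    correction = smoothed - trajectory
    out = []
    for frame, (dx, dy) in zip(frames, correction):
        m = np.float32([[1, 0, dx], [0, 1, dy]])
        h, w = frame.shape[:2]
        out.append(cv2.warpAffine(frame, m, (w, h)))
    return out
```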

Apple responds to Senator Franken’s Face ID privacy concerns


Apple has now responded to a letter from Senator Franken last month in which he asked the company to provide more information about the incoming Face ID authentication technology which is baked into its top-of-the-range iPhone X, due to go on sale early next month.

As we’ve previously reported, Face ID raises a range of security and privacy concerns because it encourages smartphone consumers to use a facial biometric for authenticating their identity — and specifically a sophisticated full three dimensional model of their face.

And while the tech is limited to one flagship iPhone for now, with other new iPhones retaining the physical home button plus fingerprint Touch ID biometric combo that Apple launched in 2013, that’s likely to change in future.

After all, Touch ID arrived on a single flagship iPhone before migrating onto additional Apple hardware, including the iPad and Mac. So Face ID will surely also spread to other Apple devices in the coming years.

That means if you’re an iOS user it may be difficult to avoid the tech being baked into your devices. So the Senator is right to be asking questions on behalf of consumers. Even if most of what he’s asking has already been publicly addressed by Apple.

Last month Franken flagged what he dubbed “substantial questions” about how “Face ID will impact iPhone users’ privacy and security, and whether the technology will perform equally well on different groups of people”, asking Apple for “clarity to the millions of Americans who use your products” and how it had weighed privacy and security issues pertaining to the tech itself; and for additional steps taken to protect users.

Here’s the full list of 10 questions the Senator put to the company:

1. Apple has stated that all faceprint data will be stored locally on an individual’s device as opposed to being sent to the cloud.

a. Is it currently possible – either remotely or through physical access to the device – for either Apple or a third party to extract and obtain usable faceprint data from the iPhone X?

b. Is there any foreseeable reason why Apple would decide to begin storing such data remotely?

2. Apple has stated that it used more than one billion images in developing the Face ID algorithm. Where did these one billion face images come from?

3. What steps did Apple take to ensure its system was trained on a diverse set of faces, in terms of race, gender, and age? How is Apple protecting against racial, gender, or age bias in Face ID?

4. In the unveiling of the iPhone X, Apple made numerous assurances about the accuracy and sophistication of Face ID. Please describe again all the steps that Apple has taken to ensure that Face ID can distinguish an individual’s face from a photograph or mask, for example.

5. Apple has stated that it has no plans to allow any third party applications access to the Face ID system or its faceprint data. Can Apple assure its users that it will never share faceprint data, along with the tools or other information necessary to extract the data, with any commercial third party?

6. Can Apple confirm that it currently has no plans to use faceprint data for any purpose other than the operation of Face ID?

7. Should Apple eventually determine that there would be reason to either begin storing faceprint data remotely or use the data for a purpose other than the operation of Face ID, what steps will it take to ensure users are meaningfully informed and in control of their data?

8. In order for Face ID to function and unlock the device, is the facial recognition system “always on,” meaning does Face ID perpetually search for a face to recognize? If so:

a. Will Apple retain, even if only locally, the raw photos of faces that are used to unlock (or attempt to unlock) the device?

b. Will Apple retain, even if only locally, the faceprints of individuals other than the owner of the device?

9. What safeguards has Apple implemented to prevent the unlocking of the iPhone X when an individual other than the owner of the device holds it up to the owner’s face?

10. How will Apple respond to law enforcement requests to access Apple’s faceprint data or the Face ID system itself?

In its response letter, Apple first points the Senator to existing public information, noting it has published a Face ID security white paper and a Knowledge Base article to “explain how we protect our customers’ privacy and keep their data secure.” It adds that this “detailed information” provides answers to “all of the questions you raise.”

But it also goes on to summarize how Face ID facial biometrics are stored, writing: “Face ID data, including mathematical representations of your face, is encrypted and only available to the Secure Enclave. This data never leaves the device. It is not sent to Apple, nor is it included in device backups. Face images captured during normal unlock operations aren’t saved, but are instead immediately discarded once the mathematical representation is calculated for comparison to the enrolled Face ID data.”

It further specifies in the letter that: “Face ID confirms attention by detecting the direction of your gaze, then uses neural networks for matching and anti-spoofing so you can unlock your phone with a glance.”

And it reiterates its prior claim that the chance of a random person being able to unlock your phone because their face fools Face ID is approximately 1 in 1,000,000 (versus 1 in 50,000 for the Touch ID tech). After five unsuccessful match attempts, a passcode is required to unlock the device, it further notes.
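Some quick back-of-the-envelope math on those published rates, assuming each attempt is independent (a simplification): the cumulative chance that a random stranger is accepted at least once before the five-attempt passcode lockout kicks in.

```python
# False-accept rates as published by Apple; five attempts before passcode lockout.
face_id_far = 1 / 1_000_000
touch_id_far = 1 / 50_000
p_face = 1 - (1 - face_id_far) ** 5
p_touch = 1 - (1 - touch_id_far) ** 5
print(f"Face ID: ~{p_face:.6%}, Touch ID: ~{p_touch:.4%}")
# Face ID: ~0.000500%, Touch ID: ~0.0100%
```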

“Third-party apps can use system provided APIs to ask the user to authenticate using Face ID or a passcode, and apps that support Touch ID automatically support Face ID without any changes. When using Face ID, the app is notified only as to whether the authentication was successful; it cannot access Face ID or the data associated with the enrolled face,” it continues.

On questions about the accessibility of Face ID technology, Apple writes: “The accessibility of the product to people of diverse races and ethnicities was very important to us. Face ID uses facial matching neural networks that we developed using over a billion images, including IR and depth images collected in studies conducted with the participants’ informed consent.”

The company had already made the “billion images” claim during its Face ID presentation last month, although it’s worth noting that it’s not saying — and has never said — it trained the neural networks on images of a billion different people.

Indeed, Apple goes on to tell the Senator that it relied on a “representative group of people” — though it does not confirm exactly how many individuals, writing only that: “We worked with participants from around the world to include a representative group of people accounting for gender, age, ethnicity and other factors. We augmented the studies as needed to provide a high degree of accuracy for a diverse range of users.”

There’s obviously an element of commercial sensitivity at this point, in terms of Apple cloaking its development methods from competitors. So you can understand why it’s not disclosing more exact figures. But of course Face ID’s robustness in the face of diversity remains to be proven (or disproven) when iPhone X devices are out in the wild.

Apple also specifies that it has trained a neural network to “spot and resist spoofing,” to defend against attempts to unlock the device with photos or masks, before concluding the letter with an offer to brief the Senator further if he has more questions.

Notably, Apple hasn’t engaged with Senator Franken’s question about responding to law enforcement requests, although given that enrolled Face ID data is stored locally on a user’s device in the Secure Enclave as a mathematical model, the technical architecture of Face ID has been structured to ensure Apple never takes possession of the data, and it couldn’t therefore hand over something it does not hold.

The fact Apple’s letter does not literally spell that out is likely down to the issue of law enforcement and data access being rather politically charged.

In his response to the letter, Senator Franken appears satisfied with the initial engagement, though he also says he intends to take the company up on its offer to be briefed in more detail.

“I appreciate Apple’s willingness to engage with my office on these issues, and I’m glad to see the steps that the company has taken to address consumer privacy and security concerns. I plan to follow up with Apple to find out more about how it plans to protect the data of customers who decide to use the latest generation of iPhone’s facial recognition technology,” he writes.

“As the top Democrat on the Privacy Subcommittee, I strongly believe that all Americans have a fundamental right to privacy,” he adds. “All the time, we learn about and actually experience new technologies and innovations that, just a few years back, were difficult to even imagine. While these developments are often great for families, businesses, and our economy, they also raise important questions about how we protect what I believe are among the most pressing issues facing consumers: privacy and security.”

How to reset your Apple ID password and gain control of your account

Everyone with an iPhone, iPad, iPod, or Apple Watch has an Apple ID. It’s essential to getting the most out of Apple’s services, including the iTunes Store, the App Store, Apple Music, and iCloud. An Apple ID isn’t the only account with credentials you need to be keeping track of these days, however, and as such, there’s always the possibility that you may forget certain login information — like your all-important password.

Thankfully, there’s no need to panic if you do forget your Apple ID password, as it happens to all of us from time to time. When it happens to you, there are steps you can take to reset your Apple ID password, all of which are pretty straightforward. There’s no way for Apple to simply tell you what your current password is, though, not even through email. Instead, every method to deal with a forgotten Apple ID password involves resetting it completely. Here’s how.

Once you get your Apple ID password reset, check out the seven things you can do to make your iPhone safer.

Reset your password using the Apple ID account page


Step 1: To start, go to appleid.apple.com and click Forgot Apple ID or password in the center of the page.

Step 2: You’ll be taken to a new page where you’ll have to enter your Apple ID or the email address associated with the account. Click Continue, then select I need to reset my password.

Step 3: You’ll now be able to choose how you want to reset your password, whether it be through email or by answering a set of security questions. Which option you choose is really based on your personal preference.

Step 4: Choosing the email method prompts Apple to send instructions to the primary email address you used to begin this process, or a rescue email if you decided to make one. You’ll know the email has been sent when you see the “Email has been sent” page with a large, green check mark. If you can’t find the email, be sure to check your Spam, Junk, and Trash folders, or repeat the steps above to have the email sent again. Going with the security questions requires you to confirm your birthday and answer the aforementioned questions before you’re able to create a new password.

If you use two-factor authentication

If you set up and enabled two-factor authentication — which is different from two-step verification — resetting your password will be even easier, as you’ll be able to reset your password directly from your trusted iPhone, iPad, iPod Touch, or from the Apple ID account page. If you’re unsure if you have any trusted devices, don’t be; when you set up two-factor authentication, you created trusted devices. All iOS devices will also need to have a passcode enabled.

Using your iOS device

Step 1: Go to Settings > [your name] > Password & Security.

Step 2: Tap Change Password.

Step 3: You will be asked to enter your passcode, and then you can enter your new password.

Using the Apple ID account page

Step 1: Go to iforgot.apple.com and enter the trusted phone number you submitted when you set up two-factor authentication.

Step 2: Choose Continue to send a notification to your trusted iPhone, iPad, or iPod.

Step 3: When you receive the notification on your iOS device, tap Allow.

Step 4: Follow the provided steps, enter your passcode, and reset your password.

Facebook tests a resume “work histories” feature to boost recruitment efforts


As LinkedIn adds video and other features to look a little more like Facebook, Facebook continues to take on LinkedIn in the world of social recruitment services. In the latest development, Facebook is testing a feature that lets users create resumes (which Facebook calls “work histories”) and share them privately on the site as part of their job hunt.

First made public by The Next Web’s Matt Navarra on the back of a tip he received from a computer science student called Jane Manchun Wong, the test was confirmed to us by a spokesperson at the company as part of its efforts to grow usage of its recruitment advertising business, which was launched in February this year.

“At Facebook, we’re always building and testing new products and services, ” he said. “We’re currently testing a work histories feature to continue to help people find and businesses hire for jobs on Facebook.”

We’ve been looking around, and so far the only evidence of the test appears to be coming from an Android mobile device.

Interestingly, Facebook is testing this resume feature specifically to reduce the friction on mobile between finding a job and applying for it, making it easier and faster to apply with a ready-made career and education history. That is a use case LinkedIn also identified a while back, using its basic profile pages as resume proxies in its own mobile job application flow. Facebook, of course, has a wider purpose than career advancement, so its basic profile pages don’t quite fit that need.

On its surface, Facebook’s resume feature appears to be an expansion of the work and education details that you can already provide around your Facebook profile, including the period of time you’ve worked in a job or studied somewhere, and your contact information:

[Screenshots: the main resume page; contact details; editing your previous job experience; editing your education]

In the case of the resume, though, the key difference is that the information doesn’t post directly to your profile. Today, users only have two options for handling that kind of information: either making it completely public, or just visible to your friends (but not entirely visible unless you choose to share it). The resume will have a more targeted use: you can show it off only when you choose to, as part of a job application.

Facebook took its initial step into the recruitment market in February this year, when it launched its first job ads as a basic page that let you look for jobs the way you might look for goods for sale on Facebook’s Marketplace: by location and keywords. In the months since, it’s worked on several tests and expansions of the service to figure out how to drive more traffic to this new part of its site.

These have included plans to connect users in mentorships to help create a wider culture of career advancement on the platform (something LinkedIn has also been building), and ramping up the volume of job ads on Facebook by way of a partnership with ZipRecruiter, an aggregator that lets businesses post to Facebook’s job site along with dozens of other online job boards.

One notable thing to me about Facebook’s recruitment efforts is that while they have the potential to take on LinkedIn in the world of white-collar jobs, Facebook is taking a very mass-market approach: in my area, I’ve seen listings for lawyers and designers, but also bus drivers, housekeepers and other service workers.

In a sense, this makes it not unlike the approach Facebook has taken with Workplace as a competitor to Slack: the latter has positioned itself as a communications tool for the professional class of workers, while the former is trying to target them, but also everyone else. (And now those businesses can also use the platform to recruit more.)

Another notable data point: just as Facebook’s collection of profile interests helps the company build out its social graph and data points for advertising and more, so could this resume builder help the company develop a better idea of where to target its job ads, as well as other kinds of advertising aimed at particular demographics.

It remains to be seen how far Facebook will be willing to go to grow its footprint in the very crowded area of online recruitment, which already has a number of huge players including Randstad (which owns Monster.com) and Recruit (which owns Indeed.com), among many more.

In Facebook’s favor, though, recruitment definitely has a signal-to-noise problem, and social networks have had a much higher hit rate when it comes to getting qualified leads for open positions.

“People are interacting on a wide variety of subjects, not just jobs, so it feels very organic,” said Ian Siegel, the CEO of ZipRecruiter. He told TechCrunch that social platforms tend to perform well in recruitment because employees can tap their networks and so inbound interest tends to be less random. “They deliver good quality candidates,” Siegel said. “People who come through the network of current employees can be vouched for.”


The best way to get cheap data while traveling internationally


My favorite travel gadget isn’t my camera or noise-canceling headphones or even my iPhone. It’s a SIM card from Google.

I’m talking, of course, about Project Fi, Google’s wireless service that provides cheap voice and data plans to Nexus and Pixel owners. It’s also the perfect way to get data abroad without breaking the bank.

That’s because the service offers flat-rate data no matter how many countries you travel to (Fi currently has service in 135 countries), and makes it super simple to pause your service when you get home so you only ever have to pay when you need it.

A basic Project Fi plan starts at $20 a month for unlimited texting and local calling. Data is a flat rate of $10/GB and non-local calls are $.20 a minute. You can decide upfront how much data you want to be automatically included in your plan, but you only ever have to pay for what you use — Fi will credit back anything you don’t use.
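As a quick worked example of that billing model in Python (numbers taken from the plan described above; taxes and fees ignored), here’s roughly what a month with some travel data would cost:

```python
# Illustrative billing math only; actual statements include taxes and fees.
BASE_FEE = 20.00          # unlimited texting and local calling
DATA_RATE_PER_GB = 10.00  # flat rate, at home or abroad

def monthly_bill(prepaid_gb, used_gb):
    # Billed up front for the base fee plus prepaid data; unused data is
    # credited back, and extra usage is billed at the same flat rate.
    upfront = BASE_FEE + DATA_RATE_PER_GB * prepaid_gb
    adjustment = DATA_RATE_PER_GB * (used_gb - prepaid_gb)  # credit if negative
    return upfront + adjustment

# Prepay for 2 GB but use only 1.3 GB on a trip: $20 base + $13 of data.
print(monthly_bill(prepaid_gb=2, used_gb=1.3))  # 33.0
```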

It does require a bit of an upfront investment, since Google limits Project Fi to its Nexus and Pixel phones. And, yes, that means you’ll need to use Android (though there are workarounds for making Project Fi work with iPhones, assuming you have an unlocked phone and can get access to a Nexus or Pixel to activate the SIM).


But you don’t need the latest Pixel 2, which starts at $649, to get the most from Project Fi. I’ve used the service with the Nexus 6 and Nexus 5x — both of which can be found online for well under $300. 

And if you don’t like the idea of spending a couple hundred bucks on an older phone, there’s the newly launched $399 Moto X4, which is the first handset outside the Nexus and Pixel lines to be Fi compatible.

That may still sound like a pricey upfront investment, but it could be well worth it even if you only take a couple trips a year. Seriously. Between time spent and cost, the savings quickly add up.

In the last two years, I’ve used Project Fi on trips to more than half a dozen countries, including Germany, Greece, Ukraine, and Israel. I’ve loaned it to family members for their own trips abroad and each time I’ve been impressed with the quality of the coverage and service. Fi did fail me once — in Aruba — though I suspect this was due to an issue with whichever local telecoms they partner with, not Fi itself. 

That trip aside though, Fi has enabled me to effortlessly keep up my Snapchat and Instagram habits without having to constantly search for Wi-Fi or worry about racking up a huge bill. 

Sure, $10/GB might be more expensive than what you can find from some local carriers on the ground, but who wants to waste precious vacation time shopping for a data plan that may or may not end up saving you any money?

And that’s really the point — Fi makes it so you never have to worry about your data plan ever again.
