All posts in “Gadgets”

Nintendo introduces a Switch model refresh with better battery life

Nintendo already announced an entirely new Switch console this month, the Switch Lite, and now it’s bumping some of the specs on the existing Switch with a slightly updated version, spotted by The Verge. This update improves the hardware right where it counts when it comes to Switch portable playing power.

The new model will provide between 4.5 and 9 hours of battery life, depending on use, which is a big bump from the 2.5 to 6.5 hour rating on the original hardware that’s been offered to date. This is likely an improvement derived from a change in the processor used in the console, as well as more power-efficient memory, both of which were detailed in an FCC filing from last week.

Nintendo’s official Switch comparison page lists the models with improved battery life as model number ‘HAC-001(-01)’, with the bracketed addition distinguishing it from the original. You can check the version based on the serial number: XKW precedes the newer hardware, while XAW starts off serials for the older, less power-efficient version. No word on a specific street date, but if you’re in the market it’s worth taking a ‘wait and see’ approach to ensure this battery-boosted hardware is the one you get.
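Nintendo doesn’t offer a checker for this, but the prefix rule is simple enough to sketch (the function name and behavior here are illustrative, not an official API):

```python
def is_new_model(serial):
    """Return True if a Switch serial number indicates the revised
    HAC-001(-01) hardware (serials starting with 'XKW'), False for the
    original model ('XAW'), and None if the prefix is unrecognized.
    Illustrative helper only, based on the reported serial prefixes."""
    s = serial.strip().upper()
    if s.startswith("XKW"):
        return True
    if s.startswith("XAW"):
        return False
    return None
```

If a retailer’s listing shows the serial, this is all it takes to tell the two revisions apart before buying.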

In all other respects the two Switch models appear to be similar, if not identical, so it’s probably not enough of a change to get anyone considering an upgrade, unless the battery life on your current version really seems to fall about two hours short of your ideal play session length on average.

Nexar’s Live Map is like Street View with pictures from 5 minutes ago

We all rely on maps to get where we’re going or investigate a neighborhood for potential brunch places, but the data we’re looking at is often old, vague, or both. Nexar, maker of dashcam apps and cameras, aims to put fresh and specific data on your map with images from the street taken only minutes before.

If you’re familiar with dash cams, and you’re familiar with Google’s Street View, then you can probably already picture what Live Map essentially is. It’s not quite as easy to picture how it works or why it’s useful.

Nexar sells dash cams and offers an app that turns your phone into one temporarily, and the business has been doing well, with thousands of active users on the streets of major cities at any given time. Each node of this network of gadgets shares information with the other nodes — warning of traffic snarls, potholes, construction, and so on.

The team saw the community they’d enabled trading videos and sharing data derived by automatic analysis of their imagery, and, according to co-founder and CTO Bruno Fernandez-Ruiz, asked themselves: Why shouldn’t this data be available to the public as well?

Actually there are a few reasons — privacy chief among them. Google has shown that, properly handled, this kind of imagery can be useful and only minimally invasive. But knowing where someone or some car was a year or two ago is one thing; knowing where they were five minutes ago is another entirely.

Fortunately, from what I’ve heard, this issue was front of mind for the team from the start. But it helps to see what the product looks like in action before addressing that.


Zooming in on a hexagonal map section, which the company has dubbed “nexagons,” polls the service for everything it knows about that area. And the nature of the data makes for extremely granular information. Where something like Google Maps or Waze may say that there’s an accident at this intersection, or construction causing traffic, Nexar’s map will show the locations of the orange cones to within a few feet, or how far into the lanes that fender-bender protrudes.
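Nexar hasn’t said how its “nexagons” are computed, but binning points into a hexagonal grid is a standard technique. A minimal sketch using axial coordinates and cube rounding (the cell geometry and size are assumptions, not Nexar’s actual tiling):

```python
import math

def to_hex_cell(x, y, size):
    """Map a planar point to the axial coordinates (q, r) of the
    pointy-top hexagonal cell of the given size that contains it.
    A planar approximation; a production system would work on
    projected map coordinates."""
    # Fractional axial coordinates of the point
    q = (math.sqrt(3) / 3 * x - y / 3) / size
    r = (2 / 3 * y) / size
    # Cube-round to the nearest valid hex center (x + y + z = 0)
    xc, zc = q, r
    yc = -xc - zc
    rx, ry, rz = round(xc), round(yc), round(zc)
    dx, dy, dz = abs(rx - xc), abs(ry - yc), abs(rz - zc)
    if dx > dy and dx > dz:
        rx = -ry - rz
    elif dy > dz:
        ry = -rx - rz
    else:
        rz = -rx - ry
    return (rx, rz)
```

Every camera frame geotagged inside the same cell then contributes to that cell’s picture of cones, closures and traffic.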

You can also select the time of day, letting you rewind a few minutes or a few days — what was it like during that parade? Or after the game? Are there a lot of people there late at night? And so on.

Right now it’s limited to a web interface, and to New York City — the company has enough data to launch in several other areas in the U.S. but wants to do a slower roll-out to identify issues and opportunities. An API is on the way as well. (Europe, unfortunately, may be waiting a while, though the company says it’s GDPR-compliant.)

The service uses computer vision algorithms to identify a number of features, including signs (permanent and temporary), obstructions, even the status of traffic lights. This all goes into the database, which gets updated any time a car with a Nexar node goes by. Naturally it’s not in 360 and high definition — these are forward-facing cameras with decent but not impressive resolution. It’s for telling what’s in the road, not for zooming in to spot a street address.


Of course, construction signs and traffic jams aren’t the only things on the road. As mentioned before it’s a serious question of privacy to have constantly updating, public-facing imagery of every major street of a major city. Setting aside the greater argument of the right to privacy in public places and attendant philosophical problems, it’s simply the ethical thing to do to minimize how much you expose people who don’t know they’re being photographed.

To that end Nexar’s systems carefully detect and blur out faces before any images are exposed to public view. License plates are likewise obscured so that neither cars nor people can be easily tracked from image to image. Of course one may say that here is a small red car that was on 4th, and is on 5th a minute later — probably the same. But systematic surveillance rather than incidental is far easier with an identifier like a license plate.
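Nexar hasn’t detailed its blurring pipeline. As a rough illustration of the redaction step alone, here is a sketch that pixelates detector-supplied regions of an image (the upstream face/plate detector, and pixelation as the obscuring method, are assumptions):

```python
import numpy as np

def redact_regions(image, boxes, block=8):
    """Pixelate the given rectangles of an H x W x 3 uint8 image.
    `boxes` holds (x0, y0, x1, y1) rectangles; in practice these would
    come from a face/license-plate detector, which is assumed here."""
    out = image.copy()
    for x0, y0, x1, y1 in boxes:
        region = out[y0:y1, x0:x1]
        h, w = region.shape[:2]
        for by in range(0, h, block):
            for bx in range(0, w, block):
                patch = region[by:by + block, bx:bx + block]
                # Collapse the patch to its mean color, destroying detail
                patch[:] = patch.mean(axis=(0, 1))
    return out
```

The key property is that only the flagged regions are touched; everything else in the frame stays pixel-identical.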

In addition to protecting bystanders, Nexar has to think of the fact that an image from a car by definition places that car in a location at a given time, allowing them to be tracked. And while the community essentially opts into this kind of location and data sharing when they sign up for an account, it would be awkward if the public website let a stranger track a user all the way to their home or watch their movements all day.

“The frames are carefully randomized to begin with so people can’t be singled out,” said Fernandez-Ruiz. “We eliminate any frames near your house and your destination.” As for the blurring, he said, “We have a pretty robust model, on par with anything you can see in the industry. We probably are something north of 97-98 percent accurate for private data.”
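The endpoint-stripping Fernandez-Ruiz describes could look something like the sketch below, which drops frames captured within a radius of a trip’s start and end points (the 300-meter radius and the data layout are assumptions; Nexar hasn’t published its actual rules):

```python
import math

R_EARTH_M = 6_371_000  # mean Earth radius, meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R_EARTH_M * math.asin(math.sqrt(a))

def publishable_frames(frames, trip_start, trip_end, radius_m=300):
    """Drop any frame captured within radius_m of the trip's start or
    end point, so a public viewer can't trace a driver to their door.
    `frames` is a list of (lat, lon, frame_id); trip_start/trip_end
    are (lat, lon) pairs."""
    keep = []
    for lat, lon, frame_id in frames:
        if haversine_m(lat, lon, *trip_start) < radius_m:
            continue
        if haversine_m(lat, lon, *trip_end) < radius_m:
            continue
        keep.append(frame_id)
    return keep
```

Combined with randomizing which frames are sampled at all, this makes reconstructing any one driver’s full route from the public map much harder.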

So what would you do with this kind of service? There is, of course, something fundamentally compelling about being able to browse your city in something like real time.

“On Google, there’s a red line. We show you an actual frame – a car blocking the right lane right there. It gives you a human connection,” said Fernandez-Ruiz. “There’s an element of curiosity about what the world looks like, maybe not something you do every day, but maybe once a week, or when something happens.”

No doubt many of us are guilty of watching dash-cam footage or even Street View pictures of various events, pranks, and other occurrences. But basic curiosity doesn’t pay the bills. Fortunately there are more compelling use cases.

“One that’s interesting is construction zones. You can see individual elements like cones and barriers – you can see where exactly they are, when they’re started etc. We want to work with municipal authorities, departments of transportation, etc. on this — it gives them a lot of information on what their contractors are doing on the road. That’s one use case that we know about and understand.”

In fact there are already some pilot programs in Nevada. And although it’s rather a prosaic application of a 24/7 surveillance apparatus, it seems likely to do some good.

But the government angle brings in an unsavory line of thinking — what if the police want to get unblurred dash cam footage of a crime that just happened, or one of many such situations where tech’s role has historically been a mixed blessing?

“We’ve given a lot of thought to this, and it concerns our investors greatly,” Fernandez-Ruiz admitted. “There are two things we’ve done. One is we’ve differentiated what data the user owns and what we have. The data they send is theirs – like Dropbox. What we get is these anonymized blurred images. Obviously we will comply with the law, but as far as ethical applications of big data and AI, we’ve said we’re not going to be a tool of an authoritarian government. So we’re putting processes in place — even if we get a subpoena, we can say: This is encrypted data, please ask the user.”

That’s some consolation, but it seems clear that tools like this one are more a question than an answer. It’s an experiment by a successful company and may morph into something ubiquitous and useful or a niche product used by professional drivers and municipal governments. But in tech, if you have the data, you use it. Because if you don’t, someone else will.

You can test out Nexar’s Live Map here.

Voyant Photonics raises $4.3M to fit lidar on the head of a pin

Lidar is a critical method by which robots and autonomous vehicles sense the world around them, but the lasers and sensors generally take up a considerable amount of space. Not so with Voyant Photonics, which has created a lidar system that you really could conceivably balance on the head of a pin.

Before getting into the science, it’s worth noting why this is important. Lidar is most often used as a way for a car to sense things at a medium distance — far away, radar can outperform it, and up close ultrasonics and other methods are more compact. But from a few feet to a couple hundred feet out, lidar is very useful.

Unfortunately even the most compact lidar solutions today are still roughly the size of a hand, and the ones ready for use in production vehicles are larger still. A very small lidar unit, however, could be hidden on every corner of a car, or even inside the cabin. It could provide rich positional data about everything in and around the car with little power and no need to disrupt the existing lines and design. (And that’s not getting into the many, many other industries that could use this.)

Lidar began with the idea of, essentially, a single laser being swept across a scene multiple times per second, its reflection carefully measured to track the distances of objects. But mechanically steered lasers are bulky, slow, and prone to failure, so newer companies are attempting other techniques like illuminating the whole scene at once (flash lidar) or steering the beam with complex electronic surfaces (metamaterials) instead.

One discipline that seems primed to join in the fun is silicon photonics, which is essentially the manipulation of light on a chip for various purposes — for instance, to replace electricity in logic gates to provide ultra-fast, low-heat processing. Voyant, however, has pioneered a technique to apply silicon photonics to lidar.

In the past, attempts in chip-based photonics to send out a coherent laser-like beam from a surface of lightguides (elements used to steer light around or emit it) have been limited by a low field of view and power because the light tends to interfere with itself at close quarters.

Voyant’s version of these “optical phased arrays” sidesteps that problem by carefully altering the phase of the light traveling through the chip. The result is a strong beam of non-visible light that can be played over a wide swathe of the environment at high speed with no moving parts at all — yet it emerges from a chip dwarfed by a fingertip.
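The steering principle behind such optical phased arrays is standard array physics: a constant phase increment between adjacent emitters tilts the outgoing wavefront. A minimal sketch of the textbook relation (the wavelength and emitter pitch in the example are illustrative figures, not Voyant’s actual design parameters):

```python
import math

def steering_angle_deg(phase_step_rad, pitch_m, wavelength_m):
    """Far-field steering angle of a 1-D phased array, given the phase
    increment between adjacent emitters:
        sin(theta) = dphi * lambda / (2 * pi * d)
    where d is the emitter pitch."""
    s = phase_step_rad * wavelength_m / (2 * math.pi * pitch_m)
    if abs(s) > 1:
        raise ValueError("phase step steers beyond +/-90 degrees")
    return math.degrees(math.asin(s))
```

Sweeping the phase step electronically sweeps the beam, which is why no moving parts are needed; the chip scans simply by reprogramming phases.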


“This is an enabling technology because it’s so small,” said Voyant co-founder Steven Miller. “We’re talking cubic centimeter volumes. There’s a lot of electronics that can’t accommodate a lidar the size of a softball — think about drones and things that are weight sensitive, or robotics, where it needs to be on the tip of its arm.”

Lest you think this is just a couple yahoos who think they’ve one-upped years of research, Miller and co-founder Chris Phare came out of the Lipson Nanophotonics Group at Columbia University.

“This lab basically invented silicon photonics,” said Phare. “We’re all deeply ingrained with the physics and devices-level stuff. So we were able to step back and look at lidar, and see what we needed to fix and make better to make this a reality.”

The advances they’ve made frankly lie outside my area of expertise, so I won’t attempt to characterize them too closely, except to say that they solve the interference issues and use a frequency-modulated continuous-wave technique, which lets the device measure velocity as well as distance (Blackmore does this as well). At any rate their unique approach to moving and emitting light from the chip lets them create a device that is not only compact, but combines transmitter and receiver in one piece, and has good performance — not just good for its size, they claim, but good.
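The FMCW trick is worth unpacking: sweeping the laser frequency up and then down produces two beat frequencies, and their sum encodes range while their difference encodes Doppler velocity. A sketch of the textbook recovery step (the parameter values in the test are illustrative, not Voyant’s specs):

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range_velocity(f_up_hz, f_down_hz, bandwidth_hz, chirp_s, wavelength_m):
    """Recover range and radial velocity from the up- and down-chirp
    beat frequencies of a triangular FMCW sweep:
        f_range   = (f_up + f_down) / 2  ->  R = c * T * f_range / (2 * B)
        f_doppler = (f_down - f_up) / 2  ->  v = f_doppler * wavelength / 2
    Positive velocity means an approaching target (an assumed sign
    convention; real systems define it per their mixer setup)."""
    f_range = (f_up_hz + f_down_hz) / 2
    f_doppler = (f_down_hz - f_up_hz) / 2
    rng = C * chirp_s * f_range / (2 * bandwidth_hz)
    vel = f_doppler * wavelength_m / 2
    return rng, vel
```

This is why FMCW lidar gets per-point velocity “for free,” something pulsed time-of-flight systems have to infer from successive frames.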

“It’s a misconception that small lidars need to be low-performance,” explained Phare. “The silicon photonic architecture we use lets us build a very sensitive receiver on-chip that would be difficult to assemble in traditional optics. So we’re able to fit a high-performance lidar into that tiny package without any additional or exotic components. We think we can achieve specs comparable to lidars out there, but just make them that much smaller.”


The chip-based lidar in its test bed.

It can even be manufactured in a normal fashion, like other photonics chips. That’s a huge plus when you’re trying to move from research to product development.

With this first round of funding, the team plans to grow and get this tech out of the lab and into the hands of engineers and developers. The exact specs, dimensions, power requirements and so on vary widely depending on the application and industry, so Voyant will make those decisions based on feedback from people in those fields.

In addition to automotive (“It’s such a big application that no one can make lidar and not look at that space,” Miller said), the team is in talks with numerous potential partners.

Although being at this stage while others are raising 9-figure rounds might seem daunting, Voyant has the advantage that it has created something totally different from what’s out there, a product that can safely exist alongside popular big lidars from companies like Innoviz and Luminar.

“We’re definitely talking to big players in a lot of these places, drones and robotics, perhaps augmented reality. We’re trying to suss out exactly where this is most interesting to people,” said Phare. “We see the evolution here being something like bringing room size computers down to chips.”

The $4.3 million raised by Voyant comes from Contour Venture Partners, LDV Capital, and DARPA, which naturally would be interested in something like this.

Petcube’s Bites 2 and Play 2 amuse pets and humans alike with Alexa built-in

Petcube’s original Bites smart treat dispenser and Play pet camera with a built-in laser pointer were great for pet parents who couldn’t always be around to hang out with their furry charges, but the new Bites 2 and Play 2 come with one big new upgrade that makes them far more versatile than the originals: They both double as Alexa-powered smart speaker devices.

Both the Bites 2 and Play 2 can hear and respond to Alexa requests, with a four-microphone array that in my limited testing actually outperforms the Alexa mics built into my Sonos One and Sonos Beam speakers, which is pretty impressive for devices whose main features are serving up treats and keeping an eye on your pets. That’s on top of the Bites 2 being able to remotely dispense treats for your pet, and the Play 2 providing playtime away from home with a built-in laser pointer you can direct from your phone.

The Bites 2 and Play 2 also feature other improvements, including new wider-angle lenses that offer full 180-degree views of your home, making it more likely you’ll spot your pets wandering around, and better Wi-Fi connectivity with additional 5GHz networking support, plus night vision and full HD video. Currently, the field of view is limited to 160 degrees, with an update to follow that will unlock the full 180, but for most users the 160-degree FOV is going to show you an entire room and then some.

With the Bites 2, you can also initiate video calls and chat with your pet, though my dog Chelsea is basically just confused by this. It is handy if I need to ask my partner whether there’s anything else I’m forgetting to pick up from the store, however. And the treat-flinging feature definitely does appeal to Chelsea, especially now that it’s Alexa-integrated, so I can easily issue a voice command to give her a well-earned reward.

This has actually proven more than just fun – Chelsea suffers from a little bit of separation anxiety, so when we leave our condo she usually spends a few quick minutes complaining audibly with some rather loud barks. But since getting the Petcube Bites 2 to test, I’ve been reinforcing good behavior by reminding her to keep quiet, waiting outside the door and then flinging her a treat or two for her troubles. It’s pretty much done away with the bye-bye barking in just a short time.

The Play 2 doesn’t fling treats, but it does have a built-in laser pointer (which the company says is totally safe for your pet’s eyes). Chelsea straight up does not understand or even really acknowledge the laser, so that’s a bit of a miss, but with a friend’s cat this proved an absolute show-stopping feature. I’ve also known dogs previously who loved this, so your mileage may vary, but if you’re unsure it’s probably worth picking up a dollar store laser pointer keychain first to ensure it’s their jam.

The $249 Bites 2 and $199 Play 2 offer a ton of value in just the image and build quality upgrades over their original incarnations, and their basic features are probably plenty for doting pet parents. But the addition of Alexa makes them both much more appealing in my opinion, since it essentially bundles an Echo in each device at no extra cost.

Hero Labs raises £2.5M for its ultrasonic device to monitor a property’s water use and prevent leaks

Hero Labs, a London-based startup that is developing “smart” technology to help prevent water leaks in U.K. properties, has raised £2.5 million in seed funding. The round is led by Earthworm Group, an environmental fund manager, with further support via a £300,000 EU innovation grant and a number of unnamed private investors.

The new capital will be used by Hero Labs to accelerate development of its first product: a smart device dubbed “Sonic” that uses ultrasonic technology to monitor water use within a property, including the early detection of water leaks.

Founded in 2018 by Krystian Zajac after he exited Neos, a smart home insurer that was acquired by Aviva, Hero Labs was born out of the realisation that a lot of smart home technology either wasn’t very smart or didn’t solve mass problems (Zajac had also previously run a smart home company focusing on ultra high net-worth individuals that delivered bespoke designs for things like motorised swimming pool floors or home cinemas doubling up as panic rooms).

Coupled with this, the Hero Labs founder learned that water wastage was a very costly problem, both financially and environmentally, with water leaks being the number one culprit for property damage in the U.K., ahead of fires, gas explosions and break-ins combined. Water leaks reportedly cost the U.K. insurance industry £1 billion per year.

“My vision for the company is to solve real-life problems with truly smart technology,” Zajac tells me. “From working at Neos and alongside some of the world’s largest home insurers I understood the problems that impacted ordinary homeowners and their families on a day-to-day basis. Perhaps most surprisingly, I learnt that water leaks are far and away the biggest cause of damage to homes… I also wanted to do more for the environment in my next venture after learning that water leaks waste 3 billion litres of water a day in the U.K. alone”.


To that end, the Sonic device and service is described as a smart leak defence system. Aimed at anyone who wants to prevent water leaks in their property — including homeowners, landlords, facilities management, property developers and businesses — the ultrasonic device typically attaches to the piping below your sink and “listens” to the vibrations coming off the interconnected pipes.

Sonic then monitors the water flow using machine learning and its algorithms to identify usage and detect anomalies. This requires the technology to understand the difference between appliances, running taps and even flushing toilets so that it can build up a picture of normal water usage in the home and in turn identify if that pattern is broken. Crucially, if needed, Sonic can automatically shut off the water supply to prevent a water leak damaging the property or its possessions.
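Hero Labs hasn’t published how Sonic’s models work. As a deliberately simplified illustration of anomaly-triggered shutoff, here is a detector that flags flow persisting longer than the longest normal draw (the threshold, the per-minute sampling, and the class design are all assumptions, standing in for the real machine-learning pipeline):

```python
class LeakMonitor:
    """Toy leak detector: the real Sonic learns appliance signatures
    from ultrasonic vibrations; this sketch just flags continuous flow
    that outlasts the longest normal draw. Readings are liters/min,
    assumed to arrive once per minute."""

    def __init__(self, max_normal_run=30):
        self.max_normal_run = max_normal_run  # longest normal draw, minutes
        self.run = 0  # consecutive minutes with nonzero flow

    def update(self, flow_lpm):
        """Feed one flow reading; return True if water should be shut off."""
        if flow_lpm > 0:
            self.run += 1
        else:
            self.run = 0  # flow stopped, reset the counter
        return self.run > self.max_normal_run
```

A real deployment would learn `max_normal_run` per household (a long shower shouldn’t trip it) and distinguish appliances, but the shutoff logic follows the same shape: detect a pattern break, then close the valve.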

With a full launch planned for later this year, Sonic is targeting consumers as well as small businesses initially. “We are [also] in discussions with insurers who might subsidise the product or give it away completely for free to certain more affluent customers to minimise the risk of water escape,” adds Zajac.