All posts in “Technology”

Meet the startups in the latest Alchemist class

Alchemist is the Valley’s premier enterprise accelerator, and every season they feature a group of promising startups. They are also trying something new this year: they’re putting a reserve button next to each company, allowing angels to express their interest in investing immediately. It’s a clever addition to the demo day model.

You can watch the live stream at 3pm PST here.

Videoflow – Videoflow allows broadcasters to personalize live TV. The founding team is a duo of brothers — one from the creative side of TV as a designer, the other a computer scientist. Their SaaS product delivers personalized and targeted content on top of live video streams to viewers. Completely bootstrapped to date, they’ve landed NBC, ABC, and CBS Sports as paying customers and appear to be growing fast, having booked over $300k in revenue this year.

Redbird Health Tech – Redbird is a lab-in-a-box for convenient health monitoring in emerging market pharmacies, starting with Africa. Africa has the fastest growing middle class in the world — but also the fastest growing rate of diabetes (double North America’s). Redbird supplies local pharmacies with software and rapid tests to transform them into health monitoring points – for anything from blood sugar to malaria to cholesterol. The founding team includes a Princeton chemical engineer, two Peace Corps alums, and a pharmacist from Ghana’s top engineering school. They have 20 customers, and are growing 36% week over week.

Shuttle – Shuttle is getting a head start on the future of space travel by building a commercial spaceflight booking platform. Space tourism may be coming sooner than you think. Shuttle wants to democratize access to the heavens above. Founded by a Stanford Computer Science alum active in Stanford’s Student Space Society, Shuttle has partnerships with the leading spaceflight operators, including Virgin Galactic, Space Adventures, and Zero-G. Tickets to space today will set you back a cool $250K, but Shuttle believes that prices will drop exponentially as reusable rockets and landing pads become pervasive. They have $1.6m in reservations and growing.

Birdnest – Threading the needle between communal and private, Birdnest is the Goldilocks of office space for startups. Communal coworking spaces are accessible but have too many distractions. Traditional office spaces are private but inflexible on their terms. Birdnest brings the best of each without the drawbacks: finding, leasing, and operating a network of underutilized spaces inside of private offices. The cofounders, a duo of Duke and Kellogg MBA grads, are at $300K ARR with a fast-growing 50+ client waitlist.

Tag.bio – Tag.bio wants to make data science actionable in healthtech. The founding team comprises a former Ayasdi bioinformatician and a former Honda Racing engineer with a Stanford MBA. They’ve developed a next-generation data science platform that makes it easy and fast to build data apps for end users, or as they say, “WordPress for data science.” The result, they claim, is lightning-fast analysis apps that can be run by end users, dramatically accelerating insight discovery. They count the UCSF Medical Center and a “large Swiss pharma company” as early customers.

nCorium – They’ve built a new server architecture to handle the onslaught of AI to come with what they claim is the world’s first AI accelerator on memory to deliver 30x greater performance than the status quo. The quad founding team is intimidatingly technical — including a UCSD Professor, and former engineers from Qualcomm and Intel with 40 patents among them. They have $300K in pilots.

Spiio – Software eats landscaping with Spiio, which combines cloud-driven AI with physical sensors to monitor watering and landscaping for big companies. Their smart system knows when to water and when not to. This reduces water consumption by 50%, which means their system pays for itself in less than 30 days for big companies. They want to connect every plant to the internet, and look like they are off to a good start — $100K in orders from brand name Valley tech firms, and they are doubling monthly.

Element42 – Fraud is a major problem — for example, if you buy a Rolex on eBay, you run the risk of winding up with a counterfeit. Started by ex-VPs from Citibank, the founders are using risk models and technologies that banks use to help brands combat fraud and counterfeiting. Designed with token economics, they also incentivize customers to buy genuine products by serving exclusive content and promotions only to genuine product holders. Built on blockchain at the core, they claim to be the world’s first peer-to-peer authentication platform for physical assets. They have 45 customers across two industry verticals, $800K in ARR, and are a member of the World Economic Forum’s global initiatives against corruption.

My90 – Relations between the public and the police have rarely been more strained than they are today. My90 wants to solve that by collecting data about interactions between the police and the public—think traffic stops, service calls, etc.—and turning these into actionable intelligence via an online analytics dashboard. Users text My90 anonymously about their interactions, and My90’s dashboard analyzes the results using natural language processing. Customers include major city police departments like the San Jose Police Department and the world’s largest community policing program. They have booked $150K in pilots and are expanding aggressively across the US.

Nunetz – A Stanford Computer Science grad and UCSF Neurosurgeon have come together to try to build a single unifying interface to replace the deluge of monitors and data sources in today’s clinical health environment. The goal is to prepare a daily “battle map” for physicians, nurses, and other providers, with an initial focus on the Intensive Care Unit (ICU). They have closed 3 paid pilots with hospitals through grants.

When Labs – If you hate managing people, When Labs wants to unburden you. Using an AI-powered assistant that texts with employees to negotiate assignments for hourly work, When Labs is trying to free customers like Hilton from spending money on managers who would normally do this manually. As the system gets smarter, they claim employees will prefer interfacing with their AI bot more than a human. AI and HR is a crowded space, but this might be the team to separate from the pack: the founding team’s previous company had a 9-figure exit to IBM.

FirstCut – FirstCut helps businesses put video content out at scale. Video dominates social media — it creates 10x more comments than text — and is emerging as a necessity for B2B media. But for a B2B marketer, putting out video normally requires agencies that charge hefty fees. FirstCut wants to disrupt the agencies with software and marketplaces. They use software automation and an on-demand talent marketplace to offer a fixed-price product for video content. They are at $180k revenue, and most of it is moving to recurring subscriptions.

LynxCare – LynxCare claims that 90% of healthcare data goes untapped when doctors make critical decisions about your life. Further, they claim the average person’s life could be extended by 4 years if that data can be converted into insights. Their team of clinicians and data scientists aims to do just that — building a data platform that aggregates disparate data sets and drives insights for better clinical outcomes. And it looks like their platform has fans: they are active in 9 hospitals, count pharma companies like Pfizer as partners, and grew 4x over the past year to reach $800K ARR.

ADIAN – Adian is a B2B SaaS product that digitizes the complex agrochemical supply chain in order to improve the sales process between manufacturers and distributors. The company claims manufacturers reduce costs by 20% and increase sales by 4% by using their online framework. $1.5 billion and 70,000 orders have gone through the platform to date.

Hardin Scientific – Hardin is building IoT-enabled smart lab equipment. The hardware acts as a gateway, becoming the hub for monitoring, controlling, and sharing scientific data across teams. They’ve closed over $1.5m in revenue, and raised $15m in equity and debt financing. One of their smart devices is being used to 3D print bio-tissues and human organs in space.

ZaiNar – This team of five Stanford grads — three PhDs and two MBAs — joined up with the co-founder of BlueKai to build the world’s best time synchronization technology. ZaiNar claims their ability to wirelessly synchronize and distribute time between networked devices is a thousand times better than existing technologies. This enables them to locate RF-emitting devices (e.g., phones, cars, drones, and RFID) at long distances with sub-meter accuracy. Beyond location, this technology has applications across data transmission, 5G communications, and energy grids. ZaiNar has raised a $1.7 million seed from AME Cloud and Softbank, and has built an extensive patent portfolio.

SMART Brain Aging – This startup claims to reduce the onset of dementia by 2.25 years with software. They are the only company approved by Medicare to get reimbursed on a preventative basis for the treatment of dementia. In conjunction with Harvard University, they have developed 20,000 exercises that are clinically proven to reduce the onset of dementia and, they claim, help build neurotransmitters. The company works with 300 patients per week ($2.2 million annual revenue) and is building to a goal of helping 22,000 people in 24 months.

Phoneic – Phoneic believes the data trapped in voice calls from cellphones is a gold mine waiting to be tapped. Their app records and transcribes cellphone conversations, and the company has built an integration layer to enterprise AI and CRM systems that traditionally didn’t have access to voice data. The team is led by the co-founder of 3jam, one of the first group SMS and virtual number companies, which was acquired by Skype in 2011. He is keenly aware of the power of virality — and like Skype, the use of Phoneic spreads its adoption. The company has already raised $800,000 in seed funding.

Arkose Labs – Whether or not you think Russia interfered with the 2016 election, it’s no secret that bots are having a significant impact on society. Arkose Labs wants to fight fraud without adding friction for legitimate users. Most fraud prevention platforms today focus on gathering info from the user and providing a probability score that the traffic is good or bad. This leaves companies with a difficult decision, where they may be blocking revenue-generating users. Arkose takes a different, bilateral approach that doesn’t force this tradeoff. They claim to be the only solution to offer a 100% SLA on fraud prevention. Big companies like Singapore Airlines and Electronic Arts are customers. USVP led a $6 million investment into the company.

Review: iPhone XS, XS Max and the power of long-term thinking

The iPhone XS proves one thing definitively: that the iPhone X was probably one of the most ambitious product bets of all time.

When Apple told me in 2017 that they put aside plans for the iterative upgrade that they were going to ship and went all in on the iPhone X because they thought they could jump ahead a year, they were not blustering. That the iPhone XS feels, at least on the surface, like one of Apple’s most “S” models ever is a testament to how aggressive the iPhone X timeline was.

I think there will be plenty of people who will see this as a weakness of the iPhone XS, and I can understand their point of view. There are about a half-dozen definitive improvements in the XS over the iPhone X, but none of them has quite the buzzword-worthy effectiveness of a marquee upgrade like 64-bit, 3D Touch or wireless charging — all benefits delivered in previous “S” years.

That weakness, however, is only really present if you view it through the eyes of the year-over-year upgrader. As an upgrade over an iPhone X, I’d say you’re going to have to love what they’ve done with the camera to want to make the jump. As a move from any other device, it’s a huge win and you’re going head-first into sculpted OLED screens, face recognition and super durable gesture-first interfaces and a bunch of other genre-defining moves that Apple made in 2017, thinking about 2030, while you were sitting back there in 2016.

Since I do not have an iPhone XR, I can’t really make a call for you on that comparison, but from what I saw at the event and from what I know about the tech in the iPhone XS and XS Max from using them over the past week, I have some basic theories about how it will stack up.

For those with interest in the edge of the envelope, however, there is a lot to absorb in these two new phones, separated only by size. Once you begin to unpack the technological advancements behind each of the upgrades in the XS, you begin to understand the real competitive edge and competence of Apple’s silicon team, and how well they listen to what the software side needs now and in the future.

Whether that makes any difference for you day to day is another question, one that, as I mentioned above, really lands on how much you like the camera.

But first, let’s walk through some other interesting new stuff.

Notes on durability

As is always true with my testing methodology, I treat this as anyone would who got a new iPhone and loaded an iCloud backup onto it. Plenty of other sites will do clean room testing if you like comparison porn, but I really don’t think that does most folks much good. By and large most people aren’t making choices between ecosystems based on one spec or another. Instead, I try to take them along on prototypical daily carries, whether to work for TechCrunch, on vacation or doing family stuff. A foot injury precluded any theme parks this year (plus, I don’t like to be predictable) so I did some office work, road travel in the center of California and some family outings to the park and zoo. A mix of use cases that involves CarPlay, navigation, photos and general use in a suburban environment.

In terms of testing locale, Fresno may not be the most metropolitan city, but it’s got some interesting conditions that set it apart from the cities where most of the iPhones are going to end up being tested. Network conditions are pretty adverse in a lot of places, for one. There’s a lot of farmland and undeveloped acreage and not all of it is covered well by wireless carriers. Then there’s the heat. Most of the year it’s above 90 degrees Fahrenheit and a good chunk of that is spent above 100. That means that batteries take an absolute beating here and often perform worse than other, more temperate, places like San Francisco. I think that’s true of a lot of places where iPhones get used, but not so much the places where they get reviewed.

That said, battery life has been hard to judge. In my rundown tests, the iPhone XS Max clearly went beast mode, outlasting my iPhone X and iPhone XS. Between those two, though, it was tougher to tell. I try to wait until the end of the period I have to test the phones to do battery stuff so that background indexing doesn’t affect the numbers. In my ‘real world’ testing in the 90+ degree heat around here, the iPhone XS did best my iPhone X by a few percentage points, which is what Apple claims, but my X is also a year old. The battery didn’t fail during even intense days of testing with the XS.

In terms of storage I’m tapping at the door of 256GB, so the addition of a 512GB option is really nice. As always, the easiest way to determine what size you should buy is to check your existing free space. If you’re using around 50% of what your phone currently has, buy the same size. If you’re using more, consider upgrading, because these phones are only getting faster at taking better pictures and video, and that will eat up more space.

The review units I was given both had the new gold finish. As I mentioned on the day, this is a much deeper, brassier gold than the Apple Watch Edition. It’s less ‘pawn shop gold’ and more ‘this is very expensive’ gold. I like it a lot, though it is hard to photograph accurately — if you’re skeptical, try to see it in person. It has a touch of pink added in, especially as you look at the back glass along with the metal bands around the edges. The back glass has a pearlescent look now as well, and we were told that this is a new formulation that Apple created specifically with Corning. Apple says that this is the most durable glass ever in a smartphone.

My current iPhone has held up to multiple falls over 3 feet over the past year, one of which resulted in a broken screen and replacement under warranty. Doubtless multiple YouTubers will be hitting this thing with hammers and dropping it from buildings in beautiful Phantom Flex slo-mo soon enough. I didn’t test it. One thing I am interested in seeing develop, however, is how the glass holds up to fine abrasions and scratches over time.

My iPhone X is riddled with scratches both front and back, something having to do with the glass formulation being harder, but more brittle. Less likely to break on impact but more prone to abrasion. I’m a dedicated no-caser, which is why my phone looks like it does, but there’s no way for me to tell how the iPhone XS and XS Max will hold up without giving them more time on the clock. So I’ll return to this in a few weeks.

Both the gold and space grey iPhone XS models have been subjected to a coating process called physical vapor deposition, or PVD. Basically, metal particles get vaporized and bonded to the surface to coat and color the band. PVD is a process, not a material, so I’m not sure what they’re actually coating these with, but one suggestion has been titanium nitride. I don’t mind the weathering that has happened on my iPhone X band, but I think it would look a lot worse on the gold, so I’m hoping that this process (which is known to be incredibly durable and used in machine tooling) will improve the durability of the band. That said, I know most people are not no-casers like me, so it’s likely a moot point.

Now let’s get to the nut of it: the camera.

Bokeh let’s do it

I’m (still) not going to be comparing the iPhone XS to an interchangeable lens camera, because portrait mode is not a replacement for those; it’s about pulling them out less. That said, this is the closest it’s ever been.

One of the major hurdles that smartphone cameras have had to overcome in their comparisons to cameras with beautiful glass attached is their inherently deep depth of field. Without getting too into the weeds (feel free to read this for more), because they’re so small, smartphone cameras produce an image in which everything is sharp. This doesn’t feel like a portrait or well-composed shot from a larger camera because it doesn’t produce background blur. That blur was added a couple of years ago with Apple’s portrait mode and has been duplicated since by every manufacturer that matters — to varying levels of success or failure.

By and large, most manufacturers do it in software. They figure out what the subject probably is, use image recognition to see where the eyes/nose/mouth triangle is, build a quick matte and blur everything else. Apple does more by adding the parallax of two lenses OR the IR projector of the TrueDepth array that enables Face ID to gather a multi-layer depth map.
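The software-only version of that pipeline is simple enough to sketch in NumPy. This is a toy illustration of matte-and-blur compositing under my own assumptions (a hard depth threshold and a naive box blur stand in for real segmentation matting), not what any phone actually ships:

```python
import numpy as np

def box_blur(channel, radius=4):
    """Naive box blur: average of shifted copies (edges wrap; fine for a toy)."""
    acc = np.zeros_like(channel, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            acc += np.roll(np.roll(channel, dy, axis=0), dx, axis=1)
    return acc / (2 * radius + 1) ** 2

def toy_portrait(image, depth, subject_depth=1.0, tolerance=0.5):
    """Blur everything whose depth is far from the subject plane.

    image: (H, W) grayscale float array; depth: (H, W) distances in meters.
    The binary matte is a crude stand-in for real segmentation matting.
    """
    matte = (np.abs(depth - subject_depth) < tolerance).astype(float)
    blurred = box_blur(image)
    # Composite: sharp subject over blurred background.
    return matte * image + (1.0 - matte) * blurred
```

Real pipelines build a multi-layer depth map and vary the blur with depth; hard cuts like the one above are exactly the kind of shortcut that produces halos around subjects.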

As a note, the iPhone XR works differently, and with fewer tools, to enable portrait mode. Because it only has one lens it uses focus pixels and segmentation masking to ‘fake’ the parallax of two lenses.

With the iPhone XS, Apple is continuing to push ahead with the complexity of its modeling for the portrait mode. The relatively straightforward disc blur of the past is being replaced by a true bokeh effect.

Background blur in an image is related directly to lens compression, subject-to-camera distance and aperture. Bokeh is the character of that blur. It’s more than just ‘how blurry’, it’s the shapes produced from light sources, the way they change throughout the frame from center to edges, how they diffuse color and how they interact with the sharp portions of the image.
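Those three variables combine in the standard thin-lens blur-disc formula, which makes it easy to see why phone optics physically can't produce this blur. A quick sketch with illustrative numbers (the focal lengths and apertures below are generic examples I picked, not the iPhone XS's actual optics):

```python
def blur_disc_mm(focal_mm, f_number, subject_m, background_m):
    """Diameter (in mm, on the sensor) of the blur disc that a background
    point produces when the lens is focused on the subject (thin-lens model)."""
    f, s, b = focal_mm, subject_m * 1000.0, background_m * 1000.0
    return (f * f / f_number) * abs(b - s) / (b * (s - f))

# A classic 50mm f/1.4 portrait lens vs. a ~4mm f/1.8 phone camera module,
# subject at 2m, background at 10m:
dslr = blur_disc_mm(50.0, 1.4, 2.0, 10.0)   # ~0.73mm: visibly creamy blur
phone = blur_disc_mm(4.25, 1.8, 2.0, 10.0)  # ~0.004mm: everything looks sharp
```

The physical blur disc from the phone module comes out two orders of magnitude smaller, which is why the blur has to be synthesized from a depth map rather than captured optically.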

Bokeh is to blur what seasoning is to a good meal. Unless you’re the chef, you probably don’t care what they did; you just care that it tastes great.

Well, Apple chef-ed it the hell up with this. Unwilling to settle for a templatized bokeh that felt good and leave it at that, the camera team went the extra mile and created an algorithmic model that contains virtual ‘characteristics’ of the iPhone XS’s lens. Just as a photographer might pick one lens or another for a particular effect, the camera team built out the bokeh model after testing a multitude of lenses from all of the classic camera systems.

I keep saying model because it’s important to emphasize that this is a living construct. The blur you get will look different from image to image, at different distances and in different lighting conditions, but it will stay true to the nature of the virtual lens. Apple’s bokeh has a medium-sized penumbra, spreading out light sources but not blowing them out. It maintains color nicely, making sure that the quality of light isn’t obscured like it is with so many other portrait applications in other phones that just pick a spot and create a circle of standard gaussian or disc blur.

Check out these two images, for instance. Note that when the light is circular, it retains its shape, as does the rectangular light. It is softened and blurred, as it would be when diffused through the widened aperture of a regular lens. The same goes for other shapes in reflected light scenarios.

Now here’s the same shot from an iPhone X, note the indiscriminate blur of the light. This modeling effort is why I’m glad that the adjustment slider proudly carries f-stop or aperture measurements. This is what this image would look like at a given aperture, rather than a 0-100 scale. It’s very well done and, because it’s modeled, it can be improved over time. My hope is that eventually, developers will be able to plug in their own numbers to “add lenses” to a user’s kit.

And an adjustable depth of focus isn’t just good for blurring, it’s also good for un-blurring. This portrait mode selfie placed my son in the blurry zone because it focused on my face. Sure, I could turn the portrait mode off on an iPhone X and get everything sharp, but now I can choose to “add” him to the in-focus area while still leaving the background blurry. Super cool feature I think is going to get a lot of use.

It’s also great for removing unwanted people or things from the background by cranking up the blur.

And yes, it works on non-humans.

If you end up with an iPhone XS, I’d play with the feature a bunch to get used to what a super wide aperture lens feels like. When it’s open all the way to f1.4 (not the actual widest aperture of the lens, by the way; this is the virtual model we’re controlling) pretty much only the eyes should be in focus. Ears, shoulders, maybe even nose could be out of the focus area. It takes some getting used to but can produce dramatic results.

A 150% crop of a larger photo to show detail preservation.

Developers do have access to one new feature though, the segmentation mask. This is a more precise mask that aids in edge detailing, improving hair and fine line detail around the edges of a portrait subject. In my testing it has led to better handling of these transition areas and less clumsiness. It’s still not perfect, but it’s better. And third-party apps like Halide are already utilizing it. Halide’s co-creator, Ben Sandofsky, says they’re already seeing improvements in Halide with the segmentation map.

“Segmentation is the ability to classify sets of pixels into different categories,” says Sandofsky. “This is different than a “Hot dog, not a hot dog” problem, which just tells you whether a hot dog exists anywhere in the image. With segmentation, the goal is drawing an outline over just the hot dog. It’s an important topic with self driving cars, because it isn’t enough to tell you there’s a person somewhere in the image. It needs to know that person is directly in front of you. On devices that support it, we use PEM as the authority for what should stay in focus. We still use the classic method on old devices (anything earlier than iPhone 8), but the quality difference is huge.”

The above is an example shot in Halide that shows the image, the depth map and the segmentation map.

In the example below, the middle black-and-white image is what was possible before iOS 12. Using a handful of rules like “Where did the user tap in the image?”, we constructed this matte to apply our blur effect. It’s not bad by any means, but compare it to the image on the right. For starters, it’s much higher resolution, which means the edges look natural.

My testing of portrait mode on the iPhone XS says that it is massively improved, but that there are still some very evident quirks that will lead to weirdness in some shots, like the wrong things being made blurry and halos of light appearing around subjects. It’s also not quite aggressive enough on foreground objects — those should blur too but only sometimes do. But the quirks are overshadowed by the super cool addition of the adjustable background blur. If conditions are right, it blows you away. But every once in a while you still get this sense like the Neural Engine just threw up its hands and shrugged.

Live preview of the depth control in the camera view is not in iOS 12 at the launch of the iPhone XS, but it will be coming in a future version of iOS 12 this fall.

I also shoot a huge amount of photos with the telephoto lens. It’s closer to what you’d consider to be a standard lens on a camera. The normal lens is really wide, and once you acclimate to the telephoto you’re left wondering why you have a bunch of pictures of people in the middle of a ton of foreground and sky. If you haven’t already, I’d say try defaulting to 2x for a couple of weeks and see how you like your photos. For those tight conditions or really broad landscapes you can always drop it back to the wide. Because of this, any iPhone that doesn’t have a telephoto is a basic non-starter for me, which is going to be one of the limiters on people moving to iPhone XR from iPhone X, I believe. Even iPhone 8 Plus users who rely on the telephoto will, I believe, miss it if they don’t go to the XS.

But, man, Smart HDR is where it’s at

I’m going to say something now that is surely going to cause some Apple followers to snort, but it’s true. Here it is:

For a company as prone to hyperbole and Maximum Force Enthusiasm about its products, I think that they have dramatically undersold how much improved photos are from the iPhone X to the iPhone XS. It’s extreme, and it has to do with a technique Apple calls Smart HDR.

Smart HDR on the iPhone XS encompasses a bundle of techniques and technology including highlight recovery, rapid-firing the sensor, an OLED screen with much improved dynamic range and the Neural Engine/image signal processor combo. It’s now running faster sensors and offloading some of the work to the CPU, which enables firing off nearly two images for every one it used to, in order to make sure that motion does not create ghosting in HDR images. It picks the sharpest image, merges the other frames into it in a smarter way and applies tone mapping that produces more even exposure and color in the roughest of lighting conditions.
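The merge step can be illustrated with a bare-bones exposure-fusion sketch: weight each frame of a burst per pixel by how well exposed that pixel is, then take the weighted average. This is generic exposure-fusion logic for illustration only, not Apple's Smart HDR implementation:

```python
import numpy as np

def merge_burst(frames):
    """frames: list of (H, W) grayscale float arrays in [0, 1] shot in a burst.

    Weight each frame per-pixel by 'well-exposedness' (distance from the
    extremes) and average: a toy stand-in for HDR tone merging.
    """
    stack = np.stack(frames)                       # (N, H, W)
    # Pixels near 0 (crushed shadows) or 1 (blown highlights) get low weight.
    weights = 1.0 - np.abs(stack - 0.5) * 2.0
    weights = np.clip(weights, 1e-6, None)
    weights /= weights.sum(axis=0, keepdims=True)  # normalize across frames
    return (weights * stack).sum(axis=0)
```

Real pipelines also align frames and recover highlights before merging; the point here is only that each pixel of the result leans on whichever frame exposed it best.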

iPhone XS shot, better range of tones, skintone and black point

iPhone X Shot, not a bad image at all, but blocking up of shadow detail, flatter skin tone and blue shift

Nearly every image you shoot on an iPhone XS or iPhone XS Max will have HDR applied to it. It does it so much that Apple has stopped labeling most images with HDR at all. There’s still a toggle to turn Smart HDR off if you wish, but by default it will trigger any time it feels it’s needed.

And that includes more types of shots that could not benefit from HDR before. Panoramic shots, burst shots, low light photos and every frame of Live Photos are now processed.

The results for me have been massively improved quick snaps with no thought given to exposure or adjustments due to poor lighting. Your camera roll as a whole will just suddenly start looking like you’re a better picture taker, with no intervention from you. All of this is capped off by the fact that the OLED screens in the iPhone XS and XS Max have a significantly improved ability to display a range of color and brightness. So images will just plain look better on the wider gamut screen, which can display more of the P3 color space.

Under the hood

As far as Face ID goes, there has been no perceivable difference for me in speed or number of positives, but my facial model has been training on my iPhone X for a year. It’s starting fresh on the iPhone XS. And I’ve always been lucky that Face ID has just worked for me most of the time. The gist of the improvements here are jumps in acquisition times and confirmation of the map to pattern match. There are also supposed to be improvements in off-angle recognition of your face, say when lying down or when your phone is flat on a desk. I tried a lot of different positions here and could never really definitively say that the iPhone XS was better in this regard, though as I said above, it very likely takes training time to get it near the confidence levels that my iPhone X has stored away.

In terms of CPU performance the world’s first at-scale 7nm architecture has paid dividends. You can see from the iPhone XS benchmarks that it compares favorably to fast laptops and easily exceeds iPhone X performance.


The Neural Engine and the improved A12 chip have meant better frame rates in intense games and AR, faster image searches and some small improvements in app launches. One easy way to demonstrate this is the video from the iScape app, captured on an iPhone X and an iPhone XS. You can see how jerky and FPS-challenged the iPhone X is in a similar AR scenario. There is so much more overhead for AR experiences that I know developers are going to be salivating for what they can do here.

The stereo sound is impressive, with surprisingly decent separation for a phone, and it's definitely louder. The tradeoff is that you get asymmetrical speaker grilles, so if that kind of thing annoys you, you’re welcome.

Upgrade or no

Every other year for the iPhone I see and hear the same things — that the middle years are unimpressive and not worthy of upgrading. And I get it, money matters, phones are our primary computer and we want the best bang for our buck. This year, as I mentioned at the outset, the iPhone X has created its own little pocket of uncertainty by still feeling a bit ahead of its time.

I don’t kid myself into thinking that we’re going to have an honest discussion about whether you want to upgrade from the iPhone X to iPhone XS or not. You’re either going to do it because you want to or you’re not going to do it because you don’t feel it’s a big enough improvement.

And I think Apple is completely fine with that because iPhone XS really isn’t targeted at iPhone X users at all, it’s targeted at the millions of people who are not on a gesture-first device that has Face ID. I’ve never been one to recommend someone upgrade every year anyway. Every two years is more than fine for most folks — unless you want the best camera, then do it.

And, given Apple’s fairly bold talk about making sure that iPhones last as long as they can, I think that it is well into the era where it is planning on having a massive installed user base that rents iPhones from it on a monthly, yearly or biennial basis. And it doesn’t care whether those phones are on their first, second or third owner, because that user base will need for-pay services that Apple can provide. And it seems to be moving in that direction already, with phones as old as the five-year-old iPhone 5s still getting iOS updates.

With the iPhone XS, we might just be seeing the true beginning of the iPhone-as-a-service era.

Zortrax launches a new high-speed, high-resolution printer, the Inkspire

Zortrax has launched a new printer, the Inkspire, that prints using an LCD to create objects in high-quality resin in minutes. The printer – essentially an upgrade to traditional stereolithography (SLA) printers – uses a single frame of light to create layers of 25 microns.

Most SLA printers use a laser or DLP to shine a pattern on the resin. The light hardens the resin instantly, creating a layer of material that the printer then pulls up and out as the object grows. The UV LCD in the $2,699 Inkspire throws an entire layer at a time and is nine times more precise than standard SLA systems. It can print 20 to 36 millimeters per hour, and the system can print objects in serial, allowing you to print hundreds of thousands of small objects per month.

“The printer is also perfect for rapid prototyping of tiny yet incredibly detailed products like jewelry or dental prostheses. But there are more possible applications,” said co-founder Marcin Olchanowski. “Working with relatively small models like HDMI cover caps, one Zortrax Inkspire can 3D print 77 of them in 1h 30min. 30 printers working together in a 3D printing farm can offer an approximate monthly output of 360,000 to over 500,000 parts (depending on how many shifts per day are scheduled). This is how Zortrax Inkspire can take a business way into medium or even high scale production territory.”
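The quoted farm numbers check out on a back-of-the-envelope basis. A minimal sketch, assuming (our assumptions, not Zortrax's stated parameters) 77 parts per 90-minute job per printer, a 30-printer farm, 8-hour shifts, and 30 production days per month:

```python
# Back-of-the-envelope check of the quoted Inkspire farm throughput.
# All parameters below are illustrative assumptions derived from the
# quote: 77 parts per 1h 30min job, a 30-printer farm, 8-hour shifts,
# 30 production days per month.

PARTS_PER_JOB = 77
JOB_HOURS = 1.5
PRINTERS = 30
DAYS_PER_MONTH = 30

def monthly_output(shifts_per_day, shift_hours=8):
    """Approximate parts per month for the whole farm on a given schedule."""
    jobs_per_day = (shifts_per_day * shift_hours) / JOB_HOURS
    return round(jobs_per_day * PARTS_PER_JOB * PRINTERS * DAYS_PER_MONTH)

print(monthly_output(1))  # one shift:  369600
print(monthly_output(2))  # two shifts: 739200
```

A single daily shift lands near the quoted 360,000 figure, and a second shift pushes well past 500,000, consistent with the "depending on how many shifts per day are scheduled" caveat.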

The printer company, which is now one of the largest in Central Europe, explored multiple technologies before settling on this form of SLA printing.

“At the early stage of this project we were investigating the technology itself, and it seemed very unlikely we were able to create such a device,” said Olchanowski. “We tried SLA and DLP but we were not happy with these technologies. We perceived them undeveloped. But, step by step, we succeeded. We see huge prospects of development for resin 3D printing technology, because nowadays customers expect the higher quality of printed models.”

The company sells 6,500 printers yearly and expects $13.7 million in revenue this year. It is also selling resins for the new printer, which ships in about two months.

Printers like the Inkspire are a bit harder to use than traditional extruder-based printers like MakerBots. However, the quality and print speed are far better and pave the way to truly 3D-printed production runs for one-off parts.

OnePlus is developing its own smart TV

Smartphone upstart OnePlus's upcoming 6T flagship promises changes — it will ditch the headphone jack and sport an in-screen fingerprint reader — but first there's something else: OnePlus is developing its first smart TV.

CEO Pete Lau revealed the details today, explaining that the device will mark the five-year-old company’s next step to “building a connected human experience.”

“For most of us, there are four major environments we experience each day: the home, the workplace, the commute, and being on-the-move. The home – perhaps the most important environment experience – is just starting to enjoy the benefits of intelligent connectivity,” he wrote on the company’s website.

“We want to bring the home environment to the next level of intelligent connectivity. To do this, we are building a new product of OnePlus’ premium flagship design, image quality and audio experience to more seamlessly connect the home,” Lau added.

Shenzhen-based OnePlus has distinguished itself from a raft of Chinese mobile wannabes with some beautifully designed and well-functioning devices — eight phones to date — while developing its own Android-based OS and branching out into headphones. It has seen particular success in India, where it has beaten Samsung and Apple to become the country's top 'premium' smartphone brand, and it has also landed a carrier distribution deal in the U.S. — something that has eluded larger rivals Huawei and Xiaomi. That's impressive for such a young business.

It has tinkered around before. OnePlus co-founder Carl Pei previously told TechCrunch that the company had developed (and then abandoned) a number of prototype devices outside of phones before, but the TV project is very real. That said, the company is opening it up to its community of five million registered users who will be given the opportunity to name it. You can find more details about that in the announcement post here.

Kegel trainer startup Elvie is launching a smaller, smarter, hands-free breast pump

Elvie, a London-based startup best known for its connected Kegel trainer, is jumping into the breast pump business with a new $480 hands-free system you can slip into your bra.

Even with all the innovation in baby gear, breast pumps have mostly sucked (pun intended) for new moms for the past half a century. My first experience with a pump required me to stay near a wall socket and hunch over for a good twenty to thirty minutes for fear the milk collected might spill all over the place (which it did anyway, frequently). It was awful!

Next I tried the Willow Pump, an egg-shaped, connected pump meant to liberate women everywhere with its small and mobile design. It received glowing reviews, though my experience with it was less than stellar.

The proprietary bags were hard to fit in the device, filled up with air, cost 50 cents each (on top of the $500 pump that insurance did not cover), wasted many a golden drop of precious milk in the transfer and I had to reconfigure placement several times before it would start working. So I’ve been tentatively excited about the announcement of Elvie’s new cordless (and silent??) double breast pump.

Displayed: a single Elvie pump with the accompanying app.

Elvie tells TechCrunch its aim all along has been to make health tech for women and that it has been working on this pump for the past three years.

The Elvie Pump is a cordless, hands-free, closed system, rechargeable electric pump designed by former Dyson engineers. It can hold up to 5 oz from each breast in a single use.

Its most obvious and direct competition is the Willow pump, another "wearable" pump moms can put right in their bra and walk around in, hands free. However, unlike the Willow, Elvie's pump does not need proprietary bags. You just pump right into the device, and the pump's smartphone app tells you when each side is full.

It’s also half the size and weight of a Willow and saves every precious drop it can by pumping right into the attached bottle so you just pump and feed (no more donut-shaped bags you have to cut open and awkwardly pour into a bottle).

On top of that, Elvie claims this pump is silent. No more loud suction noise cycling on and off while trying to pump in a quiet room at the office or elsewhere. It's small, easy to carry around and you can wear it under your clothes without it making a peep! While the Willow pump claims to be quiet — and it is, compared to other systems — you can still very much hear it while you are pumping.

Elvie’s connected breast pump app

All of these features sound fantastic to this new (and currently pumping) mom. I remember in the early days of my baby’s life wanting to go places but feeling stuck. I was chained to not just all the baby gear, hormonal shifts and worries about my newborn but to the pump and feed schedule itself, which made it next to impossible to leave the house for the first few months.

My baby was one of those “gourmet eaters” who just nursed and nursed all day. There were days I couldn’t leave the bed! Having a silent, no mess, hands-free device that fit right in my bra would have made a world of difference.

However, I mentioned the word “tentatively” above as I have not had a chance to do a hands-on review of Elvie’s pump. The Willow pump also seemed to hold a lot of promise early on, yet left me disappointed.

To be fair, the company’s customer service team was top-notch and did try to address my concerns. I even went through two “coaching” sessions but in the end it seemed the blame was put on me for not getting their device to work correctly. That’s a bad user experience if you are blaming others for your design flaws, especially new and struggling moms.

Both companies are founded by women and make products for women — and it's about time. But it seems as if Elvie has taken note of the good and bad in its competitor and had time to improve upon it — and that's what has me excited.

As my fellow TechCrunch writer Natasha put it in her initial review of Elvie as a company, “It’s not hyperbole to say Elvie is a new breed of connected device. It’s indicative of the lack of smart technology specifically — and intelligently — addressing women.”

So why the pump? "We recognized the opportunity [in the market] was smarter tech for women," Elvie CEO Tania Boler told TechCrunch of her company's move into the breast pump space. "Our aim is to transform the way women think and feel about themselves by providing the tools to address the issues that matter most to them, and Elvie Pump does just that."

The Elvie Pump comes in three sizes and shapes to fit the majority of breasts and, in case you want to check your latch or pump volume, also has transparent nipple shields with markings to help guide the nipple to the right spot.

The app connects to each device via Bluetooth and tracks your production, detects let down, will pause when full and is equipped to pump in seven different modes.

The pump retails for $480 and is currently available in the U.K. Those in the U.S., however, will have to wait until closer to the end of the year to get their hands on one. According to the company, it will be available on Elvie.com and Amazon.com, as well as in select physical retail stores nationally later this year, pending FDA approval.