Riskified prevents fraud on your favorite e-commerce site

Meet Riskified, an Israel-based startup that has raised $64 million in total to fight online fraud. The company has built a service that helps you reject transactions from stolen credit cards and approve more transactions from legitimate clients.

If you live in the U.S., chances are you know someone who noticed a fraudulent charge for an expensive purchase with their credit card — it’s still unclear why most restaurants and bars in the U.S. take your card away instead of bringing the card reader to you.

Online purchases, also known as card-not-present transactions, represent the primary source of fraudulent transactions. That’s why e-commerce websites need to optimize their payment system to detect fraudulent transactions and approve all the others.

Riskified uses machine learning to recognize good orders and improve your bottom line. In fact, Riskified is so confident that it guarantees that you’ll never have to pay chargebacks. As long as a transaction is approved by the product, the startup offers chargeback protection. If Riskified made the wrong call, the company reimburses fraudulent chargebacks.

On the other side of the equation, many e-commerce websites leave money on the table by rejecting legitimate transactions, so-called false declines. It’s hard to quantify this, as some customers end up not ordering anything. Riskified should help you on this front too.

The startup targets big customers — Macy’s, Dyson, Prada, Vestiaire Collective and GOAT are all using it. You can integrate Riskified with popular e-commerce payment systems and solutions, such as Shopify, Magento and Stripe. Riskified also has an API for more sophisticated implementations.
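
For a sense of what an API integration like this involves, here is a hedged sketch of submitting an order for a fraud decision. The endpoint path, header name, and payload fields are illustrative assumptions, not Riskified’s documented schema; signing the request body with an HMAC is a common pattern for this kind of API.

```python
import hashlib
import hmac
import json

# Assumed endpoint for illustration -- consult the vendor's API docs
# for the real URL, header names, and order schema.
API_URL = "https://sandbox.riskified.example/api/decide"

def sign_payload(secret: str, body: bytes) -> str:
    """HMAC-SHA256 signature of the request body, hex-encoded."""
    return hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()

def build_decide_request(secret: str, order: dict) -> dict:
    """Assemble the URL, headers, and signed JSON body for a decision call."""
    body = json.dumps({"order": order}).encode()
    return {
        "url": API_URL,
        "headers": {
            "Content-Type": "application/json",
            # Assumed header name; the signature lets the server verify
            # the request came from the merchant's secret token.
            "X-HMAC-SHA256": sign_payload(secret, body),
        },
        "body": body,
    }

request = build_decide_request(
    "shop-secret-token",
    {
        "id": "order-1001",
        "total_price": 129.90,
        "currency": "USD",
        "payment_details": {"credit_card_bin": "411111", "avs_result_code": "Y"},
    },
)
```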

Crate.io raises $11M and launches its hosted IoT data platform

Crate.io, the winner of our Disrupt Europe 2014 Startup Battlefield competition, today announced that it has raised an $11 million Series A round. In addition, the company also launched its ‘Crate Machine Data Platform’ today, a new hosted solution for businesses that want to use the company’s SQL-based database platform for working with IoT data.

The new funding round was led by Zetta Venture Partners and Deutsche Invest Equity, with participation from Chalfen Ventures, Momenta Partners and Charlie Songhurst. Existing investors, including Draper Esprit, Vito Ventures and Docker founder Solomon Hykes, also participated.

Crate co-founder and CEO Christian Lutz told me that over the course of the last year or so, the company has seen a large increase in paying customers, which now tally up to about 30. That has also allowed Crate to grow its revenue beyond $1 million in annual run rate. He attributed the current success of the startup to its renewed focus on machine data, something the team wasn’t really focused on when it first launched its product.

It was also this focus that made fundraising easier, Lutz told me. “What made the difference now is that very strong focus on machine data — in combination with delivering sales,” he said. It also helped that Crate now has a number of well-known reference customers, including the likes of Skyhigh Networks and ALPLA, a packaging manufacturer you have probably never heard of but that produces virtually all the bottles for Coca-Cola and Unilever in the U.S. market (as well as a bunch of other bottles you probably have at home).

Unsurprisingly, the company, which now has over 30 employees, plans to use the new funding to expand its marketing and sales efforts, as well as to expand its core engineering team.

Speaking of engineering: with its Machine Platform, Crate also today launched its first hosted offering, which lives on Microsoft’s Azure platform. That’s not a major surprise, for two reasons: first, many of Crate’s industrial customers are already betting on Azure anyway; and second, Crate was part of the 2017 class of the Microsoft Growth Accelerator in Berlin. The focus of the new platform is to provide businesses with a single solution for ingesting large amounts of data from IoT devices. The platform supports real-time analytics and allows users to set up their own rules to trigger workflows and alerts as necessary. The platform itself handles all of the scaling (via the popular Kubernetes container orchestration tool), as well as backup, archiving and the usual role-based security functions.
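
To make the IoT workload concrete, here is a small sketch of the kind of pipeline described above: sensor readings land in a SQL table, and a user-defined rule fires alerts when readings cross a threshold. The SQL strings follow CrateDB-style syntax but are illustrative, and the rule function is a toy stand-in for the platform’s alerting, not Crate’s actual API.

```python
# CrateDB-style DDL: a sharded time-series table for device readings.
CREATE_TABLE = """
CREATE TABLE sensor_readings (
    device_id TEXT,
    ts TIMESTAMP,
    temperature DOUBLE
) CLUSTERED INTO 4 SHARDS
"""

# A rolling aggregate over recent data, the raw material for a rule.
RECENT_AVG = """
SELECT device_id, avg(temperature) AS avg_temp
FROM sensor_readings
WHERE ts > now() - INTERVAL '5 minutes'
GROUP BY device_id
"""

def check_rules(rows, threshold=80.0):
    """Return alert messages for devices whose average exceeds the threshold."""
    return [
        f"ALERT {device_id}: avg {avg_temp:.1f} > {threshold}"
        for device_id, avg_temp in rows
        if avg_temp > threshold
    ]

# Rows as the aggregate query would return them: (device_id, avg_temp).
alerts = check_rules([("boiler-1", 92.5), ("fridge-7", 4.0)])
```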

Crate also today launched version 3.0 of its open source offering. While the company’s commercial focus is obviously on the value-added features for enterprises, it continues to actively develop the open source version, too, and Lutz noted that this new version offers a 100x performance increase for some types of queries.

Zenaton lets you build and run workflows with ease

French startup Zenaton raised $2.35 million from Accel and Point Nine Capital, with the Slack Fund, Kima Ventures, Julien Lemoine and Francis Nappez also participating. The company wants to take care of the most tedious part of your application — asynchronous jobs and background tasks.

While it has never been easier to develop a simple web-based service with a database, building and scaling workflows that handle tasks based on different events still sucks.

Sometimes your background task fails and it takes you days to notice that your workflow stopped working. Some workflows might require so many resources that you’ll end up paying a huge server bill for the extra RAM to handle those daily cron jobs and performance spikes.

And yet, many small companies would greatly benefit from adding asynchronous jobs. For instance, you could improve your retention rate by sending email reminders. You could try to upsell your customers with accessories if you’re running an e-commerce website. You could ask for reviews a few hours after a user found a restaurant through your app.

“We work hard to make it super easy – as a developer, you just have to install the Zenaton agent on your worker servers. That’s all. Specifically, you’ll no longer have to maintain a queuing system for your background jobs, there’s no more cron, no more database migrations to store transient states,” co-founder and CEO Gilles Barbier told me. Barbier previously worked at The Family and Zenaton is part of The Family’s portfolio.
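
To illustrate the “workflow as code” idea that Barbier describes, here is a sketch in Python (for readability only; Zenaton’s actual SDKs are PHP and Node, and this class is not its real API). The point is that the steps, retries and waits live in one method, instead of being scattered across cron jobs, a queue, and a table of transient state.

```python
# Illustrative workflow-as-code sketch, NOT Zenaton's real SDK.
# A real agent would persist state at each step and resume after
# crashes or waits; the stubs here just run everything inline.

class SendReminderWorkflow:
    def __init__(self, user_email):
        self.user_email = user_email

    def handle(self):
        """The whole workflow, readable top to bottom."""
        self.execute(send_welcome_email, self.user_email)
        self.wait(days=3)  # a real engine suspends the workflow here
        self.execute(send_reminder_email, self.user_email)

    def execute(self, task, *args, retries=3):
        """Run a task, retrying on failure -- the agent's job in production."""
        for attempt in range(retries):
            try:
                return task(*args)
            except Exception:
                if attempt == retries - 1:
                    raise

    def wait(self, days=0):
        pass  # stub: returns immediately instead of persisting and sleeping

sent = []

def send_welcome_email(email):
    sent.append(("welcome", email))

def send_reminder_email(email):
    sent.append(("reminder", email))

SendReminderWorkflow("ada@example.com").handle()
```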

Zenaton is already working with a big client and handles millions of workflow instances for them. You can try Zenaton for free if you execute less than 250,000 tasks per month. After that, plans start at $49 per month and you’ll pay more depending on how much RAM you consume with your workflows.

For now, you can integrate Zenaton with PHP and Node applications, but the company is working on more languages, starting with Python, Ruby and Java. It’s clear that the product is still young.

But it sounds like a promising start. If you have a small development team, it could make sense to use Zenaton and a workflow-as-a-service approach.

Snapchat launches privacy-safe Snap Kit, the un-Facebook platform

Today Snapchat finally gets a true developer platform, confirming TechCrunch’s scoop from last month about Snap Kit. This set of APIs lets other apps piggyback on Snap’s login for signup, build Bitmoji avatars into their keyboards, display public Our Stories and Snap Map content, and generate branded stickers with referral links users can share back inside Snapchat. Snap Kit’s big selling point is privacy — a differentiator from Facebook. It doesn’t even let you share your social graph with apps to prevent a Cambridge Analytica-style scandal.

Launch partners include Tinder bringing Bitmojis to your chats with matches, Patreon letting fans watch creators’ Stories from within its app, and Postmates offering order ETA stickers you can share in Snapchat that open the restaurant’s page in the delivery app. Developers that want to join the platform can sign up here.

Snap Kit could help the stumbling public company colonize the mobile app ecosystem with its buttons and content, which could inspire Snapchat signups from new users and reengagement from old ones. “Growth is one of our three goals for 2018, so we absolutely hope it can contribute to that, and continue to strengthen engagement, which has always been a key metric for us,” Snap’s VP of product Jacob Andreou tells me. That’s critical since Snapchat sunk to its lowest user growth rate ever last quarter under the weight of competition from Instagram and WhatsApp.

“There have been areas inside of our products where we’ve really set standards” Andreou explains. “Early, that was seen in examples like Stories, but today with things like how we treat user data, what we collect, what we share when people login and register for our service . . . Snap Kit is a set of developer tools that really allow people to take the best parts of our products and the standards that we’ve set in a few of these areas, and bring them into their apps.”

This focus on privacy manifests as a limit of 90 days of inactivity before your connection with an app is severed. And the login feature only requires that you bring along your changeable Snapchat display name and, optionally, your Bitmoji. Snap Kit apps can’t even ask for your email, phone number, location, who you follow, or who you’re friends with.

“It really became challenging for us to see our users then use other products throughout their day and have to lower their expectations. . . having to be ok with the fact that all of their information and data would be shared” Andreou gripes. This messaging is a stark turnaround from four years ago when it took 10 days for CEO Evan Spiegel to apologize for security laziness causing the leak of 4 million users’ phone numbers. But now with Facebook as everyone’s favorite privacy punching bag, Snapchat is seizing the PR opportunity.

“I think one of the parts that [Spiegel] was really excited about with this release is how much better our approach to our users in that way really is — Without relying on things like policy or developer’s best intentions or them writing perfect bug free code, but instead by design, not even exposing these things to begin with.”

Yet judging by Facebook’s continued growth and recovered share price, privacy is too abstract of a concept for many people to grasp. Snap Kit will have to win on the merits of what it brings other apps, and the strength of its partnerships team. Done right, Snapchat could gain an army of allies to battle the blue menace.

Snapvengers Assemble

Snap’s desire to maintain an iron grip on its ‘cool’ brand has kept its work with developers minimal until now. Its first accidental brush with a developer platform was actually a massive security hazard.

Third-party apps promising a way to secretly screenshot messages asked users to log in with their Snapchat usernames and passwords, then proceeded to get hacked, exposing some users’ risqué photos. Snap later cut off an innocent music video app called Mindie for finding a way to share to users’ Stories. A year ago I wrote that “Snap’s anti-developer attitude is an augmented liability,” as it’d need help to populate the physical world with AR.

2017 saw Snap cautiously extend the drawbridge, inviting in ads, analytics, and marketing developer partners to help brands be hip, and letting hacker/designers make their own AR lenses. But the real transition moment was when Spiegel said on the Q4 2017 earnings call that “We feel strongly that Snapchat should not be confined to our mobile application—the amazing Snaps created by our community deserve wider distribution so they can be enjoyed by everyone.”

At the time that meant Snaps on the web, embedded in news sites, and on Jumbotrons. Today it means in other apps. But Snap will avoid one of the key pitfalls of the Facebook platform: over-promising. Snap Deputy General Counsel for Privacy Katherine Tassi tells me “It was also very important to us that there wasn’t going to be the exchange of the friends graph as part of the value proposition to third party developers.”

How Snap Kit Works

Snap Kit breaks down to four core pieces of functionality that will appeal to different apps looking to simplify signup, make communication visual, host eye-catching content, or score referral traffic. Developers that want access to Snap Kit must pass a human review and approval process. Snap will review their functionality to ensure they’re not doing anything shady.

Once authorized, they’ll have access to these APIs:

  • Login Kit is the foundation of Snap Kit. It’s an OAuth-style alternative to Facebook Login that lets users skip creating a proprietary username and password by instead using their Snapchat credentials. But all the app gets is their changeable, pseudonym-allowed Snapchat display name, and optionally, their Bitmoji avatar to use as a profile pic if the user approves. Getting that login button in lots of apps could remind people Snapchat exists, and turn it into a fundamental identity utility people will be loath to abandon.
  • Creative Kit is how apps will get a chance to create stickers and filters for use back in the Snapchat camera. Similar to April’s F8 launch of the ability to share from other apps to Instagram and Facebook Stories, developers can turn content like high scores, workout stats and more into stickers that users can overlay on their Snaps to drive awareness of the source app. Developers can also set a deep link where those stickers send people to generate referral traffic, which could be appealing to those looking to tap Snap’s 191 million teens.

  • Bitmoji Kit lets developers integrate Snapchat’s personalized avatars directly into their app’s keyboard. It’s an easy way to make chat more visually expressive without having to reinvent the wheel. This follows the expansion of Friendmoji that feature avatars of you and a pal rolling out to the iOS keyboard. But Bitmoji Kit means developers do the integration work instead of having to rely on users installing anything extra.
  • Story Kit allows developers to embed Snapchat Stories into their apps and websites. Beyond specific Stories, apps can also search through public Stories submitted to Our Story or Snap Map by location, time, or captions. That way, a journalism app could surface first-hand reports from the scene of breaking news or a meme app could pull in puppy Snaps. Snap will add extra reminders to the Our Story submission process to ensure users know their Stories could appear outside of Snapchat’s own app.
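
Login Kit’s OAuth-style flow can be sketched in a few lines. The endpoint URL and scope names below are placeholders, not Snap’s documented values; the point is the standard authorization-code shape and the deliberately narrow scopes.

```python
from urllib.parse import urlencode

# Placeholder authorization endpoint, NOT Snap's real URL.
AUTH_ENDPOINT = "https://accounts.snapchat.example/oauth/authorize"

def build_authorize_url(client_id, redirect_uri, state):
    """Step 1 of an OAuth authorization-code flow: send the user to the
    provider, who redirects back with a short-lived code to exchange
    server-side for an access token."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        # Scope names are illustrative. Per Snap Kit's privacy design only
        # display name and Bitmoji exist -- no email, phone, location,
        # or friend graph to request.
        "scope": "user.display_name user.bitmoji.avatar",
        "state": state,  # CSRF protection, echoed back on the redirect
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

url = build_authorize_url("my-app-id", "https://myapp.example/callback", "xyz123")
```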

One thing that’s not in Snap Kit, at least yet, is the ability to embed Snapchat’s whole software camera into other apps, which TechCrunch erroneously reported. Our sources mistook Creative Kit’s ability to generate stickers for a way to share whole Stories, which Andreou called “an interesting first step” for making Snapchat the broadcast channel for other apps.

Additional launch partners include Quip, which brings Bitmoji into its word processor; RSVP stickers from Eventbrite; GIF-enhanced Stories search in Giphy; Stories from touring musicians in Bands In Town; storytelling about your dinner reservation on Quandoo; music discovery sharing from SoundHound; and real-time sports score sharing from ScoreStream.

While other platforms have escaped their host’s control, like Facebook’s viral game outbreak in 2009 or Twitter having to shut down errant clients, Snapchat’s approval process will let it direct the destiny of its integrations.

Bitmoji Kit in Tinder

When asked why Snapchat was building Snap Kit, Andreou explained that “We think that giving people more tools to be able to express themselves freely, have fun and be creative, both on Snapchat and other apps is a good thing. We also think that helping more people outside of Snapchat learn about our platform and our features is a good thing.”

Without much data sharing, there’s a lot less risk here for Snapchat. But the platform won’t have the same draw that Facebook can dangle with its massive user base and extensive data access. Instead, Snapchat will have to leverage the fear of being left out of the visual communication era and tout itself as the way for apps to evolve.

Snap needs all the help it can get right now. If other apps are willing to be a billboard for it in exchange for some of its teen-approved functionality, Snapchat could find new growth channels amidst stiff competition.

How Facebook’s new 3D photos work

In May, Facebook teased a new feature called 3D photos, and it’s just what it sounds like. But beyond a short video and the name, little was said about it. Now the company’s computational photography team has published the research behind how the feature works and, having tried it myself, I can attest that the results are really quite compelling.

In case you missed the teaser, 3D photos will live in your news feed just like any other photos, except when you scroll by them, touch or click them, or tilt your phone, they respond as if the photo is actually a window into a tiny diorama, with corresponding changes in perspective. It works not only for ordinary pictures of people and dogs, but also for landscapes and panoramas.

It sounds a little hokey, and I’m about as skeptical as they come, but the effect won me over quite quickly. The illusion of depth is very convincing, and it does feel like a little magic window looking into a time and place rather than some 3D model — which, of course, it is. Here’s what it looks like in action:

I talked about the method of creating these little experiences with Johannes Kopf, a research scientist at Facebook’s Seattle office, where its Camera and computational photography departments are based. Kopf is co-author (with University College London’s Peter Hedman) of the paper describing the methods by which the depth-enhanced imagery is created; they will present it at SIGGRAPH in August.

Interestingly, the origin of 3D photos wasn’t an idea for how to enhance snapshots, but rather how to democratize the creation of VR content. It’s all synthetic, Kopf pointed out. And no casual Facebook user has the tools or inclination to build 3D models and populate a virtual space.

One exception to that is panoramic and 360 imagery, which is usually wide enough that it can be effectively explored via VR. But the experience is little better than looking at the picture printed on butcher paper floating a few feet away. Not exactly transformative. What’s lacking is any sense of depth — so Kopf decided to add it.

The first version I saw had users moving their ordinary cameras in a pattern capturing a whole scene; by careful analysis of parallax (essentially how objects at different distances shift different amounts when the camera moves) and phone motion, that scene could be reconstructed very nicely in 3D (complete with normal maps, if you know what those are).

But inferring depth data from a single camera’s rapid-fire images is a CPU-hungry process and, though effective in a way, also rather dated as a technique. Especially when many modern phones actually have two cameras, like a tiny pair of eyes. And it is dual-camera phones that will be able to create 3D photos (though there are plans to bring the feature downmarket).

By capturing images with both cameras at the same time, parallax differences can be observed even for objects in motion. And because the device is in the exact same position for both shots, the depth data is far less noisy, involving less number-crunching to get into usable shape.
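
The geometry behind this is straightforward: an object’s horizontal shift between the two lenses (its disparity) is inversely proportional to its distance, so depth equals focal length times baseline divided by disparity. A minimal sketch, with made-up numbers:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """depth = f * B / d: nearby objects shift a lot between the two
    views (large disparity), distant ones barely at all."""
    if disparity_px <= 0:
        raise ValueError("object must shift between the two views")
    return focal_px * baseline_m / disparity_px

# Illustrative values: a phone camera with a ~1500 px focal length
# and a 1 cm baseline between its two lenses.
near = depth_from_disparity(disparity_px=50, focal_px=1500, baseline_m=0.01)  # 0.3 m
far = depth_from_disparity(disparity_px=5, focal_px=1500, baseline_m=0.01)    # 3.0 m
```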

Here’s how it works. The phone’s two cameras take a pair of images, and immediately the device does its own work to calculate a “depth map” from them, an image encoding the calculated distance of everything in the frame. The result looks something like this:

Apple, Samsung, Huawei, Google — they all have their own methods for doing this baked into their phones, though so far it’s mainly been used to create artificial background blur.

The problem with that is that the depth map created doesn’t have some kind of absolute scale — for example, light yellow doesn’t mean 10 feet, while dark red means 100 feet. An image taken a few feet to the left with a person in it might have yellow indicating 1 foot and red meaning 10. The scale is different for every photo, which means if you take more than one, let alone dozens or a hundred, there’s little consistent indication of how far away a given object actually is, which makes stitching them together realistically a pain.
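
The scale ambiguity is easy to demonstrate: two depth maps covering the same pixels, identical up to an unknown per-photo factor, can be aligned with a one-parameter least-squares fit. This toy version ignores the camera-motion cues the real alignment also uses.

```python
def relative_scale(depths_a, depths_b):
    """Scale s minimizing sum((s * a - b)^2) over matched pixels.
    Closed form: s = (a . b) / (a . a)."""
    num = sum(a * b for a, b in zip(depths_a, depths_b))
    den = sum(a * a for a in depths_a)
    return num / den

# Map A uses whatever arbitrary units it likes; map B of the same
# pixels happens to use units 2.5x larger.
a = [1.0, 2.0, 4.0, 8.0]
b = [2.5, 5.0, 10.0, 20.0]
s = relative_scale(a, b)          # recovers 2.5
aligned = [s * d for d in a]      # now directly comparable to b
```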

That’s the problem Kopf and Hedman and their colleagues took on. In their system, the user takes multiple images of their surroundings by moving their phone around; it captures an image (technically two images and a resulting depth map) every second and starts adding it to its collection.

In the background, an algorithm looks at both the depth maps and the tiny movements of the camera captured by the phone’s motion detection systems. Then the depth maps are essentially massaged into the correct shape to line up with their neighbors. This part is impossible for me to explain because it’s the secret mathematical sauce that the researchers cooked up. If you’re curious and like Greek, click here.

Not only does this create a smooth and accurate depth map across multiple exposures, but it does so really quickly: about a second per image, which is why the tool they created shoots at that rate, and why they call the paper “Instant 3D Photography.”

Next the actual images are stitched together, the way a panorama normally would be. But by utilizing the new and improved depth map, this process can be expedited and reduced in difficulty by, they claim, around an order of magnitude.

Because different images captured depth differently, aligning them can be difficult, as the left and center examples show — many parts will be excluded or produce incorrect depth data. The one on the right is Facebook’s method.

Then the depth maps are turned into 3D meshes (a sort of two-dimensional model or shell) — think of it like a papier-mache version of the landscape. But then the mesh is examined for obvious edges, such as a railing in the foreground occluding the landscape in the background, and “torn” along these edges. This spaces out the various objects so they appear to be at their various depths, and move with changes in perspective as if they are.
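
A toy one-dimensional version of that tearing step: scan a row of depth values and mark positions where the jump between neighbors exceeds a threshold, such as a foreground railing against a distant landscape. The real system, of course, does this on a full mesh rather than a single row.

```python
def find_tears(depth_row, threshold=1.0):
    """Indices where the surface should be 'torn': neighboring samples
    whose depths differ by more than the threshold (in meters here)."""
    return [
        i for i in range(1, len(depth_row))
        if abs(depth_row[i] - depth_row[i - 1]) > threshold
    ]

# A railing at ~0.5 m in front of a landscape at ~30 m: the mesh gets
# torn at both silhouette edges of the railing.
row = [0.5, 0.5, 0.6, 30.0, 31.0, 0.6, 0.5]
tears = find_tears(row)  # [3, 5]
```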

Although this effectively creates the diorama effect I described at first, you may have guessed that the foreground would appear to be little more than a paper cutout, since, if it were a person’s face captured from straight on, there would be no information about the sides or back of their head.

This is where the final step comes in: “hallucinating” the remainder of the image via a convolutional neural network. It’s a bit like a content-aware fill, guessing at what goes where based on what’s nearby. If there’s hair, well, that hair probably continues along. And if it’s a skin tone, it probably continues too. So it convincingly recreates those textures along an estimation of how the object might be shaped, closing the gap so that when you change perspective slightly, it appears that you’re really looking “around” the object.

The end result is an image that responds realistically to changes in perspective, making it viewable in VR or as a diorama-type 3D photo in the news feed.

In practice it doesn’t require anyone to do anything different, like download a plug-in or learn a new gesture. Scrolling past these photos changes the perspective slightly, alerting people to their presence, and from there all the interactions feel natural. It isn’t perfect — there are artifacts and weirdness in the stitched images if you look closely and of course mileage varies on the hallucinated content — but it is fun and engaging, which is much more important.

The plan is to roll the feature out mid-summer. For now the creation of 3D photos will be limited to devices with two cameras — that’s a limitation of the technique — but anyone will be able to view them.

But the paper does also address the possibility of single-camera creation by way of another convolutional neural network. The results, only briefly touched on, are not as good as the dual-camera systems, but still respectable and better and faster than some other methods currently in use. So those of us still living in the dark age of single cameras have something to hope for.