All posts in “Developer”

Algolia adds search analytics so that you can optimize your search results

Algolia has made a tiny acquisition to complement its product offering. The company is acquiring the technology behind SeaUrchin.IO — the team of two behind the product is not joining Algolia.

For those not familiar with Algolia, the startup has developed an impressive search engine API. In just a few lines of code, you can boost the search box on your site with Algolia’s search.

Once integrated, Algolia provides instant, letter-by-letter search results. It feels like searching your local computer with Spotlight.
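To give a sense of what “a few lines of code” means in practice, here is a minimal sketch of a single query against Algolia’s documented REST search endpoint, written in Kotlin; the application ID, API key and index name are placeholders, and a real integration would typically use one of Algolia’s official client or InstantSearch libraries instead.

```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Minimal sketch: send one search query to Algolia's REST search endpoint.
// APP_ID, SEARCH_ONLY_API_KEY and the "products" index are placeholders.
fun searchAlgolia(query: String): String {
    val appId = "APP_ID"
    val searchKey = "SEARCH_ONLY_API_KEY"
    val url = URL("https://$appId-dsn.algolia.net/1/indexes/products/query")
    val body = """{"params":"query=$query"}"""

    val conn = url.openConnection() as HttpURLConnection
    conn.requestMethod = "POST"
    conn.doOutput = true
    conn.setRequestProperty("X-Algolia-Application-Id", appId)
    conn.setRequestProperty("X-Algolia-API-Key", searchKey)
    conn.outputStream.use { it.write(body.toByteArray()) }

    // The response is a JSON payload with ranked hits for the query.
    return conn.inputStream.bufferedReader().use { it.readText() }
}
```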

Integrating Algolia into your website is only part of the job; you also need to feed Algolia your data and customize your data set.

This is where SeaUrchin.IO’s technology becomes useful. Instead of blindly reordering your search results with custom ranking rules, you can rely on insights from Algolia’s new analytics feature.

From initial idea to actual release, the feature took Algolia six months to build. The SeaUrchin.IO acquisition was simply a way to kick off the work more quickly.

Now, you can tie conversion rates to queries. For instance, if you’re running an e-commerce website, you can see which queries are most likely to lead to a product being added to the cart. You can track clicks and conversion events, make small changes and optimize your search flow.

This way, you can fix problematic queries, tag your search results more appropriately to resurface interesting content, or add synonyms.

Customers on higher-tier plans can start using the new analytics product now. A lighter version of the analytics feature may come to smaller plans later.

And this is just the start, as analytics opens up other opportunities. For instance, Algolia customers will be able to A/B test different relevance formulas. Eventually, you could also imagine machine learning models and recommendations built across all Algolia customers.

Algolia currently has a bit less than 5,000 customers generating over a billion searches per day. Its clients include Under Armour, Twitch, Periscope, Medium and Stripe.

These hackers put together a prosthetic Nerf gun

How can you participate in a Nerf gun fight if you’re missing a hand? The ingenious Hackerloop collective of tinkerers solved that problem by putting together a prosthetic Nerf gun that you can control with your arm muscles.

In other words, Nicolas Huchet became Barret Wallace from Final Fantasy VII or Mega Man for a day. And here’s what it looks like:

[embedded content]

Let’s look at the device more closely. In particular, Hackerloop had to find a way to replace the trigger on the Nerf gun with another firing gesture.

The base gun is a Swarmfire Nerf blaster without the handle. Thanks to a custom 3D-printed casing, Huchet could wear the device as a prosthetic extension of his right arm.

The Nerf gun is then connected to an Arduino-like microcontroller that activates the gun on demand. And finally, Huchet wears three electrodes near his elbow. If he contracts his muscles, the electrodes send the electrical activity to the microcontroller.

If the voltage reaches a certain level, the microcontroller fires the Nerf gun. And of course, Huchet played around with it in the streets of Paris. Pretty neat!
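At its core, the firing logic is a threshold check on the muscle signal. Here is a rough, hypothetical sketch of that control loop, written in Kotlin purely for readability (the real device runs firmware on the Arduino-like board), with the pin handling abstracted away and all threshold and timing values invented for illustration.

```kotlin
// Hypothetical illustration of the EMG-threshold firing loop; values are invented.
fun fireControlLoop(readEmgLevel: () -> Int, setTrigger: (Boolean) -> Unit) {
    val fireThreshold = 600      // hypothetical smoothed ADC reading (0..1023)
    val firePulseMs = 250L       // how long to drive the blaster's firing circuit
    val cooldownMs = 500L        // simple debounce: one contraction = one dart

    while (true) {
        val level = readEmgLevel()          // electrode signal from the elbow
        if (level > fireThreshold) {
            setTrigger(true)                // energize the Swarmfire's motor
            Thread.sleep(firePulseMs)
            setTrigger(false)
            Thread.sleep(cooldownMs)
        }
    }
}
```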

In the past, Hackerloop has worked on other creative hacks. The team built a paper-and-foam replica of the house from “Up” and flew it over Paris, posting photos to Instagram from an onboard Raspberry Pi.

They also worked on the Nosulus Rift, a VR fart simulator to promote Ubisoft’s South Park game (The Fractured But Whole). Every time you fart in the video game, the Nosulus Rift emits a farting smell.

I tried it myself and it really stinks.

CodeStream wants to move developer chat straight to the source code

There are tons of services out there, from Slack to Jira, that are designed to help developers communicate with one another about code issues, but there is a surprising dearth of tools purpose-built to provide communication capabilities right in the IDE where developers work. CodeStream, a member of the Y Combinator Winter 2018 batch, aims to fix that.

“We are team chat in your IDE and make it easier for developers to talk about code where they code,” company co-founder and CEO Peter Pezaris explained. He says that having the conversation adjacent to the code also has the advantage of creating a record of the interactions between coders that they can learn from over time, while making it easier to onboard new developers to the team.

Unlike many YC founders, Pezaris and his co-founder and COO, Claudio Pinkus, have more than 20 years of experience building successful companies. They say the idea for this one came from a problem they experienced over and over around developer communication. “CodeStream is a story of scratching your own itch. We work as developers and my contention is that people tend to work too much in isolation,” Pezaris said.

Developers can go into GitHub and see every line of code they ever created, back to the very start of a project, but conversations around that code tend to be more ephemeral, he explained. While the startup team uses Slack to communicate about the company, they saw a need for a more specific tool built right inside the code production tool to discuss the code.

[Screenshots: CodeStream – code comment; CodeStream – merge conflict]

If you’re thinking that surely something like this must exist already, Pezaris insists it doesn’t because of the way IDEs were structured until recently. They weren’t built to plug in other pieces like CodeStream. “You would be shocked how developers are sharing code,” he said. He spoke to a team recently that took pictures of code snippets with their mobile phones, then shared them in Facebook Messenger to discuss them.

A big question is why an experienced team of company builders would want to join Y Combinator, which is typically populated by young entrepreneurs with little experience looking for help as they build a company. The CodeStream team had a different motivation. They knew how to build a company, but having spent the bulk of their professional lives in New York City, they wanted to build connections in Silicon Valley and this seemed like a good way to do it.

They also love the energy of the young startups and they are learning from them about new tools and techniques they might not have otherwise known about, while also acting as mentors to the other companies given their atypical depth of experience.

The company launched last June. They will eventually charge a subscription fee to monetize the idea. As for what motivated them to start yet another company, they missed working together and the rush involved in building a company. “I took two years off after the sale of my previous business, and I got the itch. I feel better and happier when I’m doing this,” Pinkus said. “It’s the band. We got it back together,” said Pezaris.

[embedded content]

Featured Image: PeopleImages/Getty Images

Here’s the first developer preview of Android P

Just like in the last two years, Google is using the beginning of March to launch the first developer preview of the next version of Android. Android P, as it’s currently called, is still very much a work in progress and Google isn’t releasing it into its public Android beta channel for over-the-air updates just yet. That’ll come later. Developers, however, can now download all the necessary bits to work with the new features and test their existing apps to make sure they are compatible.

As with Google’s previous early releases, we’re mostly talking about under-the-hood updates here. Google isn’t talking about any of the more user-facing design changes in Android P just yet. The new features Google is talking about, though, will definitely make it easier for developers to create interesting new apps for modern Android devices.

So what’s new in Android P? Since people were already excited about this a few weeks ago, let’s get this one new feature out of the way: Android P has built-in support for notches, those display cutouts Apple had the courage to pioneer with the iPhone X. Developers will be able to call a new API to check whether a device has a cutout, get its dimensions and then request that full-screen content flow around it.
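As a rough illustration, assuming the new DisplayCutout and window-layout additions in the P preview, checking for a cutout and opting into drawing around it could look something like this (the activity and UI adjustments are placeholders):

```kotlin
import android.app.Activity
import android.os.Bundle
import android.view.WindowManager

class CutoutAwareActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        // Ask the system to let full-screen content extend into the cutout area
        // on the short edges of the display (new in the P preview).
        val params = window.attributes
        params.layoutInDisplayCutoutMode =
            WindowManager.LayoutParams.LAYOUT_IN_DISPLAY_CUTOUT_MODE_SHORT_EDGES
        window.attributes = params
    }

    override fun onAttachedToWindow() {
        super.onAttachedToWindow()

        // Check whether this display actually has a cutout and, if so, its bounds.
        val cutout = window.decorView.rootWindowInsets?.displayCutout ?: return
        val topInset = cutout.safeInsetTop   // pixels to keep clear at the top
        val rects = cutout.boundingRects     // exact rectangles covered by the notch
        // Move critical UI below topInset / away from rects as needed.
    }
}
```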

While Google isn’t talking much about user-facing features, the company mentions that it is once again making changes to Android notifications. This time around, the focus is on notifications from messaging apps: developers get new tools for highlighting who is contacting you, plus the ability to attach photos, stickers and smart replies to these notifications.
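For developers, those changes surface through the new Person class and Notification.MessagingStyle. Here’s a minimal, hedged sketch of a P-style messaging notification; the channel ID, names and icon are placeholders, and the notification channel is assumed to already exist.

```kotlin
import android.app.Notification
import android.app.NotificationManager
import android.app.Person
import android.content.Context

// Sketch of a messaging notification using the new Person class in Android P.
// "chat_channel" is assumed to exist; names and icon are placeholders.
fun postMessageNotification(context: Context) {
    val you = Person.Builder().setName("You").build()
    val alice = Person.Builder().setName("Alice").build()

    val style = Notification.MessagingStyle(you)
        .addMessage("See you at 6?", System.currentTimeMillis(), alice)

    val notification = Notification.Builder(context, "chat_channel")
        .setSmallIcon(android.R.drawable.sym_action_chat)
        .setStyle(style)
        .build()

    context.getSystemService(NotificationManager::class.java)
        .notify(1, notification)
}
```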

A couple of new additions to the Android Autofill Framework for developers who write password managers will also make life a bit easier for users, though right now, the focus here is on better data set filtering, input sanitization and a compatibility mode that will allow password managers to work with apps that don’t have built-in support for Autofill yet.

While Google isn’t introducing any new power-saving features in Android P (yet), the company does say that it continues to refine existing features like Doze, App Standby and Background Limits, all of which it introduced in the last few major releases.

What Google is adding, though, are new privacy features. Android P will, for example, restrict access to the microphone, camera and sensors for idle apps. In a future build, the company will also introduce the ability to encrypt Android backups with a client-side secret, as well as per-network randomization of associated MAC addresses, which will make it harder to track users. This last feature is still experimental for now, though.

One of the most interesting new developer features in Android P is the multi-camera API. Since many modern phones now have dual front or back cameras (with Google’s own Pixel being the exception), Google decided to make it easier for developers to access them with a new API that exposes a fused camera stream and can switch between two or more cameras. Other changes to the camera system are meant to help developers of image stabilization and special-effects tools, and to reduce delays during initial capture. Chances are, then, that we’ll see more Frontback-style apps with the release of Android P.
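As a rough sketch, assuming the logical multi-camera additions to the Camera2 API in this preview, an app could discover which camera IDs are logical cameras backed by multiple physical sensors along these lines:

```kotlin
import android.content.Context
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager

// Sketch: list logical cameras (new in Android P) and their physical sub-cameras.
fun findLogicalCameras(context: Context) {
    val manager = context.getSystemService(CameraManager::class.java)
    for (id in manager.cameraIdList) {
        val chars = manager.getCameraCharacteristics(id)
        val caps = chars.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)

        val isLogical = caps?.contains(
            CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA
        ) == true

        if (isLogical) {
            // The physical sensors that back this fused, switchable stream.
            println("Logical camera $id -> physical IDs: ${chars.physicalCameraIds}")
        }
    }
}
```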

On the media side, Android P also introduces built-in support for HDR VP9 Profile 2 for playing HDR-enabled movies on devices with the right hardware, as well as support for images in the increasingly popular High Efficiency Image File Format (HEIF), which may just be the JPEG killer the internet has spent decades searching for (and which Apple also supports). Developers also get new and more efficient tools for handling bitmaps and drawables thanks to ImageDecoder, a replacement for the current BitmapFactory class.
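Here’s a brief sketch of what using ImageDecoder might look like — decoding an image (JPEG, PNG or HEIF) from a content Uri into a downsampled Bitmap; the Uri is assumed to point at an image the app can read.

```kotlin
import android.content.Context
import android.graphics.Bitmap
import android.graphics.ImageDecoder
import android.net.Uri

// Sketch: decode an image with the new ImageDecoder, halving its dimensions.
fun decodeScaledBitmap(context: Context, uri: Uri): Bitmap {
    val source = ImageDecoder.createSource(context.contentResolver, uri)
    return ImageDecoder.decodeBitmap(source) { decoder, info, _ ->
        // info.size reports the full dimensions before decoding; halve them here.
        decoder.setTargetSize(info.size.width / 2, info.size.height / 2)
    }
}
```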

Indoor positioning is also getting a boost in Android P thanks to support for the IEEE 802.11mc protocol, which measures Wi-Fi round-trip time and thereby allows for relatively accurate indoor positioning. Devices that support the protocol will be able to locate a user with an accuracy of one to two meters. That should be more than enough to guide you through a mall or pop up an ad when you are close to a store, but Google also notes that some of the use cases here include disambiguated voice controls.
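Assuming the Wi-Fi RTT ranging APIs in the preview (WifiRttManager and friends), requesting distance measurements to nearby 802.11mc-capable access points could look roughly like this; the scan results and executor are assumed to come from elsewhere in the app, and location permission plus RTT-capable hardware are required.

```kotlin
import android.content.Context
import android.net.wifi.ScanResult
import android.net.wifi.rtt.RangingRequest
import android.net.wifi.rtt.RangingResult
import android.net.wifi.rtt.RangingResultCallback
import android.net.wifi.rtt.WifiRttManager
import java.util.concurrent.Executor

// Sketch: measure round-trip-time distance to 802.11mc-capable access points.
fun rangeAccessPoints(context: Context, scanResults: List<ScanResult>, executor: Executor) {
    val rttManager = context.getSystemService(WifiRttManager::class.java)

    val request = RangingRequest.Builder()
        .addAccessPoints(scanResults)   // APs discovered by a regular Wi-Fi scan
        .build()

    rttManager.startRanging(request, executor, object : RangingResultCallback() {
        override fun onRangingResults(results: List<RangingResult>) {
            for (result in results) {
                if (result.status == RangingResult.STATUS_SUCCESS) {
                    // Distance comes back in millimeters; ~1-2 m accuracy is claimed.
                    println("AP ${result.macAddress}: ${result.distanceMm / 1000.0} m")
                }
            }
        }

        override fun onRangingFailure(code: Int) {
            println("Ranging failed with code $code")
        }
    })
}
```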

Once you are in that store in the mall and want to pay, Android P now also supports the GlobalPlatform Open Mobile API. That name may evoke the sense of green meadows and mountain dew, but it’s basically the standard for building secure NFC-based services like payment solutions.

Developers who want to do machine learning on phones are also in luck, because Android P will bring a couple of new features to the Neural Networks API that Google first introduced with Android 8.1. Specifically, Android P will add support for nine operations: Pad, BatchToSpaceND, SpaceToBatchND, Transpose, Strided Slice, Mean, Div, Sub and Squeeze.

But wait, there’s more. Now that Kotlin is a first-class language for Android development, Google is obviously optimizing its compiler, and for all apps, Google is also promising performance and efficiency improvements in its ART runtime.

Clearly, this is one of the more meaningful Android updates in recent years. It’s no surprise, then, that Google is only making images available to developers right now and that you won’t be able to get this version over the air just yet. As with previous releases, though, Google does plan to bring Android P to the Android beta channel (Google I/O is about two months away, so that may be the time for that). As usual, Google will likely introduce a couple of other new features over the course of the beta period, and at some point it’ll even announce the final name for Android P…

Featured Image: Bryce Durbin

The day that changed your phone forever

Whether you’re a developer who’s working on mobile apps, or just someone enjoying the millions of apps available for your phone, today is a very special day.

It’s the 10-year anniversary of the original iPhone SDK. I don’t think it’s an overstatement to say that this release changed a lot of people’s lives. I know it changed mine and had a fundamental impact on this company’s business. So let’s take a moment and look back on what happened a decade ago.

There are a lot of links in this piece, many of which were difficult to resurrect on today’s web. Make sure you take the time to explore! I’ve also tried to avoid technical jargon, so even if you don’t know your Swift from a hole in the ground, you can still follow along.

Touching the Future

For many of us, holding that first iPhone at the end of June 2007 was a glimpse of the future. We all wanted to know what was inside the glass and metal sitting in our pockets.

Apple had told us what the device could do but said very little about how it was done. We didn’t know anything about the processor or its speed, how much memory was available, or how you built apps. In many ways, this new device was a black, and silver, box.

As developers, we wanted to understand this device’s capabilities. We wanted to understand how our software design was about to change. We were curious and there was much to learn.

And learn we did. We called it Jailbreaking.

[embedded content]

Breaking out of jail

Discoveries happened quickly. It took just a matter of weeks before the filesystem was exposed. A couple of months later, the entire native app experience was unlocked. Development toolchains were available and folks were writing installers for native apps.

The first iPhone app created outside of Apple.

This rapid progress was made possible thanks to the tools used to build the original iPhone. Apple relied on the same infrastructure as Mac OS. They chose a familiar environment to expedite their own development, but that same familiarity allowed those of us outside Cupertino to figure things out quickly.

Hello world.

For example, much of the software on the iPhone was created using Objective-C. Mac developers had long used a tool called class-dump to show the various pieces of an app and learn how things communicated with each other. After getting access to the first iPhone’s apps and frameworks, this software gave great insight into what Apple had written.

The most important piece was a new thing called UIKit. It contained all the user interface components, like buttons and table views. Since they were similar to the ones we’d used on the Mac, it took little effort to build things that responded to taps and scrolling.

Another important piece of the puzzle was the operating system: Unix. This choice by Apple meant that a lot of open source software was immediately available on our iPhones. We could use it to build our apps, then copy them over to the phone, and, most likely, view the content of LatestCrash.plist in /var/logs/CrashReporter.

I distinctly remember the first time I got a shell prompt on my iPhone and used uname to see the system information. I was home.

Early app development

I was not alone. Thousands of other developers were finding that the inside of this new device was just as magical as the outside. It shouldn’t come as a surprise to hear that there was an explosion of iPhone app development.

One of the pivotal moments for the burgeoning community came at an independent developer conference called C4[1]. Held in August 2007, the conference drew many attendees who had the new device and were discovering its capabilities. Most of us were also experienced Mac developers. We had just been to WWDC and heard Apple’s pitch for a “sweet solution”.

Amid this perfect storm, there was an “Iron Coder” competition for the “iPhone API”. The conference organizer, Jonathan “Wolf” Rentzsch, asked us to “be creative”. We were.

My own submission was a web app that implemented a graphing calculator in JavaScript. It epitomized what we all disliked about Apple’s proposal a few months earlier: a clunky user interface that ran slowly. Not the sandwich most of us were hoping for…

On the other hand, the native apps blew us away. The winner of the contest was a video conferencing app written by Glen and Ken Aspeslagh. They built their own front-facing camera hardware and wrote something akin to FaceTime three years before Apple. An amazing achievement considering the first iPhone didn’t even have a video camera.

[embedded content]

But for me, the app that came in second place was the shining example of what was to come. First, it was a game and, well, that’s worked out pretty well on mobile. But more importantly, it showed how great design and programming could take something from the physical world, make it work seamlessly on a touch screen and significantly improve the overall experience.

Lucas Newman and Adam Betts created the Lights Off app a few days before C4. Afterwards, Lucas helped get me started with the Jailbreak tools, and at some point he gave me the source code so I could see how it worked. Luckily, I’m good at keeping backups and maintaining software: your iPhone X can still run that same code we all admired 10 years ago!

Lucas Newman presenting Lights Off at C4[1]. (Source: John Gruber)

If you’re a developer who uses Xcode, get the project that’s available on GitHub. The project’s Jailbreak folder contains everything Lucas sent me. The Xcode project adapts that code so it can be built and run – no changes were made unless necessary. It’s much easier to get running than the original, but please don’t complain about the resolution not being 1-to-1.

In the code you’ll see things like a root view controller that’s also an application delegate: remember that we were all learning how to write apps without any documentation. There’s also a complete lack of properties, storyboards, asset catalogs, and many other things we take for granted in our modern tools.

If you don’t have Xcode, you’re still in luck. Long-time “iPhone enthusiast” Steve Troughton-Smith sells an improved version on the App Store. I still love this game and play it frequently: Its induction into iMore’s Hall of Fame is well-deserved.

Now I was armed with tools and inspiration. What came next?

The Iconfactory’s first apps


In June 2007, we had just released version 2.1 of our wildly popular Mac app for Twitter. It should have been pretty easy to move some Cocoa code from one platform to another, right?

The first version of Twitterrific on the iPhone. And pens. And slerp.

Not really. But I was learning a lot and having a blast. The iPhone attracted coders of all kinds, including our own Sean Heber. In 2007, Sean was doing web development and didn’t know anything about Objective-C or programming for the Mac. But that didn’t stop him from poking around in the class-dump headers with the rest of us and writing his first app.

But he took it a step further with a goal to write an app for every day of November 2007 (inspired by his wife doing NaNoWriMo). He called it iApp-a-Day and it was a hit in the Jailbreak community. The attention eventually landed him a position at Tapulous, alongside the talented folks responsible for the iPhone’s first hit franchise: Tap Tap Revenge.

Over the course of the month, Sean showed that the iPhone could be whatever the developer wanted it to be. Sure, it could play games, but it could also keep track of your budget, play a tune, or help you hang a painting.

Screenshots from Sean Heber’s iApp-a-Day.

Both Sean and I have archives of the apps we produced during this period. The code is admittedly terrible, but for us it represents something much greater. Reading it brings back fond memories of the halcyon days where we were experimenting with the future.

There were a lot of surprises in that early version of UIKit. It took forever to find the XML parser because it was buried in the OfficeImport framework. And some important stuff was completely missing: there was no way to return a floating point value with Objective-C.

There were also strange engineering decisions. You could put arbitrary HTML into a text view, which worked fine with simple tags like <b>, but crashed with more complex ones. Views also used LKLayer for compositing, which was kinda like the new Core Animation in Mac OS Leopard, but not the same. Tables also introduced a new concept called “cell reuse” which allowed for fast scrolling, but it was complex and unwieldy. And it would have been awesome to have view controllers like the ones just released for AppKit.

But that didn’t stop us from experimenting and learning what we could do. And then something happened: we stopped.

A real SDK

Apple had worked its butt off to get the iPhone out the door. Those of us who were writing Jailbreak apps saw some warts in that first product, but they didn’t matter at all. Real artists ship. Only fools thought it sucked.

Twitterrific’s design at the App Store launch.

Everyone who’s shipped a product knows that the “Whew, we did it!” is quickly followed by a “What’s next?”

Maybe the answer to that question was influenced by all the Jailbreaking, or maybe the managers in Cupertino knew what they wanted before the launch. Either way, we were all thrilled when an official SDK was announced by Steve Jobs, a mere five months after the release of the phone itself.

The iPhone SDK was promised for February of 2008, and given the size of the task, no one was disappointed when it slipped by just a few days. The release was accompanied by an event at the Town Hall theater.

Ten years ago today was the first time we learned about the Simulator and other changes in Xcode, new and exciting frameworks like Core Location and OpenGL, and a brand new App Store that would get our products into the hands of customers. Jason Snell transcribed the event for Macworld. There’s also a video.

Our turn to be real artists

After recovering from all the great news, developers everywhere started thinking about shipping. We didn’t know exactly how long we would have, but we knew we had to hustle.

Winning an Apple Design Award. Thank you. (Source: Steve Weller)

In the end, we had about four months to get our apps ready. Thanks to what The Iconfactory learned during the Jailbreak era, we had a head start understanding design and development issues. But we still worked our butts off to build the first iPhone’s Twitter app.

Just before the launch of the App Store, Apple added new categories during its annual design awards ceremony. We were thrilled to win an ADA for our work on the iPhone.

How thrilled? The exclamation I used while downloading the new SDK was the same as getting to hold that silver cube.

After that, our app was among the first to be featured in the App Store, and it ranked high in the early charts.

We knew we were a part of something big. Just not how big.

The journey continues

The second version of Twitterrific and some guy.

The Iconfactory’s first mobile app entered a store where there were hundreds of products. There are now over 2 million.

We now sell mobile apps for consumers and tools for the designers & developers who make them.

We now do design work for mobile apps at companies large, medium, and small.

We now develop mobile apps for a select group of clients. (Get in touch if you’d like to be one of them.)

A lot can happen in a decade.

But one thing hasn’t changed. Our entire team is still proud to be a part of this vibrant ecosystem and of the contributions we make to it. Here’s to another decade!