
Codota raises $2M from Khosla as autocomplete for developers


In recent years, GitHub has fundamentally changed developer workflows. By centralizing code on an easily accessible platform, the company rapidly reshaped how people write and share code. Following in these footsteps, Israeli startup Codota wants to further optimize workflows for the often-neglected developer community, this time with machine intelligence. The company is announcing a $2 million seed round from Khosla Ventures for its autocomplete tool that helps engineers push better code in less time.

Codota interfaces with integrated development environments like Eclipse, expanding on intelligent code completion. Instead of just offering up brief suggestions of intended code, Codota can recommend larger chunks.

Co-founders Dror Weiss and Eran Yahav took advantage of open source code on the internet from GitHub and StackOverflow to build Codota. All of this public code was fed into machine learning models to enable them to recognize higher-level meaning across blocks of code.

The Codota team at its Tel Aviv headquarters

Programming languages share a lot of structural similarities with their distant spoken cousins. Words can be arranged in infinitely many ways to express a single thought or sentiment. Likewise, the same command can be represented in code in a number of ways. This is why it’s so critical that Codota understands the macro picture of what code is doing.

Of course natural language and code are not completely analogous. The team explained in an interview that in natural language processing, meaning is determined by looking at nearby words. Programs are more structured and meaning isn’t always strongly correlated with locality. So instead of just training on text, Codota also focused on the behaviors of a program.

Aside from improving speed and accuracy, Codota can help with discovery and education. Because Codota has been trained on millions of API implementations, it can help offer up best practices to developers. When open side-by-side with an IDE, the tool can highlight irregularities and demonstrate better ways to write code, lessons often pulled straight from the original creators of libraries.

The startup makes its money by allowing enterprises to keep their internal code private while benefitting from Codota’s insights. Right now the tool is limited to Java, but in the future additional languages will be added.

Featured Image: maciek905/Getty Images

Amazon wants to make Alexa sound like your human friends

Alexa can now whisper sweet nothings in response to your demands to order new toilet paper.

Image: christina ascani/mashable

Alexa is always there for those of us with Amazon’s AI-enabled speakers, but when she speaks, sometimes it’s hard to tell that she really cares. 

We’re not saying we need to all fall in love with our smart assistants, Her-style, but hearing a little bit more warmth in Alexa’s automated responses could make the human-AI overlord relationship so much better. Or, you know, just make us more comfortable allowing a sophisticated AI system to learn just about everything there is to know about us.

Fortunately, Amazon has heard our cries for machine validation, and the company is finally giving Alexa the capability to speak like the true AI friend she is.

In a new blog post on Thursday, Amazon announced five new Speech Synthesis Markup Language (SSML) protocols for developers to use when they create their apps for Alexa. The new tags, which are employed through the skills’ code, cover a wide range of responses previously unavailable in the app script. 

These additions will help app developers mold Alexa’s speech patterns more specifically, allowing her to whisper, bleep out dirty words, or even change the pitch and speed of her responses. 

After all, if Alexa is going to critique all of our outfits, she should at least be able to disappoint us nicely.

Developers in the U.S., UK, and Germany will soon have the five new SSML tags at their disposal to make Alexa’s voice more natural when users engage her for their skills.

Whispers will make Alexa speak more softly, while expletive beeps will give her the chance to curse without offending your grandma. The sub tag will let the AI say something other than what’s written in the code, and the emphasis and prosody tags will make Alexa’s voice more natural, changing the volume, pitch, and rate of her speech.
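For reference, these tags correspond to standard Speech Synthesis Markup Language elements wrapped in a skill’s response text. A minimal sketch of what a response combining them might look like (element names follow Amazon’s SSML reference; the spoken text itself is illustrative):

```xml
<speak>
    <!-- Whisper: softens Alexa's delivery for this span -->
    <amazon:effect name="whispered">I ordered more toilet paper.</amazon:effect>

    <!-- Expletive bleep: the wrapped word is spoken as a beep -->
    You ran out, you <say-as interpret-as="expletive">darn</say-as> hoarder.

    <!-- Substitution: speak the alias instead of the written text -->
    It arrives via <sub alias="two-day shipping">Prime</sub>.

    <!-- Emphasis and prosody: adjust stress, volume, pitch, and rate -->
    <emphasis level="strong">Seriously.</emphasis>
    <prosody volume="soft" pitch="+10%" rate="slow">Be nicer next time.</prosody>
</speak>
```

A skill returns markup like this in its response payload, and the text-to-speech engine renders the tagged spans accordingly.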

Amazon created a quiz game template on GitHub for developers to use to play with the new tags, giving them a chance to test them out before unleashing the new abilities in their own skills.

So be nice to Alexa, Amazon users. The next time you ask her to do something, she might offer a friendly response … or she could just cuss you out in a series of extra-endearing bleeps.  
