All posts in “Artificial Intelligence”

Gmail for iOS will now use AI to filter push notifications

The new Gmail AI feature is part of a larger redesign announced a few months ago

Image: NurPhoto via Getty Images

Google wants to use AI to help determine which emails you receive as notifications.

The Gmail app on iOS (it’s not yet available on the Android version) now offers users an option to only get notifications for “high priority emails.” The feature uses artificial intelligence to determine which messages recipients would deem important and lets them turn off notifications for all the others.

“Notifications are only useful if you have time to read them — and if you’re being notified hundreds of times a day, chances are, you don’t,” read the update announcement. “That’s why we’re introducing a feature that alerts you only when important emails land in your Gmail inbox, so you know when your attention is really required.”

Google just released an AI ethics guide on the heels of ending a controversial Pentagon contract to help develop military technology, a project that Google's own developers had protested because they objected to building technology they disagreed with.

But the company's deep push into AI, with a bevy of products that take over distinctly human tasks such as scheduling appointments, has stirred backlash over privacy.

And for this new feature to reduce the number of push notifications people receive, the AI has to understand an email's contents in order to decide whether the recipient should be notified.

This feature is part of the massive Gmail redesign announced a few months ago, which promised new capabilities for the smartphone apps. An updated desktop Gmail recently became available, and by the end of this year users will no longer be able to opt out of it.

To enable the new AI-based notifications, users tap the menu at the top left of the app, select Settings, choose the email account they want, tap Notifications, and then select "high priority only."

The feature will hit all Gmail apps on iOS devices within one to three days.


Amazon’s DeepLens is one smart camera

Amazon just launched a camera that thinks.

The AWS DeepLens AI Deep Learning Video Camera, also called the DeepLens, was introduced in November during the Amazon Web Services (AWS) re:Invent conference. Now, after much delay (it was supposed to begin shipping in "early 2018"), it's finally available for $249.

DeepLens is similar to the AI-powered Google Clips camera, but while Clips is targeted at consumers, DeepLens is a new toy for developers.

According to Amazon’s website, it’s the first video camera designed to teach deep learning basics and optimized to run machine learning models on the camera. This kind of machine learning is usually done by gathering information on one device and computing in the cloud — as opposed to doing it all on one gadget.

It ultimately helps people create their own deep learning tools and has six sample projects built into it: object detection, hot dog not hot dog (hey Silicon Valley, your sitcom has fans at Amazon), cat and dog, artistic style transfer, activity detection, and face detection.

The camera is currently optimized to run models built with Apache MXNet, but models written in TensorFlow and Caffe will soon be compatible with it as well.
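As a rough illustration of the kind of on-device workload DeepLens targets, here is a minimal MXNet/Gluon sketch that loads a pretrained image classifier and runs inference on a single frame. It is not the DeepLens SDK; the ResNet model choice and the frame path are assumptions for the example.

```python
# Illustrative only: a pretrained Gluon classifier run on one frame, the sort
# of MXNet model DeepLens is designed to execute on the device itself.
import mxnet as mx
from mxnet.gluon.data.vision import transforms
from mxnet.gluon.model_zoo import vision

# Standard ImageNet preprocessing: resize, crop, scale to [0, 1], normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

net = vision.resnet18_v1(pretrained=True)       # downloads weights on first use
frame = mx.image.imread("frame.jpg")            # hypothetical camera frame
batch = preprocess(frame).expand_dims(axis=0)   # shape: (1, 3, 224, 224)

probs = mx.nd.softmax(net(batch))[0]
top5 = probs.topk(k=5).astype("int32").asnumpy()
print("Top-5 ImageNet class indices:", top5)
```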

The back of the Amazon DeepLens camera.

Image: amazon

The front of the Amazon DeepLens camera.

Image: amazon

The camera runs Ubuntu, the free, open-source operating system, on a quad-core, four-thread Intel Atom X5 processor. It does not currently work with Alexa, but it is equipped with 8GB of RAM, 16GB of storage, microphones, a micro HDMI port, two USB ports, a speaker, a headphone jack, a 4-megapixel camera that shoots 1080p video, and an Intel ninth-generation graphics engine.

DeepLens has a camera comparable to that of a webcam, but its computing capabilities are basically those of a full computer. DeepLens currently only ships within the United States.


Tableau gets AI shot in the arm with Empirical Systems acquisition

When Tableau was founded back in 2003, not many people were thinking about artificial intelligence as a way to drive analytics and visualization. But the world has changed, and the company recognized that it needed new talent to keep up. Today, it announced it is acquiring Empirical Systems, an early-stage startup with AI roots.

Tableau did not share the terms of the deal.

The startup was born just two years ago from research on automated statistics at the MIT Probabilistic Computing Project. According to the company website, “Empirical is an analytics engine that automatically models structured, tabular data (such as spreadsheets, tables, or csv files) and allows those models to be queried to uncover statistical insights in data.”
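To make the idea concrete, here is a toy sketch of what "querying a model of tabular data for statistical insights" might look like. It is not Empirical's engine or API; the CSV path and columns are hypothetical, and the sketch simply loads a table and surfaces the strongest pairwise correlations as candidate insights.

```python
# Toy illustration, not Empirical's technology: load tabular data and report
# the strongest pairwise correlations between numeric columns as "insights".
import numpy as np
import pandas as pd

df = pd.read_csv("sales.csv")            # hypothetical spreadsheet/CSV export
numeric = df.select_dtypes("number")

corr = numeric.corr()
upper = np.triu(np.ones(corr.shape, dtype=bool), k=1)   # count each pair once
pairs = corr.where(upper).stack().sort_values(key=np.abs, ascending=False)

for (a, b), r in pairs.head(5).items():
    print(f"Candidate insight: {a} and {b} move together (r = {r:+.2f})")
```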

The product was still in private beta when Tableau bought the company, and it is currently delivered as an engine embedded inside other applications. That sounds like something that could slot nicely into the Tableau analytics platform. What's more, Tableau will be bringing the engineering team on board for its AI expertise while taking advantage of the underlying technology.

Francois Ajenstat, Tableau’s chief product officer says this ability to automate findings could put analytics and trend analysis into the hands of more people inside a business. “Automatic insight generation will enable people without specialized data science skills to easily spot trends in their data, identify areas for further exploration, test different assumptions, and simulate hypothetical situations,” he said in a statement.

Richard Tibbetts, Empirical Systems CEO, says the two companies share this vision of democratizing data analysis. “We developed Empirical to make complex data modeling and sophisticated statistical analysis more accessible, so anyone trying to understand their data can make thoughtful, data-driven decisions based on sound analysis, regardless of their technical expertise,” Tibbetts said in a statement.

Rather than moving the team to Seattle, where Tableau has its headquarters, the company intends to leave the Empirical Systems team in place and establish an office in Cambridge, Massachusetts.

Empirical was founded in 2016 and has raised $2.5 million.

Jane.ai raises $8.4M to bring a digital assistant into your office software

Even as AI assistants delve deeper into consumer hardware, companies still seem a bit reluctant to bring them into their office software workflows.

Jane.ai is aiming to bring natural language processing and intelligence into an employee-facing solution that lets people query a digital assistant to give them information about documents, meetings and general company knowledge.

The St. Louis startup announced today that it has raised an $8.4 million Series A from private investors to power this vision.

Jane lives inside apps like Slack and Skype for Business (in addition to its own web app), where users are already chatting with co-workers and may need to quickly surface information they don't have ready access to. With Jane, employees can message the assistant directly, and the system will comb through the documents and apps that have been uploaded or connected to it in order to find answers. You can ask for a file by name and quickly get a link. You can ask for a specific department's phone number and Jane will Slack it to you.
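For a sense of the interaction model, here is a minimal Slack bot sketch in the spirit of what Jane.ai describes. It is not Jane's actual implementation; it assumes the slack_bolt library, bot and app tokens in the environment, and a hypothetical lookup_phone_number helper standing in for the connected company data.

```python
# Illustrative sketch only: a Slack bot that answers "phone number for <dept>"
# questions in-channel, using Bolt for Python's Socket Mode.
import os
import re

from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

app = App(token=os.environ["SLACK_BOT_TOKEN"])

# Stand-in for the documents/apps a real assistant would have connected.
DIRECTORY = {"it": "+1 555 0100", "hr": "+1 555 0101"}

def lookup_phone_number(department: str) -> str:
    """Hypothetical lookup against connected company knowledge."""
    return DIRECTORY.get(department.lower(), "unknown")

@app.message(re.compile(r"phone number for (\w+)", re.IGNORECASE))
def answer_phone_question(context, say):
    department = context["matches"][0]
    say(f"The {department} phone number is {lookup_phone_number(department)}.")

if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```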

The startup currently supports integrations with Office 365, Slack, Salesforce and Zenefits, and has more partnerships “on the horizon.”

The big focus will be offloading some of the more basic questions you would ordinarily ask HR or IT, so you don't have to bombard the same person's inbox to get the latest phone number or the workaround for a particular problem.

The Jane.ai team

The basic goal is for the system to learn over time: designated admins can be called on to answer questions Jane doesn't yet have an answer for, so Jane learns from the company's experts and gets better informed as it goes.

“Pitting humans against machines is one of the big design flaws of a lot of AI systems,” Jane.ai CEO David Karandish told TechCrunch.

The startup will also maintain a general knowledge base of quickly accessible information that grows over time. It takes time for these solutions to gather enough information to be worth turning to, but Jane.ai hopes that by cleaning up each customer's data, it can answer many of employees' most frequent questions on day one.

Google Translate is getting a big upgrade with improved offline mode

Google Translate is about to get a lot more useful to those with spotty internet connections.

Google is rolling out an update to the Google Translate app for Android that enables offline mode on more phones because it takes up less storage. That's thanks to a new on-device algorithm that uses the neural machine translation system developed in collaboration with Google's deep learning team, Google Brain.

That means translations now take the whole phrase into consideration instead of translating small chunks at a time, says Google Translate product manager Julie Cattiau, who traveled the world to understand what improvements people wanted from the tool.

“Now our motto when users send us a query is to take into account higher queries every step of the translation,” Cattiau tells Mashable. “We’re just taking into account the whole phrase, the context of the phrase and sometimes every part of the paragraph into the translation.”

Cattiau said the team rebuilt Google Translate from the ground up within the past year and a half, which resulted in a “more perfect technology” than what they had built over the previous decade of Google Translate’s existence.

Phrase-based machine translations versus neural machine translations

Image: Google

Cattiau went to places like India and Indonesia, where Google Translate sees the heaviest use (Google says 90 percent of the service's translations happen outside the United States). She found that people often didn't have internet access when they wanted to translate something, underscoring the demand for offline capabilities.

Although people could previously download languages on Google Translate, the files were usually too large for the cheaper smartphones that are common in those markets. With the update, each language is now just 35-45 MB, vastly increasing the tool’s accessibility.

But not all features work in offline mode. Without an internet connection, the app won’t support the augmented reality camera translations — previously known as Word Lens — or handwritten translations, but Cattiau said developers are looking into adding those in a future update.

Google has been testing offline translations with a small chunk of users and will roll it out in the coming weeks. The new function is available in 59 languages total ranging from Telugu to Kannada — check out Google Translate’s website for the full list.
