Today, during its annual September hardware event, which was held virtually for the first time, Amazon announced updates across its portfolio of Alexa developer tools and frameworks. Those arrived alongside a slew of new Alexa capabilities, including Alexa Sidekick, a feature that lets Alexa read along with kids. Amazon also announced Alexa Voice Profiles for kids, which automatically recognizes a kid’s voice and switches to a kid-friendly mode.
Starting today, Amazon says that Alexa will ask users questions to help it understand what they mean. Alexa can remember, for instance, that “Dad’s reading mode” means to set the living room lights to 60% brightness. The feature is personalized to individual customers, and Amazon says it will work for smart home concepts and actions to begin with before expanding to other domains.
Alexa will also soon be able to change intonation depending on the context of conversations. Building on Amazon’s advances in neural text-to-speech technology, the assistant will stress certain words and even insert pauses and breaths. This will roll out in the coming months, according to Alexa VP and head scientist Rohit Prasad.
An improvement to Follow-Up Mode, which was introduced a while back, lets multiple people join conversations with Alexa without having to use a wake word for every command. Alexa leverages acoustic, linguistic, and even visual cues to determine whether a request is directed toward it, Prasad says.
Alexa Guard has expanded, as well, with detection for things like a baby crying, a barking dog, and the sound of snoring. More than 2 million customers are now using it, Amazon says, and it expects at least a portion of those will enroll in Alexa Guard Plus, a new premium offering. Alexa Guard Plus adds detection for the sound of footfalls, doors closing and opening, and more, and it includes 24/7 monitoring with access to an emergency hotline. A complementary feature lets customers add “high-level” relationships with family members to get an activity feed that shows when those family members interact with smart home devices, as a way to check in on people with mobility and health issues.
The new tools and features come on the heels of others launched at Amazon’s Alexa Live event in July. There, the company rolled out deep neural networks aimed at making Alexa’s natural language understanding more accurate for custom apps, as well as an API that allows the use of web technologies to build gaming apps for select Alexa devices. Amazon also launched Alexa Conversations in beta, a deep learning-based way to help developers create more natural-feeling apps with fewer lines of code. And it debuted a new service in preview, Alexa for Apps, that lets Alexa apps trigger actions like searches within smartphone apps.