Sometimes Siri doesn’t understand me. And it looks like I’m not alone.
Voice recognition software that powers many of our smart devices continues to struggle to understand accents of all kinds. Although non-native English speakers are the first category that springs to mind, smart virtual assistants run into trouble with strong regional accents of native speakers as well. Southern accents in the United States and Scottish accents abroad seem to pose particular difficulties.
Voice recognition algorithms will only grow in their ability to accurately decode human speech as we feed them more data. Amazon recently released a new feature for its Echo devices called ‘Cleo’ in an attempt to gather even more of it. Through rounds of questions, users are prompted to ‘teach’ Alexa how to say different phrases in a new language. The same system can be used if Alexa struggles to understand your particular accent: keep teaching Alexa how you pronounce certain words, and she will become increasingly capable of understanding a wider variety of accents. By making this feature a game, Amazon hopes to entice users to use their Alexa devices more frequently.
It is inevitable that algorithms will determine more and more of our lives – how we drive, what we watch, what we read. It seems a small nuisance at the moment if my smart speaker can’t understand that I want it to play a particular song, but it would be a far more serious problem if my car relied on voice commands to operate.
The firm Adobe Analytics projects that nearly half of Americans (48%) will own a smart speaker after the 2018 holiday season. Considering the increasingly diverse demographic makeup of the country, these speaker systems will certainly be contending with more accents than ever before.