Yes, People Are Listening…They Have To

– This post is from our CEO Susan Hura. Enjoy! The Banter Team.

I just read an article about the shocking revelation that people are listening to the recordings made by smart speakers. The thing is, if you understand how natural language processing works, this isn’t surprising at all.

Of course, smart speaker manufacturers should tell consumers in clear language that people may listen to their recordings, continue to allow consumers to refuse permission for this, and ensure that any recordings are stripped of all personally identifiable data.

But humans are always going to play a role in improving natural language processing. Decoding spoken language is different from most other jobs we ask computers to do. There is no objectively correct answer that an NLP algorithm can compute; we can only judge the correctness of a result by comparing it to what a human hears.

When humans process spoken language, we aren’t simply analyzing the sounds we hear and trying to match them against the words we know. We’re interpreting the language we hear in context and (usually) rejecting the matches that don’t make sense. But even today’s most sophisticated NLP sometimes comes up with crazy matches like “I couldn’t doozy Aztec Lee back every single one of those options.” Believe it or not, the algorithm only got two words wrong when I dictated this text, and yet the result is meaningless. I guarantee that any person listening to what I said would never have come up with this interpretation. They would’ve heard “I could enthusiastically back every single one of those options.”
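To make that comparison concrete, here is a minimal Python sketch of word error rate, the standard way speech teams score a recognizer’s output against a human reference transcript. The two sentences are the ones from this post; the function itself is a generic illustration, not any particular vendor’s code.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Return WER: (substitutions + insertions + deletions) / reference length."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()

    # Standard dynamic-programming edit distance, computed over words.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i          # deleting all remaining reference words
    for j in range(len(hyp) + 1):
        dist[0][j] = j          # inserting all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution or match
    return dist[len(ref)][len(hyp)] / len(ref)


# What the human heard vs. what the recognizer produced.
reference = "I could enthusiastically back every single one of those options"
hypothesis = "I couldn't doozy Aztec Lee back every single one of those options"

print(f"WER: {word_error_rate(reference, hypothesis):.0%}")  # prints "WER: 40%"
```

Notice that the score only exists because a human wrote down what was actually said; without that reference transcript, there is nothing to measure the algorithm against.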

Consumers can take steps to make their smart speakers as private as possible, but remember: NLP algorithms can only be improved by comparing their results to what humans hear.
