A New Kind of Creepy

– This post is from our head of design, Jenni McKienzie

A decade ago I co-authored a paper about customization of IVRs, and we discussed how too much can be creepy. My, how times have changed. We now expect every system we interact with to know who we are and what our history with that system or company is. It’s not that we want them to know everything, but we want them to know all the stuff that makes our lives easier.

We even like it when our voice assistants make jokes. A friend asked her voice assistant where the nearest hair salon was, and the response was that there were five salons nearby, “but I think you look great now.” Sometimes we deliberately ask questions in the hope of getting a funny response, and sometimes a jokey response to a straightforward inquiry makes us laugh even more. I started wondering if there is such a thing as creepy, know-too-much voice interaction anymore.

I think the answer is yes. I hinted at one aspect above: we want voice assistants to know and act on what's relevant. When you go to pull cash out of the ATM and it wishes you a happy birthday (true story), that's weird because it isn't relevant. Of course the bank has the information, but it's not for this purpose. The bank has your address too, but you don't see ATMs dispensing weather advice based on ZIP code. An age-old problem with automation is that people use tools to accomplish a task, and anything that gets in the way of that task is seen as an intrusion.

Crossover knowledge is unnerving as well. We've become somewhat used to it happening online: you look at new shoes, and then for weeks every sidebar ad you see is for those same shoes. We get it, cookies. Yet it really weirds people out when they have a verbal discussion with a person about a topic (in range of their smart speaker) and then ads for that same thing start popping up. There is a higher expectation of privacy with devices that are listening. When we type something into our computer, we are more conscious of giving permission to use that information. A voice assistant that's always listening for its wake word should never act on anything that's said without that invocation being given.

The third thing that makes a voice assistant seem creepy is when it does something that feels like faking a human connection. For a while, I would ask about the weather, and after giving me that information, the assistant would say something like, "Have a good morning." It ruffled my feathers so much I quit asking. (As an aside, the assistant no longer says that, so I assume I wasn't alone in my unease.) Why did it bother me so much? It's the kind of thing we say to each other to show we care, and coming from what we know is a machine, it's completely disingenuous.

In short, we’re OK with cute and funny, but voice assistants should steer clear of emotional intent and stay on topic. People talk about Big Brother and big data a lot less these days, but there is still a line. We can still be creeped out by our technology.
