When AI Chatbots Are a Little Too Personal

Researchers find consumers are less likely to engage with this fast-spreading technology when they anticipate feeling embarrassed.

By Nancy Sheehan

You’re searching online for a pain relief remedy. A friendly little chatbot pops up with a smile and announces: “Hi there, I’m Emma. I’ll be happy to help you find the perfect hemorrhoid cream for your needs.”

You pause. This is just a little too weird. At some level you know Emma isn’t real, but you don’t want to chat with “Emma” about your embarrassing health issue.

Bye, Emma. 

Businesses and medical websites increasingly use AI-powered chatbots to help customers find information and products, but there can be unintended consequences when the search involves subjects that people might find embarrassing.


Research by Lagnajita Chatterjee, assistant professor in the Department of Business Administration and Economics, and her collaborators at the University of Illinois Chicago, where she earned her PhD in 2020, has found that consumers conducting an online search are less likely to use a chatbot when they anticipate feeling embarrassed about the search than when they do not. This effect is driven by a sense of “social presence” while interacting with the technology, the researchers found.

Chatbots create social presence, or the feeling of a human interaction, through the way they interact with users, respond to their questions, and address their concerns, as well as through their design features. Their visual appearance, speech synthesis, discourse structure, and reasoning increasingly make these chatbots seem humanlike, which leads users to anthropomorphize them.

Turns out, that’s not always a good thing.

“It was always assumed that integrating chatbots and AI into any business is a positive, forward-moving thing,” Dr. Chatterjee said. “But the question that we were grappling with is, ‘Is it always good? Are there situations where we don’t want to really engage with the chatbot?’”

To answer those questions, the researchers gave people hypothetical scenarios and asked them to respond as if they were in those situations. Examples included seeking information on personal care products like hemorrhoid cream, on contraceptives such as condoms, on sexually transmitted diseases, or on financially sensitive topics. Study participants were then asked whether they would prefer using an AI chatbot or a more standard search engine for those types of queries.

“And what we find is that people prefer to use the question-based search engine, or, if the only option is to use AI, they don’t want to use it,” Dr. Chatterjee said. “They don’t want to engage with it because of a fear of being judged because they somehow almost feel like the AI has a social presence, like it’s a human.”

Rationally, the research participants know that a chatbot is not a real person, she said, but such personal interactions with technology are still new to most people.

“There’s a lot of research out there that shows that, as humans, we haven’t evolved to where our behavior has a script for how we engage with computers, so when we are engaging with chatbots and AI, our tendency is to use our judgment about humans and apply that to chatbots and AI,” she said.

Dr. Chatterjee and her collaborators at the University of Illinois Chicago plan further research to understand why that sense of social presence while interacting with chatbots drives the tendency to avoid them when experiencing embarrassment. They also hope to identify design elements that will encourage the use of chatbots among users during embarrassing situations. Their ongoing research is funded in part by a grant from the Worcester State Foundation.

The researchers were surprised by their initial findings, Dr. Chatterjee said. “We had thought that people would probably be like, ‘Chatbots, fine. We are not talking to a human, so we are safe.’ But we have run nine studies at this point, and consistently we got results where people did not want to use human-like chatbots to look for information related to embarrassing things.”

Since just about every business is trying to build a chatbot and other AI capabilities these days, the researchers recommend that companies be careful about how they implement this new technology. If a business involves more sensitive products or medical information, it might consider a more mechanical chatbot that doesn’t use a name or express emotions, she said.

“That’s not to say that you absolutely cannot have a chatbot or AI if you have a certain kind of business,” she said. “It’s more to say, ‘Think about how humanlike your chatbot capability needs to be because there’s a wide range, and depending on what kind of business someone has, it’s important to figure out how you want to implement that AI and whether it should be implemented at all.’”
