A chatbot that asks questions could help you spot when it makes no sense
Fernanda Viégas, a professor of computer science at Harvard University who was not involved in the study, says she is excited to see a fresh take on explaining AI systems, one that not only gives users insight into the system’s decision-making process but does so by questioning the logic the system used to reach its decision.
“Given that one of the main challenges in the adoption of AI systems tends to be their opacity, explaining AI decisions is important,” says Viégas. “Traditionally, it’s been hard enough to explain, in user-friendly language, how an AI system comes to a prediction or decision.”
Chenhao Tan, an assistant professor of computer science at the University of Chicago, says he would like to see how the method works in the real world—for example, whether AI can help doctors make better diagnoses by asking questions.
The research shows how important it is to add some friction to interactions with chatbots, so that people pause before making decisions with the AI’s help, says Lior Zalmanson, an assistant professor at Tel Aviv University’s Coller School of Management.
“It’s easy, when it all looks so magical, to stop trusting our own senses and start delegating everything to the algorithm,” he says.
In another paper presented at CHI, Zalmanson and a team of researchers at Cornell, the University of Bayreuth, and Microsoft Research found that even when people disagree with what AI chatbots say, they still tend to use the output because they think it sounds better than anything they could have written themselves.
The challenge, says Viégas, will be finding the sweet spot: improving users’ discernment while keeping AI systems convenient.
“Unfortunately, in a fast-paced society, it’s unclear how often people will want to engage in critical thinking instead of expecting a ready answer,” she says.