Google Is Using A.I. to Answer Your Health Questions. Should You Trust It?

Experts say the new feature may offer dubious advice in response to personal health queries.
Do you have a headache or is it a sinus infection? What does a stress fracture feel like? Should you be worried about the pain in your chest? If you Google those questions now, the answers may be written by artificial intelligence.

This month, Google rolled out a new feature called A.I. Overviews that uses generative A.I., a type of machine-learning technology that is trained on information from across the internet and produces conversational answers to some search questions in a matter of seconds.

In the weeks since the tool launched, users have encountered a wide array of inaccuracies and odd answers on a range of subjects. But when it comes to how it answers health questions, experts said the stakes were particularly high. The technology could point people toward healthier habits or needed medical care, but it also has the potential to give inaccurate information. The A.I. can sometimes fabricate facts. And if its answers are shaped by websites that aren’t grounded in science, it might offer advice that goes against medical guidance or poses a risk to a person’s health.

The system has already been shown to produce bad answers seemingly based on flawed sources. When asked “how many rocks should I eat,” for example, A.I. Overviews told some users to eat at least one rock a day for vitamins and minerals. (The advice was scraped from The Onion, a satirical site.)

“You can’t trust everything you read,” said Dr. Karandeep Singh, chief health A.I. officer at UC San Diego Health. In health, he said, the source of your information is essential.

Hema Budaraju, a Google senior director of product management who helps to lead work on A.I. Overviews, said that health searches had “additional guardrails,” but declined to describe them in detail. Searches that are deemed dangerous or explicit, or that indicate that someone is in a vulnerable situation, such as with self-harm, do not trigger A.I. summaries, she said.