The Biggest Questions: Is it possible to really understand someone else’s mind?

Technically speaking, neuroscientists have been able to read your mind for decades. It’s not easy, mind you. First, you must lie motionless within the narrow bore of a hulking fMRI scanner, perhaps for hours, while you watch films or listen to audiobooks. Meanwhile, the machine will bang and knock as it records the shifting patterns…

Even with the help of micro-phenomenology, however, wrapping up what’s going on inside your head into a neat verbal package is a daunting task. So instead of asking subjects to struggle to represent their experiences in words, some scientists are using technology to try to reproduce those experiences. That way, all subjects need to do is confirm or deny that the reproductions match what’s happening in their heads.

In a study that has not yet been peer reviewed, a team of scientists from the University of Sussex, UK, attempted to devise such a question by simulating visual hallucinations with deep neural networks. Convolutional neural networks, which were originally inspired by the human visual system, typically take an image and turn it into useful information—a description of what the image contains, for example. Run the network backward, however, and you can get it to produce images—phantasmagoric dreamscapes that provide clues about the network’s inner workings. 
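The "run it backward" idea is straightforward to sketch in code. Rather than adjusting a network's weights to fit an image, you freeze the weights and repeatedly adjust the image itself so that a chosen layer's activations grow stronger, which surfaces the patterns that layer responds to. Below is a minimal PyTorch illustration of this gradient-ascent trick; it is not the Sussex team's code, and the model choice, layer index, step size, and file path are all placeholders.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained CNN; the weights stay frozen -- only the image gets updated.
net = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in net.parameters():
    p.requires_grad_(False)

def dream_step(img, layer_idx, lr=0.05):
    """One gradient-ascent step: nudge the pixels so the chosen layer's
    activations grow stronger, amplifying whatever it already 'sees'."""
    img = img.clone().detach().requires_grad_(True)
    x = img
    for i, layer in enumerate(net):
        x = layer(x)
        if i == layer_idx:
            break
    x.norm().backward()                             # activation "energy"
    g = img.grad / (img.grad.abs().mean() + 1e-8)   # normalized gradient
    return (img + lr * g).detach()

# Start from any photo (the path is illustrative) and iterate.
prep = transforms.Compose([transforms.Resize(256),
                           transforms.CenterCrop(224),
                           transforms.ToTensor()])
img = prep(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
for _ in range(20):
    img = dream_step(img, layer_idx=20)  # deeper layers -> more object-like motifs
```

Amplifying early layers tends to yield simple textures and geometric patterns, while deeper layers produce more complex, object-like imagery, which is part of what makes the technique a flexible knob for simulating different kinds of visual experience.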

The idea was popularized in 2015 by Google, in the form of a program called DeepDream. Like people around the world, the Sussex team started playing with the system for fun, says Anil Seth, a professor of neuroscience and one of the study’s coauthors. But they soon realized that they might be able to leverage the approach to reproduce various unusual visual experiences. 

Drawing on verbal reports from people with hallucination-causing conditions like vision loss and Parkinson’s, as well as from people who had recently taken psychedelics, the team designed an extensive menu of simulated hallucinations. That allowed them to obtain a rich description of what was going on in subjects’ minds by asking them a simple question: Which of these images best matches your visual experience? The simulations weren’t perfect, but many of the subjects were still able to find an approximate match.
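Once the menu of simulations exists, the experimental question itself is almost trivial to pose in code; the hard work lies in generating imagery that plausibly spans the space of hallucinations. A hypothetical sketch of the matching step, assuming `menu` maps descriptive labels to image arrays (the names here are invented for illustration, not taken from the study):

```python
import matplotlib.pyplot as plt

def forced_choice(menu):
    """Show each candidate simulation, then record which one the
    participant says best matches their own visual experience."""
    for label, img in menu.items():
        plt.figure()
        plt.imshow(img)
        plt.title(label)
        plt.axis("off")
    plt.show()
    return input("Label of the closest match (or 'none'): ").strip()
```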

Unlike the decoding research, this study involved no brain scans—but, Seth says, it may still have something valuable to say about how hallucinations work in the brain. Some deep neural networks do a respectable job of modeling the inner mechanisms of the brain’s visual regions, and so the tweaks that Seth and his colleagues made to the network may resemble the underlying biological “tweaks” that made the subjects hallucinate. “To the extent that we can do that,” Seth says, “we’ve got a computational-level hypothesis of what’s happening in these people’s brains that underlie these different experiences.”

This line of research is still in its infancy, but it suggests that neuroscience might one day do more than simply tell us what someone else is experiencing. By using deep neural networks, the team was able to bring its subjects’ hallucinations out into the world, where anyone could share in them.

Externalizing other sorts of experiences would likely prove far more difficult—deep neural networks do a good job of mimicking senses like vision and hearing, but they can’t yet model emotions or mind-wandering. As brain modeling technologies advance, however, they could bring with them a radical possibility: that people might not only know, but actually share, what is going on in someone else’s mind.