Brain scans can translate a person’s thoughts into words
In a separate experiment, the researchers also showed the participants short Pixar videos that contained no dialogue and recorded their brain responses, to test whether the decoder could recover the general content of what each participant was watching. It could.
Romain Brette, a theoretical neuroscientist at the Vision Institute in Paris who was not involved in the experiment, is not wholly convinced of the technology’s efficacy at this stage. “The way the algorithm works is basically that an AI model makes up sentences from vague information about the semantic field of the sentences inferred from the brain scan,” he says. “There might be some interesting use cases, like inferring what you have dreamed about, on a general level. But I’m a bit skeptical that we’re really approaching thought-reading level.”
It may not work so well yet, but the experiment raises ethical issues around the possible future use of brain decoders for surveillance and interrogation. With this in mind, the team set out to test whether a decoder could be trained and run without a person’s cooperation. They tried to decode perceived speech from each participant using decoder models trained on data from another person, and found that these models performed “barely above chance.”
This, they say, suggests that a decoder couldn’t be applied to someone’s brain activity unless that person was willing and had helped train the decoder in the first place.
“We think that mental privacy is really important, and that nobody’s brain should be decoded without their cooperation,” says Jerry Tang, a PhD student at the university who worked on the project. “We believe it’s important to keep researching the privacy implications of brain decoding, and enact policies that protect each person’s mental privacy.”