In a new study published in Nature Neuroscience, researchers have created a customized language decoder that uses artificial intelligence (AI) and functional MRI (fMRI) scans to translate a person's thoughts into text.
Researchers are currently working on language decoding systems to help patients who are unable to speak due to a stroke or another brain injury. These systems can also help broaden understanding of how the brain processes words and thoughts. However, many of these systems require sensors to be placed directly on the brain to detect signals in areas that help articulate language.
In the new study, researchers from the University of Texas at Austin (UT Austin) took a different approach, developing a non-invasive method that used fMRI scans to map brain activity and a large AI language model to predict text associated with participants' thoughts.
To train the decoder, the researchers recorded fMRI data from three study participants, two men and one woman in their 20s and 30s, as they listened to stories from radio shows and podcasts for 16 hours over several sessions.
Then, the researchers trained GPT-1, a large language model and early predecessor of ChatGPT, to match the brain activity from the fMRI scans to the semantic features of the recorded stories. This allowed the AI to associate specific brain patterns with certain words and phrases.
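For readers curious about the mechanics, the sketch below illustrates one simplified way such a system can link language-model features to brain activity: a regularized linear "encoding model" that maps semantic embeddings of the words a participant heard to the fMRI response at each voxel. The variable names, array shapes, and use of ridge regression here are illustrative assumptions for this sketch, not details taken from the published study.

```python
# Minimal sketch (not the authors' code): fit an "encoding model" that maps
# language-model embeddings of story words to fMRI voxel responses.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-ins for real data:
#   story_embeddings: one semantic feature vector per fMRI time point,
#                     e.g. pooled GPT hidden states for the words heard then.
#   fmri_responses:   the measured BOLD signal at each voxel for the same time points.
n_timepoints, n_features, n_voxels = 1000, 768, 5000
story_embeddings = rng.standard_normal((n_timepoints, n_features))
fmri_responses = rng.standard_normal((n_timepoints, n_voxels))

# Fit one regularized linear map from semantic features to every voxel at once.
encoding_model = Ridge(alpha=1.0)
encoding_model.fit(story_embeddings, fmri_responses)

# Given features for new (candidate) text, the model predicts the brain
# activity that text should evoke, which can then be compared to a real scan.
predicted_response = encoding_model.predict(story_embeddings[:10])
print(predicted_response.shape)  # (10, n_voxels)
```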
After the training was complete, the decoder was used to analyze participants' brain activity as they listened to new stories that were not part of the training data. The decoder was also used to translate participants' thoughts as they watched silent movies or silently imagined telling stories.
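The decoding step can then be framed as a search: a language model proposes candidate word sequences, the encoding model predicts the brain activity each candidate should evoke, and the candidate whose prediction best matches the recorded scan is kept. The snippet below is a hedged, simplified illustration of that idea; the embed() helper, the candidate list, and the use of correlation as the match score are hypothetical placeholders rather than the study's actual implementation.

```python
# Simplified illustration of decoding by candidate scoring (not the study's code).
import numpy as np

def embed(text: str, n_features: int = 768) -> np.ndarray:
    """Placeholder for a language-model feature extractor (e.g. GPT hidden states)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(n_features)

def score_candidates(candidates, observed_response, encoding_model):
    """Rank candidate texts by how well their predicted fMRI matches the real scan."""
    scores = []
    for text in candidates:
        # Predict the voxel activity this candidate text should produce.
        predicted = encoding_model.predict(embed(text)[None, :])[0]
        # Correlation as a simple goodness-of-fit measure between predicted
        # and observed voxel activity.
        corr = np.corrcoef(predicted, observed_response)[0, 1]
        scores.append((corr, text))
    return sorted(scores, reverse=True)

# Usage (with the encoding_model from the earlier sketch and a real scan):
# ranked = score_candidates(["she opened the door", "the storm grew louder"],
#                           observed_response, encoding_model)
# print(ranked[0])  # best-matching candidate and its score
```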
Overall, the AI-generated text closely matched the intended meaning of the original words of the new stories around half the time. The decoder was also able to roughly paraphrase what the participants were thinking during the silent movies or in their imagined stories.
"Our system works at the level of ideas, semantics, meaning," said Alexander Huth, an assistant professor of neuroscience and computer science at UT Austin and one of the study's authors. "This is the reason why what we get out is not the exact words, it's the gist."
"For a non-invasive method, this is a real leap forward compared to what's been done before, which is typically single words or short sentences," Huth added.
However, the researchers also noted several limitations of the decoder. Each decoder had to be customized to a specific person, and a decoder trained on one person would not work with another. Participants could also easily confuse the decoder by deliberately thinking about unrelated ideas.
According to Tim Behrens, a computational neuroscientist at the University of Oxford, the decoder was "technically extremely impressive," and it opened up many new, experimental opportunities, including translating thoughts from someone dreaming or investigating how ideas could develop from background brain activity.
"These generative models are letting you see what's in the brain at a new level," Behrens said. "It means you can really read out something deep from the fMRI."
Separately, Shinji Nishimoto, a neuroscientist from Osaka University, called the study a "significant advance" since it "showed that the brain represents continuous language information during perception and imagination in a compatible way."
"This is a non-trivial finding and can be a basis for the development of brain-computer interfaces," he added.
Although the study's authors have expressed optimism about potential uses of the technology in the future, they have acknowledged ethical concerns about mental privacy.
"This is very exciting, but it's also a little scary," Huth said. "What if you can read out the word that somebody is just thinking in their head? That's potentially a harmful thing."
According to Jerry Tang, a doctoral student at UT Austin and one of the study's authors, policies to protect mental privacy may be necessary as the technology continues to improve. "I think right now, while the technology is in such an early state, it's important to be proactive by enacting policies that protect people and their privacy," he said. "Regulating what these devices can be used for is also very important."
"We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that," Tang added. "We want to make sure people only use these types of technologies when they want to and that it helps them." (George, MedPage Today, 5/1; Ferreira, Vice, 5/1; Devlin, The Guardian, 5/1; Hamilton, "Shots," NPR, 5/1; Whang, New York Times, 5/1)