What Can ChatGPT-like Language Models Tell Us About the Brain?

Natalia Mesa was previously an intern at The Scientist and now freelances. She has a PhD in neuroscience from the University of Washington and a bachelor’s in biological sciences from Cornell University.
For more than a decade, Alexander Huth from the University of Texas at Austin had been striving to build a language decoder—a tool that could extract a person’s thoughts noninvasively from brain imaging data. Earlier this year, he succeeded.1
To build a language decoder, Huth first needed functional MRI (fMRI) data to feed into the model. He and his team recorded brain activity from participants as they listened to 16 hours of narrative podcasts such as The Moth Radio Hour and The Modern Love Podcast. The team then used these data to teach an artificial intelligence-based decoder which language patterns corresponded to which patterns of brain activity. Finally, they instructed the participants to memorize a new story that the decoder had never encountered and narrate it silently in their heads. The model generated guesses of what the participant was thinking and ranked those guesses by how well they corresponded to the participant’s brain activity.
The decoder wasn’t perfect. It wasn’t very good at preserving pronouns, for example, and it mixed up first and third person. But it successfully extracted the meaning of what participants thought. For example, if the scientists directed a person to think, “I don’t have my license yet,” the decoder generated the sentence, “She has not even started to learn to drive yet.” It captured the gist of what the participant was thinking, explained Huth. It was even able to capture the flavor of what participants viewed in a video, which Huth found shocking.
 Huth hopes that the technique will help people who are unable to speak to communicate again. But these types of experiments may also tell scientists something fundamental about how the brain understands and organizes meaning. As artificial intelligence more accurately mimics human speech, cognitive neuroscientists hope that it can tell us about how humans distinguish “apple” from “orange.” 
In Huth’s study, he used a large language model called GPT-1, an early version of the engine that runs ChatGPT, to decode brain activity. “For the first time in history, there is a system other than the human that’s able to do something that looks like language,” said Huth, but whether it is anything like a human brain remains an open question. “And these language models are just wildly useful.”
The algorithm’s ability to decode imagined speech was pretty remarkable, said Huth. Other scientists agree. “It’s amazing that something like fMRI, which has such a slow temporal resolution, is even capable of doing this,” said Laura Gwilliams, a neuroscientist at Stanford University. fMRI is agonizingly slow compared with the speed of human thought or even human speech. The technique measures changes in blood flow within the brain as a proxy for brain activity, and those changes unfold over seconds.2
Scientists have few other good options for studying language in the brain. Humans share many senses and cognitive processes with other animals, but not language. Scientists can rarely use invasive electrical recordings, which have much higher temporal resolution, in humans; the rare exceptions are patients undergoing treatment for neurological diseases such as epilepsy.
The question of how the brain distinguishes meaning is also extremely complicated, Gwilliams explained. The average English speaker knows 15,000 words and countless phrases, and determining whether one phrase is similar to another is a difficult task. Probing that question experimentally would require an enormous number of measurements, said Alona Fyshe, a computational neuroscientist at the University of Alberta.
Most of what we know about language is based on linguistic theory rather than experimental data, said Gwilliams. While we generally know which parts of the brain process language, we don’t know which parts handle syntactic information (word order) or semantic information (the meaning of words).
Even if fMRI is a noisy, slow approximation of the brain’s electrical activity, Huth wasn’t particularly surprised by his findings. Researchers have predicted brain activity from words for years and managed to decode music and even dreams from brain activity.2 Since the early days of neural networks trained to recreate language, scientists have theorized that brains and neural networks share similar properties. 
Scientists measure neural activity with functional MRI (fMRI), which captures the blood oxygen level dependent (BOLD) signal created by changes in blood flow in the brain.
Scientists teach an artificial neural network to associate brain activity with certain stimuli (1). When scientists feed an auditory input (such as “I saw a big dog”) into the neural network, the first layer performs computations on it and passes the result to the hidden layers that follow (2). Hidden layers select specific stimulus features that correlate with brain activity. An example of a feature might be a word, such as “dog”; related features such as “dog” and “cat” might elicit more similar brain activity than “dog” and “apple” (3).
The neural network returns these features in the final layer. These features can be mapped onto the brain activity data with a computational model (4). To compare neural networks and brains, scientists can test whether the two pick out similar features of the stimulus or behave similarly during tasks, such as when predicting a missing word (5). The encoding model predicts brain responses based on the presented auditory stimuli, while the decoding model attempts to infer the corresponding auditory stimuli using the observed brain responses (6). 
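The mapping described in steps 4 and 6 is often a regularized linear regression from stimulus features to voxelwise BOLD responses. The sketch below illustrates that idea only; the array shapes, ridge penalty, and random data are illustrative assumptions, not values from any particular study.

```python
# A minimal sketch of an encoding model: map language-model-derived stimulus
# features onto measured BOLD responses with ridge regression.
# All numbers and data here are made up for illustration.
import numpy as np
from sklearn.linear_model import Ridge

n_timepoints, n_features, n_voxels = 3000, 768, 1000

# features: one row of stimulus features per fMRI timepoint
features = np.random.randn(n_timepoints, n_features)
# bold: the measured BOLD signal at each voxel for the same timepoints
bold = np.random.randn(n_timepoints, n_voxels)

# Fit a regularized linear map from stimulus features to each voxel's response
encoder = Ridge(alpha=100.0)
encoder.fit(features, bold)

# The fitted encoding model can now predict brain responses to held-out stimuli;
# correlating predicted with measured BOLD is one way to score the model.
new_features = np.random.randn(10, n_features)
predicted_bold = encoder.predict(new_features)  # shape: (10, n_voxels)
```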

Scientists have decoded natural speech from fMRI data using the transformer neural network GPT. The neural network keeps a list of potential phrases that participants could have heard, such as “I saw a big” and “I saw a dog” (1). This neural network is trained to predict the next word in the sequence (2). An encoding model then predicts the brain responses that each proposed next word should evoke (3). Scientists compare the BOLD predictions made by the neural network to the actual BOLD signals (4). The model keeps the most likely phrases and repeats the process with a new list of candidate continuations (5). After many iterations, the model outputs the final phrase (6).
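A schematic way to think about that loop is a beam search in which a language model proposes continuations and an encoding model scores them against the measured brain data. In the sketch below, `propose_next_words`, `encoding_model`, and `measured_bold` are hypothetical placeholders standing in for a GPT-style language model, a fitted encoding model, and a participant’s recorded fMRI signal; this is not the published implementation.

```python
# A hedged sketch of the candidate-and-score decoding loop described above.
import numpy as np

def decode(measured_bold, propose_next_words, encoding_model,
           beam_width=5, n_steps=20):
    beams = [("", 0.0)]                      # (phrase so far, score)
    for _ in range(n_steps):
        candidates = []
        for phrase, _ in beams:
            # Steps 1-2: the language model proposes likely next words
            for word in propose_next_words(phrase):
                extended = (phrase + " " + word).strip()
                # Step 3: the encoding model predicts the BOLD response
                # this candidate phrase would evoke
                predicted_bold = encoding_model(extended)
                # Step 4: score the candidate by how well the prediction
                # matches the measured signal
                score = np.corrcoef(predicted_bold, measured_bold)[0, 1]
                candidates.append((extended, score))
        # Step 5: keep only the best-matching phrases and continue
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    # Step 6: return the single best phrase after all iterations
    return beams[0][0]
```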
While neural networks take inspiration from the brain, they are not specifically designed to mimic it, explained Fyshe. Computational neural networks are built in layers. Each layer is composed of multiple building blocks called neurons, which are connected in various ways. When a neural network is asked to perform a task, an input goes into the network, and the artificial neurons in each layer extract information from that input by performing a series of computations, with subsequent or deeper layers gathering and combining information from previous layers, much like how information is hierarchically organized in the brain. But unlike a human brain, neural networks theoretically can be taken apart and analyzed piece by piece, making them potentially useful tools for studying the brain. 
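The layered, hierarchical computation described above can be illustrated with a toy forward pass. The layer sizes and nonlinearity below are arbitrary choices made for illustration; the point is that each layer transforms the previous layer’s output, and every intermediate result can be inspected.

```python
# A toy layered network: deeper layers combine features extracted by earlier ones.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [300, 128, 64, 32]             # input features, then three layers
weights = [rng.standard_normal((a, b)) * 0.1
           for a, b in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    activations = [x]
    for w in weights:
        # each "neuron" sums its weighted inputs and applies a nonlinearity
        x = np.tanh(x @ w)
        activations.append(x)
    # unlike a biological brain, every layer's activity is directly available
    return activations                       # one array per layer, shallow to deep

layerwise = forward(rng.standard_normal(300))
```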
There are two ways in which artificial neural networks help scientists study the brain. In the first approach, scientists use artificial neural networks or other computational models to predict, or encode, neural responses to a stimulus, such as a spoken word. This gives researchers an abstract model of the brain. When training an encoding model, researchers feed an auditory stimulus into a neural network, where the stimulus passes through multiple layers of computation. Each layer spits out a series of numbers, which can be mapped onto the neural response.5,6
In the second approach, researchers decode a stimulus from actual neural responses, such as fMRI measurements, using artificial neural networks. These data provide insights into which brain areas are active when a stimulus is present. 
Back in 2016, Fyshe tested the first approach; her team used data from multiple modalities of noninvasive brain imaging to understand how the brain responded to single words. Their goal was to determine whether the brain encoded information like neural networks designed to produce language, specifically those trained to predict the surrounding words in a sentence. An algorithm called Skip-gram, first introduced three years prior to their study, served as the point of comparison.3
They found that the brain and neural networks group words together in similar ways. For example, both might group words like “apple” and “banana” together, recognizing them as being more related than “banana” and “car.” Fyshe’s team discovered this by looking at how similar brain responses were to words like “banana” and “apple” versus “banana” and “car,” and how likely neural networks were to predict the next word as “banana,” “apple,” or “car” in a sentence like, “she ate a…”
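That comparison amounts to asking whether word pairs that evoke similar brain responses are also close together in a word-embedding model. Below is a minimal sketch of the idea using made-up data; the vector sizes and random values are assumptions, not numbers from Fyshe’s study.

```python
# Compare word-pair similarity in (hypothetical) brain responses vs. embeddings.
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical brain response patterns (e.g., voxel activations) per word
brain = {w: np.random.randn(200) for w in ["apple", "banana", "car"]}
# Hypothetical word-embedding vectors from a model trained on text
embed = {w: np.random.randn(300) for w in ["apple", "banana", "car"]}

for w1, w2 in [("apple", "banana"), ("banana", "car")]:
    brain_sim = cosine(brain[w1], brain[w2])
    model_sim = cosine(embed[w1], embed[w2])
    print((w1, w2), round(brain_sim, 2), round(model_sim, 2))

# If brain and model organize meaning similarly, pairs like (apple, banana)
# should score higher than (banana, car) under both similarity measures.
```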
Neural networks like Skip-gram couldn’t reproduce natural language very well because they represented each word with a single vector and couldn’t capture words with multiple meanings. That changed in 2017, when transformer neural networks first appeared; models built on them, such as GPT, could finally mimic something like human speech.4

In 2021, cognitive neuroscientist Evelina Fedorenko from the Massachusetts Institute of Technology and her team published a paper evaluating how well several state-of-the-art artificial neural network models mapped onto neural responses.7 Many of these models were transformer language models, which were taking the field by storm with their ability to produce human-like language, said Fedorenko. Unlike other neural networks, which break an input down into similar components (for example, a language model groups similar words together), transformers predict an outcome based on what came before. Fedorenko tested how well transformer-based models that guess the next word in a sentence predict the neural activity evoked by words or phrases participants hear. “They capture neural responses very well,” she said. She concluded that to produce language, the human brain makes predictions about what word comes next based on what came before, much like an artificial neural network does.5
Next, Fedorenko focused on finding the key components models need to map on to neural activity. “We have this amazing set of new toolkits for probing brains,” she said. “It’s really a revolution.”
The key to many recent advances, including those in his paper, Huth explained, was the rise of transformer neural networks that predict subsequent words. For his study, Huth used the transformer neural network GPT-1 as the basis for the decoder.
One key finding from Huth’s study is that the decoder could use any part of the brain to accurately predict what participants were thinking, even though only the prefrontal cortex was active the entire time. There didn’t seem to be a specific part of the brain specialized to extract meaning from the sentences. The decoder also didn’t pick up on syntactic information well, but it always returned the flavor of what the participant was thinking. According to Huth, this means that the brain cares more about meaning than syntax and implies that all parts of the brain keep track of meaning-related information, although they might do different things with it. “I’m a big proponent that it’s all meaning,” he said.
Still, connecting artificial intelligence algorithms back to biology has been a sore spot for computational neuroscientists, especially as generative artificial intelligences such as GPT proliferate. The “neurons” in neural networks are unlike neurons in the brain, and it’s hard to relate what they do back to biology.
“In some ways, I know exactly what a neural network is doing,” Fyshe said. “In other ways, I have no idea what a neural network is doing.” 
According to Fyshe, the inputs and outputs of neural networks are recognizable, but what happens in between is hard to interpret. Each layer of the neural network generates numbers using known computational functions, but it’s hard to relate those numbers to anything meaningful in the real world.
 “The way that transformers are built is very nonbiologically plausible,” Fedorenko said. “At least what people have been thinking about human neural circuits, it’s really pretty different.”
The next step, said Fedorenko, is to relate computations that the language models perform to parts of the brain. “But we’re not quite there yet,” she said.
Even if they don’t behave like brains, neural networks are getting pretty good at predicting what humans are thinking. But does that amount to mind reading? Huth doesn’t think so. For one, his decoder model was not generalizable across subjects: a decoder trained on one person’s brain activity doesn’t work for another. People can also resist the decoder by thinking about something else, and it can’t read memories.
That doesn’t eliminate privacy concerns in the future, however, as neuroscientists build ever better decoders. That’s the way the field is trending, said George McConnell, a neuroscientist at Stevens Institute of Technology. No one knows how good this technology will get.
Gwilliams agreed, adding that the field should be prepared to mull over privacy questions as encoding and decoding models improve and as advances in imaging allow for more precise and less invasive measurements of the human brain. These advances are happening quickly, she said. “It’s important that we’re having these conversations now so that we’re not caught unaware,” said Gwilliams.