Saturday, July 27, 2024
News

AI is unlocking the secrets of the human brain


If you’re willing to lie very still in a giant metal tube for 16 hours and let magnets bombard your brain as you listen to hit podcasts, a computer just might be able to read your mind. Or at least its rough contours. Researchers from the University of Texas at Austin recently trained AI models to decipher the gist of a limited range of sentences as individuals listened to them, pointing toward a near future in which artificial intelligence might give us a deeper understanding of the human mind.

The program analyzed fMRI scans of people listening to, or even just imagining, sentences from three podcasts: Modern Love, The Moth Radio Hour, and The Anthropocene Reviewed. It then used that brain-imaging data to reconstruct the content of those sentences. For example, when one participant heard “I don’t have my driver’s license yet,” the program deciphered the person’s brain scan and returned “He hasn’t started learning to drive yet.” That is not a word-for-word reconstruction, but a close approximation of the idea expressed in the original sentence. The program was also able to look at fMRI data from people watching short films and write approximate summaries of the clips, suggesting that the AI was capturing not individual words from the brain scans, but their underlying meanings.

The findings, published in Nature Neuroscience earlier this month, add to a new area of research that turns the traditional understanding of AI on its head. For decades, researchers have applied concepts from the human brain to the development of intelligent machines. ChatGPT, hyperrealistic-image generators such as Midjourney, and more recent voice-cloning programs are all built on layers of synthetic “neurons”: a bunch of equations that, somewhat like nerve cells, send their outputs to one another to produce a desired result. Yet even though human cognition has long inspired the design of “intelligent” computer programs, much about the inner workings of our brains remains a mystery. Now, in a reversal of that approach, scientists are hoping to learn more about the mind by using synthetic neural networks to study our biological ones. “It’s certainly leading to advances that we couldn’t even imagine a few years ago,” says Evelina Fedorenko, a cognitive scientist at MIT.

The apparent closeness of AI programs to mind reading has caused an uproar on social and traditional media. But that aspect of the work is “more of a parlor trick,” Alexander Huth, a lead author of the Nature Neuroscience study and a neuroscientist at UT Austin, told me. The models were only relatively accurate and had to be fine-tuned for each person who participated in the research, and most brain-scanning techniques provide very low-resolution data; we remain a long way from a program that can plug into anyone’s brain and understand what they’re thinking. The real value of this work lies elsewhere: by predicting which parts of the brain light up in response to hearing or visualizing words, scientists can gain more insight into the specific ways our neurons work together to create one of humanity’s defining characteristics, language.

Read: Difference between speaking and thinking

Huth said that successfully building a program able to reconstruct the meaning of sentences primarily served as “proof of principle that these models actually capture a lot about how the brain processes language.” Before this nascent AI revolution, neuroscientists and linguists relied on somewhat generalized verbal descriptions of the brain’s language network that were imprecise and hard to link directly to observable brain activity. Hypotheses about exactly which aspects of language different brain regions are responsible for, or even the fundamental question of how the brain learns a language, were difficult or impossible to test. (Perhaps one region recognizes sounds, another deals with syntax, and so on.) But now scientists can use AI models to better pinpoint what those processes actually involve. The benefits could extend beyond academic concerns, helping people with certain disabilities, for example, according to Jerry Tang, the study’s other lead author and a computer scientist at UT Austin. “Our ultimate goal is to help restore communication to people who have lost the ability to speak,” he told me.

There has been some resistance to the idea that AI could help study the brain, particularly from neuroscientists who study language. That’s because neural networks, which excel at finding statistical patterns, seem to lack basic elements of how humans process language, such as an understanding of what words mean. The difference between machine and human cognition is also intuitive: a program like GPT-4, which can write decent essays and excels at standardized tests, learns by processing terabytes of data from books and webpages, whereas children learn a language from only a tiny fraction of 1 percent of that number of words. “We were taught that artificial neural networks are not really the same as biological neural networks,” Jean-Rémy King, a neuroscientist, told me of his studies in the late 2000s. “It was just a metaphor.” King, who now leads research on the brain and AI at Meta, is among the many scientists rebutting that old dogma. “We don’t think of it as a metaphor,” he told me. “We think of [AI] as a very useful model of how the brain processes information.”

In the past few years, scientists have shown that the internal workings of advanced AI programs offer a promising mathematical model of how our brains process language. When you type a sentence into ChatGPT or a similar program, its internal neural network represents that input as a set of numbers. When a person hears the same sentence, an fMRI scan can capture how the neurons in their brain respond, and a computer can interpret those scans as, essentially, another set of numbers. These processes are repeated over many, many sentences to create two huge data sets: one of how a machine represents language, and another of how a human does. Researchers can then map the relationship between these data sets with an algorithm known as an encoding model. Once that’s done, the encoding model can begin to extrapolate: how the AI responds to a sentence becomes the basis for predicting how neurons in the brain will fire in response to it.
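At its core, that mapping is a regression problem. Below is a minimal sketch of the idea in Python, with simulated numbers standing in for real language-model embeddings and fMRI recordings; the variable names, dimensions, and the choice of ridge regression are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch of an "encoding model": learn a linear map from a language
# model's sentence representations to (simulated) fMRI voxel responses, then
# use it to predict brain activity for held-out sentences.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_sentences = 500   # number of stimulus sentences (illustrative)
n_features = 768    # dimensionality of the language model's representations
n_voxels = 2000     # number of fMRI voxels being modeled

# Stand-ins for real data: each row of `embeddings` would come from a language
# model (one vector per sentence), and each row of `voxel_responses` from fMRI
# scans of a person hearing that same sentence.
embeddings = rng.standard_normal((n_sentences, n_features))
true_weights = rng.standard_normal((n_features, n_voxels)) * 0.1
voxel_responses = embeddings @ true_weights + rng.standard_normal((n_sentences, n_voxels))

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, voxel_responses, test_size=0.2, random_state=0
)

# Ridge regression (effectively one weight vector per voxel) is a common choice
# because the features are numerous and correlated.
encoder = RidgeCV(alphas=np.logspace(-2, 4, 7))
encoder.fit(X_train, y_train)

# The extrapolation step: predict brain responses to sentences the model never
# saw, and score each voxel by how well predictions track observed activity.
predicted = encoder.predict(X_test)
voxel_scores = [np.corrcoef(predicted[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"median voxel correlation: {np.median(voxel_scores):.3f}")
```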

New research using AI to study the brain’s language network seems to appear every few weeks. Each of these models could represent “a computationally accurate hypothesis about what might be going on in the brain,” Nancy Kanwisher, a neuroscientist at MIT, told me. For example, AI could help answer the open question of what exactly the human brain is aiming to do as it acquires a language: not just that a person is learning to communicate, but the specific neural mechanisms through which communication comes about. The idea is that if a computer model trained with a particular objective, such as learning to predict the next word in a sequence or to judge a sentence’s grammatical coherence, proves best at predicting brain responses, then it is likely that the human mind shares that goal; maybe our brains, like GPT-4, work by determining which words are most likely to follow one another. The inner workings of a language model, then, become a computational theory of the mind.
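In practice, that comparison amounts to fitting one encoding model per candidate network and asking which one best predicts held-out brain activity. Here is a rough sketch of the logic, again with simulated stand-ins rather than real trained networks or scans; the function and variable names are hypothetical.

```python
# Sketch: compare two candidate models of language processing by how well their
# representations predict (simulated) brain responses. The feature sets here are
# random stand-ins; in real work each would come from a network trained on a
# different objective (e.g., next-word prediction vs. grammaticality judgment).
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_sentences, n_voxels = 400, 500

features_a = rng.standard_normal((n_sentences, 300))  # stand-in for "model A" representations
features_b = rng.standard_normal((n_sentences, 300))  # stand-in for "model B" representations

# Simulated brain responses driven by features_a, so model A should score higher.
brain = (features_a @ rng.standard_normal((300, n_voxels)) * 0.1
         + rng.standard_normal((n_sentences, n_voxels)))

def brain_prediction_score(features, brain_responses):
    """Mean cross-validated R^2 of an encoding model fit on the given features."""
    encoder = RidgeCV(alphas=np.logspace(-2, 4, 7))
    return cross_val_score(encoder, features, brain_responses, cv=5, scoring="r2").mean()

for name, feats in [("model A", features_a), ("model B", features_b)]:
    print(name, round(brain_prediction_score(feats, brain), 3))
# Whichever model predicts brain activity better is treated as the stronger
# hypothesis about the objective the brain itself might be optimizing.
```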

Read: ChatGPT is already obsolete

These computational approaches are only a few years old, so there are plenty of disagreements and competing theories. “There’s no reason why the representations you learn from language models have anything to do with how the brain represents a sentence,” Francisco Pereira, the director of machine learning at the National Institute of Mental Health, told me. But that doesn’t mean a relationship can’t exist, and there are various ways to test whether it does. Unlike the brain, language models can be virtually isolated, probed, and manipulated by scientists almost without limit; even if AI programs are not a complete hypothesis of the brain, they are powerful tools for studying it. For example, Greta Takute, who studies the brain and language at MIT, told me that cognitive scientists might try to predict the responses of targeted brain regions, or test how different types of sentences elicit different types of brain responses, to figure out what those specific groups of neurons do, “and then step into that area, which is unknown.”

For now, AI’s usefulness may lie not in precisely replicating that unknown neurological territory, but in providing approximations with which to explore it. “If you have a map that reproduces every little detail of the world, the map is useless because it’s the same size as the world,” Anna Ivanova, a cognitive scientist at MIT, told me, invoking a famous Borges parable. “And that’s why you need abstraction.” It is by specifying and testing what to keep and what to leave out, choosing among roads and landmarks and buildings and then seeing how useful the resulting map is, that scientists are beginning to navigate the brain’s linguistic terrain.