Artificial intelligence has changed our understanding of language learning in children

Unlike the long, polished dialogues carefully scripted for most books and movies, the language of our everyday interaction is usually messy and incomplete: full of sentence fragments, false starts, and awkward pauses. This holds for every kind of conversation, from casual chats between friends and arguments between siblings to formal discussions in the boardroom. Given the haphazard, experience-driven nature of everyday language, it seemed miraculous that anyone could learn a language from exposure alone. For this reason, many linguists, including Noam Chomsky, the founder of modern linguistics, believed that language learners need some kind of glue to tame the unruly nature of everyday speech, and that glue is nothing other than grammar: a system of rules for producing grammatical sentences.

From this point of view, children must carry a grammar template in their brains that helps them overcome the limitations of their linguistic experience. For example, the template might contain a "super-rule" that dictates how new pieces are added to existing phrases; the child's brain then only needs to check whether its native language follows that rule or its mirror image.
In this way, an English-speaking child learns from the super-rule that the verb comes before the object (e.g., "I eat sushi"), while a Japanese-speaking child learns from the same super-rule that the verb comes after the object (in Japanese, the same sentence is structured "I sushi eat").

But new insight into how language is learned comes from a source you probably haven't thought of: artificial intelligence. A new generation of AI models can write newspaper articles, poetry, and computer code after being exposed to enormous amounts of language as input. Most surprisingly, they do all of this without the help of grammar. Their output can sometimes be strange or nonsensical, and can contain racist, sexist, and other biases, but one thing is abundantly clear about the phrasing of these models: the vast majority of their output is grammatically correct. Yet they are given no grammar templates or rules; the models rely on linguistic experience alone to produce well-formed sentences.

One of the best-known of these models is GPT-3, a gigantic deep-learning neural network with 175 billion parameters (it also has a smaller sibling, GPT-2). During training, the model was fed hundreds of billions of words from the Internet, books, and Wikipedia and asked to predict the next word in a sentence. Whenever it made a wrong prediction, its parameters were adjusted by an automatic learning algorithm so that its next prediction would be a little less wrong. After training, GPT-3 can produce fluent text in response to prompts such as "a summary of the latest Fast and Furious movie" or "a poem in the style of Emily Dickinson." It can also answer SAT-style analogy questions, reading-comprehension questions, and even simple math problems, all learned through next-word prediction.
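The next-word-prediction objective described above can be illustrated with a toy sketch. This is a minimal bigram count model, nothing like GPT's actual neural architecture, and the tiny corpus is invented for illustration:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the billions of words GPT-3 saw (invented example).
corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count which words follow it in the training text
# (a bigram model: a drastically simplified stand-in for next-word prediction).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Predict the continuation seen most often after this word in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, more than any other word
```

Real models replace the counting step with gradient updates to billions of parameters, but the learning signal is the same: experience of which words actually follow which, with no grammar rules supplied.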

The AI model and the human brain may produce the same output, but do they do the work in the same way?

Comparison of artificial intelligence models with the human brain

The similarities between the output of AI models and human language do not stop there. Research published in the journal Nature Neuroscience suggests that these deep artificial networks use some of the same computational principles as the human brain. Researchers led by neuroscientist Uri Hasson first compared how humans and the GPT-2 model (GPT-3's smaller sibling) predicted the next word of a story from the podcast "This American Life." The human brain and the AI predicted exactly the same word almost 50 percent of the time.

The researchers also recorded the volunteers' brain activity while they listened to the story. The best explanation for the activation patterns they observed was that, like GPT-2, the brain does not rely on just one or two previous words when predicting; it draws on an accumulated context of up to 100 previous words. The authors concluded that their finding of spontaneous predictive neural signals, recorded as participants listened to natural speech, suggests that active prediction may underlie lifelong language learning in humans.
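The "almost 50 percent" figure is simply a match rate between two sequences of next-word guesses. A hypothetical sketch of that comparison (the word lists here are invented, not the study's data):

```python
def agreement_rate(human_preds, model_preds):
    """Fraction of positions where human and model predicted the same word."""
    matches = sum(h == m for h, m in zip(human_preds, model_preds))
    return matches / len(human_preds)

# Invented predictions for six positions in a story (not real study data).
human = ["dog", "ran", "home", "and", "barked", "loudly"]
model = ["dog", "ran", "away", "and", "barked", "softly"]

print(agreement_rate(human, model))  # 4 of 6 positions match -> ~0.67
```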

A reasonable concern about this comparison is the sheer amount of data these models consume: GPT-3 was trained on the equivalent of roughly 20,000 years of linguistic experience. But another preliminary study, which has yet to be peer-reviewed, shows that GPT-2 can still model next-word predictions and brain activation even when trained on only 100 million words. That amount of linguistic input is about what an average child hears during the first 10 years of life.
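As a rough sanity check on that equivalence, 100 million words spread over ten years works out to a plausible daily exposure for a child:

```python
words = 100_000_000       # training set of the smaller GPT-2 study
years = 10                # roughly a child's first decade of language exposure
per_day = words / (years * 365)
print(round(per_day))     # about 27,400 words heard per day
```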

It is necessary to emphasize that I am not claiming in this article that GPT-3 or GPT-2 learns language exactly the way children do. One important aspect of any such comparison is comprehension: these models understand little of what they say, while comprehension is a basic element of human language. What the models do demonstrate, however, is that a learner, even one made of silicon, can learn enough from exposure alone to produce perfectly good grammatical sentences, and can do so in a way that resembles human brain processing.

Redefining the language learning debate

For years, many linguists believed that learning language is impossible without a built-in grammar template. The new AI models suggest otherwise: they show that the ability to produce grammatical sentences can be learned from linguistic experience alone. This suggests that children may not need an innate grammar to match against what they hear in order to learn a language. Instead, children should be engaged in as much back-and-forth conversation as possible, because it is language experience, not grammar, that is the key to becoming a fluent and articulate speaker.
