According to Scientific American, "Evidence Rebuts Chomsky's Theory of Language Learning." Chomsky had theorized that all languages had beneath them an underlying structure whose rules for use were embedded in the human brain, and that when we encountered language, we fitted it to this universal grammar. This led to numerous attempts to define what was in that universal grammar and to make sure all languages fit it. Too often, though, they didn't. Now, according to the article:
A key flaw in Chomsky’s theories is that when applied to language learning, they stipulate that young children come equipped with the capacity to form sentences using abstract grammatical rules. (The precise ones depend on which version of the theory is invoked.) Yet much research now shows that language acquisition does not take place this way. Rather, young children begin by learning simple grammatical patterns; then, gradually, they intuit the rules behind them bit by bit.
Thus, young children initially speak with only concrete and simple grammatical constructions based on specific patterns of words: “Where’s the X?”; “I wanna X”; “More X”; “It’s an X”; “I’m X-ing it”; “Put X here”; “Mommy’s X-ing it”; “Let’s X it”; “Throw X”; “X gone”; “Mommy X”; “I Xed it”; “Sit on the X”; “Open X”; “X here”; “There’s an X”; “X broken.” Later, children combine these early patterns into more complex ones, such as “Where’s the X that Mommy Xed?”
In other words, we grow our language skills by saying things we’ve heard before and mixing and matching to get desired results. This is not about fitting language to an underlying structure. It’s about pattern matching. The process, then, is closer to machine learning than equation solving. A year or two ago, I went through a data science curriculum that culminated in building a text prediction engine. To do this, I sorted through a million lines of text, collected around 10,000 bi-grams, then identified the most common tri-grams and 4-grams that contained them. From there, I built a simple algorithm to suggest whichever word most often came after a given bi-gram or tri-gram. It was a primitive thing and not my finest piece of work. And yet, if you type two words and follow its suggestions from there, you usually end up with understandable sentences. This works because it’s just reassembling sentences that have already been built. No grammar, just pattern recognition and matching.
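For the curious, here’s a minimal sketch of that kind of predictor. The corpus, tokenization, and helper names here are illustrative stand-ins rather than my original code, but the core idea is the same: count which word most often follows each bi-gram or tri-gram, then suggest it.

```python
from collections import Counter, defaultdict

def build_model(lines, max_n=4):
    """Map each bi-gram or tri-gram prefix to a Counter of next words."""
    model = defaultdict(Counter)
    for line in lines:
        words = line.lower().split()
        for n in range(3, max_n + 1):  # count tri-grams (n=3) and 4-grams (n=4)
            for i in range(len(words) - n + 1):
                prefix = tuple(words[i:i + n - 1])  # the leading bi-gram or tri-gram
                model[prefix][words[i + n - 1]] += 1
    return model

def suggest(model, words):
    """Suggest the word that most often follows the longest known prefix."""
    words = [w.lower() for w in words]
    for size in (3, 2):  # back off from a tri-gram prefix to a bi-gram prefix
        prefix = tuple(words[-size:])
        if prefix in model:
            return model[prefix].most_common(1)[0][0]
    return None  # prefix never seen in the corpus

# Tiny illustrative corpus; the real thing chewed through a million lines.
corpus = [
    "where is the cat",
    "where is the dog",
    "the cat sat on the mat",
]
model = build_model(corpus)
print(suggest(model, ["where", "is"]))  # -> "the"
```

Type two words, feed each suggestion back in as the next word, and the output wanders through sentences the corpus has already seen. No grammar anywhere in the code.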
The question, then: What does this mean for language learning? Lately, I’ve been doing Say Something in Welsh, which essentially just drills and drills you on sentence patterns. I find myself capable of speaking proper Welsh even when I have no idea what I’m saying: the patterns are there to follow, and only in retrospect do I realize that while I gave the wrong response, I produced a correct sentence. I’ve also been reading about Glossika, which gives you lots of sentences to practice. It advertises itself as a supplement for language exposure more than as a language learning system, but I wonder, given enough sentences, how little initial knowledge you could get by with. At any rate, if patterns more than rules allow for language learning, then content and speaking practice become all the more important. Something to keep in mind when your language learning hits a plateau and you find yourself reaching for music and YouTube videos instead of grammar exercises.