Language & thought: Why it's hard to understand LLMs
To clear up the confusion surrounding large language models, we need a new framework for thinking about them, argue researchers at the University of Texas at Austin and the Massachusetts Institute of Technology (MIT). In a paper titled “Dissociating language and thought in large language models: a cognitive perspective,” they argue that to understand the power and limits of LLMs, we must separate “formal” from “functional” linguistic competence.
Key ideas:
Two key fallacies surrounding LLMs: “good at language -> good at thought” and “bad at thought -> bad at language”
Both fallacies stem from equating language with thought
To avoid them, we must separate “formal” from “functional” linguistic competence
“Formal linguistic competence” encompasses the capacities required to produce and comprehend a given language; LLMs are very good at this
“Functional linguistic competence” is the ability to use language to do things in the world; LLMs are bad at this
Separating these two capacities opens new ways to understand LLMs and to improve them
Read the full article on TechTalks.