The unreasonable effectiveness of pattern matching
Gary Lupyan (University of Wisconsin-Madison)
February 19, 2026, 10:30 – noon EST
Zoom: https://uqam.zoom.us/j/82427157322
ABSTRACT: It has become common to explain the abilities of large language models (LLMs) as “mere” pattern matching. Because pattern matching is thought to be fragile and highly dependent on having exactly the right training data, real thinking and reasoning of the kind humans do are assumed to be implemented by qualitatively different mechanisms. I will argue that our intuitions about the limits of pattern matching are mistaken. At the core of this argument is a demonstration of an astonishing ability of LLMs to make sense of “Jabberwocky” language – texts in which most or all content words have been randomly replaced by nonsense strings – e.g., translating “He dwushed a ghanc zawk” to “He dragged a spare chair”. This ability highlights the unreasonable effectiveness of pattern matching and suggests that pattern matching is not an alternative to real intelligence, but its key ingredient.
Gary Lupyan is Professor of Psychology and Affiliate Professor of Philosophy at the University of Wisconsin–Madison. His research examines how language shapes cognition and perception, how language evolves, and the ways that languages adapt to the needs of their users and learners. His recent work centers on understanding what humans and artificial systems can and cannot learn from language, and on the role of natural language input in building human-like intelligence.
Lupyan, G., & Arcas, B. A. y. (2026). The unreasonable effectiveness of pattern matching. arXiv:2601.11432.
Lupyan, G., Gentry, H., & Zettersten, M. (2026). How important is language for human-like intelligence? Perspectives on Psychological Science, 17456916251398539.
Wigner, E. (1960). The unreasonable effectiveness of mathematics in the natural sciences. Communications on Pure and Applied Mathematics, 13, 1–14.
| DATE | SPEAKER | TITLE |
| --- | --- | --- |
| Autumn 2025 | | |
| September 11, 2025 10:30 – noon | Megan Peters, UC Irvine | Confidence, Metacognition, and the “Hard Problem” of Consciousness |
| September 18, 2025 10:30 – noon | Roger Levy, MIT | Behavioral evaluation of language models as models of human sentence processing |
| September 25, 2025 10:30 – noon | Chris Potts, Stanford | Meaning in Large Language Models: Bridging Formal Semantics, Pragmatics, and Learned Representations |
| October 9, 2025 10:30 – noon | Sean Trott, UCSD | Epistemological challenges in the study of “Theory of Mind” in LLMs and humans |
| October 16, 2025 10:30 – noon | Jean-Baptiste Mouret, INRIA, Nancy | Adaptive Embodied Agents: Implications for Grounding |
| October 23, 2025 10:30 – noon | Terry Sejnowski, Salk Institute | NeuroAI: The Convergence of Neuroscience and Artificial Intelligence |
| October 30, 2025 10:30 – noon | Yonatan Bisk, CMU | Embodied language and language‑to‑action: evaluating LLMs in interactive settings |
| November 6, 2025 10:30 – noon | Cameron Jones, SUNY Stony Brook | Do LLMs pass the Turing test? And what does it mean if they do? |
| November 13, 2025 10:30 – noon | Rufin VanRullen, CerCo, CNRS, Toulouse | The Global Latent Workspace: A model of cognition with AI applications |
| November 20, 2025 10:30 – noon | Ari Holtzman, U. Chicago | Articulating the Ineffable: The Analytic Turn in Generative AI |
| November 27, 2025 10:30 – noon | Chloe Clavel, INRIA | Computational Models of Socio-emotional Interactions in the Era of LLMs – the Challenges of Transparency |
| December 4, 2025 10:30 – noon | Emmanuel Dupoux, EHESS, Paris | Is it really easier to build a child AI than an adult AI? |
| December 11, 2025 10:30 – noon | Sylvain Calinon, IDIAP, Switzerland | Robot learning from demonstration |
| Winter 2026 | | |
| January 15, 2026 10:30 – noon | David Strohmaier, U Cambridge | The symbol grounding problem 75 years after Turing’s Test (why computational success still leaves meaning unexplained) |
| January 22, 2026 10:30 – noon | Jacob Andreas, MIT | Systematic generalization (compositional structure in language models) |
| January 29, 2026 10:30 – noon (awaiting confirmation) | Thomas Serre, Brown | Cortical feedback mechanisms in visual reasoning: From perceptual grouping to abstract compositional reasoning |
| February 5, 2026 10:30 – noon | Rajesh Rao, U Washington | Predictive coding and generative models in natural and artificial intelligence |
| February 19, 2026 10:30 – noon | Gary Lupyan, Wisconsin | The unreasonable effectiveness of pattern matching |
| March 5, 2026 10:30 – noon | Jacob Feldman, Rutgers | Similarities and differences between AI and human learning in a rule-discovery paradigm |
| March 12, 2026 10:30 – noon | OPEN | |
| March 19, 2026 10:30 – noon | Jean-Rémy King, ENS & Meta AI | Emergence of Language in the Human Brain |
| March 26, 2026 10:30 – noon | OPEN | |
| April 2, 2026 10:30 – noon | Yair Lakretz, ENS Paris | Linguistic theory and deep language models |
| April 9, 2026 10:30 – noon | OPEN | |
| April 16, 2026 10:30 – noon | Usef Faghihi, UQTR | From Seeing to Caring: A Ladder for Safe Superintelligence |
