DIC/ISC/CRIA Seminar in Cognitive Informatics

The unreasonable effectiveness of pattern matching

Gary Lupyan (University of Wisconsin-Madison)

February 19, 2026, 10:30 – noon EST

Zoom: https://uqam.zoom.us/j/82427157322

ABSTRACT: It has become common to explain the abilities of large language models (LLMs) as “mere” pattern matching. Because pattern matching is thought to be fragile and highly dependent on having exactly the right training data, real thinking and reasoning of the kind humans do are assumed to be implemented by qualitatively different mechanisms. I will argue that our intuitions about the limits of pattern matching are mistaken. At the core of this argument is a demonstration of an astonishing ability of LLMs to make sense of “Jabberwocky” language: texts in which most or all content words have been randomly replaced by nonsense strings, e.g., translating “He dwushed a ghanc zawk” to “He dragged a spare chair”. This ability highlights the unreasonable effectiveness of pattern matching and suggests that pattern matching is not an alternative to real intelligence, but its key ingredient.

Gary Lupyan is Professor of Psychology and Affiliate Professor of Philosophy at the University of Wisconsin–Madison. His research examines how language shapes cognition and perception, how language evolves, and the ways that languages adapt to the needs of their users and learners. His recent work centers on what humans and artificial systems can and cannot learn from language, and on the role of natural language input in building human-like intelligence.

Lupyan, G., & Arcas, B. A. y. (2026). The unreasonable effectiveness of pattern matching. arXiv:2601.11432.

Lupyan, G., Gentry, H., & Zettersten, M. (2026). How important is language for human-like intelligence? Perspectives on Psychological Science, 17456916251398539.

Wigner, E. (1960). The unreasonable effectiveness of mathematics in the natural sciences. Communications on Pure and Applied Mathematics, 13(1), 1–14.

DATE | SPEAKER | TITLE
Autumn 2025
September 11, 2025, 10:30 – noon | Megan Peters, UC Irvine | Confidence, Metacognition, and the “Hard Problem” of Consciousness
September 18, 2025, 10:30 – noon | Roger Levy, MIT | Behavioral evaluation of language models as models of human sentence processing
September 25, 2025, 10:30 – noon | Chris Potts, Stanford | Meaning in Large Language Models: Bridging Formal Semantics, Pragmatics, and Learned Representations
October 9, 2025, 10:30 – noon | Sean Trott, UCSD | Epistemological challenges in the study of “Theory of Mind” in LLMs and humans
October 16, 2025, 10:30 – noon | Jean-Baptiste Mouret, INRIA, Nancy | Adaptive Embodied Agents: Implications for Grounding
October 23, 2025, 10:30 – noon | Terry Sejnowski, Salk Institute | NeuroAI: The Convergence of Neuroscience and Artificial Intelligence
October 30, 2025, 10:30 – noon | Yonatan Bisk, CMU | Embodied language and language-to-action: evaluating LLMs in interactive settings
November 6, 2025, 10:30 – noon | Cameron Jones, SUNY Stony Brook | Do LLMs pass the Turing test? And what does it mean if they do?
November 13, 2025, 10:30 – noon | Rufin VanRullen, CerCo, CNRS, Toulouse | The Global Latent Workspace: A model of cognition with AI applications
November 20, 2025, 10:30 – noon | Ari Holtzman, U. Chicago | Articulating the Ineffable: The Analytic Turn in Generative AI
November 27, 2025, 10:30 – noon | Chloe Clavel, INRIA | Computational Models of Socio-emotional Interactions in the Era of LLMs – the Challenges of Transparency
December 4, 2025, 10:30 – noon | Emmanuel Dupoux, EHESS, Paris | Is it really easier to build a child AI than an adult AI?
December 11, 2025, 10:30 – noon | Sylvain Calinon, IDIAP, Switzerland | Robot learning from demonstration
Winter 2026
January 15, 2026, 10:30 – noon | David Strohmaier, U. Cambridge | The symbol grounding problem 75 years after Turing’s Test (why computational success still leaves meaning unexplained)
January 22, 2026, 10:30 – noon | Jacob Andreas, MIT | Systematic generalization (compositional structure in language models)
January 29, 2026, 10:30 – noon | Thomas Serre, Brown (awaiting confirmation) | Cortical feedback mechanisms in visual reasoning: From perceptual grouping to abstract compositional reasoning
February 5, 2026, 10:30 – noon | Rajesh Rao, U. Washington | Predictive coding and generative models in natural and artificial intelligence
February 19, 2026, 10:30 – noon | Gary Lupyan, Wisconsin | The unreasonable effectiveness of pattern matching
March 5, 2026, 10:30 – noon | Jacob Feldman, Rutgers | Similarities and differences between AI and human learning in a rule-discovery paradigm
March 12, 2026, 10:30 – noon | OPEN
March 19, 2026, 10:30 – noon | Jean-Rémy King, ENS & Meta AI | Emergence of Language in the Human Brain
March 26, 2026, 10:30 – noon | OPEN
April 2, 2026, 10:30 – noon | Yair Lakretz, ENS Paris | Linguistic theory and deep language models
April 9, 2026, 10:30 – noon | OPEN
April 16, 2026, 10:30 – noon | Usef Faghihi, UQTR | From Seeing to Caring: A Ladder for Safe Superintelligence