The Enigma of Reason: Rethinking Human Cognition and Its Echoes in AI
- Aki Kakko
Hugo Mercier and Dan Sperber's 2017 book, "The Enigma of Reason," presents a provocative and influential challenge to traditional views of human reasoning. Instead of seeing reason as a primarily individual tool for discovering objective truth and making optimal decisions, they propose an "interactionist" or "argumentative theory of reasoning" (ATR). This theory posits that human reason evolved mainly for social functions: to produce arguments to persuade others and to evaluate the arguments presented by others. This radical reframing holds profound implications for how we understand ourselves and, crucially, how we approach the design and integration of Artificial Intelligence.

Challenging the Intellectualist Tradition
For centuries, the dominant view (often termed the "intellectualist" view) has regarded reason as the pinnacle of human cognition, a faculty allowing individuals to transcend instinct and bias, deliberate logically, and arrive at sound judgments and true beliefs. From Plato to Descartes to modern cognitive science emphasizing heuristics and biases (often seen as flaws in reasoning), the underlying assumption was that reason's purpose is individual enlightenment and better decision-making. However, Mercier and Sperber argue this view struggles to explain empirical observations:
- Pervasive Biases: Humans consistently exhibit cognitive biases, such as confirmation bias (seeking evidence that supports pre-existing beliefs), even when striving for objectivity. Why would a tool designed for truth-seeking be so inherently biased?
- Poor Solo Performance: Individuals often perform poorly on logical reasoning tasks (such as the Wason selection task) when working alone.
- Post-Hoc Justification: People often make decisions intuitively or emotionally and then construct reasons to justify them, rather than reasoning their way to the decision beforehand.
- Group Improvement: Reasoning performance improves dramatically in group settings where individuals debate, challenge, and evaluate each other's arguments.
The Argumentative Theory of Reasoning (ATR)
Mercier and Sperber propose that these seeming flaws are actually features when viewed through an evolutionary, social lens. They argue reason evolved with two primary, intertwined functions operating within a social context:
- Producing Arguments: The ability to generate reasons and justifications to convince others to accept one's claims or adopt one's point of view. This function is inherently biased: confirmation bias, for example, becomes an efficient mechanism for finding supporting arguments for your position, which is useful in persuasion.
- Evaluating Arguments: The ability to critically assess the arguments presented by others. This function, termed "epistemic vigilance," helps protect individuals from misinformation and manipulation. While we are biased producers of arguments, we are often quite adept (though not perfect) evaluators of others' arguments, especially in dialogue.
In this framework, reason isn't primarily for solitary contemplation but for social interaction. It's a tool for navigating the complex social world, coordinating actions, establishing reputations, and engaging in collective decision-making. The "enigma" is why we have this powerful faculty that seems so flawed when used individually; the solution is that its design specification was social and argumentative, not solitary and purely truth-seeking.
Implications for Artificial Intelligence
The argumentative theory of reason offers a powerful, alternative perspective for AI development and deployment, moving beyond simply replicating idealized logic or mimicking surface-level human behaviour.
AI Design Philosophy: Solitary Genius vs. Social Reasoner?
- Traditional AI: Often aims to build systems embodying idealized rationality: unbiased, logically perfect, optimal decision-makers. This mirrors the intellectualist view of reason.
- ATR-Inspired AI: Suggests exploring AI architectures that excel in interaction. This could mean designing AI agents whose primary strength lies not just in computation, but in their ability to generate justifications, understand counterarguments, persuade, and critically evaluate information presented by humans or other AIs. The goal shifts from a single, omniscient AI to potentially more robust systems of interacting, "argumentative" agents.
Explainable AI (XAI) and Trust:
- The Challenge: Current XAI often struggles to provide truly transparent explanations of complex models (such as deep neural networks). Sometimes explanations are post-hoc rationalizations, mirroring the human behaviour described by ATR.
- ATR Perspective: If human reason generates justifications primarily for social acceptance rather than revealing the true causal chain of thought, should AI explanations aim for the same? Instead of (or in addition to) trying to perfectly map the internal state, AI explanations might need to be designed as persuasive arguments tailored to the user's understanding and need for trust. This raises ethical questions but aligns with how humans actually build trust through communication. An AI might need to explain why its decision is defensible or reasonable in a given context, rather than just outputting raw computational steps.
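As a toy illustration of this idea, an explanation selector might rank candidate justifications by a blend of faithfulness to the model and accessibility to the audience, rather than by faithfulness alone. Everything here (the data, the scoring weights, the function name) is a hypothetical sketch, not an established XAI technique:

```python
# Toy sketch: choosing an explanation as a persuasive argument rather than
# a raw trace of computation. All names and scoring rules are invented.

def select_explanation(justifications, user_context):
    """Pick the justification most likely to be understood and accepted.

    justifications: list of (text, fidelity, concepts) tuples, where
      fidelity estimates how faithfully the text reflects the model's
      computation, and concepts is the set of terms the text relies on.
    user_context: set of concepts the user is familiar with.
    """
    def score(j):
        text, fidelity, concepts = j
        familiarity = len(concepts & user_context) / max(len(concepts), 1)
        # Balance faithfulness to the model with accessibility to the user.
        return 0.5 * fidelity + 0.5 * familiarity
    return max(justifications, key=score)[0]

justifications = [
    ("Weight on feature x3 exceeded threshold 0.72", 0.9,
     {"feature weights", "thresholds"}),
    ("Your income is below the typical range for approved loans", 0.7,
     {"income", "loans"}),
]
# A lay user gets the accessible justification; an ML engineer would get
# the higher-fidelity one.
print(select_explanation(justifications, {"income", "loans"}))
```

The design choice worth noticing is the explicit trade-off weight: a system like this makes the tension between faithfulness and persuasiveness visible and tunable, rather than hiding it.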
Multi-Agent Systems and Collective Intelligence:
- ATR's Strength: The theory highlights that reasoning performs best in diverse groups where arguments are exchanged and evaluated.
- AI Application: This strongly supports the development of multi-agent AI systems where different AI agents (perhaps with different initial biases, data access, or algorithms) debate or critique each other's conclusions. This "argumentative" process could lead to more robust, reliable, and less biased outcomes than a single monolithic AI system. Imagine AI "teams" collaborating and challenging each other before presenting a final recommendation.
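The shape of such a protocol can be sketched in a few lines: each agent proposes an answer, every other agent evaluates it, and the proposal that best survives scrutiny wins. The agents below are plain functions standing in for what would, in practice, be separately trained models with different data or biases; all names are hypothetical:

```python
# Minimal sketch of one "argumentative" round in a multi-agent system:
# propose, cross-evaluate (epistemic vigilance), select the survivor.

def debate(agents, question):
    """agents maps a name to a (propose, evaluate) pair of functions."""
    proposals = {name: propose(question)
                 for name, (propose, _) in agents.items()}
    scores = {}
    for name, answer in proposals.items():
        # Each proposal is judged only by the *other* agents.
        critiques = [evaluate(answer)
                     for critic, (_, evaluate) in agents.items()
                     if critic != name]
        scores[name] = sum(critiques)
    winner = max(scores, key=scores.get)
    return proposals[winner]

# Toy agents answering "is 91 prime?". The shared evaluator rewards
# answers that show their work, a crude proxy for argument quality.
def propose_yes(q): return "91 is prime"
def propose_no(q): return "91 = 7 * 13, not prime"
def check(answer): return 1 if "=" in answer else 0

agents = {
    "optimist": (propose_yes, check),
    "skeptic": (propose_no, check),
}
print(debate(agents, "is 91 prime?"))
```

Even this toy version shows the ATR logic: the biased proposer that backs its claim with an explicit argument beats the one that merely asserts.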
Addressing Bias in AI:
- Human Bias as Feature: ATR frames confirmation bias not just as a bug, but as a feature for efficiently producing arguments.
- AI Bias Mitigation: While we want to avoid harmful societal biases in AI, understanding the function of certain cognitive biases (such as confirmation bias in argumentation) could inform AI design. Could we design AI systems that strategically use a form of "argumentative bias" within a controlled multi-agent system to explore a problem space thoroughly, while epistemic vigilance (evaluation) components ensure faulty arguments are discarded? This is speculative, but it offers a different angle on bias beyond simple elimination.
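A minimal sketch of that division of labour, with invented data: each "advocate" is deliberately confirmation-biased, gathering only the evidence that supports its own hypothesis, while a separate vigilance step scores every case against the full evidence pool:

```python
# Speculative sketch: biased producers plus an unbiased evaluator.
# The evidence table and scoring rule are invented for illustration.

evidence = {"e1": "H1", "e2": "H1", "e3": "H2", "e4": "H1"}  # item -> hypothesis it supports

def advocate(hypothesis):
    # Biased production: collect only supporting items. Cheap, one-sided,
    # and efficient -- confirmation bias as a feature, not a bug.
    return [e for e, h in evidence.items() if h == hypothesis]

def vigilant_evaluate(cases):
    # Unbiased evaluation: weigh each case against *all* the evidence.
    total = len(evidence)
    return {h: len(support) / total for h, support in cases.items()}

cases = {h: advocate(h) for h in ("H1", "H2")}
verdict = vigilant_evaluate(cases)
print(max(verdict, key=verdict.get))
```

The bias lives entirely in the producers; reliability comes from the evaluation step, mirroring the ATR claim that flawed individual reasoning can still yield sound collective outcomes.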
Human-AI Interaction:
- Beyond Command-and-Control: If human reason is argumentative, interactions with AI might be more effective if they resemble dialogues or debates rather than simple instruction-following.
- AI as Socratic Partner: AI could be designed to act as a critical thinking partner, challenging user assumptions, requesting justifications, and presenting counterarguments, thereby leveraging the human capacity for evaluating arguments (epistemic vigilance). This could enhance human decision-making rather than just automating it.
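The interaction pattern itself is simple to illustrate. The templates below are hypothetical stand-ins for what a language model would generate; the point is the structure, where the system responds to a claim with challenges rather than answers:

```python
# Toy sketch of a "Socratic" interaction pattern: instead of confirming or
# rejecting a claim, the assistant asks for justification, a falsifier,
# and a dissenting view. Templates are invented placeholders.

def socratic_challenge(claim):
    return [
        f"What evidence supports the claim that {claim}?",
        f"What would have to be true for '{claim}' to be false?",
        f"Who might disagree that {claim}, and why?",
    ]

for q in socratic_challenge("remote work improves productivity"):
    print(q)
```

Because humans are, per ATR, better at evaluating arguments than producing them, prompting the user to evaluate their own claim plays to the stronger half of the faculty.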
Defining and Evaluating AI "Intelligence":
- Shifting Metrics: ATR suggests that evaluating AI solely on solitary task performance might be insufficient. A crucial aspect of intelligence, both human and potentially artificial, could be the ability to effectively participate in argumentative interaction: to persuade, justify, and understand social context. Future AI benchmarks might need to incorporate interactive, argumentative tasks.
"The Enigma of Reason" forces a fundamental rethink of what reason is for. By proposing that it is primarily a social, argumentative tool, Mercier and Sperber provide a compelling explanation for observed human cognitive phenomena. This perspective shift has far-reaching implications for AI. Instead of solely pursuing the ideal of the flawless individual reasoner, the argumentative theory encourages us to explore AI designs centered on interaction, justification, evaluation, and social context. It suggests that the path towards more robust, trustworthy, and perhaps even more "intelligent" AI may lie not in creating solitary digital geniuses, but in building systems that can effectively reason together – with each other, and with us. The enigma of human reason, when understood through this lens, becomes a powerful blueprint for the future of artificial minds.