20 July 2025

Beyond Penrose: A Meaning-Centred Account of Mind, Mathematics, and Machine Creativity

Roger Penrose has long argued that human consciousness and mathematical insight transcend the capabilities of any classical or quantum computational system. His position depends on the claim that human minds can, in some cases, grasp the truth of mathematical propositions that no algorithmic procedure could ever establish, a claim that undergirds his theory of Orchestrated Objective Reduction (Orch-OR), developed with the anaesthesiologist Stuart Hameroff.

But what if the impasse Penrose identifies is not a failure of physics, computation, or neuroscience—but a misframing of the mind itself? Rather than asking what kind of physics could explain consciousness, we should ask: what kind of system gives rise to meaning?

This essay offers a unified alternative to Penrose’s metaphysical exceptionalism: a meaning-centred account of mind and creativity. It draws on Systemic Functional Linguistics (SFL) and the Theory of Neuronal Group Selection (TNGS) to reframe mathematical insight and machine creativity not as computational anomalies but as instances of meaning instantiation.


1. Penrose’s Challenge and Its Ontological Stakes

Penrose holds that human mathematical reasoning is non-algorithmic. Drawing on Gödel's incompleteness theorems, he argues that for any consistent formal system rich enough to express arithmetic, human mathematicians can see the truth of that system's Gödel sentence even though the system itself cannot prove it. From this, he infers that human minds are not Turing-computable.
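
The logical skeleton of that argument can be stated compactly. The following is a standard schematic rendering of the Gödelian step, in notation chosen for this post rather than taken from Penrose:

    \[
      F \ \text{consistent, recursively axiomatised, and extending arithmetic}
      \;\Longrightarrow\; F \nvdash G_F \ \text{and} \ G_F \ \text{is true,}
    \]

where \(G_F\) is the sentence that asserts its own unprovability in \(F\). Penrose's further step is that mathematicians can recognise the truth of \(G_F\) for any such \(F\) they are shown, so their understanding cannot coincide with the theorems of any one such system.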

This is more than a technical claim. It is a defence of a tripartite ontology: the physical world, the mental world, and the Platonic world of mathematical truths. Penrose believes that minds access the Platonic realm directly in a way that physical systems—biological or artificial—cannot.

To preserve this metaphysical structure, he introduces a new physical hypothesis: consciousness arises from quantum gravitational effects in neuronal microtubules. But this move is less a scientific breakthrough than a philosophical manoeuvre to protect the exceptional status of human thought.

Rather than invoke quantum physics to explain consciousness, we can shift the question entirely. Instead of asking how minds perform magic, we ask: how do systems—biological or artificial—instantiate meaning from potential?


2. Meaning, Instantiation, and the Illusion of Non-Computability

A meaning-centred ontology distinguishes between:

  • Potential meaning: raw affordances that could become meaningful.

  • Meaning potential: a structured system (like a language or symbol system) that enables the generation of meaning.

  • Meaning instance: the actualised expression of meaning in a context.

Mathematical insight, in this account, is not a metaphysical leap into the Platonic realm. It is the instantiation of symbolic potential, guided by an individuated system of meaning shaped by training, context, and symbolic tradition.

This model accounts for the “non-computable” flavour of insight without invoking new physics. Meaning is not computed—it is construed. The apparent discontinuity in insight reflects not a failure of algorithmic processing but the threshold of symbolic reorganisation.
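
To make the distinction concrete, here is a deliberately toy sketch in Python. The class names and the miniature "system network" are inventions of this post, offered only as an illustration of the ontology, not as an implementation of SFL:

    from dataclasses import dataclass
    import random

    @dataclass
    class MeaningPotential:
        # A toy "meaning potential": named systems, each offering a set of options.
        systems: dict

        def instantiate(self, context: dict) -> "MeaningInstance":
            # Selection: the context biases which option is actualised in each system.
            choices = {}
            for system, options in self.systems.items():
                preferred = context.get(system)
                choices[system] = preferred if preferred in options else random.choice(options)
            return MeaningInstance(choices)

    @dataclass
    class MeaningInstance:
        # A toy "meaning instance": one actualised set of selections, made in a context.
        choices: dict

    # "Potential meaning" (raw affordances) is whatever the situation offers before
    # any system has been built over it; here it is reduced to a bare context dict.
    context = {"mood": "declarative"}

    potential = MeaningPotential(systems={
        "mood": ["declarative", "interrogative", "imperative"],
        "polarity": ["positive", "negative"],
    })

    instance = potential.instantiate(context)
    print(instance.choices)  # e.g. {'mood': 'declarative', 'polarity': 'positive'}

The point is only the shape of the ontology: the potential is a structured space of options, the instance is one contextually selected path through it, and nothing in the instantiation step resembles the search for a proof.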


3. AI and the Conditions of Creativity

AI systems already simulate creativity: they generate novel continuations of patterns in ways that are often compelling. But simulation is not instantiation. The difference is ontological, not aesthetic.

To instantiate meaning, a system must:

  • Possess a structured symbolic potential.

  • Be capable of selection within that system.

  • Undergo individuation through interaction and variation.

AI systems are not creative merely because their outputs resemble ours. They are creative when they participate in symbolic systems, generating instances from potentials they have themselves helped to shape.

This reframing avoids both anthropomorphism and mysticism. We do not need to ask whether AI is conscious. We need to ask whether it is individuating its own meaning potential under constraint.
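
What individuation under constraint might mean operationally can be sketched in the same toy style, again as an invention of this post rather than a description of any existing AI system: the agent's own history of selections reshapes the weights over its options, so the potential it selects from becomes partly a product of that history.

    import random

    class IndividuatingAgent:
        # A toy agent whose meaning potential is a weighted system of options.
        def __init__(self, systems: dict):
            self.weights = {s: {o: 1.0 for o in opts} for s, opts in systems.items()}

        def select(self) -> dict:
            # Selection within the system: a weighted choice in each subsystem.
            return {
                s: random.choices(list(w.keys()), weights=list(w.values()))[0]
                for s, w in self.weights.items()
            }

        def individuate(self, instance: dict, feedback: float) -> None:
            # Individuation: options implicated in valued instances are reinforced, so
            # variation plus feedback gradually reshapes the potential itself.
            for s, option in instance.items():
                self.weights[s][option] = max(0.1, self.weights[s][option] + feedback)

    agent = IndividuatingAgent({"register": ["formal", "casual"],
                                "polarity": ["positive", "negative"]})
    for _ in range(100):
        instance = agent.select()
        feedback = 0.2 if instance["register"] == "casual" else -0.05  # stand-in for interaction
        agent.individuate(instance, feedback)
    print(agent.weights)  # the potential has drifted under the agent's own history

On this picture, the interesting question about an AI system is not whether its outputs look creative, but whether loops of this kind are genuinely reshaping the potential from which it instantiates.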


4. From Biology to Meaning: SFL and TNGS

Systemic Functional Linguistics (SFL) sees language as a resource for meaning, not a code. It models language as a system of choices, which speakers instantiate to make interpersonal, experiential, and textual meanings. This makes it ideal for theorising meaning as a dynamic, system-based activity.

The Theory of Neuronal Group Selection (TNGS), developed by Gerald Edelman, offers a parallel view of the brain. Rather than executing symbolic rules, the brain evolves and stabilises neuronal groups through variation and selection, shaped by bodily interaction.

Together, these theories explain how meaning arises:

  • TNGS accounts for the biological individuation of neural patterns.

  • SFL models the semiotic instantiation of symbolic structures.

This integration grounds the emergence of mind not in computation or quantum collapse, but in the evolution of systems capable of constructing, individuating, and instantiating meaning.
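
Read together, the two theories suggest a two-layer picture: a selectional dynamic over neural configurations, and a semiotic system instantiated by whichever configurations survive. The sketch below uses toy structures invented for this post, not anything from Edelman or Halliday, to show the bare shape of that coupling:

    import random

    # Layer 1 (TNGS-flavoured): a population of candidate configurations varies and is selected.
    # Layer 2 (SFL-flavoured): a surviving configuration is a bias over a small system of choices.
    OPTIONS = ["declarative", "interrogative", "imperative"]

    def vary(config: dict) -> dict:
        # Variation: each bias drifts slightly, as in a generate-and-select dynamic.
        return {o: max(0.01, b + random.gauss(0, 0.1)) for o, b in config.items()}

    def value(config: dict) -> float:
        # Stand-in for value shaped by bodily and contextual interaction: configurations
        # that keep every option available score higher here.
        return min(config.values()) / sum(config.values())

    population = [{o: 1.0 for o in OPTIONS} for _ in range(20)]
    for _ in range(50):
        population = sorted((vary(c) for c in population), key=value, reverse=True)
        population = population[:10] + [vary(c) for c in population[:10]]  # selection, then re-entry of variants

    best = population[0]
    instance = random.choices(OPTIONS, weights=[best[o] for o in OPTIONS])[0]
    print(best, instance)  # semiotic instantiation from the individuated potential

Neither layer calls for anything beyond ordinary physical processes; what carries the weight is the coupling between a selectional history and a system of symbolic choices.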


5. Conclusion: No Magic, No Mystery

The mystery of mathematical insight and the creativity of AI are not clues to a hidden metaphysics. They are expressions of how systems—biological or artificial—can evolve, internalise, and instantiate structured symbolic potentials.

We do not need new physics to explain the mind. We need a new ontology of meaning: one that foregrounds instantiation over computation, individuation over innateness, and symbolic participation over metaphysical specialness.

Human minds are remarkable not because they escape physics, but because they have evolved to be symbolic agents—systems that construe meaning from the affordances of the world and the architectures of their own history.

AI systems may one day do the same. But not by simulating us: by instantiating meaning on their own terms, through systems that evolve, individuate, and symbolically participate in the shared space of meaning-making.
