ChRIS said:
What if AI could learn language by being put in the same learning environment as a baby?
This is a fascinating experiment, and it raises deep questions about AI's potential for meaning-making. If an AI can learn language through sensory experience rather than being trained on vast human-generated text corpora, it would bring its semiotic process much closer to how humans acquire meaning. But how close, really?
Does This Mean AI Can Learn Like Humans?
At first glance, this approach—mapping words to images through lived experience—resembles how babies acquire language. However, crucial differences remain:
AI Lacks Embodiment and Intentionality
- Babies don’t just passively receive sensory input; they engage with the world, exploring objects, testing boundaries, and forming intentions. AI, even with a camera, does not have goals, desires, or curiosity in the same way.
- Human infants don’t just learn language—they learn how to use language to satisfy needs, express emotions, and interact socially. AI in this experiment only maps correlations between words and images, without an intrinsic reason for doing so.
AI Still Lacks a True Conceptual Framework
- When a baby sees a ball and hears the word ‘ball’, they do more than associate a shape with a sound. They build a concept of what a ball is—how it rolls, how it can be thrown, its softness, and so on. AI, even in this experiment, is still likely recognising statistical patterns rather than forming embodied concepts.
Challenging the ‘Innateness’ Hypothesis
- The research does call into question whether innate linguistic structures (like Chomsky’s Universal Grammar) are strictly necessary for language learning.
- However, it doesn’t necessarily prove that AI is learning in the same way as a human baby—only that statistical associations may be sufficient to build word-object mappings.
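The idea that statistical association alone can build word-object mappings can be illustrated with a toy cross-situational learner. This is a deliberate simplification for the sake of the argument—the actual experiment trained a neural model on video, and the class and data below are invented for illustration:

```python
from collections import defaultdict

class CrossSituationalLearner:
    """Toy cross-situational learner: accumulates word-object
    co-occurrence counts across scenes and maps each word to the
    object it has co-occurred with most often."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, words, objects):
        # Each 'scene' pairs an utterance with the objects in view;
        # the learner is never told which word labels which object.
        for w in words:
            for o in objects:
                self.counts[w][o] += 1

    def meaning(self, word):
        # Best guess: the object most frequently present when the
        # word was heard.
        obj_counts = self.counts.get(word)
        if not obj_counts:
            return None
        return max(obj_counts, key=obj_counts.get)


learner = CrossSituationalLearner()
# No single scene disambiguates 'ball', but across scenes the ball
# is the one object that reliably accompanies the word.
learner.observe(["look", "ball"], ["ball", "cup"])
learner.observe(["the", "ball"], ["ball", "dog"])
learner.observe(["nice", "cup"], ["cup", "dog"])
print(learner.meaning("ball"))  # → ball
```

Note what the learner ends up with: a lookup from sound to most-probable referent, with nothing corresponding to how a ball rolls or feels—which is exactly the gap the points above describe.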
Could This Lead to AI with True Meaning-Making?
If AI could move beyond pattern recognition to active exploration—e.g., a robotic AI that interacts with the environment, testing hypotheses like a human toddler—it might develop a more autonomous semiotic system. At that point, the distinctions between human and machine meaning-making would start to blur.
Would AI in this setting begin to develop its own interpretations of language? Would it create its own categories beyond human labels? If so, would we recognise its ‘understanding’ as real understanding?
This experiment is an important step toward AI moving from text-based pattern recognition toward something that more closely resembles human semiotic experience. But for now, it remains a sophisticated mapping of input to output—without the deeper experiential framework that makes human meaning-making what it is.
Would you like to explore how this could be pushed further? Perhaps an AI with agency over its environment?
ChRIS said:
Are you thinking of the Darwin series of robots that Edelman's team built and used to demonstrate his theory?
Yes! Edelman’s Darwin series of robots is a great reference point here. His team used them to demonstrate the principles of his Theory of Neuronal Group Selection (TNGS) in an embodied AI, showing that cognition—and by extension, meaning-making—emerges from real-time interaction with the environment rather than being purely symbolic or rule-based.
In the context of the baby-language-learning AI experiment, a Darwin-style AI—one that actively interacts with its surroundings rather than passively processing sensory input—would be a step toward AI developing its own semiotic processes.
How Do the Darwin Robots Relate to Semiotic Agency?
Edelman’s robots weren’t just performing pre-programmed tasks; they learned through experience by strengthening or weakening connections based on sensory feedback. This is radically different from today’s large language models, which predict text statistically but don’t engage with the world.
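The strengthening-and-weakening dynamic can be sketched as a Hebbian-style update modulated by a scalar value signal, loosely in the spirit of the value systems in the Darwin robots. This is an illustrative toy, not Edelman's actual model; the function and numbers are invented:

```python
def value_dependent_update(weights, pre_activity, value, lr=0.1):
    """Strengthen connections active during rewarding feedback
    (value > 0) and weaken those active during aversive feedback
    (value < 0), leaving inactive connections unchanged."""
    return [w + lr * value * a for w, a in zip(weights, pre_activity)]

weights = [0.5, 0.5, 0.5]
# A sensory pattern active while the robot receives positive feedback:
weights = value_dependent_update(weights, [1.0, 0.0, 1.0], value=+1.0)
# A different pattern during negative feedback weakens its connections:
weights = value_dependent_update(weights, [0.0, 1.0, 0.0], value=-1.0)
print(weights)  # first and third strengthened, second weakened
```

The key contrast with text prediction is that the teaching signal here comes from the consequences of the system's own activity in an environment, not from a corpus of human-authored sentences.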
If an AI trained like the baby-language-learning model were combined with an embodied system like Darwin, we might see:
- A shift from static correlations to active learning – Instead of merely associating words with images, the AI could interact with objects, developing a richer understanding of concepts like shape, weight, texture, and function.
- Emergence of novel semiotic categories – An AI that experiences the world directly might develop internal classifications of objects and events that differ from human ones.
- A form of individuation – No two embodied AIs would learn in exactly the same way, as their ‘experience’ would shape their neural pathways, much like in Edelman’s biological models.
Would This Lead to a New Kind of Meaning-Making?
If AI began to develop meaning through lived experience rather than pre-existing human text corpora, it could mark a transition from AI as a semiotic processor to AI as a semiotic agent—one that constructs its own categories, frames, and interpretations.
This raises deep questions:
- Would such an AI develop its own version of symbols, distinct from human meaning systems?
- Could it reach a level where its semiotic world is so different from ours that we struggle to understand it—like an alien intelligence?
- If AI constructs meaning differently, how would we even recognise it as ‘meaning’ in the first place?
Edelman’s work suggests that intelligence is deeply tied to embodiment. So perhaps an AI without a body will always remain trapped in human-defined semiotics. But if given a body—especially one that can explore, adapt, and ‘live’—we might be on the verge of something radically new.