A technician explains measurements whilst a scientist explains observations.

The Symbol Grounding Problem and the Chinese Room

The "Symbol Grounding Problem" was formulated by Harnad (1990). He pointed out that the symbols in an information processor can only be described internally by using an address to obtain other symbols, which can themselves only be described by looking up yet further symbols, and so on. Nothing within an information processor gives its internal symbols any meaning.
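The regress can be made concrete in code: each symbol is "defined" only by a reference to further symbols, so every lookup yields more symbols and never a meaning. A minimal sketch in Python (the toy dictionary and the `ground` function are invented for illustration):

```python
# A toy "dictionary" in which every symbol is defined only in
# terms of other symbols -- there is no exit from the loop.
definitions = {
    "horse": ["animal", "four-legged"],
    "animal": ["living", "thing"],
    "living": ["thing", "animal"],
    "thing": ["entity"],
    "entity": ["thing"],
    "four-legged": ["thing"],
}

def ground(symbol, depth=0, max_depth=5):
    """Try to find a meaning for `symbol` by following its
    definition; all we ever obtain is further symbols."""
    if depth == max_depth:
        return [symbol]          # still just a symbol
    result = []
    for s in definitions.get(symbol, [symbol]):
        result.extend(ground(s, depth + 1, max_depth))
    return result

print(ground("horse"))  # a list of symbols, never a meaning
```

However deep the lookup goes, the result is always another collection of symbols drawn from the same store.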

The Symbol Grounding Problem can be seen as another way of stating Aristotle's regress, in which perception is held to be problematic because the image on the eye must be seen by an inner eye, which must in turn be seen by another inner eye, and so on. Similarly, in a computer the contents of one data store are categorised by being related to data in another data store, but the content of that store is just another set of symbols that can only be categorised by relating them to yet another data store, and so on.

The symbol grounding problem is a more formal way of stating Searle's Chinese Room Argument (Searle 1980), in which a person who knows no Chinese produces appropriate Chinese replies to Chinese messages by following a set of instructions (i.e. by manually implementing a computer program), and so might appear to understand Chinese without understanding it at all.

Both the symbol grounding and Chinese room problems had already been explored by Aristotle when he pointed out that perception would either create an infinite regress or require a sense that is self-aware. In the symbol grounding problem an infinite regress would occur if a processor sought any internal meaning for its symbols; in the Chinese room the symbols only acquire meaning when they are passed outside the processor to a recipient that is self-aware.

Leibniz identified the problem of mechanical systems being no more than parts that act upon each other three hundred years ago:

"One is obliged to admit that perception and what depends upon it is inexplicable on mechanical principles, that is, by figures and motions. In imagining that there is a machine whose construction would enable it to think, to sense, and to have perception, one could conceive it enlarged while retaining the same proportions, so that one could enter into it, just like into a windmill. Supposing this, one should, when visiting within it, find only parts pushing one another, and never anything by which to explain a perception. Thus it is in the simple substance, and not in the composite or in the machine, that one must look for perception." Leibniz. Monadology, 17.

The Symbol Grounding Problem is directly related to Leibniz's windmill, computers being machines in which electrons are just "parts pushing one another". Even if a computer were attached to a robotic arm and recorded its own actions, all it would have internally would be a set of symbols.

All that has changed in the past three hundred years is that modern people really love their machines. The idea that our own creations might be conscious is not new: the ancient Greeks loved their ceramics as we love computers, and believed that Man himself was made by the gods breathing life into a clay figure.

The symbol grounding problem does not seem to apply to us. Unlike a digital computer, we know what we are doing: if I fill a hole by digging soil with a spade, my mind contains the directedness of the loaded spade towards the hole as a real extension in time (see Time and conscious experience). It is this extension in time that allows me to know my own symbols.

Harnad (1990) shows that symbols can be grounded by association with real objects in the world, but this demonstration only means that we can construct machines that work, not that the machines have any internal conscious experience.


Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346.

Searle, J.R. (1980) Minds, Brains, and Programs. The Behavioral and Brain Sciences 3. Cambridge University Press. http://members.aol.com/NeoNoetics/MindsBrainsPrograms.html


It is curious that the symbol grounding problem is really a restatement of Heisenberg's Uncertainty Principle: we cannot simultaneously observe the position and momentum (elapsed time) of an event. Information is always physically instantiated, so uncertainty applies as much because two sets of information must be generated to represent both position and momentum as because, in Heisenberg's original example, a measuring photon perturbs the electron that it strikes to make a measurement. (See Entropic uncertainty principle.)


  1. I have one question here.

    Assuming that materialism is false and that we are therefore misconceiving the physical world in a big way, how do we know that a computer is doing nothing but 'moving things around'? Since the suggestion here seems to be that perception cannot be reduced to mechanical motions (and yet we human beings have perception, despite mechanical motions being all we can investigate in the third person), why can't it be that computers can perceive (but not by virtue of their mechanical motions)?

  2. The question that is being addressed by the symbol grounding problem is whether our experience could be a simple succession of bits of information (such as is found in a one dimensional Turing machine).

    The correct way to resolve the question would be to ask whether the content of experience is laid out as a one dimensional bit stream; if not, then it is not like the content of a computer. (The answer is that experience is nothing like a 1D bit stream - see A brief note on the appearance of time in experience, and the rest of this blog.)
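For comparison, the entire state of a computer's store at an instant really can be written out as such a one dimensional bit stream. A minimal sketch (the `to_bitstream` helper is invented for illustration):

```python
# Any machine state -- numbers, text, images -- can be flattened
# into a single one dimensional sequence of bits, the form of a
# Turing machine tape.
def to_bitstream(data: bytes) -> list:
    """Flatten bytes into a 1D list of 0s and 1s, most
    significant bit first."""
    return [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]

tape = to_bitstream("symbol".encode("utf-8"))
print(tape[:16])  # the start of a purely serial bit sequence
```

Whatever the data represents, its form in the machine is just this serial sequence, which is the point of the comparison above.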

    The symbol grounding and Chinese room approaches take another route: they ask whether there is any "meaning" in a computer. This then exposes the anti-materialist to the charge that "meaning" is ill-defined and so may indeed reside in a digital computer, "emerging" somehow from the bitstream, as you put it: "why can't it be that computers can perceive". Touché! as the French might say - the materialist parries vagueness with confusion.

    Given that the form of my experience is nothing like the form of a Turing tape, the symbol grounding problem is a bit of a sideshow. However, I can examine "meaning" in my experience and, for me, "meaning" is also intimately linked to from (see New Empiricism and meaning).

  3. Sorry about the typo: "meaning" is also intimately linked to form, not "from"!

  4. Meaning arises when a symbol is indicative of a representation and this representation is indicative of potential and eventually realized state changes. I propose reading Luc Steels' "The Symbol Grounding Problem has been solved. So what's next?"
    Funny but entirely true: pushing the ON button on your vacuum cleaner produces a visible symbol (the button's electron bridge) that is (trivially) mapped to a representation of a bistable logic concept (the electronic board that contains the relay), and the concept indicates to the machine what to do (action selection is trivial here because there are no alternatives to apply). The vacuum cleaner goes on because it understands you. Philosophy can be such a beautiful sport.
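The button-to-relay mapping described in the comment above can be written out explicitly; a toy sketch (the class and method names are invented for illustration):

```python
# A toy vacuum cleaner whose "understanding" is nothing more
# than a fixed mapping from a symbol (the button state) to a
# representation (the relay) to the only available action.
class VacuumCleaner:
    def __init__(self):
        self.relay_closed = False   # the "bistable logic concept"

    def press_on_button(self):
        # the visible symbol is mapped to the relay state...
        self.relay_closed = True
        # ...and the relay state selects the only available action
        return self.act()

    def act(self):
        return "sucking" if self.relay_closed else "idle"

cleaner = VacuumCleaner()
print(cleaner.press_on_button())  # "sucking" -- the whole of its "understanding"
```

The triviality of the mapping is the point: nothing in the chain from button to motor is more than one state selecting another.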

  5. The only thing that mars philosophy for me is the irrational prejudice across the subject against temporal objects. In the case of the vacuum cleaner we could have the button, relay, motor, etc. unconnected and in separate boxes. In this case there is still a relationship between the parts, but it lies in the circuit diagram of what the vacuum cleaner will become. However, the button in a box cannot understand its function. We could assemble the vacuum cleaner but not plug it into the mains; the button is then no different from a button in a box, it just has other components next to it.

    I could imagine a vacuum cleaner. In this case I contain the whole temporal object, from press to suck, over a second or so of mental time. Temporal objects contain starting points and end points with directed events in between, so the press of the button points to the suction on the dust and the meaning of the vacuum cleaner is clear.