The Case of
Signs and Symbols
This chapter expands on the use of symbols discussed in The Case of Being Digital, discussing the value of arbitrariness in symbols' connection to the world. Arbitrariness is crucially constrained by the requirement that symbolic connections be systematic. To clarify these questions, we look at signs and symbols in the animal world and propose that some natural signs are, from the point of view of evolution, in fact symbols.
In The Case of Being Digital, we were introduced to the use of symbols in models. In a digital numbering system, the use of symbols simplifies operations like addition or multiplication. In this chapter we look at how symbols help us to share models among individuals and even among different species! We finish by introducing an important distinction between symbolic and mechanistic models.
In describing the symbols used in digital arithmetic, we focused on compression. The easy recognition of individual symbols (such as "7") as tokens of something more complex (such as "|||||||") made arithmetic operations on symbolic descriptions much more convenient and efficient. Compression is one important advantage provided by models and is especially important whenever the raw complexity of the world outstrips our capacity to understand it. By using symbols in our models, we simplify our thoughts.
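The compression that symbols provide can be made concrete with a small sketch (hypothetical code, not from the original discussion; the function names are invented for this example): arithmetic on tally strings must handle every stroke, while arithmetic on compact digit symbols does not.

```python
# A toy illustration of symbolic compression: tallies versus digits.
# (Hypothetical sketch; the function names are invented for this example.)

def tally(n):
    """Represent n as a tally string, e.g. 7 -> '|||||||'."""
    return "|" * n

def add_tallies(a, b):
    """Adding tallies means concatenating, and later re-counting, every stroke."""
    return a + b

seven, five = tally(7), tally(5)
print(len(seven))                     # 7 characters for a single number
print(len(add_tallies(seven, five)))  # 12 strokes to scan and count

# With digit symbols, the same sum stays compact at every step.
print(int("7") + int("5"))            # 12
```

The point is only quantitative convenience: the tally and the digit denote the same number, but the digit gives "single step access" to it.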
Symbols are also used in sharing models. When an item in a shop is marked with the symbols "$19.95", it connects our models of the money we have with the storekeeper's model of the money they require for the item. Symbol sharing beyond numbers is deeply involved in our use of a common language. When we ask a shopkeeper for a "croissant", we are saved the burden of demanding "a flaky slightly curved pastry between 5 and 10cm long with lots of butter" (which itself involves other symbols). In addition, money itself is a kind of symbol, representing effort or potential or commitment. As we saw in the case of the bank and the soup kitchen, models around money tend to be complex and multi-faceted. Part of the reason for this complexity is that money is a symbol with many different meanings in many different contexts.
Though symbols have been used by human beings throughout history, our understanding of symbols was transformed around the turn of the twentieth century by the Swiss linguist Ferdinand de Saussure. Saussure was beginning to lay down a scientific foundation for the study of human language and introduced one of the first accounts of reference, which he described as a mapping between "signifiers" and "things in the world." He also introduced a useful way of dividing these signifiers into two important categories: signs and symbols.
The relation of signs to the world is a logical one, which we can figure out from a mix of appearance and context; the relation of symbols to the world, on the other hand, is arbitrary. A sign, like the one shown here, indicates the possible presence of horses with a stylized but recognizable picture of a horse. The sign, of course, still makes assumptions about the reader: for instance, orientation to shape rather than size, and to vision rather than smell.
On the other hand, a symbol like this icon on the right might indicate nearly anything from a trinitarian church to the price of a toll (three quarters or 75 cents?) to a meeting place (three chairs around a table). In fact, the symbol identifies a potentially hazardous source of radiation, but one could not tell this without knowing what it meant ahead of time.
The arbitrariness of the symbol allows a user or designer of a model to choose symbols which suit their purposes rather than the peculiarities of whatever they are describing. Symbolic systems can be designed for the user's convenience of access. If the mapping of symbols were not arbitrary, users would need to figure out what the symbols meant, losing the advantage of the "single step access" offered by symbols. Also, the system using the model would not be free to change the meaning of symbols to suit what it thought was important or what it wished to make easy to express.
Though arbitrariness provides freedom for the designer of a model, it also constrains the designer to avoid symbols which already have interpretations for the user. The brilliant "Who's On First" dialogue of Bud Abbott and Lou Costello illustrates this
Abbott: Well, let's see, we have on the bags, Who's on first, What's on second, I Don't Know is on third...
Costello: That's what I want to find out.
Abbott: I say Who's on first, What's on second, I Don't Know's on third.
Costello: Are you the manager?
Abbott: Yes.
Costello: You gonna be the coach too?
Abbott: Yes.
Costello: And you don't know the fellows' names.
Abbott: Well I should.
Costello: Well then who's on first?
Abbott: Yes.
Costello: I mean the fellow's name.
Abbott: Who.
Costello: The guy on first.
Abbott: Who.
Costello: The first baseman.
Abbott: Who.
Costello: The guy playing...
Abbott: Who is on first!
Costello: I'm asking you who's on first.
Abbott: That's the man's name.
Costello: That's who's name?
Abbott: Yes.
where the arbitrary use of words which already have meaning ("who") leads to confusion and frustration. This doesn't mean that the symbols (like the sound "whooo") aren't arbitrary, but that once we begin giving meaning to symbols, those meanings may constrain what other symbols and combinations may mean. In fact, symbolic models often use the non-arbitrariness of known symbols to their advantage as in the figure on the right, letting us interpret the "sign" even if the language of the text is a mystery to us.
Metaphor in language ("prices fell") relies on systematicity, using the interpretation of a word like "fell", ordinarily applied to falling objects, to describe a sudden reduction in monetary value. This systematicity of metaphor is the flip side of the arbitrariness of the symbol. Arbitrariness lets us choose, but systematicity arises because we have to (and get to) live with our choices.
Because we have defined models as natural phenomena (interfaces which serve purposes), it is certainly the case that animals use the kinds of models which we are discussing. As we saw in The Case of the Frog's Eye, animal visual systems act as a kind of model, hiding and highlighting complexity. Thinking about animal models may teach us about models in general, because they challenge the preconceptions and assumptions we hold when we think only about human models. Such is the case with the role of symbols in models.
Though animals certainly use models in the general sense I've discussed in this book, symbolic models are trickier to ascribe to them. First, because symbols are arbitrary, we must understand the animal to know which manifestations are signs and which are symbols. For example, we might confusedly think that a dog understands an abstract concept like "owned by" because it will distinguish its master's frisbee from another, when in fact it really understands "smells like" with a precision to which we humans are blind.
Second, because symbols are shared, we need to look at the interactions of animals to see evidence of symbolic communication. Despite these problems, numerous experiments with dolphins and higher primates have shown that some animals, at least, can use symbols. For example, in a series of experiments by Louis Herman in Hawaii, dolphins were taught a language of human body gestures (somewhat like semaphores) and could consistently and correctly interpret commands made by combining these gestures. To these aquatic mammals, the physical gestures were almost certainly arbitrary but they nonetheless learned to interpret their combinations systematically.
Some psychologists, linguists, and philosophers have asked whether or not this sort of learning really counts as symbol use. A key question is whether the symbol (the body gesture) is interpreted in ways beyond those the animal was trained to produce. If it never is, we would have to say that the token is a symbol for the trainer but a sign for the animal, since the animal's use of the token is not arbitrary.
By carefully documenting the training process, Herman showed that the dolphins were able to use symbols correctly in contexts for which they had not been explicitly trained, demonstrating that the dolphins were dealing with both the arbitrariness of the symbol and the systematicity of its use. And in another striking result, dolphins were able --- without training --- to handle the substitution of objects for symbols. Dolphins could correctly interpret sentences where a representative object (for instance, a hoop) was held up in place of the arbitrary symbol.
When a bird in a flock spies a predator, it cries out with a distinctive sound, warning the other birds of the danger. In some cases, the sound also specifies the kind of danger. Chickens, for instance, have different cries for airborne predators (hawks) and terrestrial predators (foxes).
Is the bird's cry a sign or a symbol? On the one hand, it is not arbitrary, since the sound is "in the genes" and birds in separate communities or raised by other birds will make the same sound when the "right" sort of predator appears. On the other hand, the logic connecting the sign to what it denotes (the predator) is also not completely clear. Is the bird conveying (in some instinctive sense) the concept "I will soon be screaming in pain"? This seems a little far-fetched and leaves us with the interesting question of why we make noises when we are in pain.
Here's an interesting but speculative explanation. Making noises when we are in pain has several possible advantages, such as warning nearby kin of the danger or summoning help, so that it makes evolutionary sense to make noise when we are in pain. From the point of view of evolution, the cry of pain is a symbol; from the point of view of the organism in pain, the cry of pain is "just" a sign. For other kinds of organisms, where sound is not quite as important, pain can be indicated by (for instance) the secretion of chemicals which comrades can readily detect. From the point of view of evolution, the indicator is a symbol selected for the organism using it. But it is a sign for the creature itself, since the arbitrariness lies at the level of evolution rather than at the level of the organism.
This is not to say that evolution is some sort of mysterious agent making choices just like you or I would. Evolution is a natural process which requires "opportunities" and is limited by "constraints." The arbitrariness of the symbol associated with predator response is simply a range of opportunities open to this process.
Of course, for the organisms themselves, these "symbols" are no more arbitrary than the expected number of our fingers and toes. But the next time we yelp when we stub our toes, it may be interesting (and distracting!) to think that our cry of pain is in fact part of a model which our evolutionary development has selected for the purpose of our species' well-being.
What makes something a symbol is a matter of boundaries. If a system can change, or has chosen, the meaning of a token, the token is a symbol for that system. If the meaning cannot be changed, it is a sign.
We can generalize this story to distinguish between how models are used. When we use a model symbolically, we have choices about the symbols we use and the connections we make, limited only by systematicity. When we use a model mechanically, the choices have already been made for us. In both cases, we are using a model (with its patterns of reference, convenience, and imagination). But in one case, the model may be changed by the user to better fit the purposes for which it is being used.
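The distinction between symbolic and mechanical use of a model can be sketched in code (a hypothetical illustration; the class names and the crude systematicity check are my own, not from the text): a mechanical user has fixed token meanings, while a symbolic user can rebind them within the limits of systematicity.

```python
# Hypothetical sketch: a model as a mapping from tokens to meanings.

class MechanicalUser:
    """Token meanings are fixed at construction; no rebinding is possible."""
    def __init__(self, mapping):
        self._mapping = dict(mapping)

    def interpret(self, token):
        return self._mapping[token]

class SymbolicUser(MechanicalUser):
    """Can change what a token means, constrained by a crude stand-in for
    systematicity: no two tokens may carry the same meaning."""
    def rebind(self, token, meaning):
        if meaning in self._mapping.values():
            raise ValueError("systematicity violated: meaning already taken")
        self._mapping[token] = meaning

bird = MechanicalUser({"cry": "predator nearby"})    # a sign for the bird
shopper = SymbolicUser({"croissant": "flaky curved pastry"})
shopper.rebind("croissant", "any breakfast pastry")  # a choice the user makes
print(bird.interpret("cry"))
print(shopper.interpret("croissant"))
```

Both classes refer, and both serve purposes; the only difference is that one of them retains the power of choice over its own mapping.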
We might think that it is always better to use models symbolically rather than mechanically. Certainly, it would seem, more choices are a good thing. However, this may not be the case, depending on how the choices are used.
In the late 1980s, I did an experiment in computer creativity where a computer program chose and changed the way it accessed its own models. The details are a little complicated but can be found here. The surprising discovery from the program was that changing its model at the wrong time --- for perfectly good reasons --- crippled the program. In its initial configuration, I gave the program some symbols which I knew would eventually be useful to it; however, the program changed its model before these had been used, essentially discarding my preparations. I ended up having to prevent the system from changing its original configuration so that it would not eventually get stuck.
Models are constrained by the nature of the world and the purposes of their users. If we understood the world and our purposes perfectly, it would always be better to have symbolic models. Unfortunately, we do not. As a consequence, we use some models mechanically, letting other processes --- individual maturation, social change, evolution --- change our mechanical models based on factors we do not completely understand. Of course, these processes also make mistakes (no model, and thus no process, is perfect), but their mistakes may be less harmful in the long term.
One infamous story about animal communication concerns a German horse dubbed "Clever Hans" in the early 1900s. For some time, it seemed that Hans (the horse) was able to solve simple written arithmetic problems. Presented with a written problem (like 7+5=?), Hans would stamp his hoof to indicate the answer (12 times). The real explanation of this amazing feat was uncovered by the psychologist Oskar Pfungst.
Hans' owner, it turns out, was unconsciously giving away the answer by nodding his head or giving other signs as Hans stamped his hoof, but stopping when it was no longer correct to stamp. If Hans could not see his owner (or other experimenters), or if the owner was confused about the answer, Hans would not correctly answer the question. The real information being conveyed was not "what is 7 plus 5" but "you're not done yet/you're done".
This story is often used to illustrate how easy it is to mis-attribute symbolic skills to non-human creatures. However, I'd like to use it for a different purpose by looking at Hans' owner rather than Hans. Human communication is filled with non-verbal (head nodding, hand motions) and para-verbal ("uh huh", "ok") elements. Usually, these cues function mechanically for both partners in a conversation. They act as part of a model of mutual comprehension and coordination which is partially cultural and partially just "human".
Hans' owner was certainly communicating with Hans using these non-verbal cues for the "ok/not ok" model. That was what Pfungst discovered. On reflection, this shared model was probably more useful than equine arithmetic, since it could describe things which Hans did not or could not know directly: which road to take at a new turning or exactly how far to pull a plow in a small plot. What is impressive is that Hans learned to interpret the signs of this model used in human non-verbal communication.
Ironically, the model shared by Clever Hans and his owner was used mechanically by the human but may have been used symbolically by the horse! Hans may have learned the "human code" in a way that would make it symbolic: the meanings could be changed as circumstances changed. But asserting this for sure, for Hans or for other animals, would require more experimentation.
Early in this book, we discussed The Case of the Year 2000 where computers designed in and for this century would break down in the next, due to the assumptions in their design. We can understand this problem better when we see the distinction between symbolic and mechanical models. For the human being, the string "69" in the appropriate context is a symbol for the calendar year "1969 A.D." with all which that implies. It is arbitrary and the human being probably has many other symbols describing roughly the same period of time (the year of the Moon Landing, the start of the Nixon Presidency, etc).
For the computer, however, the string "69" (or the byte in computer memory encoding the number) is not arbitrary. The computer does not have other symbols or designations for the same period, nor can it learn new sorts of ascriptions or change the models it has. The string "69", for the computer, is part of a mechanism. It was a symbol for the programmer, who understood the context and could have changed the program. But for the computer it was "just" part of an incredibly complex but fixed mechanism.
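The fixed character of the computer's "69" can be illustrated with a sketch of the classic two-digit-year mechanism (a hypothetical reconstruction in modern code, not a program from the text; the function names are invented):

```python
# Hypothetical sketch of the Year 2000 problem: a program that stores only
# the last two digits of the year, with the century wired into the mechanism.

def year_from_two_digits(yy):
    """The pre-2000 program's fixed rule: 69 can only ever mean 1969."""
    return 1900 + yy

def age_in_year(birth_yy, current_yy):
    """Compute an age from two-digit years, trusting the fixed rule above."""
    return year_from_two_digits(current_yy) - year_from_two_digits(birth_yy)

print(age_in_year(69, 99))  # 30: correct in 1999
print(age_in_year(69, 1))   # -68: in 2001 the fixed mechanism fails
```

A human reader would simply reinterpret "01" as 2001; the program, lacking any way to rebind the meaning of its tokens, cannot.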
To be a symbol, rather than merely a mechanism, the parts of a model must be subject to change and some system containing the model must be able to learn. Computers are not incapable of learning or changing, but they are not generally designed to do so. Computers are not incapable of using genuine symbolic models, but they generally do not.
Working with a stubborn computer is usually more frustrating than interacting with a stubborn human being because the models which are symbolic for us are mechanical for the computer. While it is not always useful to use models symbolically, sometimes it is and today's computers are often tragically incapable of symbolic model use.