We learn a lot about our models by asking some particular sorts of questions. Reference: what is the model describing? Convenience: how is the model used? Imagination: what does the model add? We can look for answers to these questions by looking for a certain kind of systematicity in our models. This chapter introduces these ways of opening up models to see how they work, while the subsequent cases demonstrate them.
How do models work? The first part of this book introduced a way of looking at models as interfaces between the way we think and the world we are thinking about. We saw how models select, transform, and rearrange to serve particular purposes. And we saw that when situations or purposes change, fixed models suffer and need to change along with them.
This part of the book takes a closer look at models and how they work. Understanding a model involves asking questions, and questions about models can be organized into three broad categories: reference (what is the model describing?), convenience (how is the model used?), and imagination (what does the model add?).
We will see that many of the answers to these questions share a common thread. Reference, convenience, and imagination all rely on systematicity in their connections, transformations, and additions. "Systematicity" is nothing more than the requirement that models be reliable as interfaces. Once a model translates the upper left corner of the tic-tac-toe grid to the number "9", it cannot later translate it to "5". Once a mathematical model of moving objects ignores the objects' emotions or goals, it cannot suddenly take them into account. When a model appears to be non-systematic, it is because our description of the model is incomplete or we are trying to understand a set of models as a single entity.
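The tic-tac-toe translation can be made concrete. In the game of Fifteen, players take numbers from 1 through 9, trying to collect three that sum to 15; a 3x3 magic square makes the correspondence with tic-tac-toe systematic. A minimal sketch, assuming one standard magic-square layout (the names `MAGIC` and `lines` are illustrative, not from the text):

```python
# A 3x3 magic square: every row, column, and diagonal sums to 15.
# Systematicity means each grid cell maps to exactly one number, always.
MAGIC = [[2, 7, 6],
         [9, 5, 1],
         [4, 3, 8]]

def lines():
    """All eight winning lines of the tic-tac-toe grid."""
    rows = [[(r, c) for c in range(3)] for r in range(3)]
    cols = [[(r, c) for r in range(3)] for c in range(3)]
    diags = [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]]
    return rows + cols + diags

# Every tic-tac-toe line corresponds to three numbers summing to 15,
# so "three in a row" translates systematically to "three summing to 15".
assert all(sum(MAGIC[r][c] for r, c in line) == 15 for line in lines())
```

Because the mapping is fixed, a fact established in one game (the center cell matters most) translates reliably into the other (the number 5 matters most).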
Systematicity is required of individual models but is definitely not required among models. The physical and emotional models we have of lovers running toward each other are not systematic with respect to each other, though each has a systematicity of its own, in which certain things are held consistent and others are not.
A system need not be systematic to make use of a model. Human beings use a wide range of models and may or may not be systematic in choosing between them. Likewise, a system need not be systematic to be modelled. Social systems, for example, may not be systematic in any identifiable way, but the models we have of them are.
Systematicity is useful for picking out actual models from possible models among the web of possible interconnections in the world. We know that the two digit numbers in the computer are a model of years because that connection is systematic. We know that children's recall of event ordering is part of a model of causality because causally significant orderings are systematically preserved while causally insignificant orderings are lost.
This chapter expands on questions of reference, convenience, and
imagination and the criteria of systematicity. Following it, a series
of cases illustrates these further by considering particular models or
sets of models. The reason for this level of detail is that
good models are so natural that we tend to underestimate the
complexity of their structure and assumptions. By unpacking
apparently simple models, I hope to expose the complex choices and
considerations underlying them.
Reference is how models connect to the system they are describing. It is about "naming" things so that we can think about them, learn about them, and share our knowledge about them. In thinking about reference, it is important to keep in mind that our models are usually models of other models rather than models of the world "itself."
Because reference is the foundation of the models we use and of our communication with one another, it has to seem straightforward. If it did not, the most basic operations of understanding the world, of judging similarity and difference, of telling what had changed and what had stayed the same, would all be impossibly complicated.
However, reference itself is anything but simple. There are many different ways to pick objects and kinds of objects out of the environment, as in this science-fiction parable:
Attempting to teach the aliens Zt'har a human language, the semiotician began with the names of objects. Pointing at the diamond sample between them, the semiotician carefully enunciated "dye-ah-mawnd." One of the aliens, turning to the other, then said "Qzera?" which humans eventually discovered meant "Which atom do you think he means?"
Reference assumes a common set of conventions to start with. Pointing, labelling, and grouping are complex actions which seem simple when we agree on scale, purpose, and similarity. But they are almost entirely opaque when we don't agree on what scale of things we are pointing to, what kinds of things we are labelling, or what criteria we are using for grouping. Another parable of alien communication shows that scale or level of description (atom or stone) is only the beginning of the problem:
In the space-station lounge, sitting across from the two Kloathar representatives, the linguist started on vocabulary. Pointing at the table, he said "tay-bel"; pointing at the iris in the vase between them, he said "flaowerr"; pointing at his nose, he uttered "noase." After this series of demonstrations, the younger Kloathar repeated each action while saying the English name. The human linguist beamed as the younger Kloathar commented (in a short untranscribable utterance only interpreted a decade later) "It's fascinating. We've never found a species with so many words for `point.'"
The very act of naming can be full of assumptions and unless those assumptions are shared, we may end up with very different names or schemes of naming.
These science-fiction parables are fanciful demonstrations of the assumptions implicit in something as apparently simple as naming, which is a very simple kind of reference. Between humans, in fact, there are seldom such confusions because the ways in which our perceptual systems organize the world are so similar. But it is more complicated to think about reference in either historically recent or technologically based models. We saw, in the discussion of relativity as a model (The Case of the Special Theory), that sometimes the references of a theory can change radically. When Einstein proposed his special theory, the references of physical theory shifted from absolute space and time to relative measurements of space and time. Before special relativity, physical laws described points and intervals as fixed and objective in some absolute space; after special relativity, they had to describe only measurements of distances and intervals and include assumptions about the relative velocities of observer and observed.
Reference is fundamental to the functions of models. Reference serves the purpose of safety by keeping us separate yet connected to the world's challenges. Reference serves the purposes of consistency and simplicity by limiting the "part of the world" we are trying to understand. And reference serves the purposes of cooperation by allowing different models or different individuals to talk about the same things.
In this part of the book, particularly in The
Case of The Alien World and The Case of the
Lively Desert, we will look at some surprising cases of confused
reference, learning that reference often hides assumptions and that
changes in reference often free us from misunderstandings.
Convenience is how a model relates to the system which is using it. Obviously, convenience depends on the nature and structure of that system. In the game of Fifteen from the introduction, the utility of tic-tac-toe for describing the game requires a "user" who has an easy time figuring out whether or not squares "line up." Change the user to a computer and the very features that made the model convenient make it awkward.
Compression is one very general form of convenience, where a model reduces the physical size or complexity of whatever it is describing. Compression is a form of convenience because it is often easier for the system to manipulate smaller pieces than larger pieces. To reconsider the striking example from the beginning of the book, recall how the following addition
  1,538,945
+ 3,448,117
-----------
  4,987,062
was easily accomplished in under half a minute with a digital representation, while the same addition using an analog representation would have taken almost four sleepless months to complete. A difference of this scale is hard to imagine but is characteristic of the kind of compression in complexity provided by many models. This is a logarithmic reduction: operations on the digital model take effort which is the logarithm (meaning very much smaller) of the effort which would be expended on a direct analog model.
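The scale of that difference can be sketched by counting elementary steps. The step counts below are illustrative assumptions (one step per digit column for the digital model, one step per unit for a direct analog model such as merging heaps of pebbles), not measured timings:

```python
# Rough step counts for 1,538,945 + 3,448,117 under two representations.
# These are illustrative cost models, not exact timings.

def digital_steps(a, b):
    # Column-by-column addition: one step per digit column
    # (carries would add at most a constant factor).
    return max(len(str(a)), len(str(b)))

def analog_steps(a, b):
    # A direct analog model handles each unit individually:
    # one step per unit in the combined quantity.
    return a + b

a, b = 1_538_945, 3_448_117
print(digital_steps(a, b))   # 7 -- one step per digit column
print(analog_steps(a, b))    # 4987062 -- one step per unit
```

Seven steps against nearly five million: the digit count grows only as the logarithm of the quantity, which is exactly the "logarithmic reduction" described below.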
People are familiar, at least vaguely, with the idea of exponential growth: when bacteria or rabbits keep multiplying, the numbers soon get out of control. Updating a classic example to the present day, if you had a penny and doubled it every day, it would take you less than a month and a half to have the wealth of Bill Gates (and your wealth would be in cash!). Of course, we seldom see such explosive growth for long because other factors (the amount of copper in the world, for one) end up limiting the process, but nature is filled with examples of short-term exponential growth, for both good and ill.
The idea of logarithmic reduction is less well known, but equally radical because logarithmic reduction is the inverse of exponential growth. Logarithmic reduction shrinks things as fast as exponential growth expands them. Thus, it isn't surprising that models offering a logarithmic advantage can be exceedingly powerful. Suppose that instead of doubling your penny every day, the cost of things was cut in half every day; you would soon be able to live as though you had Bill Gates' wealth off the interest on your single penny!
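The penny arithmetic is easy to verify directly. A small sketch, assuming a round figure of $100 billion for the fortune in question:

```python
# Doubling a penny: after d days you hold 0.01 * 2**d dollars.
target = 100_000_000_000  # assumed round figure for the fortune, in dollars

wealth, days = 0.01, 0
while wealth < target:
    wealth *= 2
    days += 1

print(days)  # 44 -- under a month and a half
```

Run in reverse, the same arithmetic shows the power of halving: cutting costs in half daily would shrink a hundred-billion-dollar expense to under a penny in the same forty-odd days.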
Convention is another very general way of providing convenience. A model uses conventions when it provides descriptions in a form which the system using the model can already deal with. The convenience of convention is part of the reason why metaphor --- the systematic linking of different structures --- plays such an important role in human language and thought.
In the late 1970s, George Lakoff (a linguist) and Mark Johnson (a philosopher) began defending the radical assertion that nearly all language use is metaphorical. Previously, linguists and philosophers had claimed that metaphor was a "marginal" phenomenon of primarily historical interest for its role in language change and evolution. They thought that the only connection between "prices rose" and "balloons rose" was historical: according to this established theory, there had once been an act of imagination connecting them, but the use of similar words did not denote any connection between their meanings.
What Lakoff and Johnson demonstrated was that everyday language remained systematic in its use of metaphor. Metaphorical mappings between systems of words ("prices rise" and "balloons rise") retained the relations between the words (if "balloons crash," then "prices crash"). Interestingly, this became visible in the words or turns of phrase which were not used: one does not generally say "prices rose until they hit the floor," and linguists and non-linguists alike will know that something sounds a little funny about such sentences.
In the same way that the two-digit model of calendar years fit the architecture of early computers, the metaphors articulated in the logic of language fit the architecture of human language users. Everyone who learns language on this planet has had experiences of gravity and motion, so they do not need to expend great effort to "figure out" what is meant by "prices fell to the floor" or "negotiations turned around." Our experience and inner lives, which we substantially share by being of similar construction in relatively similar worlds, connects naturally to such models, linking the logic of our experience to the logic of the models in our language.
The form of our experience and the form of our world combine with
systematicity to shape the form of our language and thoughts. This
does not mean that we cannot imagine things we haven't seen nor that
we cannot form thoughts of the unfamiliar. What it does mean is that
it is easier and more convenient to think in familiar terms
than unfamiliar ones, and to recognize conventional patterns than unconventional ones.
Imagination is the transformation and augmentation of the results of reference. Models add to what they describe in all sorts of ways. Though this addition is sometimes the source of error, it also serves the purposes of safety and consistency. We know that the glowing stove is hot without touching it, and we can respond to a tiger's glowing eyes just as we would respond to seeing its entire body. But the ways in which imagination augments our descriptions take many forms.
Completion is one sort of imagination, taking partial descriptions and making them whole. We saw this in the picture and sketch discussed in Chapter 2:
where the sketch includes parts of the chair which are not actually visible but which we can guess at because of our past experience with chairs. For instance, the seat is not visible in the picture, but we know that chairs have seats and that those seats are often made of the same material as their backs. This allows us to draw a picture which is more "complete" than what we could actually see.
Without the natural provision of completeness by our models, the world would be far too complicated to deal with or learn about. Each obscuration or orientation of view would be a different kind of thing to be remembered, learned about, or distinguished.
Imagination is not limited to things we might actually see, like hidden chair seats. Looking at these figures
we might naturally divide the left-hand figure into two pieces which could be separated by the dashed line in the right-hand version. That line of separation is not something we would actually see in the left-hand image, but is something that our models of our world --- where lines and boundaries are important --- might add to the description. If we rotate the left-hand figure, a different imaginary line seems quite natural:
In his philosophical writings, the poet Samuel Taylor Coleridge introduced a striking distinction between fancy and imagination. Fancy involved the mere juxtaposition of familiar components without substantial regard for form, logic, or consistency with the world of experience. Imagination honors that logic while putting things together in unexpected ways. The sense of imagination involved in our models is very much Coleridge's sort of imagination, mixing freedom and constraint.
In some of the cases to follow, we will see how extraordinarily
useful these imaginary constructs may be. In contexts ranging from
open-ocean navigation to the understanding of complex physical
phenomena and devices, imaginary entities have very real and important roles to play.
Systematicity is part of the glue which keeps models together. In much the same way that the chemical and statistical laws of molecular interaction drive the mechanisms of cells and their membranes, systematicity is the driver or engine for models. Systematicity is a requirement for local consistency in the translations, reductions, and expansions which underlie the models we think with.
For example, systematicity is the constraint on metaphor which makes phrases like "prices rose to the floor" sound odd. In the fragment "prices rose," we treat prices as objects ascending vertically in some imagined space. But in the fragment "to the floor," we treat prices as though they were descending in the same sort of vertical space. Systematicity requires that we treat different fragments identically when they occur in or rely on the same context.
By changing the context, certainly, we could make a sentence where the juxtaposition would make sense: "at the start of trading, prices immediately rose to the floor predicted by analysts" where analysts were predicting a range for the day's trading. But systematicity is what requires different contexts (the day as a whole and the day of trading) to separate out the two systematic mappings.
Translations between human languages are not generally systematic. For example, the English sentence "The book is by May Sarton" can be translated to French as "Le livre est de May Sarton" with (roughly) the following component translations:
The → Le
book → livre
is → est
by → de
May Sarton → May Sarton
However, the English sentence "The map is by the Metro" translates to "La carte est près du Métro" with (roughly) the following components:

The → La
map → carte
is → est
by → près
the Metro → du Métro
where "by" translates to "pres" rather than "de." (With apologies to the lost Belgian truck driver who helped me unknowingly generate this example. His request for directions at 2AM yielded my accidentally mystical recommendation that "There is a map written by the subway").
The reason that translations between human languages are not systematic is that human language is not a single model in the way that (for instance) most computer languages are. Human languages are patchworks of many models which humans can naturally learn because many (if not all) of the models come from human experience.
It is mostly the non-systematicity of translation between natural languages that makes computer translation so difficult. In the early days of computers, before the depth of this non-systematicity was fully understood, engineers thought that computers would soon be translating languages routinely and effectively. The profound dependence of natural language on context was one of the lessons learned from those early attempts at machine translation.
However, systematicity is not limited to the use of words. In the physical world, systematicity has to do with the compatibility of the shapes and contours of objects. In the electrical world, systematicity has to do with the matching of properties like voltage, current, and phase. Systematicity is important to interfaces because the systems connected by an interface are genuinely independent, and we cannot rely on systematicity between them. The more different we are, the more important it is that our language of communication be clear in the areas that matter to us mutually.
The fact that systematicity is so important in models does not mean that there is any sort of global consistency across models. Systematicity is a property of the interfaces among systems but need not be a property of the systems themselves. In particular, systems do not need to be systematic with respect to each other. In the examples to follow, we will look at many different models and see how systematicity --- in reference, convenience, and imagination --- enables a model to work as an interface between the quite different systems of natural or created world and our human or computer understanding.