Odds & Beginnings
(Chapter VI)
[Draft!]

Answers are the by-product of science, not its essence. Science itself is a process of applying, questioning, and transforming models. Sometimes, this process produces something compact and reliable enough to be called an answer. When this happens, bright minds (often different minds than those that forged the answer in the first place) transplant it out of its context, attempting to preserve enough of the supporting models to keep it alive but leaving behind the process which produced it.

Many other human activities are like this. Religion, though credited with answers, is mostly about questions. Deep religion will admit that humans cannot understand the divine but that it is nonetheless vital that they try to understand it. Deep religion and deep science never conflict because both mingle humility and eagerness. Only bad science and bad religion conflict, and we're frankly better off without either of them.

This chapter is a series of questions and reflections of varying length, each of which could (and might) become a chapter of a future book or even a book by itself. I will share ideas about the answers to these questions, but they are not yet clear enough to merit a chapter or a book of their own. Perhaps it's not fair to send out such incomplete ideas, but thoughtful exposure may help them evolve by a combination of variation, reproduction, and extermination. In any case, I'll count on the thoughts of many readers and look forward to the fruits of the half-ideas which follow.

How are models related to artifacts?

Humans are fascinated by the artifacts of animals: the beaver's lodge, the spider's web, the bird's nest. Part of the reason is that we define ourselves largely by our own artifacts; the artifacts of animals may be our closest connection to them. For a period in recent history, some thought that the creation and use of tools --- excluding the beaver's lodge or the spider's web as tools --- was the hallmark of the human species. This pretension dissolved when Jane Goodall spied chimpanzees making "termite spoons" from wild plants, but we are still drawn to the artifacts of our own and other species.

Our own human artifacts are one of the channels by which we share and exploit models. Some of the models which I have described are organized around particular artifacts: the tic-tac-toe grid, the subway map, the physical calendar. Some relied on different sorts of artifacts in different ways: the checks and bills flowing through the economy, the marks made on paper to describe numbers, the microscope and telescope enabling scientists to see the world in different ways.

Artifacts support and contain models in many different ways. One way is coordination: two people can point towards a map to coordinate their models of a space or terrain; one person can read the numbers another person has written. Artifacts also support models by focusing attention towards the kinds of things to which a model refers. Microscopes and telescopes are both very good for looking at the objects of certain kinds of models and miserable for looking at other things.

Artifacts can also provide a physical structure reflecting the non-physical structure by which a model provides convenience. The abacus, for instance, embodies the positional notation of numbering systems in a physical artifact. Combining an analog representation for each position with distinct "powers of ten," the abacus encodes the structural basis for the digital numbering system. As we do the physical "carries" of magnitude from column to column, we feel and hear how the torrent of clicks in one column becomes a single sound in the next.

The myriad paper forms which constitute the life-blood of the modern bureaucracy often have structures and layouts which mirror the functional organization of the bureaucracy: sections for mailing addresses, billing addresses, medical history, or education all indicate the division of processing and decision making across the bureaucracies which process them. This becomes most frustratingly apparent when the model embodied in the form plainly does not fit our circumstances. But how much of how we think of ourselves is defined by the forms we and our caretakers have filled out throughout our lives?

In learning to use artifacts like the map and the abacus, we also learn the models that they support. We learn to "find ourselves" on maps and turn them to fit the space around us. On the abacus, we hear the accumulation of sums in the clicking of beads. Each of these gives us a sense of the structure of the model and brings us to pay attention to the things which it pays attention to and also to ignore the things which it ignores. Artifacts teach us the models they embody.

Does Reality Exist?

Q: Why is life so complex?
A: Because it is part real and part imaginary.

If we are always using models and models are always limited, how do we know that "reality" exists? Does it exist? Or are there just the models we construct and share? Are you saying that reality doesn't exist?

Whether this question is fair or not, it is often asked. There is a natural fear that any "subjective theory" of understanding (such as the claim that we use models which serve our purposes) is incompatible with "objective reality": interpretation and external reality cannot sit at the same table; if one arrives, the other has to leave.

But this is just false. There are things we know which are undeniable, and there are things which are a matter of convention or interpretation. Though we are always thinking with models, we are influenced by "reality" whenever we have two models which differ in what they are telling us. It is through the relative imperfections of our models that we know about their incompleteness. That is why awareness is so important for creatures which use models: awareness is our connection to reality through the conflicts between our models.

Reality certainly exists, but we do not have direct access to it. Instead, we use models, but no one model is adequate to both reality's diversity and our goals. So we are left dealing with models which are part real (fit to the world) and part imaginary (fit to our purposes).

Is human language a model of the world?

Natural language is not a single model but involves a set of interacting models of various sorts. Phonological, grammatical, and intonational models organize our perception of spoken language. Semantic and metaphorical models organize our understanding of its meaning. Different words invoke different models and by doing so, lead us to different expectations and assumptions.

We all know that certain words are loaded with emotional weight. Sometimes, we forbid words in certain contexts because of this weight. However, words also bring assumptions which may not be emotional but are nonetheless important.

In the middle of the 20th century, a heated debate began on the relationship between language and thought. The linguists Sapir and Whorf made some strong claims about the way in which the language of a culture affects the culture's thought. They pointed out, for instance, that the structures for talking about time in the Navajo language were very different from those in most Western languages. These differences, they noted, made it difficult to express certain sorts of common Western thoughts in Navajo and difficult to express certain common sorts of Navajo thoughts in Western languages.

This property of languages has been recognized for millennia. The Koran, for instance, asserts that any translation of the Koran into another language is not actually the Koran but only a translation of it. In Italian, the near-identity of traduttore ("translator") and traditore ("traitor") yields the proverb "traduttore, traditore." Henry VIII reportedly said "To learn another language is to look into another soul." Since he never learned another language, I suppose he knew what he was avoiding!

Many reacted strongly to a naive misunderstanding of the Sapir-Whorf hypothesis by denying that language limits the way we think. But the Sapir-Whorf hypothesis actually made the much weaker claim that language merely biases how we think: it is easier to say certain things in certain languages than in others, which in turn makes it easier both to conceive those things and to explain them once conceived.

This introduces an important point related to convenience in models. A model may be able to describe many things but may only be able to conveniently describe some of them. The fact that it is awkward to describe something does not limit the system in any ultimate way, but it may limit it in a practical way. One can encode the number 186,323 in analog tallies, but one probably wouldn't. One can avoid exponential notation in describing the number of molecules in a glass of water, but that is both inaccurate (we cannot yet count the exact number) and awkward. To say that language strictly limits thought is clearly false. But the contrary, to assert that thought is unbiased by language, is just as false. The reality is a continuum of tradeoffs controlled by the time we have, the examples we are confronted with, and the way our language changes.
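The tradeoff of convenience can be made concrete by comparing the sheer size of different encodings of the same number. This sketch (illustrative only) counts the characters needed to write 186,323 as analog tallies, as positional decimal digits, and in exponential notation:

```python
n = 186323

tally = "|" * n              # analog: one mark per unit
positional = str(n)          # positional decimal digits
exponential = f"{n:.2e}"     # exponential (scientific) notation

print(len(tally))            # 186323 characters -- possible, but nobody would
print(len(positional))       # 6 characters
print(len(exponential))      # 8 characters: "1.86e+05"
```

Every encoding can express the number; only some can express it conveniently, and the exponential form buys its brevity by dropping precision --- exactly the kind of tradeoff the text describes.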

Because language is actually an ensemble of models, it doesn't necessarily face the problems which a single model generally presents. Of course, the mix of models doesn't give it infinite flexibility either, and some languages may include models which other languages can only approximate or import. Language change and collision provide fascinating stories about the adoption and reconciliation of models.

How do we switch between models?

If we have different models which we use in different contexts, what makes us switch between models? How do we identify and distinguish contexts? When do we know when to give up trying one model and switch to another? Is there a "master" model which manages all of the other models? Or is there some other way in which the conflicts between models get arbitrated?

From a design standpoint, the problem with a master model is that it will be the "weak point": its limitations will apply to every model it manages. From a philosophical standpoint, the problem with the master model is the puzzle of the homunculus. If models are interfaces between systems for purposes, for which system is the master model an interface?

One strange answer takes up a point from The Case of Signs and Symbols. I proposed that a non-arbitrary sign from the point of view of an organism (like a cry of pain) might be an arbitrary symbol from the point of view of evolution. We could imagine a "master controller" for managing models which was defined on an evolutionary scale rather than a personal scale. This would not be a model we use, but a model we are for the system of our evolving species.

This goes a little too far. We can come a little closer by remembering the story of Rufus by the lake. Rufus, I reported, was aware of a conflict between two models, and that awareness was expressed as frustration. And as I mentioned shortly afterwards, emotion plays a substantial role in arbitration between models. The structure of our emotions, I expect, is intimately tied to evolution.

Are there other ways to choose between models? Certainly. One popular approach in artificial intelligence is to associate models with particular contexts, using model A for context B and model C for context D. But how will we distinguish contexts from one another? Those judgements will also require models.
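The context-to-model association can be sketched as a simple dispatch table. Everything here --- the contexts, the models, and the classifier --- is an invented placeholder for illustration; the point is where the regress appears: classify_context is itself just another model.

```python
# Hypothetical models, invented for illustration only.
def traffic_model(observation):
    return "wait for the light"

def kitchen_model(observation):
    return "turn down the heat"

# "Model A for context B, model C for context D":
DISPATCH = {"street": traffic_model, "kitchen": kitchen_model}

def classify_context(observation):
    """Deciding which context we are in is itself a judgement that
    requires a model -- the regress described in the text."""
    return "street" if "car" in observation else "kitchen"

def respond(observation):
    context = classify_context(observation)
    return DISPATCH[context](observation)

print(respond("a car approaches"))    # wait for the light
print(respond("the pot boils over"))  # turn down the heat
```

No matter how the table is elaborated, some model must do the classifying, so the dispatch table never eliminates the arbitration problem; it only relocates it.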

In human thought, much of how we identify contexts is through reminding which is deeply connected with emotion. So we return to emotion as a sort of arbiter between the models we use.

Thinking of emotion as an arbiter may make us uneasy. We tend to think of emotion as unreasonable and we certainly don't want an unreasonable arbiter. But there is a problem here. Any arbiter will have to seem unreasonable to whichever fixed model it arbitrates against. In this way, it may be that emotion is just a name we use for the arbiters between the models. And emotion itself may be a model but at a different scale than the models it arbitrates among.

Future computers will probably have to have something like emotion to deal with multiple models. They might not be like human emotions or even animal emotions, but there will have to be frameworks for mediation and arbitration between models. It is interesting to think about what the architecture for an "aware" computer would be. As I said earlier, computers are champs when it comes to reflection but are far behind when it comes to awareness. Most computer systems are either complex systems with limited representations or simple systems (like most simulated neural networks) with diverse representations. The first sort of system is not really able to be aware, while the second sort is not yet complex enough to be aware in interesting ways.

But awareness is how we will make computers more reliable and flexible and so is among the most important technological questions before us. An aware computer might not be troubled by the Year 2000 problem or the inconsistencies of human language usage. Perhaps in thirty years, the idea of humans using computers without awareness will seem as dated as cars without seat belts do today.

Are models lies?

Models hide and highlight different aspects of what they are describing. Because of this, we may be suspicious of the models which are used to reach particular conclusions, particularly if we disagree with the conclusions (as we understand them). Because they both constrain and empower us, the use and understanding of models is ethically complex. We cannot avoid using models, but the models we use are difficult to separate from our assumptions, our history, and the problems we think are important.

Models, we have seen, serve purposes for the systems (creature, person, organization, computer) which use them. But those purposes are generally implicit, typically invisible, and seldom criticized. Because of this, our choice of models may have ethical consequences which we need to consider. Economic models, for instance, always ignore certain aspects of individual lives in order to construct effective explanations and make useful predictions. However, the choice of which things the models ignore may have ethical consequences. In particular, once we are "optimizing" within a model, these concerns may have disappeared from view, and though we say we are seeking the "best solution," we are really seeking the "best solution given what our models ignore."

But thinking of models as "lies" is not very useful because we're stuck with them. Thinking and acting without models, interacting with the world directly, is not how the natural world works. The structure of our senses and the nature of our brains impose ancient models on us. Furthermore, thinking and acting without recent-model models, such as those from our language or culture or personal experience, is not how human understanding works. Because we are stuck with models, it is more useful to think about and among our models than to struggle to do without them.

However, it can be useful to try and do without any particular model, since that forces us to think with different models. This will certainly be awkward and require patience, but it is capable of paying off --- by yielding a new model --- in unique ways.

How do we learn or teach a model?

Models are basic to human understanding and a part of our success as a species comes from the models we inherit from the generations before us. How does this inheritance work? How do we learn models, and how do we arrange for our models to be learned?

No one believes any longer (if they ever did) that the purpose of schooling is rote memorization. At their best, schools prepare students to think carefully and to know deeply and only secondarily focus on what to think and what to know. Schools are about imparting models for the physical, emotional, and social worlds into which their students will emerge. But models are ways of looking at the world. It is difficult to introduce models without talking about various aspects of the world they are describing.

In the past examples, we have understood models in terms of three components: reference, inference, and convenience. We can use these components to think about how models move from mind to mind and to identify three levels of model learning: formal learning of a model's reference and inference, practical learning of its convenience and use, and situated learning of its limits in actual situations.

Consider my treatment earlier in this book of The Case of the Inertial Frame. In kinematics (which is the part of physics for which the inertial frame is a central model), the formal component consists of mathematical notation for positions, velocities, accelerations, force, weight, and mass. The inferential component of the model is mostly mathematical, but I (intentionally) minimized this inferential component for a variety of reasons. Instead, I focused on the convenience of the model for simplifying the description of situations and tried to explain how the model was used but not how the model worked.
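That convenience can be illustrated with a small computation (the numbers are invented): a ball tossed straight up inside a train moving at constant velocity. Seen from the ground, the trajectory mixes horizontal drift with vertical motion; seen from the train's own inertial frame, the horizontal term vanishes and the description simplifies to a plain vertical toss.

```python
g = 9.8          # m/s^2, gravitational acceleration
v_train = 30.0   # m/s, train's constant velocity (invented)
v_up = 4.9       # m/s, initial upward speed of the toss (invented)

def ground_frame(t):
    """Position of the ball seen from the ground: a drifting parabola."""
    x = v_train * t
    y = v_up * t - 0.5 * g * t * t
    return (x, y)

def train_frame(t):
    """The same ball seen from the train: a simple vertical toss."""
    return (0.0, v_up * t - 0.5 * g * t * t)

# At t = 1.0 s the ball is back at hand height in both frames;
# only the ground frame has to keep track of the 30 m of drift.
print(ground_frame(1.0))
print(train_frame(1.0))
```

Both frames describe the same event; choosing the inertial frame attached to the train makes the description --- and the problem-solving --- easier, which is the convenience the model offers.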

Given my description, you could not solve many kinematics problems, but you should have been able to understand how the model of the inertial frame would make solving such problems easier. I tried to give you the practical learning without the formal learning and it worked as far as it goes.

Formal learning without practical learning is pretty useless for anything but impressing people at cocktail parties. Reading a calculus text but not doing the problems means that you can name and maybe even define the concepts and categories and methods, but that you don't really understand why they're useful or how they are applied.

Situated learning is another story. Part of how models function is by limiting the ways in which they refer to the world. By not having to describe everything, they are able to say more about the things they do describe. To teach the limits of models is to teach about the ways in which they refer and the ways in which they do not refer. Situated learning is not about learning to apply models to problems; it is about picking problems and models together out of actual situations.

Creating contexts for situated learning is hard work. It means thinking hard about the models we use, why we use them, and why we don't (or sometimes do) use others. It means admitting the limits to methods and models and introducing the unknown. It means encouraging people to ask questions we cannot answer.

There is an opportunity now, which some are seizing, to leverage changes in the structure of education brought about by revolutions in computers and telecommunications. There is the chance to change the way in which most of our species learns and to so transform the depth and capability of the way they think.

One is left breathless by the pace of technological and social change over the past three centuries. Part of the engine of this change has been the combination of population and education. To take one example among many, thousands of times more people understand calculus now than did three hundred years ago. Because of this, we are seeing many more fruits of this knowledge than we would have before. There are many similar models in different parts of life, which make the world better by being widespread.

Education --- in the best of senses --- has the potential for turning the population explosion we fear into a population opportunity which will yield more than we can imagine. There is nothing more important.

Copyright (C) 1997, 1998 by Kenneth Haase
Sketch, not for citation or circulation