Consciousness
The Awareness of Models
(Chapter IV)

Where we look at how multiple models act and interact in our minds, illuminating questions of awareness, consciousness, and emotion in the context of a multiplicity of models.

Most of us think we know what we're doing
Some of us really know what we're doing
But none of us know what "what we're doing" does

Michel Foucault

The previous cases have been exercises in consciousness. Each of the examples unpacked the hidden assumptions and structures of a model. Once they were unpacked, it was possible to understand how the models worked and how they served the purposes of the humans and machines that employed them.

Each example of a model was also an example of how to understand a model. Given this way of understanding models, the remaining chapters attempt to address some old questions and open some new ones. Of course, the perspective I've been using is itself a model, which means that I shouldn't expect it to be useful for everything. But it will be useful for unravelling some knotty questions.

Consciousness is a part of human nature which everyone has supposedly experienced but no one has completely understood. Among other labels, it has been called a mystery, a myth, an illusion, a cultural invention, and a quantum mechanical phenomenon. Because it is still such a mystery, it is the realm in which some hope we might still find the essential difference either between ourselves and the animals or between ourselves and our computers. Because we don't understand it, we tend to identify it with the other things which we don't understand.

This chapter connects consciousness with the use of models in thinking. It proposes that consciousness involves a certain awareness of models which includes a description of some of the assumptions and tradeoffs our models normally make. All models, we've seen, select from and transform what they are describing. Consciousness is the awareness of this selection and transformation. This does not mean that consciousness has some sort of privileged access to the "real world." In order to see the assumptions and structure of our models, we only need to use a model which doesn't make those assumptions. This is what consciousness requires.

In order to discuss consciousness in this way, I'll start by talking about three other phenomena which are connected with but sometimes confused with consciousness. Responsiveness is the way organisms respond to their environment; awareness is a sensitivity to differences between different models of the same thing; reflection is the description of ourselves and our actions. Consciousness is our aware reflection on our own models.

Splitting Some Differences

One of the properties of a good model is systematicity; when we translate from model to world and back again, we would like the consistency of the things we say and the distinctness of the things we talk about to be preserved. One of the plagues of systematicity is ambiguity. If we use one word to describe different things, systematicity is likely to break down unless there really isn't any difference between the different things to which we are referring.

To avoid this problem in discussing consciousness, I am going to split the word consciousness into a number of different words. Each denotes important differences in meaning even though common usage may often use the same word or words to refer to the different meanings interchangeably. And the meanings do have a lot in common, which is why it may be useful for everyday language to be ambiguous about them.

Each of the words I am using has its own external meanings, but rather than invent new words, I am reusing existing ones. Throughout this book, and especially in this chapter, I am trying to be consistent in my use of these particular four terms, which are almost organized into a hierarchy:

Responsiveness is the function of responding to external inputs; it is what Aristotle ascribed to animals and we ascribe to living creatures in general. It includes phototropic plants, for instance, but more familiar instances are in animals.
Awareness is responsiveness together with "meta-responsiveness" involving a reaction to the character of our reactions. Dogs and many other animals are aware in this sense. A dog can be aware, for instance, of what it is trying to do even though it is not actually doing it. Awareness requires multiple models.
Reflection is the formal awareness of our actions and the redescription of them in other models. Humans, at least, are capable of reflection through language if nothing else. Reflection is about our actions but not about our models. In particular, it does not describe what our models are ignoring.
Consciousness is aware reflection on our models. Consciousness is the description of our models and their filtering and transformational function.

Responsiveness

Responsiveness is the ability and tendency of a system to respond to changes in its environment by some action or motion distinct from the change itself. A kicked ball and a summoned dog both respond to changes in their environment, but the dog's response is quite different from the event which triggered it.

As we read in Chapter 6, Aristotle's sponge was responsive in that it reacted of its own volition to an external stimulus by hunkering down when it was grasped. As we also learned, Aristotle's begonias were also responsive, but on a different (and normally imperceptible) time scale. Responsiveness is common to all living creatures and is part of the reason that living creatures are so darned successful. In colloquial terms, living creatures can roll with the punches.

Responsiveness requires some sort of model because responsiveness is selective and purposive. Of course, it may be a very simple model, but it does have the aboutness (it is connected to some stimulus) and distinctness (it involves some response) connected to a purpose (the organism's survival or reproduction) which is enough for a model. Given that this is the case, we can ask our familiar questions even of this simplified model.
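To make this minimal sense of "model" concrete, here is a toy sketch in Python of a bare stimulus-response link like the grasped sponge's; the function and stimulus names are invented for illustration, not drawn from any real system.

```python
# A toy sketch of bare responsiveness: a fixed stimulus-response link
# serving a single purpose. The "model" here is nothing more than the
# mapping itself; all names are hypothetical.

def sponge(stimulus):
    # Aboutness: the response is about one stimulus (being grasped).
    # Distinctness: the response differs from the stimulus itself.
    # Purpose: hunkering down serves the creature's survival.
    if stimulus == "grasped":
        return "hunker down"
    return "stay put"

print(sponge("grasped"))  # hunker down
```

Even this trivial mapping has the aboutness, distinctness, and purpose described above; what it lacks, as the next sections argue, is any second model against which to notice its own limits.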

It does not take much to be responsive and modern technology has surrounded us with many responsive artifacts. When asked, children readily identify robots as "alive" based on their autonomous motion and responsiveness. Indeed, they tend to find robots more alive than plants.

But simple responsiveness is also limited for many reasons, not the least of which is its connection to a single model. Awareness, the next definition in my "splitting of differences," relies on the presence of multiple models of the same thing.

Awareness

Simple responsive systems tend to get stuck. The problem is that though they do respond to their environment according to purpose, they tend not to handle the many kinds of purposes required in a diverse world. Beyond responsiveness lies awareness, which I will introduce with a story of canine frustration.

The stick is floating a meter out into the pond and Rufus (the dog) is at the pond's edge, frustrated. He has swum in this pond before, when a slowly sloping beach led him "accidentally" over his head, but the edge here is a sudden drop rather than a gradual transition to swimming depth. His objective, a weathered stick, is just a meter away, but he cannot risk the watery precipice. He can reach and even stretch a paw towards it, but his fear of the sudden drop will not let him leap after it and return to shore.

He whines and makes noises I have never heard before, evidence of his frustration. He snarls at a passing dog, venting his frustration where it does not belong. And he picks at sticks along the shore, but they fail to satisfy. And he returns, torn between the nearness of his goal and the caution of his circumstance.

Rufus is frustrated. And Rufus is aware. Awareness has to do with the perception of our own patterns of responsiveness. Rufus's frustration is evidence that he knows what he wants and he knows that he is not getting it. The whines, the snarls, the unsatisfying search for "transitional" sticks, are all independent of getting the stick but evidence that the collision of two ways of responding (to the stick and to the drop) has not gone unnoticed.

Though all life is responsive, awareness is a much more limited attribute. Many creatures (and most computers) will continue endlessly in the same cycles of stimulus and response until terminally exhausted. Those with the gift of frustration can notice these cycles and --- sometimes --- escape them. It is strange to think of frustration as a gift, but it is what enables us to break out of the cycles in which we find ourselves enmired.

Awareness always requires multiple models. To be aware is to respond to the difference between our impressions and our world. Recognizing this sort of difference requires that we describe something in at least two different ways. Only by such juxtapositions can we tell that we are "not getting something."

Though I cannot read Rufus's mind, I would guess that one of the models was of his quarry "lying" a few paw-steps away on the dark surface of the pond. Another model described the scary drop below him. Each of these had its own patterns of reference, its own structure, its own associations in Rufus's mental processes. But because they are different, Rufus is sensitive to their conflict, which pulls him forward on the one hand and pushes him back on the other.

With only one of the two models, Rufus would have either gotten the stick (and been forced to swim) or never pursued the stick in the first place. And if the two models were really one --- with consistent logics where risk and reward were somehow consistently balanced --- he would never have had the conflict at all. But Rufus could be aware of the problem because he had two different models of the same thing.

Rufus's frustrated behavior indicated that there was some kind of reaction going on to his inner conflict. Whether his frustration was itself a model is not clear, but it is clear that Rufus was not "stuck" in either of his competing models. This is the advantage of awareness: it helps us not get stuck.

Rufus's advanced diving lessons had to wait. After five minutes of watching Rufus struggle and reflecting on the interpretation of his frustration, I tossed a second stick up the path to a point safely on dry land. Running after it, he left his frustration behind and we proceeded around the pond.

Computers Caught Unaware

Rufus's multiple models keep him from getting stuck. Even without my prompting, Rufus would have eventually run off to other activities. And multiple models are required for awareness because we can only realize that we are stuck in one model (e.g., "me and my stick") when we have another model (e.g., "me fall in") to describe our situation.

Computers, on the other hand, are generally unaware, since they typically have only one model of any one thing. Even when computers do keep multiple models, the models are either not used by the computer at all (they're for people to look at), are of different things altogether, or are just other ways of accessing the same descriptions.

For example, take the way that computers represent dates. If we've solved the Year 2000 problem in one of the standard ways, computer dates are represented by some number of seconds past some arbitrary date. In certain kinds of databases, this might be accompanied by the string which was originally specified (e.g., "4/3/98" or "3 Avril 1998") for human readability. The computer can also convert this number of seconds to months, days, and years, and can even determine the day of the week ("Friday") or the phase of the moon. Though this information is all useful, it does not give computers the kind of "awareness" of dates that human beings have.
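A minimal sketch in Python of this single-model arrangement, assuming the common Unix convention of counting seconds from January 1, 1970; the particular timestamp value is illustrative:

```python
import time

# One model of a date: seconds past an arbitrary epoch (here, the
# Unix convention of January 1, 1970, UTC).
stamp = 891561600  # illustrative value: April 3, 1998, 00:00 UTC

# The same number "unpacked" into the calendar terms humans read.
fields = time.gmtime(stamp)
print(fields.tm_year, fields.tm_mon, fields.tm_mday)  # 1998 4 3
print(time.strftime("%A", fields))                    # Friday
```

The calendar fields and the day of the week look like different models, but they are all mechanical re-expressions of the one number; nothing here connects the date to any other slice of the world.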

Though adults think of dates in terms of the calendar, they also think of them in terms of the context of their lives. If I'm thinking of the date of some ancient email message I am seeking, I am more likely to think "I got it during some summer before I was married" than I am likely to think in calendrical terms. Having two or more models permits me (or a human filer or librarian) to connect "slices of the world" with different kinds of purposes. And computers don't generally do this.

Awareness has other advantages. Suppose some set of computer dates were damaged so that numerical order was confused. If the computer had other models of time (for instance, the order in which messages were read), this might permit it to identify the error in the numerical dates. But without the second model of the same thing, there is no way to identify the error. Multiple models allow for the correction of error, but before that, they allow for its detection.
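Here is a sketch, again in Python with invented names and data, of how a second model of time could flag damaged dates; the order in which messages were read plays the role of the independent model:

```python
# Each message carries two models of the same thing: a (possibly
# damaged) numeric date and its position in the order it was read.
# All names and values here are hypothetical.
messages = [
    {"id": "a", "date": 891561600, "read_order": 1},
    {"id": "b", "date": 891648000, "read_order": 2},
    {"id": "c", "date": 100,       "read_order": 3},  # damaged date
    {"id": "d", "date": 891820800, "read_order": 4},
]

def suspects(msgs):
    """Flag messages whose two models of time disagree: read order
    advances but the stored date runs backwards."""
    ordered = sorted(msgs, key=lambda m: m["read_order"])
    return [m["id"] for prev, m in zip(ordered, ordered[1:])
            if m["date"] < prev["date"]]

print(suspects(messages))  # ['c'] -- the disagreement reveals the error
```

With only the numeric dates, message "c" is just another date; it is the juxtaposition of the two orderings that lets the damage show up as a conflict.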

The reason that computers don't do this is the same as the reason we are today struggling with the Year 2000 problem. Designing a good model is a lot of work and getting models to interact is hard. But awareness, and the corresponding ability to recognize that we're stuck, requires having multiple models. Dogs (among many other kinds of creatures) have this advantage, but they are not able to make much use of it. Rufus, stuck at the water's edge, could not act on his frustration. There remains a missing ingredient.

Emotions and Awareness

It is significant that Rufus' conflict of models was expressed as behavior which I read as the emotion of frustration. Emotion often plays the role of an arbiter between different models. The shift between models is often driven by emotion rather than reason because reason always requires models in order to make judgments and so is useless when a choice between models is at issue.

Looking at the animal world, we see that distinct behaviors or patterns of responsiveness are often "activated" or "triggered" by basic drives such as hunger, sex, or fear. These behaviors are the simplest of models and these drives are the simplest of emotions. In their interaction, we may see the same patterns that play themselves out in our own bodies and minds with more complex models and richer emotions.
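As a cartoon of this interaction, here is a toy sketch in Python where simple drives arbitrate between simple behavior-models; the behaviors, drives, and numbers are all invented for illustration:

```python
# Two behavior-models, each "activated" by a basic drive. Judged from
# inside, each model recommends itself; the arbiter only compares
# drive levels, without reasoning inside any single model.
behaviors = {
    "fetch_stick": lambda drives: drives["desire"],
    "stay_ashore": lambda drives: drives["fear"],
}

def arbitrate(drives):
    # Pick the behavior whose activating drive is currently strongest.
    return max(behaviors, key=lambda name: behaviors[name](drives))

print(arbitrate({"desire": 0.7, "fear": 0.9}))  # stay_ashore
```

From inside the fetching model this outcome looks "unreasonable," since the arbiter never weighed the stick's merits at all; the next paragraph takes up exactly this point.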

The problem with using reason alone to decide between models is that each model --- on its own --- selects and amplifies exactly those features which would make it the "logical" choice. Though we think of emotional choices as being "unreasonable," almost any arbiter would seem "unreasonable" from the standpoint of the models which are not selected. Only by changing the models it manages can an arbiter avoid the judgment of unreasonableness. And often, such changes are not possible.

Choices between models cannot avoid seeming unreasonable, but they can try to be methodical. The starting point of such methods is reflection and when that reflection includes the powers and limits of our models, it becomes consciousness.

Reflection

Reflection, as I am defining it here, involves the articulation of our own behavior. It involves statements like "I keep getting stuck here" or "I am tired today" or "I don't know which glass has more, but you should ask my brother because he has conservation already." Reflection requires its own model to describe our activities and responses. Awareness requires having two models of the same thing. Reflection requires having one model of the activities of another model.
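The distinction can be put in a few lines of Python: a sketch, with hypothetical names, of one model whose whole subject matter is the activities of another:

```python
# A base model doing its work...
def base_model(x):
    return x * 2

# ...and a second model describing the first model's activity. Note
# its limit: it records what base_model did, not what base_model's
# assumptions left out.
trace = []

def reflective(x):
    y = base_model(x)
    trace.append(f"base_model({x}) -> {y}")
    return y

reflective(3)
reflective(7)
print(trace)  # ['base_model(3) -> 6', 'base_model(7) -> 14']
```

The trace is a genuine second model, but a model of actions rather than of the model itself; nothing in it describes what base_model is ignoring.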

While Rufus is certainly aware, reflection is probably beyond him. (Sorry, Rufus). While Rufus can get frustrated, he does not have any model of his frustration nor is his frustration a model of the conflict between his models. There is no "other system" (other than me, which is an interesting but tangential point) which is being affected by his external frustration for its own purposes, so it is not a model.

It may be impossible to separate reflection from some sort of language, since it involves the articulation of one model in another model. But reflection is limited because while it describes our situation and responses, it doesn't describe the structure and assumptions of its own descriptions and what they might be missing.

If computers are in need of remedial work on awareness, they are champs when it comes to reflection. The "stored program" computer introduced by von Neumann is essentially an architecture for reflection, where the description of action and the medium of action are both the memory of the computer. Computer programs which use models of computer behavior (either actual or projected) are routine. Reflection is at the core of interpreters (like the Java in your Web browser), compilers, communications subsystems, schedulers, and numerous other standard computer programs. These were all developed because the computer operations themselves were so complex that programmers did not want to have to work with them directly.
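To make the stored-program idea concrete, here is a minimal sketch in Python (standing in for machine code) of a machine whose program is data in the same memory it acts on; the four-instruction set is invented for illustration:

```python
# The "memory" holds the description of action; the loop below is the
# medium of action, reading that same memory. The instruction set is
# a toy, not any real architecture.
memory = [
    ("load", 5),      # put 5 in the accumulator
    ("add", 3),       # add 3 to it
    ("print", None),  # show the accumulator
    ("halt", None),   # stop
]

def run(mem):
    acc, pc = 0, 0
    while True:
        op, arg = mem[pc]  # the machine reads its own description
        if op == "load":
            acc = arg
        elif op == "add":
            acc += arg
        elif op == "print":
            print(acc)  # prints 8
        elif op == "halt":
            return
        pc += 1

run(memory)
```

Because the program sits in ordinary memory, another program can read, describe, or rewrite it; this is the opening that interpreters, compilers, and schedulers exploit.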

Unsurprisingly, there has been lots of work on computational reflection both in artificial intelligence and in areas like programming languages where the computer is working with descriptions of its own behavior at different levels. Reflection in this sense permits humans to design and interact with much more complex systems than they could otherwise. A graduate student writing an artificial intelligence program today is probably dealing with less complexity than his advisor's advisors had to deal with in their own first research programming. But because today's computer has decades of reflective development in it, the actual behavior of the computer is much more complex, as it does dozens of different tasks, accommodates many different kinds of inputs, and manages much more complex interactions with its peripheral devices.

But reflection is also limited, because it does not include "awareness." Reflection assumes a model of our actions and behavior, but not a model of our models. In particular, reflection does not necessarily reveal the powerful assumptions which support and limit models. Reflection on a program's use of some model of dates will not identify its gaps or missing pieces nor explain the errors it encounters.

Consciousness

Consciousness is the awareness of our models. Like awareness, it requires a multiplicity of models. Like reflection, it involves a model of our own actions and thoughts. By combining reflection and awareness, consciousness includes the (partial) description of how a model selects and transforms what it is describing. Consciousness is the possibility of making explicit what had been implicit in our perceptions and actions.

This account resolves what seems paradoxical about accounts of human consciousness. Consciousness supposedly involves an awareness of how we are thinking, which should be quite abstract. Yet the phenomenology of consciousness is remarkably concrete, with its regard for "feelings" and "sense impressions" and "being there." If we take consciousness to be about models, however, the paradox dissolves: we can only "understand" a model by being able to describe its patterns of access, inference, and reference. And this requires describing the concrete things it is referring to as well as the ways in which we use it. To understand models, we must be concrete.

While reflection simply requires a description of the models we are using, consciousness requires a superset of that representation. We need to be able to say the things which the original model could say, but we also need to be able to say some things which the original model could not say. Consciousness is clumsy because it has to include so many of the things which our everyday models can ignore. If I pay attention to the placement of my feet on the treads of the stairs, I will have to walk more slowly. But the ability to pay attention in this way permits me to understand why I might slip or stumble and to guard against it.

While consciousness must be concrete, it does not necessarily need to be particularly physical or embodied. When an engineer is conscious of her use of complex numbers to represent signal and phase, she is not thinking of the physical measurements she might make to "ground" her model. Instead, she is thinking of flows of invisible electrons and their imperceptibly rapid patterns of oscillation, both of which are themselves models with their own assumptions. Consciousness is not the absence of assumptions made by models. Consciousness is the consistent relaxation of some of those implicit assumptions to allow other implicit assumptions to be made explicit.
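For instance, here is a minimal sketch, with illustrative values, of the engineer's complex-number model: a single value packing amplitude and phase together while ignoring everything else about the physical wave.

```python
import cmath
import math

# A phasor: one complex number modeling both the amplitude and the
# phase of a sinusoidal signal (values chosen for illustration).
z = cmath.rect(2.0, math.pi / 4)      # amplitude 2, phase 45 degrees

print(abs(z))                         # ~2.0, the signal's amplitude
print(math.degrees(cmath.phase(z)))   # ~45.0, its phase
```

The model's assumptions are right on its surface: a single steady frequency, a perfect sinusoid, and nothing about the wire carrying the current.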

In this sense, Rufus is probably not conscious though he is certainly aware. Rufus does not see the assumptions of his individual models, but the fact that he has many does make him aware of their gaps. Rufus's chief ways of getting unstuck are assistance (mine), fortune, and abandonment, none of which requires a model on his part.

Likewise, computers are typically not at all conscious though (as I said above) they are often reflective. It is this lack of consciousness which makes computers so difficult and frustrating to work with. Whenever the pairing of computer and human needs to develop (to do a task or learn to do a task), the human needs to provide all of the consciousness.

We humans, fortunately, are occasionally conscious. We are also, on occasion, conscious for our computers, our children, or our pets. This consciousness allows us to manage and transform our models and to break our children and pets out of cycles they cannot see themselves. It is also the basis of our ability to be creative by making new models which make problems easier to solve, fit our purposes better, and fill the gaps in our models which we can see but do not understand. And it is to this creativity that we now turn.

Copyright (C) 1997, 1998 by Kenneth Haase
Draft, not for citation or circulation