Thursday, November 3, 2011

Should the mind be extended?

A recent trend in cognitive science (as well as other areas of psychology) has been the call for an appreciation of the “extended mind.” This has been formulated in multiple ways, using different terms. For the present purposes I’m grouping all of these together because I want to discuss the basic idea, which is a challenge to our ordinary understanding of the mind as limited by the boundaries of the skin (or, in even more restrictive versions, by the brain). The term “the extended mind” has been used by Andy Clark and David Chalmers to describe this idea, and I cannot deny that they make a very convincing case.

Their argument is that to say that the cognitive processes of the individual person are contained by the skin (or, in more extreme versions, by the brain) is to fail to appreciate the extent to which these processes continually extend beyond the skin. Clark and Chalmers give the example of a man with a memory disorder who has to use a notebook that he carries with him to remember basic facts. For example, when going to the MoMA, he uses this notebook to remember the address (53rd St.). In contrast, the man’s wife has no memory disorder, needs no notebook, and can remember the address of the museum off the top of her head. Clark and Chalmers’ point is that while the processes of memory look different in the man and his wife, these are actually superficial differences, and the basic processes are much the same. Even if we have no memory disorder, we can all think of situations in which we don’t remember knowledge internally, but keep it inscribed in the world in a way that allows for convenient, seamless retrieval. This applies to cases beyond those involving factual knowledge like the example above. Clark gives repeated examples of how people reorganize the environment to facilitate memory; e.g., when we are cleaning vegetables, we may put the already cleaned ones in a separate pile to aid our memory. Further examples of this kind of extended cognition can be found in the work of Sylvia Scribner and Jean Lave.

While there have been arguments that external and internal memory are actually importantly different processes, I want to address a separate aspect of these extended mind claims: the very idea of a boundary between mind and non-mind.

What is the nature of such a boundary? While it may be conceived in terms of a specific physical feature (e.g., the skin), it is really the ascription of boundary-ness to a specific region of physical space. The distinction between mental and non-mental is a product of our symbolic culture, rather than a preexisting thing in the world.

If we understand the boundary in this way, the need to delineate it appears differently. Rather than being an unavoidable distinction whose specific location may be negotiated, the distinction becomes the result of a practical need for a boundary. It is therefore something in the world that has been constructed for a specific purpose, but need not be there for all purposes. If we want to discard it or disregard it, we may.

Seen like this, the idea of the extended mind may be called into question. Not because there has been solid evidence in favor of the mind stopping at the skin, but because making boundaries around the mind has no clear purpose when we are talking about cognitive processes. Rather than expanding the boundaries of the mind, a more practical option seems to be to disregard the idea of boundaries in the first place when talking about cognition. The availability of information in today’s world (at any given point in time) requires us to extend the boundaries of the mind in revolutionary and expansive new ways. If we take a historical approach, the boundaries of the mind extend even further. Why should our mind not encompass other minds, even if these are the minds of people who died long before we were born, but who somehow influenced us? For the sake of practicality, I would have to argue that the idea of a boundary for the mind is of little use for cognitive science.

With practicality in mind, it is useful to reexplore the traditional conception of the non-extended mind: the mind which stops at the skin. In spite of Clark and Chalmers’ convincing argument, this way of putting a boundary on the mind is, for practical purposes, almost essential. The value of this way of delineating the mind lies in the physical affordances of the body and the external world. Despite the fact that parts of the external world (like notebooks) can serve as extensions of our mind, they are inherently not connected to us physically. We may bring a notebook with us, or we may not. In contrast, we have no option of forgetting our arms or legs. By not including these external things in our idea of a “person” or their “mind,” we provide a built-in way to remember that they must be talked about separately, because they can be forgotten or misplaced. If we didn’t do this, our ability to conveniently coordinate interpersonal activity would be hindered because the “person” would never be a clear entity.

Thursday, October 27, 2011

epistemology and cognitive development

Cognitive developmentalists (cog devs) have, since at least the time of Piaget, taken an epistemological approach to understanding children. By epistemological, I mean that their investigations have been concerned with determining things like what children of different ages know, the progression of things kids know in a given area, and so forth. (Obviously behaviorists are not really included in this.) This approach has made substantial contributions, to a large extent because of how different children's thinking is from adults'. Familiar types of activities that seem simple and obvious to adults are approached in completely different ways by kids. Inabilities in a certain area have been attributed to children not understanding things about that particular area.

Despite the ready production of "results" using this approach, researchers have done little more than open up previously unknown areas of investigation. Empirical studies of (e.g.) children's numerical thinking have made it very obvious that children think about numbers and counting differently than adults do. Yet, it has been virtually impossible to move from negative statements about knowledge that can be easily backed up by research, to positive statements that can be backed up and conclusively agreed upon. Instead, what ends up happening is that researchers make assumptions about the epistemological implications of a given behavior (what a certain behavior indicates that a kid knows), and draw conclusions on the basis of these generally questionable assumptions.

The assumptions are questionable because it is really hard to come to any conclusion about whether or not a kid "knows" something, for several reasons. First, behaviors are not proof of a certain form of knowledge. I may be able to count because I have been taught counting by rote, or I may be able to do it while understanding its function and how to do it correctly based on sound mathematical principles. The obvious reaction to this problem is to verify abilities seen in one area with abilities in another. E.g., to determine whether a child who can count a row of objects really understands counting, they could be asked to count to compare two sets of objects, or to pass a conservation task. Although these demonstrations may make certain conclusions increasingly plausible, the same objections that applied to the initial findings apply to these.

The second difficulty with attributing knowledge to kids is that it's not at all clear what knowledge is. I can say that I know how to drive a car, and this may be meaningful and effective for many social purposes, such as deciding whether or not I can take a shift on a long driving trip. But this just means that "my knowing something" has certain practical implications, specifically that it allows my behavior in a variety of situations to be predicted. This sense of knowledge is often useless for children, whose patterns of success and failure on related tasks can be baffling and surprising. For example, researchers have repeatedly demonstrated that children who can count a set of objects are unable to give out a requested number of objects. This combination of ability and inability is particularly baffling to adults, who can hardly imagine doing one without doing the other. It's not at all clear what kind of "knowledge" these children would have to have to explain this constellation of abilities.

In my opinion, the concept of knowledge doesn't refer to a well defined state, but rather serves a functional purpose to make claims about the sorts of activity that might be expected from a certain person, based on past events. These predictions are grounded in the system of established regularities of human interaction, most of which are not understood.

Based on this closer look at the concept of "knowledge" (this applies to its close cousin "understanding" as well), what is its use in cognitive development?

I would argue that the use of epistemological terms such as "knowledge" in cognitive development results from prevailing cultural practices that are in place for dealing with thinking. We use these epistemological terms to make sense of other people's behavior and thinking because they are socially useful for these purposes. Because of the terms' usefulness in these contexts, they may have become reified, or treated as more real than they actually are. Continued use over time may have obscured the fact that epistemological terms like knowledge are representations of cognitive processes, not reflections of them. The result is that people continue to use them for purposes where they appear to have little use.

Cognitive development is the perfect example of this phenomenon of the reification of concepts leading to confusion within a field. Instead of serving to make sense of the subject matter, these concepts instead dominate the subject matter, to the extent that the actual subject of study is often ignored, and replaced with an alternate object that more closely fits the concept. This can be seen when researchers are trying to determine (e.g.) whether or not children know that numerosity is a property of all number words. The idea that children possess this "knowledge" is at the forefront of the investigation, which is concerned with detecting its presence in children of a certain age. Investigators assess different aspects of children's behavior, and draw conclusions about whether or not they possess the knowledge in question. All the while, it is never considered what the very idea of "having knowledge" even means for the child. It is simply assumed that the proper object of cog dev research should be to detect supposed entities such as knowledge that exist beneath the surface, influencing (in certain contexts) children's activity.

This approach seems ridiculous when we reflect back on the earlier consideration of what it even means to have knowledge. Knowledge, it was concluded there, is a practical way of making sense of thinking for certain purposes. As the research example shows, however, this has become inverted. Rather than being the tool for generating understanding, knowledge has become the goal of the investigation. The incredibly ironic result is that the investigators have a thinking child in front of them during the experiment, and yet they ignore this thinking because of their preoccupation with the concept of knowledge, which social interaction has reified into an object of study, rather than a way of representing thinking.

What has happened here can be humorously illustrated with an example. Imagine a detective who has been hired to track down a burglar, and has been given a picture of the burglar as a guide for who to look for. The detective then goes out with the picture and begins drawing other pictures of the faces he encounters, which he then compares to the original picture, in the hope of finding the thief.

This method of tracking down the criminal is analogous to the approach taken in cognitive development. Just as the detective placed too much importance on matching his own drawings to the picture, so too do cognitive development researchers forget that culturally received ways of representing cognition are not reflections of cognition itself. All too often they, like the detective, overlook the actual thought processes that are the real subject matter of cognitive development, preferring instead to look for something that perfectly matches a particular representation of cognition.

Thursday, October 13, 2011

Thinking about thinking: a phenomenological look

A distinction can be drawn between (1) the linguistic descriptions of mental events and processes that we use to refer to firsthand experience of our own thinking, or to others' thinking inferred from their behavior, and

(2) the actual events/processes as they occur as phenomenological or biological realities.

I should say from the outset that, especially in light of the title of this blog, I am not making a point about the "map and the territory" in a simplistic sense. That is, I am not pointing out that a linguistic description of an event is different from the phenomenological/biological reality of that event. I am looking at a deeper form of this distinction, one which commonly goes unrecognized: the distinction between (1) the mental events that we claim are occurring when we try to reflect on our thoughts (to ourselves or others) and (2) the mental events that are actually occurring.

This can be shown best with an example. As a student, I have a password for my computer account at school, and I have a separate password for the Blackboard website (an online forum for class discussions, document postings, etc.). These passwords are arbitrary strings of numbers/letters and are identical except that the last character of my computer password is 5, versus 3 for the Blackboard password. Because I use the computer password significantly more than the BB password, I am used to typing in that password, and the BB password is more of an exception. This means that when I go to type that password in, I must remember that it is not the normal computer password, but something else (I frequently type in the computer pword by mistake).

Today, I was logging onto blackboard and the user/pword box came up, and I managed to remember to type in the correct (BB) password. When I reflected on this, I was struck to discover that my inner linguistic representation of the mental events that had transpired differed from my actual perception of these mental events. This is how I would describe the process of what happened:

The password box came up, and I remembered that the pword for BB is not my normal computer password, and that I should remember to type in the correct password.

In contrast to this, the actual sequence of experience was more like this:

Password box comes up > generic experience of caution connected with typing in a password > correct typing of the password.

In other words, the mental event that I would have described as having occurred in a specific, explicit, meaningful form instead occurred as the enactment of an undefined and generic cautious attitude. This attitude did not explicitly refer to the situation; rather, it occurred within my consciousness of the situation, and by virtue of this could only have been directed towards the situation, that being the only relevant context to which its meaning could apply.

In addition to lacking an inherent reference towards the situation that it ended up aiding, the attitude also lacked the specific meaning that I attributed to it in my verbal reflection. What went through my head was not "you need to remember that your password is wdfg3 not wdfg5," but only a generic attitude of cautious meaningfulness. This attitude was sufficient to bring about the effect of writing the correct password, but that's only because of the way that my attention was focused. My attention gave a specific form to a generic attitude that could have taken other forms in other contexts.

If realizations have the generic form that I am claiming, why do people describe them as specific and as having a meaning of their own? My answer would be that people speak about the mind in ways that don't accurately reflect the underlying structures and processes. This has been shown in a variety of instances, particularly over the last 50 years. For example, people commonly make a sharp distinction between perception and action that doesn't reflect their actual deep interrelation. In a somewhat similar way, we apply a structure to our thoughts and actions that is not necessarily reflective of their actual nature. The fact that these practices exist is proof of their utility in the situations where they were developed. Unfortunately, the practices may not work when they're applied to new situations, which may demand that our characterizations of thought processes bear a closer resemblance to the underlying structure of biological/phenomenological events.

Any sophisticated study of cognition is a situation in which our "common sense" way of talking about thought processes is detrimental to our attempts at understanding. We may observe people's activity and describe their thought processes in commonsense terms, but these descriptions do not match what's actually going on. The commonsense characterization of thought processes does not reflect the systemic nature of the mind, in which certain realizations (or, as cog dev researchers like to say, "principles") may be embedded within contexts of activity, and don't necessarily exist as independent, linguistic principles. This may be the case, or it may become the case later in life. But it does not have to be the case, and I am arguing that it often isn't.

To go beyond this, cognitive scientists must find a way to characterize cognition as something that contains both linguistic/propositional and non-propositional knowledge. The distinction between explicit and implicit knowledge is not sufficient because even there, implicit knowledge is conceptualized as "hidden propositions."

One could object to this by saying, "Yes, but the linguistic representation is a convenient way of making sense of the knowledge. A representation cannot be the thing represented. A little corruption (or artistic license) is only the result of the inherently metaphorical nature of all representations." I agree with this in principle, but would argue that it doesn't apply here. If we fail to make the distinction between propositional and implicit knowledge a truly important one, then we have no conception of what language is with regard to the mind. To cast implicit (non-propositional) knowledge in propositional terms is to ignore its defining characteristic, and to blur the distinction between cultural and non-cultural ways of thinking. Metaphorical, artistic license is likely to be a necessity, but it cannot be exercised in a way that prevents the most central aspect of a given concept from showing through.

To restate this in terms specific to the problem at hand: We tend to describe the component thought processes that comprise the cognitive portion of any activity as being individually coherent, propositional entities. To do so is to overlook the fact that human cognition is coherent and propositional in terms of its totality, not in terms of its component parts. The component parts each interact to generate activity that may be described in totality with propositional linguistic terms. This does not mean that the individual parts have these characteristics.

Wednesday, October 12, 2011

Bringing Cultural Constructivism to Bear on the Practice of Psychology

The fields and theories of cultural psychology, sociocultural psychology, activity theory, and certain types of cognitive science and anthropology have in common a basic theory of cultural constructivism. This holds that the meanings that we see in the world reflect certain symbolic distinctions that have been arrived at and given meaning through social interaction. This view can be more clearly understood by contrasting it with its opposite: the view that the concepts we use to make sense of the world (especially people's activity) reflect actual parts of the real world, and are therefore the only appropriate/available ways we can make sense of the world.

What I will refer to generally as cultural constructivism has made great strides in making sense of the world, especially differences between people's ways of thinking. I believe that its most important application may be psychology itself. By applying theories of cultural constructivism to psychology, we can examine how the concepts that psychology uses have been developed for specific purposes (often outside of psychology as a formal discipline) and have a specific utility. More importantly, we can learn to create new concepts that may be more appropriate for the demands of the field, since these are different from the everyday life contexts where existing concepts have been developed.

The field of cognitive psychology, and particularly cognitive development, shows most clearly the need for new concepts. Work over the last two decades has focused on the question of "what children know" and when they know it. For example, "when do children know about objects?", "when do children understand the distinction between fantasy and reality?", etc. These inquiries have resulted in an abundance of empirical findings showing both proficiencies and deficits in young children's understanding. The problem is that the empirical findings (which are often compelling) are interpreted within a conceptual framework drawn from normative adult ways of thinking and conceptualizing others.

THE ADULTIFICATION OF CHILDREN'S KNOWLEDGE
The use of this conceptual framework results in a characterization of developing knowledge in terms of what is known or unknown. This is problematic for two separate reasons. First of all, it overlooks the very real possibility of qualitative differences in knowledge. By this I mean that the difference between children's and adults' knowledge is not a matter of one knowing things that the other doesn't. Children's knowledge about a given area is not a subset of adult knowledge, but is qualitatively different and cannot be expressed in terms of some combination of adult concepts.

While characterizing children's knowledge in terms of adult concepts may account for empirical findings, and allow for behavioral predictions, researchers run the risk that the important parts of children's knowledge are not characterized, or are characterized in misleading ways. This issue has come up in anthropology and cross-cultural psychology, where it is conceptualized in terms of the difference between forced-etic and emic descriptions.

UNCONCEPTUALIZED DIFFERENCES IN KNOWLEDGE
The second problem with the use of the known/unknown dichotomy is that it conceals the many different ways in which something may be known or unknown. For example, let's consider someone who "knows how to build a car," which we will take to mean someone who, when presented with the appropriate materials and asked to "build a car" will do so. For many practical purposes this may be sufficient, since it may distinguish between people who will and will not produce a car in a given situation.

However, if our purpose is not to have a car, but to understand the knowledge required to build a car, the characterization is unclear. A person who "knows how to build a car" might be a severely handicapped person who has been trained (perhaps via conditioning with rewards) to construct a car by following a specific set of procedures using specified materials. Alternatively, we might be dealing with a retired mechanic turned hobbyist with extensive knowledge of cars and how they work. Clearly, these two cases are different. What the former mechanic knows allows for a flexible approach to construction, one that adapts to unforeseen challenges, such as having the wrong-sized part. The handicapped person might be less flexible. Their knowledge may not allow them to adapt to unforeseen problems that come up. These problems may bring construction to a halt, or may be ignored because their significance is not recognized.

Poorly conceptualized differences in ways of knowing extend beyond procedural knowledge. Consider the knowledge that "heroin is a destructive drug that is ultimately not worth taking." Thinking that this is the case does not imply that a person is not a drug user, as heroin users may be well aware of the negative effects of their drug use, and wish to stop using. At the same time, ex-drug users may attribute their continued abstinence to this knowledge.

EXPLANATION
The problem above results from the use of an epistemological system that has emerged to satisfy certain needs in the human social world. This problem is not inherent to the epistemological system, but results from its use in contexts where it is not appropriate. When dealing with practical matters in familiar contexts, such as "will the car get built," characterizing someone as knowing or not knowing [how to build cars] is useful. The fact that it glosses over different ways of knowing is irrelevant for these purposes.

What appears to have happened is that the relativity of these distinctions to certain purposes has gone unrecognized, and the concepts have been reified within psychological research. Researchers studying cognition have assumed that received ways of characterizing cognition reflect the actual nature of psychological processes (or are the ideal way to study these processes). As a result, they have blindly shaped certain problems to fit these characterizations, even when this leads to problems like the two described above.

To do better work, psychologists must examine their conceptions and ways of making sense of phenomena, recognizing that these are simply cultural constructs. The logical next step is the formulation of better characterizations that reflect the phenomena at hand. In the next post, I will look at a further effect of applying the theories of cultural constructivism to psychology. In short, I argue that culturally received ways of talking about knowing reflect types of knowing that are specific to cultural activity and don't apply to non-cultural activity, such as that seen in infants and animals.

Wednesday, March 9, 2011

Reasons for doing things

Research has been done in psychology showing that while people can readily supply reasons for their behavior, these appear to be incorrect and created after the fact. In other words, these post hoc rationalizations are nice, believable stories that we come up with based on what we've just done.

The question that follows this very interesting finding is, "Well, what are the actual reasons for people's behavior? How can we discover these?" Presumably if the reasons why people think they do what they do are wrong, then there must be an accurate explanation somewhere. The question is, what is the nature of this "actual" explanation?

Some might be inclined to give an alternative intentional, psychological reason as the "true cause" of the behavior in question. I believe that such an approach is incorrect. Attributing intentional psychological causes to behavior, e.g. "I did it because I selfishly wanted you all to myself!", implies that these intentional causes exist unconsciously, hidden beneath our incorrect explanations. This implies a hidden, yet unified self that exists below the surface, which I find troubling, not because it's "scary" but because it doesn't really make a lot of sense.

A much more reasonable option is to see any given action as the product of many different unconscious motivational cues coming together at one point in time (in cases where our actions are not entirely consciously and intentionally guided; in those cases our understanding of why we did something is probably accurate).

Following this, the idea of someone doing something for a reason only exists on the level of linguistic-semiotic sense making. It's not simply that we're unaware of the causes of some of our actions; it's that asking for the "actual cause" of any given action is a question that often makes no sense on the linguistic-semiotic level.

Friday, February 4, 2011

Insanity

Is it possible that insanity (or simply the tendency to come to believe unreasonable things) is actually an adaptation that allows people to adjust to new states of affairs that may be less desirable without becoming miserably unhappy? In the same way that the potential for genetic mutations is a crucial part of evolutionary processes, a tendency for brain pathology might be a way for the brain to adapt to a new set of surroundings that might not automatically be fit for a happy adaptation...

Tuesday, January 25, 2011

Human Rationality

I've recently become somewhat obsessed with the idea of "rational human action." Before I go any farther down what I think will be a contentious road, I will first say that I am by no means denying the amazing rational abilities of human beings, but as the reader might be beginning to suspect, I am calling the extent of their operation into question. It's my view that the rational abilities that we do have produce such impressive results that we jump to the conclusion that our behavior is entirely rational, as well as to the other questionable conclusion that non-rational behavior (or incompletely rational behavior) is useless.

My conclusions here are primarily based on my own experiences navigating my own life, and because of that they are certainly biased; I'm willing to admit that I am definitely "less rational" than most other people in my social sphere. Yet, the extent of my own irrationality is so great that even if other people are substantially more rational than I am, they are still necessarily quite irrational.

But all of this is getting ahead of an important preliminary point: What is rationality in the first place? By rationality, I mean the type of propositional thinking that we use to navigate both the immediate physical world in front of us, as well as the symbolic world that we navigate when we make plans for the future. This rationality is similar to what I've called in the post below this one the "logic of experience." For example, walking down the street, there is a light pole directly in front of me, and open sidewalk to the left. Rationality is the process that involves the integration of the various meanings present before me: the relation between the light pole, the open sidewalk, and my current goal/activity of walking in a specific direction in a specific part of the environment. Each of these has a variety of meanings and possible meanings, and my rationality is the process that weaves these meanings together, enabling me to achieve my goal within the constraints and possibilities of the conditions confronting me. As I mentioned above, rationality is not just used in the concrete physical world, but also in the purely symbolic world. Planning my day, I take into account various factors such as what I want to do, what resources I have at my disposal, how much time I have, etc. Even though these various factors aren't in front of me, I can manipulate them symbolically to make symbolic plans.

These kinds of rational activity occur all the time. They might be called the "bread and butter" of our experience, and they occur consciously and unconsciously. But not all that we do is rational for various reasons. For one thing, we don't always take all that we know into account when we're making a decision. We might forget factors or constraints that would have otherwise affected what we would choose to do. In addition, the meanings of the various components that go into our decision making process are not completely stable. Just as the effect of someone yelling at 90 dB in a small room depends on the shape/size of the room, or the resulting bounce of a 200 lb man landing on a trampoline depends on other activity on the trampoline, so too does our evaluation of different possibilities depend on the other activity going on in our mind.

What accounts for the difference between this "less rational" functioning and the more clear-cut rational thinking described above (e.g., I move to the left to avoid a light pole because I can't walk through it)? The difference seems to be the strength of the constraints. In the first example, the constraints press tightly on the decision-making process, leaving only one clear path of action. In the second kind of case, the constraints allow for more flexibility; there isn't just one good course of action. Our ability to think through those types of situations is influenced by how much we are able to bring to mind that is relevant to our decision, even though, as I said, the weight of the factors we bring forth is partly a function of our mental state at the time (e.g., although I have a constant susceptibility to being sunburned, my susceptibility to taking this fact into account changes depending on whether I'm feeling reckless or health conscious).


There's another limitation to human rationality that I have not yet dealt with, and I will mention it briefly. So far, I have spoken about how people act rationally or irrationally in simple cause>effect situations. The rationality of a given decision depends on the framing of the relevant aspects of the decision (i.e., what is taken as cause and effect), as well as on the knowledge involved in making it. A woman's decision to drive to work is rational if she must go 20 miles in 20 minutes and owns a car. If she knows that a crucial bridge was washed away in a storm, blocking her path, her decision to drive to work is no longer rational (though it would have been had she not known about the washout).

In the above example, the distinction between rationality and irrationality is clear-cut and obvious. In the real world, things are often much more complicated: the difference between knowing and not knowing something covers a large gray area in which certain things are implied to varying degrees but are not explicitly obvious. Some effects of our actions are readily predictable, whereas others show up only under close examination. Still, no amount of examination can show us the complete implications of our actions, and in this way our rationality is limited.

Complexity and Human Life

For this post (and hopefully several future posts), I'm going to focus on the relation between the ideas of chaos and complexity and human life, looking at how the principles of the former are manifested in the latter. I aim to show that the essence of adaptive, living systems is most easily seen in this relationship.

To start off, complexity and chaos will be discussed here in a slightly simplified, non-mathematical way that is most in line with my own understanding. While a more thorough treatment might allow for more analytical possibilities, the kinds of things I want to look at here are at a basic level and don't require that depth of analysis. I'm going to be looking at the ideas of chaos and complexity as exemplified by the phenomenon of sensitivity to initial conditions, commonly illustrated by the "butterfly effect."
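Sensitivity to initial conditions can be made concrete with a toy model. The sketch below uses the logistic map, a standard textbook example of a chaotic system (not something discussed in this post); the particular starting values and step count are my own illustrative choices.

```python
# A minimal sketch of sensitivity to initial conditions ("the butterfly
# effect") using the logistic map x_{n+1} = r * x_n * (1 - x_n), with
# r = 4, a parameter value well known to produce chaotic behavior.

def logistic_map(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two trajectories that start a hair's breadth apart (difference of 1e-10)...
a = logistic_map(0.2)
b = logistic_map(0.2 + 1e-10)

# ...end up wildly different within a few dozen iterations, because the
# tiny initial gap roughly doubles at every step.
diffs = [abs(x - y) for x, y in zip(a, b)]
print(f"difference at step 0:  {diffs[0]:.1e}")
print(f"largest difference over 50 steps: {max(diffs):.3f}")
```

The point of the demo is only that an imperceptible difference at the start becomes a macroscopic difference later, which is the informal content of the butterfly-effect examples that follow.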

If we take a look at human life, we find some validation for the butterfly effect. Small events that happen to each of us can have very large consequences. There is a multitude of examples of this, from a chance lottery win, to getting stuck in traffic and missing a flight that later crashes, and so on. Yet these sequences of small event>huge consequence are not all the same. On one hand, there are examples like the ones just mentioned, in which a drastic consequence results from seemingly arbitrary conditions. But there are just as many events in our lives that follow the small event>drastic outcome pattern and are less arbitrary. A man who happens to see his wife doing something that implies infidelity divorces her and breaks up a family. Or the president of one country makes a brief, highly offensive remark to another leader, sparking a war. In these two examples, huge consequences follow small events, but their unfolding is not arbitrary; rather, it is an understandable sequence of human action.

This shows that human beings (and this is certainly true of other animals as well) incorporate the chaotic dynamics of complex systems into the processes of daily life. While this is not surprising, since organisms are themselves complex chaotic systems, it is impressive that living systems are able to make sense of, and then utilize, complex chaotic processes for their own gain.

Making sense of a chaotic system is not an easy task, and in most situations in which people try to do this, they fail. Predicting the stock market, the weather, or international affairs is possible only in the short term, and sometimes not even then. Our failures here suggest that in those situations in which we are able to predict the dynamics of complex chaotic systems, those dynamics must be organized in a way that is predictable, or at least partially makes sense to us.

The situations in which human beings and other animals do make sense of the behavior of complex chaotic systems all involve predicting one's own or another animal's behavior (or they are trivial short-term predictions, like predicting a thunderstorm from approaching clouds). The basis for such prediction lies in the experientially derived logic of one's own experience. From living in the world, an organism is affected by a range of emotional states coupled to its acts. The unfolding of this coupled affective>active>affective>active sequence yields patterns over time that the animal uses to know itself, or to know others (in those cases when an animal can know others, as humans can). In other words, there are patterns in this affective>active sequence that comprise the "logic of experience." The logic of experience provides a predictive schema for understanding the interlinking of the affective, motivational, and active dimensions of experience.

The logic of experience is learned from firsthand experience, and although it makes sense of physical processes in the world (along with their affective dimensions), it is a different logic from the logic of physical processes, or "folk physics" (which is also learned from experience, but lacks an affective dimension). The logic of experience allows us to predict processes in systems so complex that they would endlessly baffle our capacity for folk physics (imagine trying to predict another's actions, even in the most trivial circumstances, based solely on the interaction of physical particles in their body and the surrounding environment). This logic is, in essence, an evolved mechanism for making sense of chaos.