Thursday, October 27, 2011

epistemology and cognitive development

Cognitive developmentalists (cog devs) have, since at least the time of Piaget, taken an epistemological approach to understanding children. By epistemological, I mean that their investigations have been concerned with determining things like what children of different ages know, the progression of what kids know in a given area, and so forth. (Obviously behaviorists are not really included in this.) This approach has made substantial contributions, to a large extent because of how differently children think from adults. Familiar types of activities that seem simple and obvious to adults are approached in completely different ways by kids. Inabilities in a certain area have been attributed to children not understanding something about that particular area.

Despite the ready production of "results" using this approach, researchers have done little more than open up previously unknown areas of investigation. Empirical results on (e.g.) children's numerical thinking have made it very obvious that children think about numbers and counting differently than adults do. Yet it has been virtually impossible to move from negative statements about knowledge that can be easily backed up by research to positive statements that can be backed up and conclusively agreed upon. Instead, what ends up happening is that researchers make assumptions about the epistemological implications of a given behavior (what a certain behavior indicates that a kid knows), and draw conclusions on the basis of these generally questionable assumptions.

The assumptions are questionable because it is really hard to come to any conclusion about whether or not a kid "knows" something, for several reasons. First, behaviors are not proof of a certain form of knowledge. I may be able to count because I have been taught counting by rote, or I may be able to do it while understanding its function and how to do it correctly based on sound mathematical principles. The obvious reaction to this problem is to verify abilities seen in one area with abilities in another. E.g., to determine whether a child who can count a row of objects really understands counting, they could be asked to count to compare two sets of objects, or to pass a conservation task. Although these demonstrations may make certain conclusions increasingly plausible, the same objection that applied to the initial findings applies to these as well.

The second difficulty with attributing knowledge to kids is that it's not at all clear what knowledge is. I can say that I know how to drive a car, and this may be meaningful and effective for many social purposes, such as deciding whether or not I can take a shift on a long driving trip. But this just means that "my knowing something" has certain practical implications, specifically that it allows my behavior in a variety of situations to be predicted. This sense of knowledge is often useless for children, whose patterns of success and failure on related tasks are frequently baffling and surprising. For example, researchers have repeatedly demonstrated that children who can count a set of objects are unable to give out a requested number of objects when asked. This combination of ability and inability is particularly baffling to adults, who can hardly imagine doing one without doing the other. It's not at all clear what kind of "knowledge" these children would have to have to explain this constellation of abilities.

In my opinion, the concept of knowledge doesn't refer to a well-defined state, but rather serves a functional purpose: to make claims about the sorts of activity that might be expected from a certain person, based on past events. These predictions are grounded in the system of established regularities of human interaction, most of which are not understood.

Based on this closer look at the concept of "knowledge" (this applies to its close cousin understanding as well), what is its use in cognitive development?

I would argue that the use of epistemological terms such as "knowledge" in cognitive development results from prevailing cultural practices that are in place for dealing with thinking. We use these epistemological terms to make sense of other people's behavior and thinking because they are socially useful for these purposes. Because of the terms' usefulness in these contexts, they may have become reified, or treated as more real than they actually are. Continued use over time may have obscured the fact that epistemological terms like knowledge are representations of cognitive processes, not reflections of them. The result is that people continue to use them for purposes where they appear to have little use.

Cognitive development is the perfect example of this phenomenon of the reification of concepts leading to confusion within a field. Instead of serving to make sense of the subject matter, these concepts come to dominate the subject matter, to the extent that the actual subject of study is often ignored and replaced with an alternate object that more closely fits the concept. This can be seen when researchers try to determine (e.g.) whether or not children know that numerosity is a property of all number words. The idea that children possess this "knowledge" is at the forefront of the investigation, which is concerned with detecting its presence in children of a certain age. Investigators assess different aspects of children's behavior and draw conclusions about whether or not children possess the knowledge in question. All the while, it is never considered what the very idea of "having knowledge" even means for the child. It is simply assumed that the proper object of cog dev research is to detect supposed entities such as knowledge that exist beneath the surface, influencing (in certain contexts) children's activity.

This approach seems ridiculous when we reflect back on the earlier consideration of what it even means to have knowledge. Knowledge, it was concluded there, is a practical way of making sense of thinking for certain purposes. As the research example shows, however, this has become inverted. Rather than being a tool for generating understanding, knowledge has become the goal of the investigation. The incredibly ironic result is that the investigators have a thinking child in front of them during the experiment, and yet they ignore this thinking because of their preoccupation with the concept of knowledge, which social interaction has reified into an object of study rather than a way of representing thinking.

What has happened here can be humorously illustrated with an example. Imagine a detective who has been hired to track down a burglar, and has been given a picture of the burglar as a guide for who to look for. The detective then goes out with the picture and begins drawing other pictures of the faces he encounters, which he then compares to the original picture, in the hope of finding the thief.

This method of tracking down the criminal is analogous to the approach taken in cognitive development. Just as the detective placed too much importance on matching his subjects to the picture, so too do cognitive development researchers forget that culturally received ways of representing cognition are not reflections of cognition itself. All too often they, like the detective, overlook the actual thought processes that are the real subject matter of cognitive development, preferring instead to look for something that perfectly matches a particular representation of cognition.

Thursday, October 13, 2011

Thinking about thinking: a phenomenological look

A distinction can be drawn between (1) the linguistic descriptions of mental events and processes that we use to refer to firsthand experience of our own thinking, or to others' thinking inferred from their behavior.

(2) The actual events/processes as they occur as phenomenological or biological realities.

I should say from the outset that, especially in light of the title of this blog, I am not making a point about the "map and the territory" in a simplistic sense. That is, I am not pointing out that a linguistic description of an event is different from the phenomenological/biological reality of that event. I am looking at a deeper form of this distinction, one that commonly goes unrecognized, between (1) the mental events that we claim are occurring when we try to reflect on our thoughts (to ourselves or others) and (2) the mental events that are actually occurring.

This can be shown best with an example. As a student, I have a password for my computer account at school, and I have a separate password for the blackboard website (an online forum for class discussions, document postings, etc). These passwords are arbitrary strings of numbers/letters and are identical except that the last character of my computer password is 5, versus 3 for the blackboard password. Because I use the computer password significantly more than the BB password, I am used to typing in that password, and the BB password is more of an exception. This means that when I go to type that password in, I must remember that it is not the normal computer password, but something else (I frequently type in the computer pword by mistake).

Today, I was logging onto blackboard and the user/pword box came up, and I managed to remember to type in the correct (BB) password. When I reflected on this, I was struck to discover that my inner linguistic representation of the mental events that had transpired differed from my actual perception of these mental events. This is how I would describe the process of what happened:

The password box came up, and I remembered that the pword for BB is not my normal computer password, and that I should remember to type in the correct password.

In contrast to this, the actual sequence of experience was more like this:

Password box comes up > generic experience of caution connected with typing in the password > correct typing of the password.

In other words, the mental event that I would have described as having occurred in a specific, explicit, meaningful form instead occurred as the enactment of an undefined and generic cautious attitude. This attitude did not itself point towards the situation; rather, it occurred within my consciousness of the situation, and by virtue of this could only have been directed towards the situation, that being the only relevant context to which its meaning could apply.

In addition to lacking an inherent reference towards the situation that it ended up aiding, the attitude also lacked the specific meaning that I attributed to it in my verbal reflection. What went through my head was not "you need to remember that your password is wdfg3 not wdfg5," but only a generic attitude of cautious meaningfulness. This attitude was sufficient to bring about the effect of writing the correct password, but that's only because of the way that my attention was focused. My attention gave a specific form to a generic attitude that could have taken other forms in other contexts.

If realizations have the generic form that I am claiming, why do people describe them as specific and as having a meaning of their own? My answer would be that people speak about the mind in a way that doesn't accurately reflect the underlying structures and processes. This has been shown in a variety of instances, particularly over the last 50 years. For example, people commonly make a sharp distinction between perception and action that doesn't reflect their actual deep interrelation. In a somewhat similar way, we apply a structure to our thoughts and actions that is not necessarily reflective of their actual nature. The fact that these practices exist is evidence of their utility in the situations where they were developed. Unfortunately, the practices may not work when they're applied to new situations, which may demand that our characterizations of thought processes bear a closer resemblance to the underlying structure of biological/phenomenological events.

Any sophisticated study of cognition is a situation in which our "common sense" way of talking about thought processes is detrimental to our attempts at understanding. We may observe people's activity and describe their thought processes in commonsense terms, but these descriptions do not match what's actually going on. The commonsense characterization of thought processes does not reflect the systemic nature of the mind, in which certain realizations (or, as cog dev researchers like to say, "principles") may be embedded within contexts of activity, and don't necessarily exist as independent, linguistic principles. This may be the case, or it may become the case later in life. But it does not have to be the case, and I am arguing that it often isn't.

To go beyond this, cognitive scientists must find a way to characterize cognition as something that contains both linguistic/propositional and non-propositional knowledge. The distinction between explicit and implicit knowledge is not sufficient because even there, implicit knowledge is conceptualized as "hidden propositions."

One could object to this by saying, "Yes, but the linguistic representation is a convenient way of making sense of the knowledge. A representation cannot be the thing represented. A little corruption (or artistic license) is only the result of the inherently metaphorical nature of all representations." I agree with this in principle, but would argue that it doesn't apply here. If we fail to treat the distinction between propositional and implicit knowledge as a truly important one, then we have no conception of what language is with regard to the mind. To cast implicit (non-propositional) knowledge in propositional terms is to ignore its defining characteristic, and to blur the distinction between cultural and non-cultural ways of thinking. Metaphorical, artistic license is likely to be a necessity, but it cannot be exercised in a way that prevents the most central aspect of a given concept from showing through.

To restate this in terms specific to the problem at hand: We tend to describe the component thought processes that comprise the cognitive portion of any activity as being individually coherent, propositional entities. To do so is to overlook the fact that human cognition is coherent and propositional in terms of its totality, not in terms of its component parts. The component parts each interact to generate activity that may be described in totality with propositional linguistic terms. This does not mean that the individual parts have these characteristics.

Wednesday, October 12, 2011

Bringing Cultural Constructivism to Bear on the Practice of Psychology

The fields and theories of cultural psychology, sociocultural psychology, activity theory, and certain types of cognitive science and anthropology have in common a basic theory of cultural constructivism. This holds that the meanings we see in the world reflect certain symbolic distinctions that have been arrived at, and given meaning, through social interaction. This view can be more clearly understood by summarizing its opposite: the concepts that we use to make sense of the world (especially people's activity) reflect actual parts of the real world, and are therefore the only appropriate/available ways we can make sense of the world.

What I will refer to generally as cultural constructivism has made great strides in making sense of the world, especially differences between people's ways of thinking. I believe that its most important application may be psychology itself. By applying theories of cultural constructivism to psychology, we can examine how the concepts that psychology uses have been developed for specific purposes (often outside of psychology as a formal discipline) and have a specific utility. More importantly, we can learn to create new concepts that may be more appropriate for the demands of the field, since these are different from the everyday contexts in which existing concepts were developed.

The field of cognitive psychology, and particularly cognitive development, shows most clearly the need for new concepts. Work over the last two decades has focused on the question of what children know and when they know it. For example, "when do children know about objects?", "when do children understand the distinction between fantasy and reality?", etc. These inquiries have resulted in an abundance of empirical findings showing both proficiencies and deficits in young children's understanding. The problem is that the empirical findings (which are often compelling) are interpreted within a conceptual framework drawn from normative adult ways of thinking about and conceptualizing others.

THE ADULTIFICATION OF CHILDREN'S KNOWLEDGE
The use of this conceptual framework results in a characterization of developing knowledge in terms of what is known or unknown. This is problematic for two separate reasons. First of all, it overlooks the very real possibility of qualitative differences in knowledge. By this I mean that the difference between children's and adults' knowledge is not a matter of one knowing things that the other doesn't. Children's knowledge of a given area is not a subset of adult knowledge; it is qualitatively different and cannot be expressed in terms of some combination of adult concepts.

While characterizing children's knowledge in terms of adult concepts may account for empirical findings and allow for behavioral predictions, researchers run the risk that the important parts of children's knowledge are not characterized, or are characterized in misleading ways. This issue has come up in anthropology and cross-cultural psychology, where it is conceptualized in terms of the difference between forced-etic and emic descriptions.

UNCONCEPTUALIZED DIFFERENCES IN KNOWLEDGE
The second problem with the use of the known/unknown dichotomy is that it conceals the many different ways in which something may be known or unknown. For example, let's consider someone who "knows how to build a car," which we will take to mean someone who, when presented with the appropriate materials and asked to "build a car," will do so. For many practical purposes this may be sufficient, since it may distinguish between people who will and will not produce a car in a given situation.

However, if our purpose is not to have a car, but to understand the knowledge required to build a car, the characterization is unclear. A person who "knows how to build a car" might be a severely handicapped person who has been trained (perhaps via conditioning with rewards) to construct a car by following a specific set of procedures using specified materials. Alternatively, we might be dealing with a retired mechanic turned hobbyist with extensive knowledge of cars and how they work. Clearly, these two cases are different. What the former mechanic knows allows for a flexible approach to construction, one that adapts to unforeseen challenges, such as having a wrong-sized part. The handicapped person might be less flexible. Their knowledge may not allow them to adapt to unforeseen problems that come up. These problems may bring construction to a halt, or may be ignored because their significance is not recognized.

Poorly conceptualized differences in ways of knowing extend beyond procedural knowledge. Consider the knowledge that "heroin is a destructive drug that is ultimately not worth taking." Thinking that this is the case does not imply that a person is not a drug user, as heroin users may be well aware of the negative effects of their drug use, and wish to stop using. At the same time, ex-drug users may attribute their continued abstinence to this knowledge.

EXPLANATION
The problem above results from the use of an epistemological system that has emerged to satisfy certain needs in the human social world. This problem is not inherent to the epistemological system, but results from its use in contexts where it is not appropriate. When dealing with practical matters in familiar contexts, such as "will the car get built," characterizing someone as knowing or not knowing [how to build cars] is useful. The fact that it glosses over different ways of knowing is irrelevant for these purposes.

What appears to have happened is that the relativity of these distinctions to certain purposes has gone unrecognized, and the concepts have been reified within psychological research. Researchers studying cognition have assumed that received ways of characterizing cognition reflect the actual nature of psychological processes (or are the ideal way to study these processes). As a result, they have blindly shaped certain problems to fit these characterizations, even when this leads to problems like the two described above.

To do better work, psychologists must examine their conceptions and ways of making sense of phenomena, recognizing that these are simply cultural constructs. The logical next step is the formulation of better characterizations that reflect the phenomena at hand. In the next post, I will look at a further effect of applying the theories of cultural constructivism to psychology. In short, I argue that culturally received ways of talking about knowing reflect types of knowing that are specific to cultural activity and don't apply to non-cultural activity, such as that seen in infants and animals.