Wednesday, July 29, 2009

Systemic Awareness

I came up with this new term in the hopes that it will be a useful tool for making sense of a subject I've been dwelling on for some months now.  The subject in question concerns the limits of our intentionality.

We have wishes and desires, and knowledge that relates to these.  The fact that we are able to use our bodies to do things we want does not mean that our bodies will align themselves precisely with our wishes.  Picture a drug addict who is fully aware of the damage his addiction is doing to himself and others, yet who continues to use drugs.  Even though he knows on some level that he does not want to keep using, certain systems within his body "want" him to continue.

I understand this phenomenon as being caused by a partial lack of systemic awareness.  A system achieves a minimum level of systemic awareness when it contains even a single mechanism for perceiving and responding to an environmental cue adaptively; it becomes an adaptive entity when such mechanisms emerge at the level of the system itself.  These mechanisms require three interrelated parts: receptors (to sense), effectors (to act) and the appropriate connections between them.

The need for the appropriate links between receptors and effectors can be seen within the human body: if I were told that I was about to be bombarded with bacteria (at a higher level than usual), I would be unable to mobilize my immune system to prepare for the onslaught.  The immune system does have the ability to "work overtime", but that ability cannot be elicited by my knowledge of my current state.  This is a case of a lack of systemic awareness.
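To make this concrete, here is a toy sketch in Python.  Everything in it (the System class, the cue names, the wiring) is my own invention for illustration, not a model from any source I discuss; it just shows how a wired receptor-effector pair yields an adaptive response, while a cue with no wiring, like my verbal knowledge of an impending infection, changes nothing.

```python
class System:
    def __init__(self):
        self.connections = []  # (receptor, effector) pairs: the "appropriate links"

    def connect(self, receptor, effector):
        """Wire a receptor (a sensing function) to an effector (an acting function)."""
        self.connections.append((receptor, effector))

    def respond(self, environment):
        """One adaptive cycle: run each wired receptor against the environment
        and fire its effector only when a cue is actually sensed."""
        for receptor, effector in self.connections:
            cue = receptor(environment)
            if cue is not None:
                effector(cue)

# A receptor: turns one aspect of the environment into a cue.
def sense_pathogens(env):
    return "pathogens" if env.get("pathogen_level", 0) > 1 else None

# An effector: acts in response to a cue.
def immune_response(cue):
    print("mobilizing immune response to", cue)

body = System()
body.connect(sense_pathogens, immune_response)

# The pathogen receptor is wired to an effector, so the system adapts:
body.respond({"pathogen_level": 5})     # prints: mobilizing immune response to pathogens

# A verbal warning is present in the environment, but no receptor-effector
# link exists for it, so the immune system is never mobilized.  Knowledge at
# one level cannot reach mechanisms at another: a lack of systemic awareness.
body.respond({"verbal_warning": True})  # prints nothing
```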

For another example, we may jump to a higher level.  The ecosystem of planet Earth is being driven away from its current state by the effects of human life.  Considered as an organism itself, humanity has some knowledge of what is happening, yet it is unclear whether the appropriate steps will be taken in time to prevent disaster.  If disaster is averted by collective action, it will most likely be because humanity has developed the appropriate mechanisms of perception and action.

I think we are in a position to learn some valuable lessons about how to prevent climate change by looking at the ways that adaptive chains of perception and action occur in the human organism.  It's important to point out that the bodily mechanisms that perform a given action are not necessarily mobilized when a different level of organization in the body knows that the action is needed.  So, if humanity is going to deal with the climate crisis in a way that bears any resemblance to the way other biological organisms deal with crises, we ought not to expect the solution to come from a mere reversal of every process that has been causing climate change.

Tuesday, July 21, 2009

I think that much of the literature espousing a "systems" perspective is weakened by one point: there is a persistent failure to acknowledge that people in general already realize that, for lack of a better phrase, "everything affects everything else."

The importance of systems theory does not derive from this realization, but from founding a way of thinking on systems rather than discrete entities.  This subtle difference was, for me, the biggest impediment to appreciating the perspective, which is epitomized for me by Gregory Bateson.

Knowledge, Adaptation and Meaning

I've been thinking recently about knowledge and adaptation.  About six months ago I started getting into Piaget, as well as others who may be labeled "evolutionary epistemologists".  The result of this encounter was that I came to see a strong link between knowledge and adaptation: specifically, the idea that adaptations constitute knowledge about the environment.

Recently, I've been thinking about the difference between adaptation and knowledge in its everyday sense.  I still think there's a strong link between the two, but I don't think they should be considered synonymous.  It's incorrect to say that knowledge is inherently adaptive; rather, knowledge allows for the spontaneous and flexible creation of adaptive behavior.  For example, if I hear that a bridge I must take on my way to work has washed out, I will use this knowledge to take a different route.  The knowledge about the washed-out bridge is not in itself adaptive, but it allows for adaptive behavior.

What follows is meant to clear up the relation between adaptation, meaning and knowledge.  My goal has been to get at the pure essence of these things, inasmuch as that is possible.  I believe that in our day-to-day lives, knowledge, meaning and adaptation are interwoven in incredibly complex ways.  Nevertheless, I feel that what follows sheds light on what would otherwise be just a mess.

What is it about knowledge that allows it to be used in the creation of adaptive behavior if it is not intrinsically adaptive?  I believe that knowledge can be understood as something that is predictive (or believed to be predictive) of sensory-motor activity.  Returning to the example of the washed-out bridge, the knowledge about the bridge is a prediction about possible sensory-motor activity: specifically, the sensory-motor activity that will occur when one is at the bridge.

So far I have explained knowledge as something that is predictive (or believed to be predictive, in the case of false knowledge) of potential sensory-motor activity.  This leaves unresolved the main question of how knowledge may be used adaptively.  To resolve this issue, we have to assume that sensory-motor activity itself has meaning.  By meaning I mean that we are able to interpret sensory-motor activity as it relates to our livelihood (and especially our survival).
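To make this concrete, here is a toy sketch in Python.  The routes, predicted experiences and scoring are all hypothetical, invented purely for illustration; the point is only that knowledge (a prediction about sensory-motor activity) is inert until its predictions are interpreted through their meaning (their bearing on one's livelihood), at which point adaptive behavior can be selected.

```python
# Knowledge: predictions about the sensory-motor activity each route affords.
predicted_experience = {
    "bridge_route": "arrive at washed-out bridge, cannot cross",
    "detour_route": "longer drive, arrive at work",
}

# Meaning: how a predicted experience bears on my livelihood.
def value(experience):
    if "cannot cross" in experience:
        return -10  # the predicted outcome thwarts the goal
    if "arrive at work" in experience:
        return +10  # the predicted outcome serves the goal
    return 0

# The knowledge by itself is inert; adaptive behavior appears only when its
# predictions are evaluated for meaning and the best option is chosen.
route = max(predicted_experience, key=lambda r: value(predicted_experience[r]))
print(route)  # detour_route
```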

Friday, July 3, 2009

What's so special about natural intelligence?

To be blunt, I think that a lot of the mainstream arguments for why human beings are not like digital computers miss the point.  I'm not one to pontificate about this, since I am just an amateur, but it is only recently (in the last six months) that I have been exposed to some really solid arguments on this topic.  In this post, I want to single out three works that really get at the meat of the issue, and I'll talk about the issue itself first.

All three of these works emphasize what I think is the crucial and criminally overlooked distinction between artificial and natural intelligence.  The essence of this distinction is that, in computers, the meaningfulness of different symbols is specified externally by the programmer.  In animals, the meaningfulness of a specific part of the environment is the outcome of the developed structure of the animal itself.  The animal has developed within the environment in ways that make specific aspects of that environment meaningful.  This is a subtle point that I resisted at first.  Another way of putting it is to say that NOTHING (or very little) is meaningful to a computer, because the syntactic operations that occur therein have been specified by a human, and are therefore only meaningful to the human.  Content on computers may be part of meaningful loops, but those loops always include a human being.
I should mention that there is an exception to this.  Feedback loops within the computer that are monitored and are important to the computer's functioning are certainly like parts of the human body.  However, these are not typically the reason why people equate computers with humans, and the same types of feedback loops can be found in many devices, like thermostats.
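For concreteness, here is a minimal thermostat-style loop in Python (the setpoint, band and readings are arbitrary illustrations).  The loop senses, compares and acts entirely within the device, so no human is needed to close it; but what the setpoint means was still fixed by a person.

```python
def thermostat_step(temperature, setpoint=20.0, band=0.5):
    """One sense-compare-act cycle: return the heater command."""
    if temperature < setpoint - band:
        return "heat_on"
    if temperature > setpoint + band:
        return "heat_off"
    return "hold"

# A stream of temperature readings stands in for the sensed environment.
for reading in [18.0, 19.7, 20.2, 21.1]:
    print(reading, thermostat_step(reading))
```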

As I said above, three books/articles touch on this issue (or related issues) in much better ways than I am capable of. They are:

One of Peter Cariani's articles on semiotic systems.  In this article he touches on the Peircean ideas of secondness and thirdness, stating that syntactic operations on computers only attain thirdness when a human observer interprets them.  In other words, the computations solved by the computer are not meaningful to the computer, and that is the crucial fact that separates computers from people.


The Embodied Mind by Varela, Thompson and Rosch.
I will most likely write a post devoted to this fantastic book.  It is one of the most carefully written, clear (once you get used to the authors' subtle language), and deep books I have ever read.  The amount of ground covered is astounding, and the authors give a good introduction to the incredible developments in the field of cybernetics that preceded cognitive science and foresaw many of its shortcomings by decades.


Bright Air, Brilliant Fire by Gerald Edelman.
This is a fantastic book that draws on biological findings and incorporates them into a view of cognitive science that does not ignore experience.  Edelman explains the basics of neuroscience with a clarity and logical organization that is immensely satisfying to those with very little knowledge of biology.

There are many other sources for ideas similar to those that I have touched on here. This is by no means a thorough list.  Instead, my point is to show that there is a crucial difference between computers and people that is all too often ignored, or not spoken of clearly.

Objective Knowledge

I will begin this by saying that I love Wikipedia.  I am proud to say that I loved it from the moment that I found it (although I first found WordIQ and was excited about that). 

With Wikipedia (and more recently sites like WolframAlpha), those of us who use the internet have readily available access to a huge store of knowledge that tries to be as independent of particular subjects as possible.  That is, the content of Wikipedia is supposed to be objectively verifiable, not the stuff of opinions, feelings and whatnot.
Of course, all knowledge is from the standpoint of an observer.  That the peony is the state flower of Indiana is knowledge to someone, as all knowledge is.  It cannot exist outside of someone who knows it.
That being said, there is some knowledge that is more "objective" and some that is more "subjective".  These two terms work very well in a functional sense.  For example, that Cazenovia, New York is a beautiful town without compare, and that it is a town of fewer than 7,000 people, both constitute knowledge from the standpoint of an observer.  However, I would have to admit that only the latter is objective.  That second bit of knowledge is something we can derive from the use of specially formed cultural tools (the concept of number).  These tools work in such a way that they yield knowledge that is the same from observer to observer, provided that the observers share the tools.
Wikipedia and other "knowledge sites" make use of this kind of knowledge exclusively.  It could be argued that the dominance of this kind of knowledge will lead to the downfall of the other, "softer", more subjective kinds.  I feel that in fact the reverse will happen, hopefully: the dominance of objective knowledge will lead to a more widespread realization of the nature of subjective knowledge (which is actually more like feelings than "knowledge").  In other words, the rise of objective knowledge will serve to conceptualize subjective knowledge as its foil.  I really hope this is what happens, anyway.

Self Description

In a post a few days ago, I stated that I would comment in a future post on ways to conceptualize (to make models, or maps, of) the human ability to understand ourselves.  For lack of a better way of phrasing this: from the standpoint of naturalized epistemology, how can we conceptualize self-reflection?

I'm not going to act like I know how to proceed here, but I would suggest that the work of Jesper Hoffmeyer and others holds an important key.  Hoffmeyer talks about DNA as a form of self-reference (see Signs of Meaning in the Universe, among other sources).  The DNA within an organism is about that organism.

From this perspective, can we talk about our own conscious self-reflection (or our creation of biological models of ourselves) as a higher-order reflection?  Can we inform our understanding of this reflection with what we know about DNA?  Are there structural or functional analogues between these two types of self-reflection?

THOUGHT EXPERIMENT #1: Amelia

This is a follow-up to the last post; it may make the most sense to read that one first, but I will make every effort to let this one stand on its own.

I should note that I'm always annoyed by thought experiments that don't seem plausible.  The "zombie" thought experiment in particular annoys me.  I just don't think that a point can be made if the premise of the thought experiment is not itself possible.  This thought experiment is potentially implausible as well.  However, I think it's acceptable in ways that the "zombie" was not.

The thought experiment is this: imagine an incredibly intelligent person, Amelia, who has grown up completely isolated from any understanding of the shape of her body and has no familiarity with models of life as we know it, yet who is conscious and intellectually endowed to an amazing extent.

Suppose that Amelia happens upon a detailed description of a human body.  This description would include everything: how all of the interrelated parts work and sustain themselves, and so on.  The diagram would have enough detail that IF someone had the intellectual capacity (and Amelia DOES), they could look at it and understand how the human body worked, from the molecular level up to the highest levels of organization.  Of course, the diagram would also have to include very detailed information about the environment in which the creature lived.  In fact, it's unclear what (if anything) about the universe could be bracketed off as irrelevant to understanding the diagram.

Amelia would read the diagram and be able to understand precisely how humans could survive, given their physiological makeup in relation to the environment.  She would understand how the autonomic nervous system controlled the visceral functions of the body, and how the somatic nervous system worked with the cortex to enable actions in the world.  Yet she would have no understanding that this was a model of something like her own body, a body she has no understanding of (assume that it has been hidden from her view somehow).

My question is: what would emerge from this understanding?  It seems to me that even if Amelia started her study of this model with no understanding of life (I know how ridiculous this sounds), fully understanding how the human in the model functioned and emerged in phylogeny and ontogeny would lead to that understanding; it would lead to grasping the meaningfulness of human life.  It would lead to understanding how certain visual patterns on the retina would cause a reaction, and if that reaction was understood, then some semblance of it as a MEANINGFUL reaction would have to be understood.

I know that I have given no definition of "meaningful".  I'm using the term as a way of touching that quality that the different parts of our experience have: the way that, to a drinker, a bottle of gin is different from a bottle of vodka, even when the drinker knows nothing of the chemical differences between the two substances.

Overall, the question that I aim to get at with this thought experiment is this:  If, from looking at the model, Amelia could understand how it worked as a dynamic system, would she have to know that the system generated experience?

OK, I am not going to focus here on creating a model of reflection; that's on the back burner for now.

Instead, I want to comment on one of the underlying sources of motivation for this blog: the relation between the physiological structure of the body and lived, embodied experience.

Here I'm just going to skip any kind of introduction and dive right into the heart of the matter: What is the relation between conscious experience and the (dynamic) structure of the body?

There certainly IS a relation; that much is undeniable.  It's not as if the model of our body bears no decipherable relation to our conscious experience.  On the contrary, our model of the body has eyes that relate to our vision, ears that relate to our hearing, etc.  Furthermore, these things are not just isolated, but interconnected by nerve cells in the brain.  This last bit can account for the coordination between our different senses.  Already it's clear that there is a relation between our body and our experience, a relation that makes sense in at least some respects.

All of this is slightly tongue in cheek, of course.  Is it even reasonable to assume that an embodied species could make a model of its body that did not meaningfully correspond to experience?  How could a model that didn't correspond even come into existence?  Is it possible to imagine a creature that had a body, yet could not make even the slightest steps toward understanding the relation between its body and its experience?

Before things get out of hand, we should take a moment to ask ourselves: does our model of our body make sense of our experience to any real extent?  Take vision, for instance.  Our model makes sense of vision to the extent that it models our eyes as light-receptive organs.  To that end, the model helps us understand vision as something that results from the eyes' sensitivity to light and their connection to the rest of the nervous system.  Yet at first glimpse that says nothing of experience.  Rather, it commits the homuncular fallacy: we accept a model of how the eyes work that just pushes the problem of visual experience further back into the brain.

The incredibly ambitious point that I'm getting to is this: what kind of model could possibly account for experience?  I mean a model that one could read and comprehend relatively easily, and that would make lived experience make sense in much the same way that evolutionary adaptation makes sense of life on Earth (though not necessarily of lived experience).

Already I've shown the insufficiency of explaining, e.g., visual experience by focusing on just the visual system.  That approach could generate a research program that studied the visual system in great detail, and the end result would be an account of a biological mechanism receptive to changing light patterns, such that the patterns received on a sensor (the retina) are somehow deconstructed and give way to patterns of neuron firing.  That tells us a lot about the design of an optical sensor.  It tells us nothing about visual experience (well, maybe something about how we may come to connect meaning to specific things in our visual field).

Still, it's obvious what's missing: we can explain how we are able to perform certain visual tasks (how we recognize, categorize, act, and so on).  In fact, all life processes viewed from the third-person perspective are comprehensible, certainly not in detail, but at least in principle.  But there's always that crucial missing ingredient: experience.  What is it about all of these things that gives rise to experience?  In the next post I will attempt to address this with a thought experiment...