Saturday, July 02, 2005
A Note on Ostrich Psychology
"We will laugh at the extraordinary stupidity of the crowd, my Kepler. What do you say to the main philosophers of our school, who, with the stubbornness of vipers, never wanted to see the planets, the moon or the telescope although I offered them a thousand times to show them the planets and the moon. Really, as some have shut their ears, these have shut their eyes towards the light of truth. This is an awful thing, but it does not astonish me. This sort of person thinks that philosophy is a book like the Aeneid or Odyssey and that one has to search for truth not in the world of nature, but in the comparisons of texts."Carola Baumgardt, Johannes Kepler; Life and Letters, Philosophical Library, New York, 1951, p.86
Before the telescope, astronomy was all too easy, and truth shadowed every metaphor. Look at the moving stars, and one could conjure the earth moving through the firmament on the back of a turtle, the sun and stars orbiting a stationary earth, or even more exotic cycles that moved the sun about the earth and the planets about the sun. Copernicus offered a final rejoinder, posing the idea of the earth and planets orbiting the sun in perfect circles. Of course, to make the orbits work out right, the planets were hypothesized to make loop-the-loops in their courses. A complex clockwork to be sure, but a workable contraption nonetheless.
These versions of celestial mechanics, as respectively worked out by the Mayans, Claudius Ptolemy, Tycho Brahe, and Nicolaus Copernicus, were mathematically rigorous and made predictions that were accurate and sound, but were based on realities that were dispelled by a simple telescope. Or at least so it seemed to Galileo Galilei. But what Galileo did not recognize is that reality is a multi-layered thing. Increase the discerning power of your instruments and your mathematics, and time and motion become relative (as in Einsteinian relativity); increase them further, and they become indeterminate, ceasing even to exist (as in quantum physics).
So what is reality? It is all dependent upon a perspective that is determined by the tools you use and the problems you wish to pose. If your problems are resolved by God’s revelation, you don’t have to look any further, and you can safely turn up your nose at Galileo’s offer. Realities are all over the place, so why not choose the ones that fit you best, and leave the others to ivory-towered professors?
It is because understanding demands it.
Understanding is nothing more than having a proper explanation for a phenomenon. But often, one explanation won’t do. For example, I can explain a headache as a throbbing sensation, as an inflammation of cranial blood vessels, or as the activity of neurotransmitters, blood cells, and viruses. But to understand a headache, I must be able to integrate and use all of these explanations. Through the combination of these perspectives into one explanatory scheme, one can make complex events like headaches seem simple, and most importantly, be able to develop and choose from the therapies and procedures that provide a remedy for headaches. Thus, by understanding what headaches feel like and where they come from, one can dismiss spurious causes like demonic possession and lunar gravity, and develop and employ procedures that truly bring relief.
Understanding requires the integration of perspectives that are wedded to the resolving power of the tools we use, and the language that we employ to describe what we see. Thus I can hold my hand in front of my face and subjectively describe what I see, but I can by turns use an X-ray device to see inside the hand and a microscope to see its molecular structure. No single explanation has any privileged status, since each one depends upon the questions you want resolved. More importantly, each explanation is at root metaphorical in nature, and implicates but does not require the use of more complex data languages that map in detail the workings of blood vessels and infectious agents. Thus, any layman can talk about fever symptoms, circulatory systems, and infectious agents without having to master the technical language of a specialist.
The integration of different levels of thinking, from the subjective (I hurt) to the behavioral (my tummy is acting up) to the molecular (I have a stomach virus) allows us to understand our world, and most importantly, to discount extraneous explanatory metaphors that would otherwise be accepted if only one level was considered. Because I can employ simple metaphorical explanations that describe and integrate every level of a phenomenon, I can dismiss explanations that have no similar integrative or semantic power. Thus, by grounding a concept to lower level observations, the intellectual cobwebs of universal turtles, looping suns, and evil demons are swept away.
The progress of science and the import of scientific revolution are critically dependent upon the development of new tools, and their accompanying methodologies, that improve our perceptual resolution of the facts of the world. The physical sciences saw the first revolution because their tools came easily, and so a little tube affixed with polished glass readily unveiled the secrets of the universe. Equally so, the successive revolutions in the biological and physical sciences from Darwin to Einstein came about in tandem with the development of new tools that laid bare the actual physical processes that underscored life, matter, and the realities of time and space.
However, the workings of the human mind are not so transparent. Indeed, until recently, the mind has remained obstinate and impervious to the probing instrumentalities of science. In the last twenty years, with the development of the instrumental counterparts to Galileo’s telescope, the nascent science of cognitive neuro-psychology has taken flight. The newfound ability to observe and measure the actual neural processes that underlie behavior has marked the development of new theories of learning, emotion, and other cognitive processes that are thoroughly rooted in a clear vision of how the brain actually works. This movement to neurological realism is in its incipient stages, but ironically it lends not complexity but simplicity to the science of psychology. A consensus emerging in cognitive neuroscience is that complex behavioral phenomena such as emotion, learning, perception, and problem solving derive from the concerted activity of very simple neural processes. The ability to derive simplicity from complexity is of course the hallmark of the successive revolutions in science. From the invention of simple principles (E=mc²) to the employment of equally simple metaphorical schemes (survival of the fittest), the immense diversity of nature has been mapped to and derived from simple and basic principles.
Like the obstinate prelates that Galileo scorned, most modern psychologists either cannot believe or simply ignore the possibility that such concepts as consciousness, emotion, value, and virtue itself can effortlessly emerge from rudimentary processes. Just as in the geocentric universe of old, they would have a true understanding of the human mind evade our grasp because of the seeming complexity and indeterminacy of human behavior, and be not worth our grasp because such an understanding would be dehumanizing, or at least unnecessary. So they don’t consider it, refuse to debate it, and make neuro-psychology into an arcane subset of psychology that cannot correct for and inform their own theories that mount the human mind on universal turtles. So spurious mental processes like intrinsic motivation, psychic energy, flow, memetics, etc. are continuously foisted upon an uninformed public as truth, when they have no more claim to reality than the myriad variations of the cosmic clockwork that kept prelates and astrologers so busy until Galileo rudely intervened.
Explanations must ultimately be anchored not to the consequences of behavior, but to the multivariate facts of behavior. Just as predicting where and when an eclipse may occur does not permit you to explain how eclipses occur, so too does demonstrating the efficacy of a psychotherapeutic or other procedure fail to lead inexorably to a determination of the actual mental processes that underlie behavior. All that prediction can give you is a correcting influence on the scope of processes that you can no more than infer. But like theories of the solar system, inferred processes can multiply in an endless retrofit to the surface facts at hand, and create a psychobabble that fills countless journal articles and pop psychology books, but is ultimately more deserving of ridicule than respect.
The inability to recognize the facts of behavior as represented by its different metaphorical representations, from the molar (subjective experience) to the molecular (neural activity), is destructive to understanding, because understanding requires an integration of the different ways we register what we see. Without it, psychology is condemned to represent little more than a Balkanized squabbling of children, with psychologists of every persuasion and every school talking past one another rather than to one another. By trivializing or refusing to consider the perspectives enabled by different methodologies, whether they represent a subjective interview, an operant contingency, or a PET scan of the brain, psychology will forever represent an inchoate muddle that obfuscates rather than explains the human mind. Otherwise, you might as well have your head in the sand.
Or putting these arguments in another way:
A Note on Level Confusion
Consider this proposition: water is composed of two parts hydrogen, one part oxygen, and one part wetness. What’s wrong here? Nothing really, unless you are a chemistry student and you want to make a prediction about the chemical properties of water. Wetness of course is an aspect of water. The universally accepted metaphors that describe the subjective aspects of a phenomenon, or qualia, are valid descriptions. Indeed, we couldn’t make sense of our world unless we accepted the facts of color, smell, pain and pleasure, and consciousness. However, although the personal language of subjective qualia helps you find water fountains, avoid rainstorms, and so forth, if you include that language in a chemistry experiment, you are bound to fail.
Language solves problems, but mixing languages makes problems. Thus, if you mix H2O and wetness or H2O and quantum mechanics, simple problems in chemistry become either unsolvable or intractable. By mixing two sets of symbols that are individually maximized to solve separate sets of problems, you create confusion rather than clarity. Scientists solve this problem by insisting on one data language per subject matter. Because there are many different subject matters in science, scientists study them better through their use of specialized languages that can best fit the different classes of problems that each one presents. Science indeed may be characterized as problem-solving using a specific symbolic language tailored to a specific set of problems.
The most essential aspect of problem solving is to choose the correct language, and the procedures entailed by that language, that fit the problem at hand. Thus I use the subjective qualia of a rainy day to determine whether to bring an umbrella to work, the language of physics to determine things such as water flow and pressure, or the language of chemistry to determine how water will mix with oil, or why it won’t.
But how do we go about choosing the right language? It all depends upon how well we understand or can explain our world. Understanding is not problem solving, but denotes merely the ability to choose the language, and its entailed method, that best fits the problem you want to have solved. By simplifying or parsing the symbolic languages of each level, and noting their real distinctions, one may easily switch between languages, and choose the one we want to use given the problems at hand. Thus if I go to a doctor, I can talk about headache pain, but switch to a language about nerves, disease agents, and the like to more effectively focus on prospective therapies.
Understanding also prevents different data languages from being commingled. Thus when someone says his headache feels like demons dancing in his head, we know implicitly that there is no underlying headache process that entails an intricate demonic sock-hop, because we already know the general neural processes that make up headaches. In effect, by knowing human physiology, this level of knowledge corrects for surplus meanings that on a higher level would have us visiting the friendly exorcist rather than the physician.
Ironically, a purely scientific pursuit entails problem solving, but it does not necessarily entail understanding. For example, a neuro-psychologist may know the neurochemistry of learning, but be totally clueless as to the subjective responses of people who interpret that neurochemistry as painful or pleasurable. Likewise, a humanistic psychologist may note the subjective reactions of people, but scarcely understand the principles of learning that produce those feelings.
Generally, it is acceptable to be ignorant of other levels if the problems that you are concerned about are the only ones that matter, and if the methodological language you use gives you an adequate level of prediction and control. However, if that language doesn’t quite provide this, it is always tempting to create inferred processes that refer to lower-level processes, and act as fudge factors that make it easy for you to defer considering more rudimentary and underlying facts. In theory creation, the postulation of inferred processes represents an intermittent commingling of data languages, or level confusion. In their best guise, they are temporary placeholders until the real processes are observed. In their worst form, however, their ephemeral reality becomes elevated to the status of a real process, with unfortunate implications for the progress of science. For example, the chemistry and physics of the 18th and 19th centuries did not have instrumentation with the resolving power to note the actual processes behind combustion and the transmission of light, so entities such as phlogiston and ether were inferred as fudge factors that made the explanations ‘work’. Of course, these variables also had the unintended aspect of short-circuiting any further inquiry into the true nature of combustion and light, since it was easy to take the further step and consider them real. Likewise, pop, humanistic, and social psychologists note that motivation does not occur because of the rational consideration of ordinary objects (e.g. cars, boats, vacations in Bermuda), hence they are quick to postulate separate processes of ‘intrinsic’ and ‘extrinsic’ motivation, and unique mental states such as ‘flow’, ‘psychic energy’, and the like. Even psychological movements that eschew the postulation of inferred processes succumb to the same problem.
For example, behaviorism maps behavior to clearly defined contingencies of reinforcement or reward, yet because contingencies map but poorly to involuntary responses or reflexes such as salivation, emotion, and the like, it becomes easy to attribute this difficulty to the existence of different learning processes called operant (or Skinnerian) and respondent (or Pavlovian) conditioning that respond to entirely different procedures.
However, contemporary research on the neuro-psychology of learning casts doubt on all of these convenient distinctions. In actuality, there are no separate neural processes that distinguish intrinsic from extrinsic motivation, or respondent from operant conditioning. Indeed, ‘learning’ derives from the activity of very simple neural processes, and the divisions between types of learning are as artificial and specious as the dancing demons in our patient’s head. Nonetheless, it’s easy to gloss over this inconvenient fact, particularly if a bit of imprecision is acceptable to your peers, not to mention a naïve audience, and if a lot of surplus terminology is a shining badge of your psychological acumen.
It is fortunate that, unlike the social sciences, practitioners of the physical sciences are not comfortable with or tolerant of imprecision. If they were, then physicists would be all too happy to imprecisely predict the motions of the solar system by putting it on the back of a cosmic turtle, or by having the sun and planets rotate about the earth. But science takes this obsession with precision even further, and happily upends well-tested theories that reach even near-perfect precision. For example, Newton’s laws predict with a great degree of accuracy the motions of the planets and the stars. However, at speeds approaching the speed of light, the predictive power of Newtonian physics breaks down, and the universe must be cast under the perspective of Einsteinian relativity.
Similarly, humanistic and social psychology use the metaphors of pain, consciousness, need, self-concept, etc. with some success to predict behavior, and behaviorists similarly tout the predictive power of a contingency-based language. Of course, the predictive power of such analysis is still somewhat at the turtle-with-the-universe-on-board stage, yet the glib answer to this complaint is that the human mind is a rather complex thing, replete with myriad levels and interactions that are more akin to a Super-Mario Nintendo game than to an orderly Copernican sky. But this is nonsense. The complexity of human behavior is not to be denied, but it is no more complex than other aspects of the human condition that are more clearly defined, such as the processes of respiration, infection, and locomotion. The true problem is that psychology is made much more complex through the common postulation of inferred processes that suggest hidden physiological processes that are completely spurious. Hence, in spite of the fact that neuro-psychology deflates the separate mental processes of intrinsic and extrinsic motivation, respondent and operant conditioning, psychic energy, and the like as the psychic soap bubbles they are, such processes continue to be generally accepted in psychology, with more of them coined each day.
In science, the true measure of a scientific revolution is not the formulation of a new and complex mapping of the ways of the world, whether it be how life evolves, the universe began, or the nature of space and time. It is rather in the creation of distinctive and simple descriptive metaphorical schemes that are mutually corrective and that can be in turn incorporated in the common vernacular. For example, the processes behind infection are as complex as the processes that underlie any psychological phenomenon, yet the metaphorical notion of microscopic infectious organisms is simple enough to be understood by a child. Because we can render the complex processes behind infection in a simple way, the metaphors we use in our subjective description of an infection (e.g. it hurts like little demons are in my head) are clearly understood as metaphors, and as unrepresentative of any real Satanic processes. So if we have headaches, we go to a doctor and get medicine, and are less inclined to seek out and burn witches who cast headache spells.
Multiple metaphorical ways of looking at the world do not make for more complexity, but less, and indirectly provide greater utility and predictive power through their ability to exclude spurious entities that, like phlogiston and ether, divert our attention from true causes. The countless arguments in psychology about the superiority of one research method compared to another miss the point of what true understanding entails. The question is not a matter of the superiority of reductionism vs. ‘holism’ or some other antagonism between research schemes, but rather how such methods may correct for the unfortunate directions their language may take them. We are not just atoms, nerve impulses, or feelings, but are all of them. To be a science, psychology must accept and integrate the pluralistic methods that in toto allow us to perceive the world, and by doing so, allow us to understand it.
Cognitive Linguistics and Contemporary Psychology
The astute reader will note that the above arguments are not entirely original, yet they nonetheless reflect an entirely new way of conceptualizing the scientific method. This approach, which is derived from a cognitive linguistic analysis of the metaphorical structure of language, recognizes that language itself is not value neutral, and can skew interpretative results to meet the nonconscious implications of the language itself. This can only be remedied by the ability to shift between and integrate different methodologies and perspectives that correct for the disparate conclusions suggested by separate linguistic frameworks. Cognitive linguistics, as a subset of ‘2nd generation cognitive science’, recognizes that philosophy must begin with an empirically responsible cognitive science based on the assumptions of convergent evidence from different methodologies. This position, as exemplified by the work of the cognitive linguist George Lakoff and his colleagues, is sadly not shared in the mono-methodological universe that governs most psychology research. This has resulted in psychologists commonly talking past one another rather than to one another, the tragic result being a babel of incommensurate theories that render psychology into a science of almost stupefying complexity and endless redundancy.
In 1999, Lakoff’s major work on cognitive linguistics, ‘Philosophy in the Flesh’ (co-authored with Mark Johnson), laid bare the inherent inconsistencies and shortcomings of the linguistic roots of the philosophical and psychological traditions that have underscored Western thinking for over two thousand years. By demonstrating the common metaphorical and methodological commitments of separate schools of philosophical thought, Lakoff demonstrated that an effective criticism of any philosophical construct can be made prior to an examination of its theoretical commitments. In other words, the success of any scientific inquiry is entailed by the convergent methodologies it engages or does not engage, and not through its theoretical results alone. But of course, Lakoff can speak for himself:
"In applying a method, we need to be as sure as we can that the method itself does not either determine the outcome in advance of the empirical inquiry or artificially skew it. A common method in achieving this…. is to seek converging evidence using the broadest available range of differing methodologies. Ideally, the skewing effects of any one method will be canceled out by the other methods. The more sources of evidence we have, the more likely this is to happen. Where one has five to ten sources of converging evidence, the probability of any particular methodological assumption skewing the results falls considerably."
"Thus certain commitments are required for an empirically responsible inquiry. They include:
The Cognitive Reality Commitment: An adequate theory of concepts and reason must provide an account of the mind that is cognitively and neurally realistic.
The Convergent Evidence Commitment: An adequate theory of concepts and reason must be committed to the search for converging evidence from as many sources as possible.
The Generalization and Comprehensiveness Commitment: An adequate theory must provide empirical generalizations over the widest possible range of phenomena.
We need assumptions like this to minimize the possibility that the results of the inquiry will be predetermined. The assumption that we should seek generalizations over the widest possible range of data does not guarantee that we will find any generalizations at all; nor does it determine the content of the generalizations found…. It is only when such assumptions are applied to a broad range of data of many sorts using many different convergent methodologies that these ‘results’ appear." (p. 80)
"For cognitive science to be appropriately self-critical, it must repeatedly critique its own conception of cognitive science, of empirical testing, and of scientific explanation. There is no way out of this problem, but this does not mean that every theory, method, or concept is equally good or that it is all merely a "matter of interpretation." The cognitive sciences must rely on stable converging evidence from a number of different sciences, methods, and viewpoints. Only in this way can an empirical approach minimize the problem, so well documented by Thomas Kuhn, of a scientific theory defining what counts as evidence in such a way as to guarantee the truth of the theory in advance."
"Cognitive science should be based on an appropriately self critical methodology, one that makes minimal methodological assumptions that do not determine a priori the details of any particular analysis. Only if this condition is met can a cognitive science of philosophy be appropriately critical of philosophical theories." (P342)