Saturday, October 9, 2010

The Amoral and the Immoral

One night recently, I was watching “The Daily Show with Jon Stewart” and was introduced to a writer named Sam Harris who was plugging his new book, The Moral Landscape. The discussion indicated that the topic of the book is Mr. Harris’ claim that science can be used to produce or discover moral facts or principles, and should replace or supersede religion as a source of moral understanding.

My immediate response was, “Bunkum!” or some less nice words to that effect.

I haven’t read the book, nor have I read any of Mr. Harris’ previous works, though I have learned by visiting his website that he is one of those people who believe religion is bad, science is good, and the two are opposed, and that he has received praise from, among others, Richard Dawkins.

From what I read on Mr. Harris’ site, it appears that he agrees with Dr. Dawkins that religion is an unnecessary excrescence on human history and we would all be much better off if it would just go away. (Dawkins, of course, has expressed the opinion (in The God Delusion) that religious belief only persists because of bad parenting, and if “we” could just stop people from propagating these erroneous beliefs, religion would indeed just go away, and we could all go forward with an ideal life in a science-ruled world. More about that later.)

I also learned from a column that Mr. Harris wrote for the Huffington Post that he is dismayed by his observation that many scientists agree with many religious believers (including me) in concluding that science simply is not equipped to deal with moral principles: It can study what people say and do about morality, but it can’t say what is or is not truly moral.

This is in fact the most serious roadblock that the pro-science crowd has found to its agenda of eliminating religious belief and basing all social, political and personal life on scientific principles. My impression – and I must reiterate that it’s based on the one interview and a fairly speedy reading of the online sources – is that Mr. Harris has written his new book precisely in order to try to knock down this obstacle and clear the way for the Golden Age of Scientific Rule.

I don’t plan to read the book itself because I think I have better things to do with my time than waste it reading something I already know is an exercise in futility. That may sound narrow-minded, but in fact it’s based on a thoroughly rational appraisal of the prospects. As it happens, there’s an airtight and surprisingly simple argument:

1. “Nature,” by which I mean the aggregate of physical data that modern science restricts itself to studying, is inherently amoral. There is no moral good or bad in the physical cause-and-effect processes that materialist scientists insist are the sum total of what the universe is. Ultimately, it’s all random.

2. “Rationality,” by which I mean in this instance the use of more-or-less-formal logic, is also inherently amoral. Logical analysis says nothing about whether a conclusion is morally good or bad, only whether that conclusion is based on a valid argument.

3. “Science,” then, if defined as the application of rationality to natural phenomena, is inherently amoral: Its objects of study and its manner of study offer neither moral content nor moral analysis  (AIAO: Amorality In, Amorality Out).

Thus, if a scientist is proposing moral principles or advocating a course of action as morally positive, he or she must be basing this proposition or advocacy on something other than science. In practice, of course, the moral principle generally is inserted into the discourse at the beginning as an assumption. (Harris seems to be assuming that a scientific morality would somehow be “more moral” than one based on religion, because science is better than religion as an understanding of reality.)

Canadian philosopher Charles Taylor has addressed these issues in considerable detail in Sources of the Self and has shown that the adherents of the atheistic/materialistic/secular-humanistic worldview(s) are unable to account, using their own logic, for the moral principles they espouse. Their moral imperatives exist as part of our Western cultural legacy, having entered the cultural stream from religious sources, but are treated as “self-evident” because the proponents of this view can't allow themselves to acknowledge the original religious source.

In general, what the atheist/secularist crowd espouses are the “Enlightenment” values of individual liberty and humanitarianism. And certainly there’s nothing wrong with them as values. But based on their own materialist-rationalist principles, the atheists-secularists can’t explain why these things are worth valuing.

Absent such an explanation, it becomes easy for some people to conclude that they aren’t truly worth valuing. This then allows them to proceed to behave with disregard for others’ liberty or well-being, as in Social Darwinism, Objectivism, Straussianism, etc.

Harris’ approach apparently relies at least in part on “human flourishing” as a yardstick of value, but Taylor has already shown the inadequacy — indeed, the danger — of that standard.

Obviously, how one defines “flourishing” has a major effect on what one wants to propose as a moral good. If “flourishing” means mere physical well-being, for instance, the argument must tend toward the kind of hedonistic, consumeristic society we already live in, and which so many of us find objectionable on various levels, including the environmental and the spiritual.

Which raises another objection: Obviously, if one automatically rejects religion as a moral source, one is rejecting spirituality as a moral value. So any moral system one constructs on that basis will offer no satisfaction for anyone who believes in the reality of spiritual rewards. And it will automatically denigrate any system or society that does accord value to spirituality, while overrating a system or society that ignores spiritual value or meaning and looks instead at physical well-being as a standard.

Of course, the pro-science crowd delights in detailing the many abuses that have been committed in the name of religion, and there certainly is no denying that terrible abuses have occurred, and continue to occur. But the advocates of science as a standard are far less inclined to take note of the rather unencouraging track record of science and scientists on moral issues in the relatively short time they’ve had the upper hand.

Individuals pursuing an amorally conceived science have, notoriously, placed their work at the disposal of morally dubious governments such as those of Nazi Germany and the USSR. (And one might note that the USSR was ruled according to an atheistic-materialistic ideology, which didn’t prevent it from killing as many as 60 million people (Solzhenitsyn’s estimate) in programs of collectivization, forced migration and forced labor.)

Then there are the morally dubious projects of governments regarded in the West as more legitimate, such as the recently revealed deliberate infection of 696 men and women in Guatemala with syphilis by U.S. researchers in the 1940s. Add that one to the Tuskegee experiments, the eugenics projects in which women were sterilized based upon their race and class, the CIA experiments in mind control using LSD and God knows what else, and let us not forget the atomic bomb, poison gas and biological warfare.

None of these things could have proceeded without the willing participation of scientists. What it all ultimately demonstrates is the obvious fact that the amoral includes the immoral.

No doubt, the researchers in all these projects argued that their work helped save American lives, thus serving a “greater good.” This is precisely why utilitarianism is worthless as a moral source: In the pursuit of the “greatest good for the greatest number,” everything depends on who decides what the “greatest good” is and how they decide it, and how much evil they’re willing to inflict on the lesser number. A less "scientific" view of morality might propose that inflicting horrible suffering on even one person is wrong.

Scientists go where the funding is, of course. When the funding is provided by the government, they do the work the government wants, such as creating weapons of mass destruction. Today they mostly are placing their work at the disposal of profit-seeking corporations, sometimes because that’s where the government funding (i.e., your tax dollars and mine) is being funneled. That’s one reason why the pharmaceutical industry has grown so huge.

And here is an example of amorality serving amorality. The “science” of economics — according to some of its practitioners, generally those who are viewed most favorably by large corporations — informs us that one reason governments must not try to regulate business is that doing so injects moral considerations into markets that will “flourish” best by operating unimpeded according to “nature.”

As Taylor’s work shows, modern science and the worldviews it has most strongly influenced are geared toward the control and exploitation of Nature, including human nature. And there have been many people in the past couple of centuries who sincerely believed they were part of a movement toward the overall improvement of human life through that type of manipulation. And improvements obviously have been made by some measurements, though there also have been obvious losses.

But for every selfless philanthropist or courageous existentialist (à la Camus’ Dr. Rieux, admittedly a fictional character), there have been multitudes of social-Darwinist, para-Nietzschean scoundrels and bullies whose only interest in science is determining how it can help them increase their wealth and power.

Time and again, the resistance to such people and their bogus ideologies has come from people motivated by religious belief — because only such belief allows us to arrive at a point of view that sees something better or higher than the things of this physical world.

I don’t believe religion, or religious aspiration, can be eradicated. Unlike Dr. Dawkins, I don’t think it’s a purely cultural-educational phenomenon. I think it’s a basic constituent of human nature, because the divine is a basic formative and ordering principle of reality.

But it does worry me that there are people who believe it can and ought to be eradicated, people who are involved in creating drugs and machines that can do great harm to our minds and souls, and who have considerable clout with our lawmakers and sociocultural opinion-shapers.

In the latter days of the Soviet Union, the authorities found it expedient to classify dissidents as psychologically aberrant rather than politically unorthodox, and to confine them in mental hospitals instead of labor camps. It seems to me that the biggest difference between here and there, now and then, is that in the United States we’re letting ourselves be persuaded into medicating ourselves into irrelevance, into letting “the system” decide what’s best for everyone.

When we live in a world where resistance to abuse or stupidity can be “diagnosed” as “oppositional defiant disorder,” we really need to think carefully about what we value and how we can know what is truly good or evil.

Thursday, September 2, 2010

2B1

I’ve written a lot in this blog about my belief in the fundamental connectedness of people, of living beings in general, of things in general. And I suspect it has been a waste of time. There are only two likely reactions anyone might have to this notion at this point in history: “Duh, who didn’t know that?” or “Are you insane?”

If you look at the world around us right now, it certainly doesn’t look like what one might call “an organic whole.” The level of social fragmentation and conflict appears to be historically high and increasing, as does the level of conflict between human beings and Nature. No one seems to be able to agree about anything, especially in reference to how we might solve any of these problems – we can’t even agree what the problems are – but everyone seems to be ready to fight to the death to push the solution they like. It’s a situation I’ve taken to summing up like this: Where there’s a will, there’s a won’t.

Philosophically, theologically, ecologically, there’s widespread acknowledgment that everyone and everything is interconnected; that, indeed, all is one. But there’s also widespread antipathy toward that idea, widespread efforts to divide and conquer, to impose some form of absolutism or exclusivism, which means the conversion or eradication of everyone who believes in anything else: My way or the highway.

Even among people who say they believe in the kinship of all humans, the unity of existence, you don’t find many who behave accordingly. On the contrary, mostly they’re just promoting another absolutist/exclusivist ideology and contributing to the general fragmentation.

Now, if I suggest that the real solution to this problem involves each person looking inward and disengaging from mass culture and mass thinking, it might seem as if I’m promoting an even more intense degree of disintegration. After all, everyone else seems to think the answer is for everyone to unite, to join up, to enlist in some movement or other. But that’s just an invitation to choose sides in the war of exclusivisms.

Real unity begins at home, so to speak. People who are fragmented inwardly cannot bring about any kind of world except one that is likewise fragmented. Conversely (contrapositively, actually), a unified world can be brought about only by people who are personally unified.

This is, of course, the overall message of Plato’s Republic (see esp. 443d-444a), and it is a theme that has remained constant in the Western tradition from that time to the present. Plotinus, for example, reiterates:
“Know Thyself” is said to those who, because of their selves’ multiplicity, have the business of counting themselves up and learning that they do not know all of the number and kind of things they are, or do not know any one of them, not what their ruling principle is or by what they are themselves. (Enneads VI.7.41. Trans. A.H. Armstrong, Loeb Classical Library/ Harvard University Press.)
The message remains fundamental right through the Middle Ages to the Renaissance, as evidenced by a statement of the alchemist Gerhard Dorn (quoted several times by Jung): “Thou wilt never make from others the One that thou seekest, except there first be made one thing of thyself.”

And of course it’s a basic principle in the synthesis offered in the 20th century by Gurdjieff and Ouspensky: “First of all, what man must know is that he is not one; he is many. He has not one permanent and unchangeable ‘I’ or Ego. He is always different.” (P.D. Ouspensky, The Psychology of Man’s Possible Evolution, First Lecture.)

The tradition is, of course, full of advice and techniques for the individual to attain self-unification, but the overall idea is presented beautifully in my favorite passage from Plato’s Phaedo:
Now we have also been saying for a long time, have we not, that, when the soul makes use of the body for any inquiry, either through seeing or hearing or any of the other senses — for inquiry through the body means inquiry through the senses — then it is dragged by the body to things which never remain the same, and it wanders about and is confused and dizzy like a drunken man because it lays hold upon such things?
Certainly.
But when the soul inquires alone by itself, it departs into the realm of the pure, the everlasting, the immortal and the changeless, and being akin to these it dwells always with them whenever it is by itself and is not hindered, and it has rest from its wanderings and remains always the same and unchanging with the changeless, since it is in communion therewith. And this state of the soul is called wisdom. (Plato, Phaedo, 79c-d; trans. by Harold North Fowler. Available online at http://www.perseus.tufts.edu/.)
In today’s world, in which we are barraged 24 hours a day by stimuli from our immediate environment and even more from our expansive electronic environment; in which we imagine ourselves constantly “connected” with our friends, family and business associates by our wireless devices and other kinds of electrical umbilical cords; in which we turn our attention incessantly from one outrage to another, from the latest missing child report to the latest natural disaster to the latest celebrity scandal to the latest political uproar to the latest phony “reality” show development to the most recent “friend” update on our favorite social networking site – each one of our “interests” is just one more fragment of our soul torn off and sucked into the diffuse cloud that constitutes what we imagine to be our identity.

Strange as it may sound, the cure for this condition – and it is truly a sickness, of the soul – is to care less, to care about fewer things, to stop wasting our attention and our life-energy on things that don’t matter and which we can do nothing to change, and to focus on the one thing that is truly within our power to alter for the good: our own minds.

Wednesday, August 4, 2010

ΦΙΛΟΣΟΦΙΑ

I want to adopt – or probably more accurately, to instigate – a convention for the use of the word “philosophy.” What I’m proposing is to use a capital P to refer to what was discussed and practiced in ancient Greece and successively into the Roman period, and a small p for everything else.

There are, I think, good reasons for doing this. It was after all the Greeks who created and defined the term φιλοσοφια to describe what they were doing. It would be reasonable to reserve the use of the word to references to that particular set of activities and to find some other word to refer to activities that are significantly different, even if the latter are in some way derived from or imitative of the former. However, there is no generic term other than “philosophy” for the numerous schools and tendencies of thought that have arisen since the fall of Rome, so I’m proposing that we recognize the distinction by treating the word as a proper noun for Philosophy properly so-called and not so for everything else.

And there most definitely is a significant distinction between what was practiced and taught in the Hellenic and Hellenistic schools of Philosophy and the so-called philosophy that came to be practiced and taught after and outside them. The two began to diverge even during the heyday of Philosophy when Christian apologists and other religion-focused groups (the so-called Gnostics and Hermetists or Hermeticists, for example) began to appropriate ideas, terms and practices from the Philosophical schools in an attempt to give themselves intellectual respectability.

This disingenuous tendency is apparent already in the writings of Philo of Alexandria, whose project was to allegorize the Hebrew scriptures into an exposition of Platonic doctrine and Moses into a philosopher, and then to claim priority and superiority for the Mosaic writings. The Gospel of John goes Philo one better by rendering the narrative of Jesus’ life as an exemplification of philosophical theology that trumps both the Philosophers and the Jews.

By the third century, this project was in full swing in Alexandria, where Clement and Origen both embraced Philosophy with both arms. If one accepts their self-explanation, Philosophy was true but incomplete, and the Christian revelation was precisely what was needed to complete it. On a somewhat less tendentious reading, Christianity was indeed a “revealed” religion, with all the raw, emotional, even self-contradictory details that accompany any visionary breakthrough. It needed some intellectual reinforcement to make it palatable – indeed, to make it understandable – to a theologically sophisticated audience.

It’s from this point, and for these reasons, that the tendency grew to regard philosophy as “the handmaid of theology,” which prevailed until the Renaissance. The Christian (and later the Muslim) theologians happily pilfered or cherry-picked the findings of a millennium of hard-won Philosophical teaching to augment their own arguments, while denigrating the Philosophers themselves and ultimately shutting down their schools and sending them into exile.

When the grip of the church(es) over intellectual life finally began to ease, a large part of the legacy of the domination of theology over philosophy played out as a rejection of theology in all its forms by the newer thinkers. In particular, the new philosophers from Erasmus and Ficino forward seem to have focused exclusively on the content of philosophical discourse while ignoring or rejecting the Philosophical practices that had formed the basis of monastic life and Medieval mysticism.

That the Philosophical schools were in fact the model for Christian monasticism is at this point undeniable.

The late Pierre Hadot established with impeccable scholarship the fact that Philosophy was at least as much a “way of life” as a project of research into the principles of existence, and that the two were inseparable:

… there can be no question of denying the extraordinary ability of the ancient philosophers to develop theoretical reflection on the most subtle problems of the theory of knowledge, logic or physics. This theoretical activity, however, must be situated within a perspective which is different from that which corresponds to the idea people usually have of philosophy. In the first place, at least since the time of Socrates, the choice of a way of life has not been located at the end of the process of philosophical activity, like a kind of accessory or appendix. On the contrary, it stands at the beginning … Philosophical discourse, then, originates in a choice of life and an existential option – not vice versa. … The [ancient] philosophical school thus corresponds, above all, to the choice of a certain way of life and existential option which demands from the individual a total change of lifestyle, a conversion of one’s entire being, and ultimately a certain desire to be and to live in a certain way. This existential option, in turn, implies a certain vision of the world, and the task of philosophical discourse will therefore be to reveal and rationally justify this existential option, as well as this representation of the world. (Pierre Hadot, What Is Ancient Philosophy? trans. Michael Chase, p. 3. The Belknap Press of Harvard University Press, Cambridge, Mass., 2002.)
In the same work, Hadot goes on to describe a general curriculum of the Philosophical schools, running from ethics to physics to psychology to theology, which can be found consistently across schools as far apart in their “theoretical” teachings as the Platonists and Stoics. He also enumerates a number of what he calls “spiritual exercises” that likewise are found across the Philosophical spectrum, such as meditating on the inevitability of death and cultivating a “cosmic” view of life. And unsurprisingly, he shows how these exercises continued to be essential in the writings and practices of the early Christian monastics.

Hadot, as befits an academic scholar, proceeds cautiously in these matters, drawing almost entirely on the writings of the philosophers themselves and rejecting suggestions that Philosophy was in any way a development from, for example, shamanism. (He does, however, accept in a sort of postscript that the Philosophical schools might have received some influence from India, specifically from yoga.) However, with no academic reputation to protect, I can range a bit further.

What I would point out first is that the writings of the “Desert Fathers,” as found in the Philokalia, for example, clearly show that the early Christian monastics’ understanding of “metaphysical anatomy,” so to speak, is precisely that of the Philosophers, and in particular that of the Platonists: the lower, “irrational” soul consisting of a desiring (“appetitive”) and an emotional (“incensive”) component; the rational soul (διανοια, dianoia, or discursive, dualistic reasoning faculty); and the νους (nous, the unitive, intuitive, holistic understanding). Moreover, the goal of monastic practice is precisely the same as that of the Platonist schools: to extricate the practitioner from over-involvement in physical reality and “turn” him or her toward the “higher,” truer reality of Form and, ultimately, God.

As Plotinus states quite specifically (Ennead I.1.3), one goal of Philosophy is to turn the soul away from its desire for matter and turn it toward reason and the beyond-reason of nous. How is this done? In one important passage (VI.7.36), he writes:

We are taught about it by comparisons and negations and knowledge of the things which come from it and certain methods of ascent by degrees, but we are put on the way to it by purifications and virtues and adornings … [emphasis added]

And in another key passage, Plotinus writes:

How can one see the inconceivable beauty which stays within the holy sanctuary and does not come out where the profane may see it? Let him who can, follow and come within, and leave outside the sight of his eyes and not turn back to the bodily splendors which he saw before. … Shut your eyes, and change to, and wake, another way of seeing, which everyone has but few use. (I.6.8; emphasis added. Both translations are by A.H. Armstrong in the Loeb Classical Library edition, published by Harvard University Press.)

What we are seeing here are obviously references to some form of what we call meditation today. In Plotinus’ time, the Latin words meditatio and contemplatio referred to two different modes of mental activity, meditatio being “discursive” and dualistic, contemplatio being non-dualistic and unitive. Those terms remained standard in the Western Church until the 20th century. In Greek, the equivalent of contemplatio is θεωρια (theoria); various words and phrases (e.g., συννοια, sunnoia) refer to introspection generally and may be translated as “meditation.”

The point, now that I’ve finished with the footnotes and glosses and flourishes, is that a school of Philosophy, in its practices, bore much more resemblance to, for example, a Buddhist monastery than to a modern university’s Department of Philosophy. And not just in its practices, but equally in its goals. No matter how much lip-service modern educators may pay to the idea of turning out “well-rounded” individuals (and in fact they’re advocating that approach much less these days than they did a few decades ago), what they’re really about is preparing people to become cogs in the machinery of the modern global economy.

From that point of view, absolutely the last thing our educational system wants to do is turn people away from physical objects and desires and toward a real, heartfelt understanding of the oneness and wholeness of reality. But that’s the one thing each of us needs if we – and our world – are to be healed and whole.

Monday, July 19, 2010

A Sense of Proportion

Considering the high value we seem, as a society or culture, to accord rationality as a standard for thinking and behavior, one might expect that we have a very clear understanding of what the word means. But when I started looking into this issue a few years ago, I discovered very quickly that almost no one can give a clear account of what constitutes rationality.

For most people, rationality is one of those “I can’t define it but I know it when I see it” concepts. It floats around in our world, we hear it in various contexts, and we form impressions of it based on how we hear it being used. But no one defines it when they use it; they just assume that everyone knows what they mean, and we who hear them likewise assume that we know what they mean.

When I started asking, “What is rationality?” and “What does it mean to be rational?” naturally the first place I looked was in dictionaries. What I found there was unhelpful. In the first instance, I was told that rationality is “reason” or “reasonableness,” and of course I quickly discovered that “reason” is rationality.

I also learned that both “reasonable” and “rational” are synonyms for “sane,” and obviously that “unreasonable” and “irrational” are synonyms for “insane.” This means the stakes in the game of deciding what’s rational and what isn’t are pretty high: If I can label you and/or your ideas “irrational,” I automatically win, and you go into a padded cell.

Since no one seemed to be interested in defining rationality in a clear and precise way, I turned to the etymology of the word to see if that would offer any clues. And not surprisingly, this led me straight back to ancient philosophy.

The word “rational” obviously derives from the Latin word ratio, originally meaning “reckoning” or “calculating” but also having the same meaning as the mathematical term “ratio,” which refers to a numerical relationship. A month is one-twelfth of a year, for example: 1/12.

The Latin word, in turn, was a translation of a Greek word, because it was the Greeks who first articulated these kinds of relationships. The word that the Greeks used to name a statement about this kind of mathematical relationship is a familiar one: logos.

Modern Christians are familiar with logos because of the famous prologue to the gospel of John. But the standard translation of logos as “word” overlooks the history and wide range of meanings of this multifarious word. At the time John’s gospel was written, logos had a history of 300 years or more as a technical philosophical term. It meant, among other things, a saying or aphorism, an axiom, an account or explanation, and, most importantly for our present topic, “a proportion” – in other words, the same thing as ratio. And this is why the words “rational” and “logical” are essentially equivalent: because they both refer to proportionality.

Whether you say it in Latin, Greek or English, a ratio or a proportion is a comparison of or relationship between two things: between a month and a year, for example. And this is the root-concept of rationality: the comparison or relating of things to other things.

Under the influence of Aristotle, we have come to understand logic and rationality in terms of statements about reality. Indeed, a significant part of philosophy in the 20th century turned away from attempting to understand reality as such and focused instead on the structure and coherence of statements about reality. But if we look at the fundamental meaning of rationality and logic, we can see that these terms need not apply only to what we say about reality, to “well-formed formulas” about the universe.

On the contrary, any system of comparing and ranking things is, by definition, rational or logical. For example, we can judge our sensory experiences by how pleasant or unpleasant we find them: Getting laid is more fun than a sharp stick in the eye. Or we can rate and rank experiences according to how they affect us emotionally: Praise feels better than criticism.

What we call rationality today, however, focuses exclusively on the kind of verbal formulations I mentioned above. This approach compares statements about reality with each other and ignores the kind of experiential logic we obtain from perceptions and emotions. In general, it labels personal experience as “too subjective” to be worth considering.

I’ve just described, from one point of view, three of the four “psychological types” defined by C.G. Jung: the sensing, feeling and thinking types. Anyone who has taken the Myers-Briggs Type Indicator will have some familiarity with these notions: Some people approach the world primarily through their senses, some through their emotions and some through their verbalized thinking processes.

Anyone who has studied the Jungian types or the Myers-Briggs typology derived from Jung will understand that differences of type can lead to all sorts of misunderstandings and miscommunication. Most obviously in our society, people who overvalue verbalized logic tend to dismiss the “lower” kinds of thinking that are based on sensation or emotion. People who lead with their brains, so to speak, are frequently contemptuous of those who lead with their perceptions or emotions.

But those “intellectuals” are also the ones who are most likely to be surprised when, for example, their spouses desert them or their children hate them because of their emotional sterility, their lack of empathy, their insistence on principles over relationships, or just their lack of a sense of fun.

It’s the hegemony of the “thinking” type, of course, that has put our society in its present position where any plausible-sounding argument must be given consideration, no matter how wrong it feels on other levels.

Jung’s types, I think, correspond quite neatly with the ancient Greek – in particular the Platonist – understanding of the inner human being. Plato and his followers believed that we have an “irrational soul” – consisting of an “appetitive” (sense-desiring) and an “incensive” (emotionally motivating) part – and a “rational soul” focused on the logos. (The Greek word for the reasoning faculty was dianoia, literally “dual mind,” which highlights their understanding of the fundamentally dualistic nature of rationality.)

The fourth of Jung’s types is the “intuitive.” This is a word that is subject to serious misunderstandings, not least because there is a sort of industry that has cropped up in recent years that purports to teach people how to use their intuition. Jung defined intuition as the propensity to understand the wholeness of a situation all at once, without analysis. It’s not “gut feeling,” which is more like the sensing function, nor is it that vague stomach-turning feeling that if I do this, someone who’s important to me might disapprove; that’s emotion.

Intuition, rather, is Jung’s version of what the ancient philosophers called the nous, another untranslatable term. To the ancient philosophers, however, it’s clear that this was the “highest” part of the human being, the direct link to the divine. Unlike the “rational soul,” which must analyze things step-by-step and part-by-part, the nous grasps the whole as a whole, non-dualistically.

And this is the great error of rationalism: It ignores the importance or the meaning or even the existence of the wholeness of anything and everything. And by denying the validity of other kinds of understanding, it obstructs our wholeness as humans. A whole human being has access to all the resources of the soul and spirit, from sense-perception to emotional judgment to verbal analysis to that mysterious opening through which inexplicable insight flows. Closing off any of these ways of understanding reality is an act of self-amputation.

Saturday, July 17, 2010

Name-Calling

Are you rational? Am I? Is anyone? Have you made judgments about who or what in our society represents rationality or its opposite?

These are important questions because of the high value we place on rationality in modern society. Indeed, if we can label something as “irrational,” we end the discussion: If you’re irrational, or what you believe is irrational, I can safely disregard anything you say; I don’t have to make any further argument.

This attitude reflects what I’ve referred to as “hyper-rationalism,” by which I mean the belief that the only way to understand anything in an accurate way is by means of rationality, whatever that is. It’s an attitude that has been with us for a fairly long time now, arising in the late 17th century and really gaining the upper hand in the 18th – the so-called “Enlightenment” or “Age of Reason.”

Those names reflect the widespread self-understanding of the noted thinkers of those times, their belief that they had advanced immeasurably from their predecessors of only a few years before: the religious “enthusiasts” and “fanatics” who had made the Reformation such a vicious, bloody mess. (The novel Q by “Luther Blissett” conveys a vivid impression of the violent craziness of the period, despite having been written by a committee.)

But in more recent times, we’ve seen how the application of an insistent rationalism can produce results that are equally or even more catastrophic; for example, in the Soviet Union, or in the whoring of science in general to political (think atom bomb) or economic (think psychological pharmaceuticals) purposes and interests.

We continue, however, to regard rationality as the supreme standard for judging ideas, worldviews, lifestyles; everything, in fact. I’m not convinced we really believe in it at this point, but we’ve found it useful for a long time as a justification for imperialism, both domestic and foreign, so it’s a bit hard to let it go.

What I mean by that is the claim that northern Europeans and their descendants in North America are the most rational people on Earth (which is justified ipso facto by the fact that we’ve succeeded in gaining control over almost everyone else), while all those other, darker-skinned or intellectually pre-modern (i.e., religious) people are “emotional” or “politically volatile” or just plain “backward,” so it’s incumbent upon us to guide them toward a better understanding of reality, a better approach to politics, and of course to a more efficient and remunerative exploitation of the natural resources buried under their feet.

This is precisely the attitude that led to the United States’ policies of “gunboat diplomacy” and support for “banana republics” (Latin American nations that were bribed and/or threatened into becoming nothing more than plantations for gringo landlords). We told ourselves that “those people” were incapable of creating stable societies and governments for themselves, so they needed our help; but our “help” consisted of making sure they never created stable societies or governments for themselves, because then they might be able to enforce demands for realistically adequate compensation for the labor and resources we were exploiting.

The same sort of thing has gone on in other parts of the world, of course; for example, the Middle East, where American and British oil companies (including one called British Petroleum) were interfering (and browbeating their own governments into interfering) with the political process in such countries as Iran and Iraq. The average American may not recognize the name Mossadegh any more than Allende, but the people of Iran and Chile haven’t forgotten.

So now we hear Americans asking, “Why do they hate us?” And the answer we get from the mainstream media and the corporate-owned talking heads who pose as experts is simply, “Because they are irrational, backward people.” In fact, a rational examination of the past behavior of the United States and the United Kingdom is likely to provoke distrust at the very least.

I’m quite fed up, frankly, with the tossing around of “rational” and “irrational” as labels, without deeper examination of such claims. Beyond international politics, the “clash of civilizations,” it’s also a factor in the so-called “culture wars” within the U.S.: “New Atheists” such as Dawkins and Hitchens happily hurl the “irrational” label at all religious believers willy-nilly, in the same way political and economic hegemonists use it to label the countries they want to dominate.

There are two angles of attack against this (mis)use of the idea of rationality. One is of course to argue the specifics, i.e., that the particular idea(s) or people labeled as rational or irrational are inaccurately so labeled. The other is to argue that rationality/irrationality as a concept is wrong, or at least so seriously misunderstood as to be useless. If the latter view is correct, it presumably makes it likelier that the former criticism will also be true.

It probably will come as no surprise that I believe the latter view is indeed correct. I’m convinced that almost no one who upholds rationality as the sine qua non of belief, thought, behavior, has any clear idea of what rationality really is. So what is it?

Thursday, July 15, 2010

It's Always Now

I’ve been seeing a lot of news stories lately in which people (or politicians, if they qualify) are claiming that one thing or another is part of “God’s plan.” For example, the oil-well blowout that’s destroying the Gulf of Mexico is part of “God’s plan,” according to some. And various candidates for elected office are claiming that they’re running because it’s part of “God’s plan” for them personally.

Oddly enough, these kinds of statements are being made by self-professed Christians. I had thought Christianity was a monotheistic religion, but apparently I was wrong: According to probably the most rigorous monotheist ever, Plotinus, God doesn’t plan, and saying that He/She/It does plan is saying that God is multiple.

Instead, Plotinus says, God causes reality to exist by a timeless, eternally instantaneous, simultaneous, spontaneous sort of explosion of creative goodwill.

Frankly, the idea of God planning things is pretty silly. First, you have to imagine that God doesn’t know precisely what’s going to happen; instead, the all-knowing deity must form an intention to make something happen, then decide what is going to happen, and only then actually make it happen.

It’s only from the point of view of time- and space-limited beings (e.g., humans) that one thing appears to follow another, and thus that one thing appears to cause another. Through a kind of back-fitting, we thus imagine that an omniscient God knew ahead of time that a given phenomenon was going to be the cause of a certain effect; in other words, that God “planned” it that way.

This way of thinking posits that God has “foreknowledge” of events and thus gives rise to all the arguments about predestination and free will. But it’s actually an act of anthropomorphism: We’re imagining a God who “sees” things from a human-like perspective and needs to control, manipulate and micromanage like a power-drunk CEO.

In fact, there can be no “fore” knowledge if there’s no before or after; as I like to say, “It’s always now.”

One implication of this difference of perspective that I haven’t heard discussed much: From our time- and space-bound point of view, there’s a lot that’s “not here” or “not yet,” and this is precisely what enables humans to practice dishonesty on each other, if they’re so inclined.

For example, I could offer to sell you some shares in a gold mine, promising that there is in fact a mine where I say it is and that it will in fact produce gold when I start digging there. Or I could tell you that nasty little brown-skinned people are tunneling into your garden and planning to steal all your goodies and ravish your wife and children, and you need me to stop them.

From your time- and space-restricted perspective, you might not be able to verify what I’m saying, so you might just take my word for it based on your desires or predispositions. But from the point of view of what Meister Eckhart called the “eternal now,” everything is present. So no one can deceive God.

Plotinus and Plato (and Eckhart and lots of other people) taught that the “highest part,” so to speak, of the human being exists in that “eternal now,” but our fragmented, matter-focused way of life keeps us so distracted that we’re disconnected from it — unaware, in fact, that any such part of ourselves exists.

The whole point of real philosophy (and true religion, which is the same thing) is to transform ourselves so as to (re-)connect with that highest, timeless part, which is in fact the true self and the central unity of the self and the one part of the self capable of knowing God. So to put it bluntly, anyone who claims to know “God’s plan” doesn’t know God.

Monday, July 12, 2010

Control Thyself

I don’t know if it has always been this way (I suspect it has), but people today do seem to have a tendency to want to control the world around them by dictating what other people should do.

I’ve spent a lot of time driving, possibly more than average, and I’ve observed myself and others while doing this. We aren’t just driving, we’re also judging every other driver we encounter. Ultimately, each of us is thinking, “Why can’t everyone just drive the way I drive?” But the sad fact of the matter is that they are driving the way we drive; they’re driving along thinking, “Why can’t everyone just drive the way I drive?”

We bring the same attitude, the same self-justified view of things, to life in general: “Why can’t everyone just live the way I live?” And the problem is the same: Everyone does live the way we live: self-absorbed and thinking we know better than everyone else.

The philosophical attitude is of course somewhat different: We should examine ourselves first, and only when we’re sure we’ve gotten it all figured out (which of course, philosophically speaking, is never) should we turn our attention to telling other people what to do.

This approach tends to strike people as unrealistic, impractical, other-worldly. And those criticisms are true. The truly philosophical life contributes nothing to the growth of the economy, the advance of science and technology, the expansion of human domination over nature. On the contrary, it tends to heap scorn on the pursuit of such things, to call attention to the impermanence, and thus the emptiness, of all achievements along these lines.

This leaves philosophers open to the criticism of being anti-humanistic: Science and medicine and so on have improved the human condition immeasurably over the centuries, and surely no one can claim that there’s no value in this improvement. There’s a certain amount of exaggeration in this claim of progress, but there’s also a certain amount of truth: People do live significantly longer today than they did even a century ago, and anyone who wants to argue that this isn’t an improvement is going to have a hard time convincing anyone.

At the same time, however, we seem to have paid, and are continuing to pay, a steep price: in consumption of the Earth’s resources, destruction of the environment and deterioration in our social, political and spiritual circumstances.

There are lots of rhythms in life. In human life in particular, there’s an upbeat, an inhalation, a rising tide in our youth as we grow and go out into the world to make our mark, raise families, change the institutions and situations into which we’re born. And there’s a downbeat, an exhalation, an ebbing tide in our later years as we seek to preserve and conserve what we’ve learned and what we’ve found worthy of valuing, and to protect what we’ve acquired.

There will always be a tension between these movements, and likely a swinging of the pendulum between one and the other. What seems likely to be most harmful, most likely to render us unable to keep our social world going, is the belief that we can freeze the pendulum at some point in its swing, that we can say, “This much freedom, this much exploration, and no more.” And that applies to the carved-in-stone principles of science as much as it does to the conventions of bourgeois society or the commandments of religion.

It’s precisely at the point when we think we have it all figured out that the stuff we don’t know comes up behind us and clubs us on the back of the head.

Tuesday, July 6, 2010

The Wall and the Gate

One of the criticisms invoked against philosophers in ancient times (by Christians, for example) was that the very idea of philosophy implied that they could never reach their goal. The word “philosophy” itself suggested this: It means love of or friendship toward wisdom, not the actual possession of wisdom. From at least Socrates on, the philosophers themselves seemed to acknowledge this open-endedness, denying that they themselves were “wise.”


The famous story of the Delphic Oracle’s pronouncement on Socrates, and his interpretation of it (as portrayed by Plato), seems to support this view. On being asked who was the wisest of men, the oracle replied, “None is wiser than Socrates.” This dictum, presumably delivered by the god Apollo, took Socrates by surprise, because he was convinced that he possessed nothing that he could convince himself was real and true knowledge.

And that was precisely the point. All other men thought they knew things that were true and important and wise, but they truly knew nothing. As a result, they didn’t seek to learn, but instead were content to rest in their false certainty.

Socrates, in contrast, understood that he knew nothing true and important and wise, and continuously sought to learn whatever he could of such things. This made him, in the view of Plato and pretty much all subsequent thinkers, the epitome of a philosopher: a seeker of wisdom.

We are accustomed today to thinking of wisdom as knowledge of a certain kind, and this appears also to have been the most widespread view in ancient Greece. In Plato’s dialogues, Socrates’ most implacable opponents were the Sophists, a group of generally itinerant teachers of what we would call today public speaking, persuasion, marketing, personal presentation. Their special forte was teaching the skill of arguing both sides of an issue with equal effectiveness; in other words, how to win an argument, regardless of the truth.

The Sophists’ claim was that they taught wisdom (“sophia” in Greek), and Socrates’ (and Plato’s) counterclaim was that they taught nothing of the sort – and indeed that wisdom could not be taught. But what the philosophers offered in opposition to the Sophists was not a different version of wisdom but instead a different way of thinking about what wisdom is. And their way of thinking about wisdom was in some sense an end in itself: To think about wisdom, what it might be, how to acquire it, is better than to believe one has it.

Then as now, people mostly preferred to believe, or hope, that they could pay a Sophist and in return receive the knowledge they needed to succeed (by whatever yardstick success might be measured). The career of the Roman statesman Cicero provides an object lesson in how the single-minded pursuit of sophistic learning could in fact pay off in the accumulation of wealth and the acquisition of power. But the philosophical question is whether such a life is truly good.

The alternative the philosophers offered was a life of constant self-examination with no tangible reward, and their refusal to admit that wisdom is something definable and teachable continued to be a point of attack by their enemies even after the Sophists had faded from history. Christian apologists took up the argument: The philosophers can only “seek” wisdom, but we know we “have” wisdom because God Himself gave it to us through divine revelation – and we have the divine books to prove it, providing us with a complete and final truth. Our task thus is not to find the truth but just to understand and live according to the wisdom that has been packaged and delivered to us so neatly.

There’s a Sufi story I ran across at some point, I forget exactly where, that seems to me to have some relevance here, though I might be wrong.

Once upon a time, there was a king who ruled over a city where farmers and tradesmen and so on came to sell their goods and services.

This king was genuinely determined to be righteous and virtuous. One day, walking through the marketplace in his city, he heard the sellers of vegetables and livestock and metalware and ceramics and fortune-telling and love potions and so on touting their products and performances, and he was appalled by the dishonesty of it all: Everyone, it seemed, was exaggerating or distorting or otherwise misrepresenting what he or she had to sell.

The king went back to his palace and thought about what he had seen and heard, and he decided that he would find a way to force everyone who came into his city to tell the truth. That night, he sent his soldiers to build a gallows next to the gate by which the farmers and tradesmen entered the city.

At dawn the next day, when all the peddlers and farmers and so on were lined up at the gate to come into the city, the king stood on the wall and addressed them. “All honest men are welcome in my city,” he said, in his archaically gender-specific way. “But dishonest men are never welcome here. So to guarantee the honesty of all who enter here, I have built this gallows. If you want to come into my city, you must answer one question. If you tell the truth, you may come in and do your business. But if you lie, you will be hanged from this gallows. Now, who wants to come in?”

Naturally, everyone hesitated. But after a moment, an old man stepped forward. The king saw him and said, “All right, granddad, where are you going?”

The old man answered, “I’m going to hang on your gallows.”

As far as I know, the king may still be standing on that wall and trying to figure out what he should do with the old man, because whatever he does, he will make himself a liar.

Saturday, July 3, 2010

Those Who Can't

Quite a few years ago now, I spent some time as a college philosophy major. I realized pretty quickly that I had seriously misunderstood what that meant. I was a young man looking for answers, trying to understand the core of what life was about so that I could live accordingly. And while the great philosophers – some of them, anyway – offered those answers and attempted to explain that core, the academic approach was to regard all their ideas as if they were merely moves in some endless intellectual game: Plato says this, but Aristotle denies it, Spinoza offers this, but look at what Kant says instead – my rook to your queen’s pawn, my club to your heart.

Worse, as I later learned, the academic teachers of philosophy had for generations misrepresented the teachings of ancient philosophy. Ironically, this misrepresentation arose from the writings of scholars of the 17th and 18th centuries who truly admired Plato and Aristotle, but in an idiosyncratic and conditional way: extolling the philosophers as the originators of rationalism, but condemning them for failing to maintain the kind of hyper-rationalism they themselves wanted to practice and spread.

Another element not to be disregarded was the tendency – and not just among academics – to believe that “newer” automatically means “better.” For teachers of philosophy, this translates into the belief that the speculations of Hegel or Wittgenstein or Heidegger or Foucault must be ever more complete, more scientific, more true, than those of Plato, Aristotle or Epicurus, because we have built upon, we have surpassed, their groping attempts to explain reality. In a word, philosophy, like everything else, has “evolved.”

Finally, and most damagingly, we have the triumph of the belief that “learning” is a noun, not a verb; that knowledge is a sort of commodity to be acquired and traded in measurable chunks. It’s likely that this view was inevitable once the bureaucratization of education began within industrialized society, because it enables the creation of standardized curricula and lesson plans and all the rest of the apparatus required to turn schools into factories (sorry, “manufacturing plants”). What this meant for philosophy departments, as for all others, is that the professors taught the curriculum – in other words, the entrenched misunderstandings, misreadings, biases, tendencies – and not the subject.

The subject of philosophy is, of course, wisdom; or rather, the seeking of wisdom. Not surprisingly, professors of philosophy have for several centuries shied away from attempting to teach such things, perhaps in large part because they are so open-ended. What they teach is not philosophy, how to “do philosophy,” how to be a philosopher, but what different philosophers have said and how to quibble with it. The measure of how far the professors are removed from the actual doing of philosophy is the fact that while every one of the teachers and textbooks I encountered in my time as a philosophy major happily defined the word “philosophy” as “love of wisdom,” not one ever tried to explain what “wisdom” might be.

What I took away from my experience as a philosophy major was the belief that Western philosophy had absolutely nothing to offer to a seeker of the kind of core understanding of life I mentioned earlier. It took me a lot of years and a long roundabout trip through Eastern religion and Western occultism and mysticism to realize that I had been completely misled. When I finally returned to ancient Western philosophy, to the philosophers themselves and not the professors, I discovered that what I had been looking for in the first place was always there.

Wednesday, June 23, 2010

A Dog's Life

One summer evening a few years ago, I was sitting in my backyard unwinding with a bottle of Warsteiner after a day’s work when something struck me that has stuck with me ever since. Our backyard at the time was a rectangle surrounded by a chain-link fence, and as I sat there I could see through to the other backyards, which were also rectangles surrounded by chain-link fences.

What struck me was how much the houses and yards along our block resembled the kennel where my wife and I had boarded our two dogs not long before. It was one of those nice kennels, where each dog had a nice big cage inside and an opening to a nice individual fenced-in “run” outside. We felt good about leaving our dogs there for a week because they were free to go in and out, they weren’t as confined as they would have been in one of those old nasty kennels where they had to sit in a cage all day waiting to be walked.

What we had done, of course, was judge the kennel according to our human standards. Without realizing it, we had boarded the dogs at a kennel that essentially was modeled on our own living space: a box in which we felt safe and sequestered when that was what we wanted, and an attached open area where we could go out and be “in” nature when that was what we wanted, but safely marked off from our neighbors’ parcels of ground. We were imputing to our dogs the same kind of need for a well-defined freedom that we felt for ourselves.

Now, it may seem invidious that I’m comparing an average American suburban home to a dog kennel, but I don’t mean it that way. On the contrary, the comparison really depends on the fact that we love our “companion animals” and want only the best for them. The point is simply that we conceive the “best” for them in the same terms we conceive it for ourselves: as having a certain kind of private, personal space in which we are free to do what we want, when we want.

If there is anything invidious in this, it’s the contrast between this rather limited – one might almost say compromised – version of freedom and the “Freedom” with a capital F that people make such a fuss over in the sphere of public discussion and action. It’s perhaps a little hard to reconcile the Freedom that people have fought and died for with the freedom to have a barbecue and burn tiki torches.

Still, the two kinds really aren’t totally unrelated. Where they are related is in the understanding that you and I have a right to do whatever we want to do in our personal spaces. (Within reason, of course: If my neighbor is committing sex crimes or torturing puppies in the house next door, I need to interfere with him doing that.) This is precisely why the ownership of a home is the core of the American Dream: because my home is a space where I can exercise my sovereignty as an individual, and of course individual sovereignty is what America is all about.

Freedom also involves, of course, the freedom to work at the job one chooses so as to be able to afford a home. And for some fortunate few, their work itself provides the kind of fulfillment we all seek, while for others work is just a means to obtain the kind of personal space we need to practice whatever else gives us that fulfillment (“I work to live, I don’t live to work”).

I happen to live in a kind of middle space in this regard: As a journalist, I sometimes am lucky enough to wander into a story that actually does some kind of good for others, and that’s about as rewarding as it gets. But I also have an inner life that I pursue in the privacy of my home that gives me some satisfaction even on those days when my job totally sucks.

I imagine a lot of us are in somewhat the same situation, doing what we can in our careers to give something to the world, and/or seeking in our “leisure” hours to cover whatever we feel as a lack in our spiritual or psychological lives. This sort of thing is, I believe, exactly what Thomas Jefferson had in mind when he wrote that we have an “unalienable right” to “the pursuit of happiness.”

What I find regrettable in our society in regard to these things is the widespread tendency to confuse means with ends. It appears that many of us expect to find fulfillment in the acquisition of the personal space and its accoutrements, rather than in the use of them. There’s a bumper sticker that sums up the attitude: “He who dies with the most toys wins.” Many of us seem to believe that it’s the mere having of a home, or the size of the home, not the life lived inside it, that matters most. But once we have it, what are we supposed to do with it?

It appears that for many, the “pursuit of happiness” within one’s private space or in public means eating as much, drinking as much, owning as much, playing as much as one can, with no thought for the consequences to oneself or the world at large. Such an attitude is truly tragic, because it focuses on the most ephemeral things the world has to offer and leads people away from the sources of real, lasting happiness.

Our consumerist economic structure of course encourages this sort of belief and behavior, and the recent shakiness of that structure is a warning about its unsustainability – as if further warning were needed on top of our repeated energy crises, our “obesity epidemic,” our high crime rates and all the other social ills that are so obviously traceable to our society’s tendency to want more, more, more.

As much as I would like to see increased regulation of businesses, I would be the last person to suggest that we impose further restrictions on people’s private behavior. “An ye harm none, do what thou wilt” strikes me as a pretty good ethical principle. The challenge is getting people to understand the “harm none” part, especially in a world in which we seem to have moved from the idea that “all men are created equal” to a belief that “individual sovereignty” means every man is entitled to be a king. Regrettably, it appears that the king everyone wants to be is this one:

Monday, June 21, 2010

Where There's a Will There's an Excuse

Looking back at the stuff I’ve written since I reactivated this blog a few weeks ago, it struck me that a casual reader might get the impression that my thought processes are pretty chaotic. I could claim that I’ve deliberately been picking random topics as a way to enable “emergent order” to work its magic on my muddled thoughts in the same way it’s supposed to account for the existence of order in physical processes that are alleged to be random in their underlying dynamics. But just as I believe that the order in our cosmos is there from the beginning, I also want to claim that there has been method in my madness all along.


One of the nagging questions about human beings, one that gets asked over and over again under all kinds of circumstances, is this: How could anyone do that? We hear about some awful, horrible thing that has happened, something that seems to violate every rule as we understand the rules, and we wonder how or why another human being could behave in such a grossly and grotesquely wrong way: committing serial murders, genocide, child-rape, conning old people out of their life savings, condemning miners to unmarked graves in unsafe coal pits, feeding children toxic chemicals with their formula, aiding and abetting dictators just to get at the minerals buried under their subjects’ homes, etc. etc. etc.

Frankly, I don’t think the answer is as difficult or mystifying as people seem to believe. Let’s start here: Socrates said (according to Plato) that no one does evil willingly. And Aristotle said, famously, “All beings by nature desire the good.” People do what they do because they believe, rightly or wrongly, that what they’re doing is good – if not for the world at large, at least for themselves.

And people are able to convince themselves that very bad things are actually very good things. Even a psycho- or sociopath may have some inkling that society in general disapproves of the bad, terrible, awful things he or she wants to do, but there’s always a way to claim that “I am right, you (all) are wrong.”

Because we all live in a constructed reality, each of us in his or her own constructed reality: an intellectual or psychological bubble built with the materials at hand, personal, social, political, intellectual, what have you.

As I pointed out here, it’s impossible for a human being to have a complete picture of the universe as it really exists at any moment. As a result, we're forced to go through life with an understanding of the universe and our place in it that is, and will always remain, largely hypothetical. The nature of reality forces us to fill in a lot of blanks with our best guesses, which often are supplied to us by those around us.

That gives us wide latitude to indulge whatever predispositions we bring to the table, whether from personal or social conditioning or out of the fundaments of our souls. In essence, we learn to construct arguments in support of whatever it is we want to believe, whatever we want to do.

We can make anything fit, if we just put our minds to the task: skimping on safety equipment in mines and on oil platforms so as to keep our costs low and our profits high, for instance; selling drugs (“prescription medications”) that ravage people’s bodies or minds, because we can whip out a “clinical study” that shows that 51 percent of the test subjects felt slightly better after swallowing our pill, and only 10 percent had “adverse reactions;” forcing the migration of indigenous people or just chewing through the ground beneath their feet because they didn’t understand the value of what was down there and weren’t exploiting it like we can; or “she said no but I could see she really meant yes.”

There does remain some fairly widespread agreement, even in our fragmented world, about what’s right and what’s wrong. Unfortunately, it seems more and more as though the people who share that agreement are the least able to do anything about it. The social, political and economic predators not only have clawed their way to the top, they’ve embedded their self-justifications at the heart of our society, to the point where demanding that a (foreign) corporation compensate people for the catastrophic damage it has caused through its utterly unconscionable activities can be characterized by a “people’s representative” as a form of extortion.

This is exactly what I mean about living in a “bubble”: Anyone who could see British Petroleum as the victim in the current catastrophe is living in his imagination, not reality. Man may be, as Aristotle said, a rational animal, but he’s very talented at putting his rationality to work in the service of what pleases him most, no matter how destructive or downright disgusting that may be.

Monday, June 14, 2010

Results May Vary

I’ve written previously about some of what I consider the shortcomings of orthodox financial economics. In general, it appears to me that economic theory is largely detached from reality in much the same way astronomy was before Copernicus and Galileo.

One instance of this detachment from reality is the “random walk” theory of financial markets. This model of market behavior was developed in the 1960s and still holds sway in the retail research departments of most U.S. brokerage firms. It’s based mainly on this:

[Chart: Dow Jones Industrial Average, monthly closing prices, 1896 to the present, with a linear regression trend line]

What the chart shows is the Dow Jones Industrial Average from its creation in 1896 to the present (monthly closing prices; my apologies for the unreadability). The greenish line is a “linear regression,” that is, the straight line that is the best fit to the actual data points. Linear regression was a cutting-edge tool back in the ’60s, when computing power was rather limited. But there’s an unargued underlying assumption in applying it to a market, which makes the “random walk” theory an exercise in circular logic.
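
For readers who want to see what that kind of fit involves, here’s a minimal sketch in Python. The numbers are placeholders standing in for the Dow’s actual monthly closes, and the variable names are just my own choices:

```python
import numpy as np

# A minimal sketch of an ordinary least-squares trend line through monthly
# closing prices - the kind of "best fit" straight line described above.
# The values here are placeholders, not actual Dow data.
monthly_closes = np.array([40.9, 41.2, 42.0, 41.5, 43.1, 44.0, 43.6, 45.2])
months = np.arange(len(monthly_closes))

slope, intercept = np.polyfit(months, monthly_closes, 1)  # degree-1 fit = straight line
trend_line = slope * months + intercept

print(f"trend: price ~ {intercept:.2f} + {slope:.3f} x month")
```

The “random walk” part then amounts to treating everything the actual prices do around that trend line as unpredictable noise.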

The assumption is that the graph of a stock index like the Dow in a way “wants” to be a straight line but can’t manage it. What economists actually say is that the market “seeks equilibrium,” by which they mean that it wants to go up continuously and at a consistent rate. But instead, there are “perturbations” that cause “fluctuations” above or below the idealized rate of gain. Those fluctuations are by nature random (hence “random walk”) and therefore are unpredictable.

Thus, investors shouldn’t try to guess when the market is going to have one of those “perturbations” – in other words, they shouldn’t try to practice “market timing” – but instead should simply buy stocks and hold them for the long term, because the underlying trend always goes higher. And here the logical circle is closed.

Since the 1980s, the random walk theory has come under increasing criticism, and many economists outside the Wall Street houses acknowledge that it’s not a realistic model. Many still want to cling to some variation of randomness, however, and have developed hybrid models that include some degree of self-recursiveness along with the randomness, giving us things like the exotically named GARCH model: “generalized autoregressive conditional heteroskedasticity.” Yet studies have shown that these models have zero value in predicting market movements.
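
For the curious, here’s roughly what such a hybrid looks like in code - a bare-bones GARCH(1,1) simulation in Python, with parameter values I’ve simply made up for illustration rather than estimated from any market data:

```python
import numpy as np

# A minimal sketch of a GARCH(1,1) process: each period's variance depends on
# the previous period's squared shock and the previous variance, while the
# shock itself is still drawn at random. Parameters below are illustrative only.
rng = np.random.default_rng(0)
omega, alpha, beta = 1e-6, 0.08, 0.90   # assumed (not estimated) parameters
n = 1000

returns = np.zeros(n)
variance = np.full(n, omega / (1 - alpha - beta))  # start at the long-run variance

for t in range(1, n):
    variance[t] = omega + alpha * returns[t - 1] ** 2 + beta * variance[t - 1]
    returns[t] = np.sqrt(variance[t]) * rng.standard_normal()

print(returns[:5], variance[:5])
```

The recursion is the “self” part; the random draw is the “walk” part.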

Another argument in favor of “buy and hold” investing is the claim that the stock market consistently over the years has provided an average annual percentage return that is higher than other kinds of investments. But this depends very much on how you calculate the annual return. The usual figure is 8 percent, compared with half that return or less from interest-bearing investments such as bonds.

It’s true that if you take the closing price of an index like the Dow on a given day and calculate the percent change from the same day the year before, and you average that calculation over the past 100 years, you’ll get a figure something like that 8 percent number. But the calculation doesn’t bear any resemblance to how people actually invest: You can’t “buy the Dow” every day and sell it a year later.
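
Here’s a sketch of the calculation I’m describing, in Python, with a short made-up price series standing in for a century of Dow closes:

```python
import numpy as np

# A toy version of the conventional "average annual return" calculation:
# for each observation, take the percentage change from one "year" earlier,
# then average those changes. The series and the lag are placeholders.
closes = np.array([100.0, 97.0, 105.0, 112.0, 108.0, 118.0, 121.0, 115.0])
lag = 4  # pretend four observations span one year in this toy series

year_over_year = (closes[lag:] - closes[:-lag]) / closes[:-lag] * 100
average_annual_return = year_over_year.mean()

print(f"average year-over-year change: {average_annual_return:.2f}%")
```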

I’ve constructed a model that I think represents more accurately how people really invest. I had to make some simplifying assumptions, but I believe the result is still more indicative of the kind of average returns investors can expect.

Here’s the idea: Let’s suppose that a worker sets up a program in which he or she invests a set percentage of his or her income each month. This program continues for 30 years, at which point the worker retires and cashes out. I’ve also assumed that the worker gets a cost-of-living raise at the beginning of each year, based on the nominal inflation rate (based on the Consumer Price Index) for the previous year. (That’s something few workers are actually seeing today, so the model may actually overstate the investor’s returns somewhat.)
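
A rough sketch of a model along these lines appears below. To keep it self-contained I’ve used synthetic price and inflation series, an assumed 10 percent savings rate, and the internal rate of return of the monthly cash flows as the “average annual return” - treat it as an outline of the method, not a reproduction of my actual spreadsheet, which used the Dow’s monthly closes and CPI figures.

```python
import numpy as np

# A rough sketch of the investment model described above, using synthetic
# data (assumptions: made-up index prices and inflation rates, a 10% savings
# rate, and the "average annual return" taken as the internal rate of return
# of the monthly cash flows).
rng = np.random.default_rng(42)
months = 30 * 12
prices = 100 * np.cumprod(1 + rng.normal(0.005, 0.04, months))  # synthetic index levels
annual_inflation = rng.normal(0.03, 0.01, 30)                    # synthetic yearly CPI changes

salary, save_rate = 3000.0, 0.10
shares = 0.0
cash_flows = []                                                  # negative = money paid in
for m in range(months):
    if m > 0 and m % 12 == 0:
        salary *= 1 + annual_inflation[m // 12 - 1]              # cost-of-living raise each year
    contribution = salary * save_rate
    shares += contribution / prices[m]                           # buy the index at this month's price
    cash_flows.append(-contribution)
cash_flows.append(shares * prices[-1])                           # cash out everything at retirement

def annual_irr(flows, lo=-0.99, hi=1.0):
    """Annualized internal rate of return of monthly cash flows, by bisection."""
    npv = lambda r: sum(cf / (1 + r) ** t for t, cf in enumerate(flows))
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
    return (1 + (lo + hi) / 2) ** 12 - 1

print(f"average annual return for this synthetic history: {annual_irr(cash_flows):.2%}")
```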

The following chart shows the average annual return a worker would have received by following this investment strategy:

[Chart: average annual return after 30 years, plotted by the month in which the investment program began]
The time scale (if you can read it; right-click to open it bigger in a new window) indicates the date upon which the worker began the monthly investment program, and the vertical scale shows the percentage return at the end of 30 years for a worker who started investing on the date shown. For example, a worker who started an investment program in the very first month that Charles Dow calculated his industrial average in 1896 (i.e., the very beginning of the blue line) would have earned an average annual return of 2.86 percent over the next 30 years.

The very highest average return, 18.11 percent, would have been earned by a worker who began a monthly investment program in December 1969 and cashed out in December 1999. Obviously, the average annual return for someone who started 10 years later and cashed out last year would have been quite a bit less, just 3.92 percent at the low point in February 2009.

Worse yet, people who started investing in late 1901 to mid-1903 would have lost money, as would almost everyone who started in 1912. Perhaps most surprisingly, anyone who embarked on this kind of program in the late 1940s to mid-1950s -- which we're used to thinking of as boom times -- would have earned a fairly paltry annual return of about 2 percent or less when they cashed out in the late 1970s-early 1980s -- which were not so booming.

Overall, according to this scenario, investors who have cashed out to date have earned an average annual return of 5.05 percent, with a median of 4.44 percent. Those figures aren’t all that much above the long-term average or median returns on interest-bearing investments. As I said earlier, the results may overstate the average returns because of my assumption about annual salary increases. In addition, the numbers don’t include any taxes or transaction costs such as brokerage commissions.

However, the real lesson of this exercise isn’t about long-term average returns, it’s about the wide variability of the real-time returns. What it boils down to is that even for a long-term investor, your results still depend entirely on when you start investing and when you cash out. If we relate that to our entry into the career world, it shows how much the decision is out of our hands: We can’t choose what year we’re born. Whether we like it or not, we’re all market-timers.

Saturday, June 12, 2010

The Human Factor

Before I moved back to Petersburg in the fall of 2008, I had been working for five years for the Post and Courier in Charleston, S.C., as assistant business editor. In that capacity, I was asked to write a business blog, which I did for about six months before I accepted a buyout and left.

There was a sort of cognitive dissonance between me and the bosses about that blog: what I was writing turned out not to be what they had expected me to write (and I wasn't even doing any of the cosmic stuff then). So all of my posts there were taken offline pretty quickly after I left.

That's a shame, because personally I think some of them were pretty good, although that might just be my memory playing tricks on me. But one of them in particular I want to try to reconstruct, because the point it made was one that needs to be remembered.

I'm sure at some point or other, all of us have seen a sign at a business that says, "Our people are our most important asset." It's a nice sentiment, but if there's any truth at all in what it says, it's purely symbolic. Under "generally accepted accounting principles," people are not an asset.

If you look at actual corporate financial statements, you won't find "people" listed anywhere. But if you know where to look, you can find where they're hidden. It's not on the statement of assets. On the contrary, people show up on the income statement as a cost of doing business. And if the company owes money to its employee pension fund, that shows up on the statement of liabilities.

As a result, when business slows down (or goes down the toilet), it's a no-brainer for the MBAs and CPAs and other bean-counters to look at the financial statements and think, "Hey, here's a quick and easy way to make the numbers look better: Fire some workers and dump the pension plan."

If people really were treated as an asset in some way - if companies were required to account for the potential cost of training replacements, for example - then it wouldn't be such an easy decision to fire them en masse. That's because anytime a company has to write down the value of an asset, the writedown has to be reflected on the income statement as an expense. So laying people off wouldn't automatically make "the bottom line" look better.
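
To make the arithmetic concrete, here’s a toy example (all figures invented) comparing the two treatments:

```python
# A toy illustration of the accounting point above (all numbers invented).
# Under current practice, cutting payroll simply lowers expenses; if the
# workforce were carried as an asset, the layoff would also force a
# writedown that hits the income statement.
revenue = 1_000_000
payroll = 400_000
other_costs = 450_000

# Current treatment: lay off a quarter of the staff, payroll drops, income rises.
income_now = revenue - payroll - other_costs                  # 150,000
income_after_layoff = revenue - payroll * 0.75 - other_costs  # 250,000

# Hypothetical treatment: the workforce was carried as a 300,000 "human asset";
# letting a quarter of it go forces a 75,000 writedown against income.
workforce_asset = 300_000
writedown = workforce_asset * 0.25
income_with_writedown = income_after_layoff - writedown       # 175,000

print(income_now, income_after_layoff, income_with_writedown)
```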

It's ironic, or something, that businesses do account for their "property, plant and equipment" as assets, but not the employees who actually make those things work to put out products or provide services, and in general to create profits. In the real world - the world beyond the spreadsheets and trial balance ledgers and forecasting models - people do actually have value.

Friday, June 11, 2010

On Edge

The twisted-universe model I wrote about last time clearly must involve curved space. The idea that space can be curved is pretty familiar by now, mostly because of Einstein’s idea that gravity results from a bending or curving of space by a mass of matter. That seems to be the consensus these days about how gravity works; the most popular alternative, that gravity is somehow transmitted from one mass to another, suffers somewhat from the failure so far to detect any of the “gravitational waves” such a theory would require.

I always had a hard time figuring out how gravity could have an effect on space, which essentially has no properties of its own to be affected. (Under relativity and quantum mechanics, empty space seems to be occupied by a “quantum vacuum” that does have some properties, but that’s not the same thing as space.) However, when I started thinking about the properties of my twisted space, I realized that any space does have one property: dimension. Which means that the only kind of change you can make to a space is a change of dimension.

As I explained last time, giving a spherical universe a half-twist through the fourth dimension raises the dimension of the universe as a whole to 4. The dimension at any point within the universe would appear to a normal physical observer to be 3, but in fact it would be slightly more than 3; we’ll say it’s D = 3 + n, where n is some small positive fraction. And if you add up that n for all the points (at some arbitrarily selected but uniform scale, such as a light year or a parsec) in an orbit of the universe, the sum will be 1.

Now, even though this cosmos is finite, it’s still very large. So that n – which I’m going to declare a fundamental universal constant, and call Ü, mostly because I like umlauts – is going to be very small, almost infinitesimal. The 3+Ü that exists at any point would then be the natural dimension of space in this bent universe of mine.
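
Just to put a toy number on it: if the fractional excesses really do sum to exactly 1 over a full orbit, then Ü is simply 1 divided by the number of steps in the orbit. Here’s the arithmetic with an entirely made-up circumference of 100 billion light years:

```python
# A toy calculation under the model's own assumptions: the circumference
# used here is an arbitrary illustrative figure, not a measured value.
orbit_in_light_years = 100e9           # assume a 100-billion-light-year orbit
u_umlaut = 1.0 / orbit_in_light_years  # one "step" per light year

print(f"U-umlaut ~ {u_umlaut:.1e} per light-year of travel")  # ~ 1.0e-11
print(f"local dimension ~ 3 + {u_umlaut:.1e}")
```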

This whole idea of a non-integer dimension is exactly what Benoit Mandelbrot means by the term “fractal” that he coined to describe objects with a “fractional dimension.” And he has demonstrated that a very wide variety of objects are fractals, which means that many (perhaps most) of the objects we think of as, say, three-dimensional are in fact three-plus dimensional. The more complex the object, the higher the fractional excess; so a big, many-branched tree would be “more fractal,” if you will, than, say, a bowling ball.
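
For anyone who wants to see how a fractional dimension gets measured in practice, here’s a minimal box-counting sketch in Python. It isn’t Mandelbrot’s own procedure in any official sense, just the standard textbook idea: count how many grid cells of shrinking size an object touches, and read the dimension off the slope.

```python
import numpy as np

def box_counting_dimension(points, box_sizes):
    """Estimate the box-counting (fractal) dimension of a 2-D point set."""
    points = np.asarray(points)
    counts = []
    for eps in box_sizes:
        # Assign each point to a grid cell of side eps and count occupied cells.
        cells = set(map(tuple, np.floor(points / eps).astype(int)))
        counts.append(len(cells))
    # Slope of log(count) vs. log(1/eps) approximates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope

# Example: points along a circle should give a dimension near 1,
# while points scattered over a square should give a dimension near 2.
theta = np.linspace(0, 2 * np.pi, 20000)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
square = np.random.rand(20000, 2)
sizes = [0.2, 0.1, 0.05, 0.025, 0.0125]
print(box_counting_dimension(circle, sizes))  # ~1.0
print(box_counting_dimension(square, sizes))  # ~2.0
```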

But it seems likely that the fractional dimension of even a fairly smooth 3D+ object, like a planet, would have to be greater than the near-infinitesimal value of Ü. And as a result, the planet would “stretch” the dimension of the space around it, causing the kind of contour that Einstein’s conjecture associates with gravity. This would be true even with very small objects, which would account for the kind of “clumping” that scientists believe took place in the early universe, leading eventually to the formation of galaxies and so on.

There’s something else that the structure of the twisted-universe model might help account for that has puzzled me for a long time. I think some visual aids may help here.

[Diagram: an airplane holding constant altitude, its path curving with the Earth’s surface]

We often hear airplane pilots talking about flying “straight and level.” But they’re actually doing anything but. What they’re really doing is flying at a constant altitude above the Earth’s surface. Because the Earth’s surface is curved, however, the plane’s actual path is also curved, as shown above. What would happen if an airplane really flew “straight and level” looks like this:

[Diagram: the path of an airplane flying in a truly straight line above the curved Earth]

Now, the thing I’ve wondered about for years is this: We all know that the speed of light is a sort of universal speed limit, that nothing can go faster without violating all sorts of natural laws. But I’ve always wondered why it’s precisely the speed it is, 299,792.5 kilometers per second, or about 186,000 miles per second.

We’re used to the idea, again thanks to Einstein, of light traveling a curved path around massive, high-gravity objects. But since I’m supposing here that all light must travel a curved path in a curved, twisted universe, the “normal” path of light would look something like this:

[Diagram: light rays following the curved contour of the twisted universe]

Obviously, the curvature of this “universe” is highly exaggerated, but it illustrates how the rays of light in a sense “flow” along the contour of the space. What I’m going to suggest is that the speed at which light (and of course other forms of radiation) travels is actually determined by that contour or curvature, because if it travelled at a higher speed, this would happen:

[Diagram: light traveling too fast to follow the contour of the curved space]
What this would actually mean is hard to say. It might mean that the energy disappears into the fourth dimension, or it could even mean that it exits the universe, whatever that might entail.

Mention of the fourth dimension brings up one final point I want to make before closing this largely pointless expostulation. You’ll remember the Möbius strip from last time:

[Illustration: Möbius strip with labeled points A, B, C and D]
In looking at this illustration, I want you to imagine that the strip is actually transparent, because what we’re talking about here is empty space, not paper. So there’s really nothing separating point A from point B, or C from D, except space; or rather, except the twisted structure of this space. But the separation is nevertheless complete and inviolable: The only way to get from point A to point B is to go around the strip; you can’t go through it.

Why not? Well, if you travel around the strip from A to B, you’re in effect adding up Ü units, or in a sense travelling uphill dimensionally. By the time you reach point C, you’re in a dimension that’s 0.5 higher in relation to A, and when you reach B, the space you’re in is a full 1 dimension away from A. It’s still 2+ÜD from a local point of view (in the illustration; in the twisted universe, it would be 3+ÜD), but A is 3D from the perspective of B (4D in the real universe), and vice versa. So naturally, there’s no way to perceive one space from the other, much less to go there directly.

This is precisely what constitutes the boundary or “edge” of the universe, this dimensional barrier. And what that means is that every point in the universe is on the edge of the universe.

Somehow, that reminds me of the famous Hermetic saying quoted by Giordano Bruno and Pascal, among others: “God is an intelligible sphere whose center is everywhere and whose circumference is nowhere.” In the twisted universe, the circumference is everywhere, but I’m not sure whether there’s a center anywhere.

Wednesday, June 9, 2010

A Twisted Universe

Last time, I argued that the overall shape or structure of the universe is unknowable, an argument that might (in light of Gödel’s incompleteness theorems) be unprovable. But despite perfectly good reasons to abandon the goal, I’m now going to present an argument in favor of what I believe may be a novel model of the cosmos.

First, what do we mean by the words “universe” or “cosmos?” The start of the Wikipedia entry for “universe” seems to me to sum it up nicely, especially because it says pretty much what I hoped it would say: “The Universe comprises everything perceived to exist physically, the entirety of space and time, and all forms of matter and energy.”

That second clause, “the entirety of space and time,” raises a point that I think a lot of people overlook: The universe is not in space; rather, all space is in the universe. This means the universe cannot have a spatial boundary: You can’t travel to the edge of the universe and find that the universe ends while space continues. As a result, the shape or structure of the universe must be such that if you travel through it in a straight line, you never reach a point where the universe is on one side and something else is on the other.

It seems to me that there are only two ways this can be possible. The first is if the universe is infinitely large. This would certainly allow for a never-ending trajectory, but it also creates serious difficulties.

First, given the equivalence between space and time, it seems necessary that an infinitely large universe also be infinitely old. Even if there were no inherent reasons for rejecting an infinite universe (and I think there are), this requirement that it be infinitely old would directly contradict the prevailing scientific cosmological model, the Big Bang theory, which holds that the universe is “only” about 13.7 billion years old.

Also contradicting the Big Bang theory would be any requirement that matter/energy be consistently distributed throughout an infinite space. Such consistency is required if we aren’t to allow different regions of space to have drastically different properties; in other words, if we do want to guarantee that universal laws really are universal. But if we distribute matter/energy similarly throughout an infinite universe, it’s clear that we must have an infinite amount of matter/energy to distribute.

Since matter/energy can be neither created nor destroyed, an infinite universe must have contained the same amount – i.e., an infinite amount – of matter/energy from its beginning. In other words, a less-than-infinite volume of space must have held an infinite quantity of matter/energy. This strikes me as very unlikely. In short, I don’t see any possibility that an infinite universe could have had a beginning in time (without divine agency), because it must already have been infinite at its beginning, something that contradicts the entire basis of the Big Bang theory.

But if an infinitely large universe must also be infinitely old, then one would expect certain physical phenomena to have advanced long ago to their extremes. Most obviously, the second law of thermodynamics requires that the total entropy of the universe increase over time. If it has already been increasing for an infinite time, then one might reasonably expect that entropy would long ago have reached its maximum, known as “heat death,” in which no free energy is available to cause motion or sustain order. Clearly, that is not the case.

So I’m convinced that infinity is just not on, which means we need to find another structure “such that if you travel through it in a straight line, you never reach a point where the universe is on one side and something else is on the other,” as I said earlier.

To explain what I think is the best alternative to infinity, I’ll start with an object that may be familiar to many readers:

[Illustration: Möbius strip (adapted from Wikipedia)]

This is the famous Möbius strip, discovered by German mathematician August Ferdinand Möbius. It’s just a strip of paper with the ends glued or taped together, with the important requirement that the paper be given a half-twist before joining. Simple as it is, it has a number of interesting properties, with which many of you are undoubtedly familiar.

For one, if you take a pen and draw a line along the length of the strip, the line will return to its starting point, having covered both “sides” of the paper, without lifting the pen off the paper at any time.
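
You can check this numerically with a standard parametrization of the strip: a point riding at a fixed offset from the centerline ends up on the “other side” after one circuit, and only returns to its starting position after two. (The parametrization and the offset value below are just illustrative choices.)

```python
import numpy as np

def mobius_point(t, w, R=1.0):
    """Point on a Möbius strip: t = angle around the loop, w = offset from the centerline."""
    x = (R + w * np.cos(t / 2)) * np.cos(t)
    y = (R + w * np.cos(t / 2)) * np.sin(t)
    z = w * np.sin(t / 2)
    return np.array([x, y, z])

start = mobius_point(0.0, 0.3)
after_one_loop = mobius_point(2 * np.pi, 0.3)   # the "other side" of the paper
after_two_loops = mobius_point(4 * np.pi, 0.3)  # back where we started

print(np.allclose(start, after_one_loop))   # False
print(np.allclose(start, after_two_loops))  # True
```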

The standard explanation for this is that mathematically (topologically), despite all appearances to the contrary, the strip has only one surface. But there’s another way of explaining it:

If you took a strip of paper and stuck the ends together without the half-twist, the only way to draw a line on both sides of the paper would be to lift the pen from the paper and move it around to the other side. In other words, the pen would have to leave the two-dimensional surface of the paper and travel through three-dimensional space to the other side.

But the half-twist turns the Möbius strip as a whole into a three-dimensional object, and so the tip of your pen does in fact make that trip through 3D space as it makes its “orbit” of the entire strip, without ever leaving the (apparently) 2D plane.

If you consider the twist as spread evenly over the whole of the strip, then in effect the dimension of the surface at any one point is slightly more than 2D; it has a dimension of 2.00…n, with the magnitude of the fractional part at any point depending on how long the strip is. When you add the fractional parts over the whole length, the sum will be 1.0, which, when added to the nominal 2.0 dimension of the surface, makes the dimension of the object as a whole 3.0.

Now, suppose we take a three-dimensional object – a sphere, for example – and give it a half-twist and join its ends together. Obviously, this isn’t something we can actually do in three-dimensional space – a sphere has no “ends” in 3D space. (By the same token, a 2D being couldn’t make a Möbius strip.) But I hope that by bootstrapping up from the example of the Möbius strip, we can conceptualize the result of this procedure.

As was the case with the Möbius strip, the half-twist raises the dimension of the object as a whole by 1.0, in this case to 4.0. And as with the Möbius strip, the dimension at any point within the sphere will appear to be unchanged. But in fact, it will be 3.00…n, with the fractional part again adding up over a full orbit of the space to 1.0, making the total dimension 4.0.

And again as in the case of the Möbius strip, a straight-line trajectory in any direction will eventually return to its starting point, having traversed both “sides” of the sphere. But mathematically, just as the Möbius strip has only one surface, this “Möbius sphere” has only one volume, though it would appear to a four-dimensional observer to have separate “inside” and “outside” volumes, just as a three-dimensional observer (you or me) sees an “inside” and an “outside” surface on the Möbius strip.

A space of this kind would satisfy the requirement that “if you travel through it in a straight line, you never reach a point where the universe is on one side and something else is on the other.” In fact, in this twisted universe, no matter which direction you travel, there will always be as much of the universe in front of you as behind, and as much above as below. In other words, for an ordinary (physically constituted) traveler, such a universe is likely inescapable.

There are a number of other implications of this form as a cosmological model, some of which I find rather odd. But this is already a rather lengthy post, so I’ll have to take up the ramifications next time.

Tuesday, June 8, 2010

The Vast Unknown

As we know, there are known knowns: There are things we know we know. We also know there are known unknowns; that is to say, we know there are some things we do not know. But there are also unknown unknowns, the ones we don't know we don't know.

- Former Secretary of Defense Donald Rumsfeld, news briefing, Feb. 12, 2002

Some people in the media ridiculed the above statement when "Rummy" said it back in 2002. And as a response to questions about the Iraq War, it certainly had some inadequacies. But from a purely epistemological point of view, it's not entirely lacking in merit. Certainly, "we know there are some things we do not know," and I would add that there are some things we cannot know.

I mean this in a strictly rational, scientific sense (at least for now): There are some things that are inherently unknowable. Heisenberg's principle of uncertainty (or indeterminacy) provides a famous example: In quantum physics, it's impossible to know simultaneously both the position of a particle and its momentum; the more precisely you measure one property, the less you can know about the other. And Werner Heisenberg, the physicist who first formulated this principle, called attention to the fact that this uncertainty isn't just a result of insufficiently precise measuring tools or processes; it's a fundamental property of quantum systems, that is, of matter as such.
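
The relation itself is simple enough to put numbers to - the textbook form is Δx·Δp ≥ ħ/2 - and the arithmetic shows how quickly the trade-off bites as you pin the position down:

```python
# Minimal illustration of the Heisenberg relation: the better you pin down
# position (delta_x), the larger the minimum spread in momentum (delta_p).
HBAR = 1.054571817e-34  # reduced Planck constant, in joule-seconds

def min_momentum_uncertainty(delta_x_m):
    return HBAR / (2 * delta_x_m)

for dx in (1e-9, 1e-12, 1e-15):  # nanometer, picometer, femtometer position uncertainty
    print(f"delta_x = {dx:.0e} m  ->  delta_p >= {min_momentum_uncertainty(dx):.2e} kg*m/s")
```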

Equally fundamental, and possibly even more so, are Kurt Gödel's incompleteness theorems. Without going into great detail (partly because trying to do so would probably give me a severe headache), these theorems show that any consistent formal system powerful enough to express basic arithmetic contains true statements that cannot be proved within the system, and that such a system cannot prove its own consistency. One reading of the implications is that if the result of a formal logical proof is something we can label as "knowledge," Gödel's theorems show that there must remain some things that are "unknowable" in this sense.

These examples may seem somewhat nitpicky or irrelevant: Surely it doesn't matter much in the larger scheme of things if we can't precisely locate every particle of matter/energy in the universe or if we can't absolutely prove or disprove every possible statement.

But what if one of the inherently unknowable things is the largest scheme of things itself - the universe? In other words, what if there's an inherent unknowability at the smallest scale, the microcosmic, and also at the largest, the cosmic, and an unprovability about any guesses we might make as a substitute for direct knowing?

Within my limited and decidedly math-impaired understanding, this does appear to be precisely the case.

When we look up at the sky on a clear night, we can see thousands of glowing objects in the sky - and not one of them is actually located where it appears to be. The reason, of course, is that it takes time for light to travel through space, and during the time the light is traveling from its source (a star, galaxy or planet, for instance) to our eyes, we and that source are moving. Even our nearest neighbors in space are far enough away for there to be a time lag, and thus a displacement, between their emission of light and our reception of it: It takes light about eight minutes to travel from the sun to the Earth. And obviously, the farther away the emitter is, the longer the time lag and the larger the spatial displacement become.
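
The arithmetic is straightforward if you want to check it:

```python
# Light travel times for two familiar distances, using c = 299,792.458 km/s.
C_KM_PER_S = 299_792.458
AU_KM = 1.496e8               # mean Earth-Sun distance, in kilometers
LIGHT_YEAR_KM = 9.4607e12     # kilometers in one light year

sun_minutes = AU_KM / C_KM_PER_S / 60
proxima_years = 4.25 * LIGHT_YEAR_KM / C_KM_PER_S / (60 * 60 * 24 * 365.25)

print(f"Sun to Earth: {sun_minutes:.1f} minutes of light travel")        # ~8.3
print(f"Proxima Centauri to Earth: {proxima_years:.2f} years of travel")  # ~4.25
```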

What this means is that the picture our perceptions (even as we extend them through technology such as telescopes) give us of the universe is inevitably geocentric. We can adjust the picture to some extent, in effect creating a mental or conceptual map of the real current locations of celestial bodies, and this procedure obviously works well enough for us to send space probes to the Moon, Mars and so on. But as the distance involved increases, so must the uncertainty of our conceptual map.

In other words, any model we propose for the structure of the universe in its entirety will always have a major theoretical component. It's likely that we are safe in supposing that the natural physical laws that operate within our zone of certainty will also operate outside that zone, so it's fairly safe to hypothesize that the most distant regions of the universe will be like ours in a general, qualitative sense. But we cannot know the precise structure or appearance of those regions, or of the universe as a whole, in "real time," that is, as they are at any one moment.

One implication of this unknowability is that we (and by "we" I mean all intelligent beings who happen not to be blessed with supernatural omniscience) may ultimately have to accept multiple, and possibly mutually contradictory, models of the universe. As long as a conceptual model doesn't conflict with natural universal laws, insofar as we can understand those, and does account as much as possible for the phenomena that we can observe directly, we probably must accept that it's "true," even if there exist one or more alternative models with equal claims to be "true."

In some sense, we already do live with multiple universes - that is, with multiple explanations of the structure and origin of the cosmos - though there's considerable argument about which, if any, are true. And maybe a realistic understanding of the limits of certainty should prompt us to be a bit less fiercely argumentative about our varying understandings.

Perhaps to add fuel to the fire, or perhaps to help defuse it, I'll be writing next time about an alternative model that as far as I know - and obviously, that can't be very far - hasn't previously been proposed and is very unlikely to be provable.