Wednesday, June 23, 2010
A Dog's Life
One summer evening a few years ago, I was sitting in my backyard unwinding with a bottle of Warsteiner after a day’s work when something struck me that has stuck with me ever since. Our backyard at the time was a rectangle surrounded by a chain-link fence, and as I sat there I could see through to the other backyards, which were also rectangles surrounded by chain-link fences.
What struck me was how much the houses and yards along our block resembled the kennel where my wife and I had boarded our two dogs not long before. It was one of those nice kennels, where each dog had a nice big cage inside and an opening to a nice individual fenced-in “run” outside. We felt good about leaving our dogs there for a week because they were free to go in and out; they weren’t as confined as they would have been in one of those old, nasty kennels where they had to sit in a cage all day waiting to be walked.
What we had done, of course, was judge the kennel according to our human standards. Without realizing it, we had boarded the dogs at a kennel that essentially was modeled on our own living space: a box in which we felt safe and sequestered when that was what we wanted, and an attached open area where we could go out and be “in” nature when that was what we wanted, but safely marked off from our neighbors’ parcels of ground. We were imputing to our dogs the same kind of need for a well-defined freedom that we felt for ourselves.
Now, it may seem invidious that I’m comparing an average American suburban home to a dog kennel, but I don’t mean it that way. On the contrary, the comparison really depends on the fact that we love our “companion animals” and want only the best for them. The point is simply that we conceive the “best” for them in the same terms we conceive it for ourselves: as having a certain kind of private, personal space in which we are free to do what we want, when we want.
If there is anything invidious in this, it’s the contrast between this rather limited – one might almost say compromised – version of freedom and the “Freedom” with a capital F that people make such a fuss over in the sphere of public discussion and action. It’s perhaps a little hard to reconcile the Freedom that people have fought and died for with the freedom to have a barbecue and burn tiki torches.
Still, the two kinds really aren’t totally unrelated. Where they are related is in the understanding that you and I have a right to do whatever we want to do in our personal spaces. (Within reason, of course: If my neighbor is committing sex crimes or torturing puppies in the house next door, I’m obliged to interfere.) This is precisely why the ownership of a home is the core of the American Dream: because my home is a space where I can exercise my sovereignty as an individual, and of course individual sovereignty is what America is all about.
Freedom also involves, of course, the freedom to work at the job one chooses so as to be able to afford a home. And for some fortunate few, their work itself provides the kind of fulfillment we all seek, while for others work is just a means to obtain the kind of personal space we need to practice whatever else gives us that fulfillment (“I work to live, I don’t live to work”).
I happen to live in a kind of middle space in this regard: As a journalist, I sometimes am lucky enough to wander into a story that actually does some kind of good for others, and that’s about as rewarding as it gets. But I also have an inner life that I pursue in the privacy of my home that gives me some satisfaction even on those days when my job totally sucks.
I imagine a lot of us are in somewhat the same situation, doing what we can in our careers to give something to the world, and/or seeking in our “leisure” hours to make up for whatever we feel is lacking in our spiritual or psychological lives. This sort of thing is, I believe, exactly what Thomas Jefferson had in mind when he wrote that we have an “unalienable right” to “the pursuit of happiness.”
What I find regrettable in our society in regard to these things is the widespread tendency to confuse means with ends. It appears that many of us expect to find fulfillment in the acquisition of the personal space and its accoutrements, rather than in the use of them. There’s a bumper sticker that sums up the attitude: “He who dies with the most toys wins.” Many of us seem to believe that it’s the mere having of a home, or the size of the home, not the life lived inside it, that matters most. But once we have it, what are we supposed to do with it?
It appears that for many, the “pursuit of happiness” within one’s private space or in public means eating as much, drinking as much, owning as much, playing as much as one can, with no thought for the consequences to oneself or the world at large. Such an attitude is truly tragic, because it focuses on the most ephemeral things the world has to offer and leads people away from the sources of real, lasting happiness.
Our consumerist economic structure of course encourages this sort of belief and behavior, and the recent shakiness of that structure is a warning about its unsustainability – as if further warning were needed on top of our repeated energy crises, our “obesity epidemic,” our high crime rates and all the other social ills that are so obviously traceable to our society’s tendency to want more, more, more.
As much as I would like to see increased regulation of businesses, I would be the last person to suggest that we impose further restrictions on people’s private behavior. “An ye harm none, do what thou wilt” strikes me as a pretty good ethical principle. The challenge is getting people to understand the “harm none” part, especially in a world in which we seem to have moved from the idea that “all men are created equal” to a belief that “individual sovereignty” means every man is entitled to be a king. Regrettably, it appears that the king everyone wants to be is this one:
Labels: consumerism, economics, economy, freedom, happiness, home ownership, materialism
Monday, June 14, 2010
Results May Vary
I’ve written previously about some of what I consider the shortcomings of orthodox financial economics. In general, it appears to me that economic theory is largely detached from reality in much the same way astronomy was before Copernicus and Galileo.
One instance of this detachment from reality is the “random walk” theory of financial markets. This model of market behavior was developed in the 1960s and still holds sway in the retail research departments of most U.S. brokerage firms. It’s based mainly on this:
What the chart shows is the Dow Jones Industrial Average from its creation in 1896 to the present (monthly closing prices). The greenish line is a “linear regression,” that is, the straight line that best fits the actual data points. Linear regression was a cutting-edge tool back in the ’60s, when computing power was rather limited. But there’s an unargued assumption underlying its application to a market, and that assumption makes the “random walk” theory an exercise in circular logic.
The assumption is that the graph of a stock index like the Dow in a way “wants” to be a straight line but can’t manage it. What economists actually say is that the market “seeks equilibrium,” by which they mean that it wants to go up continuously and at a consistent rate. But instead, there are “perturbations” that cause “fluctuations” above or below the idealized rate of gain. Those fluctuations are by nature random (hence “random walk”) and therefore are unpredictable.
Thus, investors shouldn’t try to guess when the market is going to have one of those “perturbations” – in other words, they shouldn’t try to practice “market timing” – but instead should simply buy stocks and hold them for the long term, because the underlying trend always goes higher. And here the logical circle is closed.
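To make that “best fit” line concrete, here’s a minimal sketch of the kind of regression described above, assuming the Dow’s monthly closes are sitting in a CSV file (the file name and column names are hypothetical). The straight line it produces is the idealized trend; everything the line misses is what the theory writes off as random fluctuation.

```python
# A minimal sketch of the trend-line fit described above. The file name and
# column names are hypothetical; any series of monthly index closes will do.
import csv
import numpy as np

dates, closes = [], []
with open("djia_monthly.csv", newline="") as f:     # hypothetical file
    for row in csv.DictReader(f):                   # assumed columns: date, close
        dates.append(row["date"])
        closes.append(float(row["close"]))

y = np.array(closes)
x = np.arange(len(y))                               # months since the start of the series

# Ordinary least-squares straight line: y ~ slope * x + intercept
slope, intercept = np.polyfit(x, y, 1)
trend = slope * x + intercept

# Everything the line misses is what the random-walk story calls "fluctuation."
residuals = y - trend
print(f"Fitted trend: {slope:.2f} index points per month")
print(f"Largest deviation from the trend line: {np.abs(residuals).max():,.0f} points")
```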
Since the 1980s, the random walk theory has come under increasing criticism, and many economists outside the Wall Street houses acknowledge that it’s not a realistic model. Many still want to cling to some variation of randomness, however, and have developed hybrid models in which volatility feeds back on itself along with the randomness, giving us things like the exotically named GARCH model: “generalized autoregressive conditional heteroskedasticity.” However, studies of their forecasting power suggest these models are of little or no use in predicting the direction of market movements.
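For what it’s worth, the machinery inside a GARCH-type model is fairly simple: today’s expected variance is a weighted blend of a long-run baseline, yesterday’s squared surprise and yesterday’s variance. Here’s a bare-bones GARCH(1,1) simulation; the parameter values are invented purely for illustration.

```python
# Bare-bones GARCH(1,1) simulation: volatility feeds back on itself, but the
# shocks driving the returns are still random draws. Parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
omega, alpha, beta = 1e-5, 0.08, 0.90               # illustrative values, not fitted to anything
n = 1000                                            # number of simulated trading days

returns = np.zeros(n)
variance = np.full(n, omega / (1 - alpha - beta))   # start at the long-run variance

for t in range(1, n):
    variance[t] = omega + alpha * returns[t - 1] ** 2 + beta * variance[t - 1]
    returns[t] = np.sqrt(variance[t]) * rng.standard_normal()

# Treating each step as a trading day, annualize the simulated volatility.
print(f"Annualized volatility of the simulated series: {returns.std() * np.sqrt(252):.1%}")
```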
Another argument in favor of “buy and hold” investing is the claim that the stock market consistently over the years has provided an average annual percentage return that is higher than other kinds of investments. But this depends very much on how you calculate the annual return. The usual figure is 8 percent, compared with half that return or less from interest-bearing investments such as bonds.
It’s true that if you take the closing price of an index like the Dow on a given day and calculate the percent change from the same day the year before, and you average that calculation over the past 100 years, you’ll get a figure something like that 8 percent number. But the calculation doesn’t bear any resemblance to how people actually invest: You can’t “buy the Dow” every day and sell it a year later.
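For concreteness, here’s roughly what that conventional calculation looks like in code, reusing the kind of monthly series assumed earlier: take every overlapping 12-month percent change and average them.

```python
# Sketch of the conventional "average annual return": every overlapping
# 12-month percent change in the index, averaged together.
import numpy as np

def naive_average_annual_return(monthly_closes):
    closes = np.asarray(monthly_closes, dtype=float)
    yoy = closes[12:] / closes[:-12] - 1.0      # change vs. the same month a year earlier
    return yoy.mean()

# With the hypothetical series loaded above:
# print(f"{naive_average_annual_return(closes):.1%}")
```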
I’ve constructed a model that I think represents more accurately how people really invest. I had to make some simplifying assumptions, but I believe the result is still more indicative of the kind of average returns investors can expect.
Here’s the idea: Let’s suppose that a worker sets up a program in which he or she invests a set percentage of his or her income each month. This program continues for 30 years, at which point the worker retires and cashes out. I’ve also assumed that the worker gets a cost-of-living raise at the beginning of each year, based on the inflation rate (as measured by the Consumer Price Index) for the previous year. (That’s something few workers are actually seeing today, so the model may overstate the investor’s returns somewhat.)
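Here’s a compressed sketch of that model in code. The starting contribution (a fixed dollar amount standing in for “a set percentage of income,” since the rate of return doesn’t depend on the scale), the CPI inputs and the use of a money-weighted internal rate of return as the “average annual return” are illustrative choices; the exact calculation behind the chart below may differ in its details.

```python
# Sketch of the 30-year monthly investment program described above. The
# contribution size, CPI inputs and money-weighted IRR are illustrative
# assumptions; the original calculation may differ in detail.

def thirty_year_return(monthly_closes, annual_cpi, starting_amount=100.0):
    """monthly_closes: at least 361 index levels starting at the first buy;
    annual_cpi: prior-year inflation rates, one per year of the program."""
    months = 360
    shares, contribution, cashflows = 0.0, starting_amount, []
    for m in range(months):
        if m > 0 and m % 12 == 0:                   # cost-of-living raise at each new program year
            contribution *= 1.0 + annual_cpi[m // 12 - 1]
        shares += contribution / monthly_closes[m]  # buy at that month's close
        cashflows.append(contribution)
    final_value = shares * monthly_closes[months]   # cash out after 30 years

    # Money-weighted return: the annual rate r at which the stream of
    # contributions would have had to grow to equal the final payout.
    def future_value(r):
        return sum(c * (1.0 + r) ** ((months - m) / 12.0)
                   for m, c in enumerate(cashflows))

    lo, hi = -0.99, 1.0                             # bracket the annual rate
    for _ in range(100):                            # bisection
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if future_value(mid) < final_value else (lo, mid)
    return (lo + hi) / 2.0
```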
The following chart shows the average annual return a worker would have received by following this investment strategy:
The time scale indicates the date on which the worker began the monthly investment program, and the vertical scale shows the percentage return at the end of 30 years for a worker who started investing on that date. For example, a worker who started an investment program in the very first month that Charles Dow calculated his industrial average in 1896 (i.e., the very beginning of the blue line) would have earned an average annual return of 2.86 percent over the next 30 years.
The very highest average return, 18.11 percent, would have been earned by a worker who began a monthly investment program in December 1969 and cashed out in December 1999. Obviously, the average annual return for someone who started 10 years later and cashed out last year would have been quite a bit less, just 3.92 percent at the low point in February 2009.
Worse yet, people who started investing in late 1901 to mid-1903 would have lost money, as would almost everyone who started in 1912. Perhaps most surprisingly, anyone who embarked on this kind of program in the late 1940s to mid-1950s – which we're used to thinking of as boom times – would have earned a fairly paltry annual return of about 2 percent or less when they cashed out in the late 1970s to early 1980s – which were not so booming.
Overall, according to this scenario, investors who have cashed out to date have earned an average annual return of 5.05 percent and a median annual return of 4.44 percent. Those figures aren’t all that much above the long-term average or median returns on interest-bearing investments. As I said earlier, the results may overstate the average returns because of my assumption about annual salary increases. In addition, the numbers don’t include any taxes or transaction costs such as brokerage commissions.
However, the real lesson of this exercise isn’t about long-term average returns, it’s about the wide variability of the real-time returns. What it boils down to is that even for a long-term investor, your results still depend entirely on when you start investing and when you cash out. If we relate that to our entry into the career world, it shows how much the decision is out of our hands: We can’t choose what year we’re born. Whether we like it or not, we’re all market-timers.
Saturday, June 12, 2010
The Human Factor
Before I moved back to Petersburg in the fall of 2008, I had been working for five years for The Post and Courier in Charleston, S.C., as assistant business editor. In that capacity, I was asked to write a business blog, which I did for about six months before I accepted a buyout and left.
There was a sort of cognitive dissonance between me and the bosses about that blog: what I was writing turned out not to be what they had expected me to write (and I wasn't even doing any of the cosmic stuff then). So all of my posts there were taken offline pretty quickly after I left.
That's a shame, because personally I think some of them were pretty good, although that might just be my memory playing tricks on me. But one of them in particular I want to try to reconstruct, because the point it made was one that needs to be remembered.
I'm sure at some point or other, all of us have seen a sign at a business that says, "Our people are our most important asset." It's a nice sentiment, but if there's any truth at all in what it says, it's purely symbolic. Under "generally accepted accounting principles," people are not an asset.
If you look at actual corporate financial statements, you won't find "people" listed anywhere as an asset. But if you know where to look, you can find where they're hidden. They're not on the asset side of the balance sheet. On the contrary, people show up on the income statement as a cost of doing business. And if the company owes money to its employee pension fund, that obligation shows up among the liabilities on the balance sheet.
As a result, when business slows down (or goes down the toilet), it's a no-brainer for the MBAs and CPAs and other bean-counters to look at the financial statements and think, "Hey, here's a quick and easy way to make the numbers look better: Fire some workers and dump the pension plan."
If people really were treated as an asset in some way - if companies were required to account for the potential cost of training replacements, for example - then it wouldn't be such an easy decision to fire them en masse. That's because anytime a company has to write down the value of an asset, the writedown has to be reflected on the income statement as an expense. So laying people off wouldn't automatically make "the bottom line" look better.
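To put toy numbers on that (every figure below is invented for illustration): under today's accounting a layoff simply cuts an expense, but if the workforce were carried as an asset, the payroll savings would be offset by a writedown.

```python
# Toy illustration (all numbers invented): a layoff under current accounting
# versus a hypothetical world where the workforce is carried as an asset that
# must be written down when people are let go.
revenue = 1_000_000
payroll = 400_000
other_costs = 450_000
laid_off_fraction = 0.25
workforce_asset = 600_000          # hypothetical capitalized value of the workforce

# Current accounting: firing people simply removes an expense.
profit_before = revenue - payroll - other_costs
profit_after_layoff = revenue - payroll * (1 - laid_off_fraction) - other_costs

# Hypothetical accounting: the payroll savings are offset by an impairment charge.
writedown = workforce_asset * laid_off_fraction
profit_with_writedown = profit_after_layoff - writedown

print(f"Profit before the layoff:           {profit_before:>10,.0f}")
print(f"Profit after the layoff (as today): {profit_after_layoff:>10,.0f}")
print(f"Profit after layoff plus writedown: {profit_with_writedown:>10,.0f}")
```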
It's ironic, or something, that businesses do account for their "property, plant and equipment" as assets, but not the employees who actually make those things work to put out products or provide services, and in general to create profits. In the real world - the world beyond the spreadsheets and trial balance ledgers and forecasting models - people do actually have value.
Labels: assets, capitalism, economics, expenses, finance, layoffs, liabilities, workers
Tuesday, December 2, 2008
Bernanke Must Go
Some years ago (40, to be exact) I attended a rally for then-U.S. Sen. Eugene McCarthy, who was running against Lyndon Johnson for the Democratic nomination for president. “Clean Gene,” as he was called, shared a bit of barnyard humor that has stuck with me ever since. My knowledge of farm animals is pretty limited, so I can’t vouch for whether it’s true on a biological or zoological level.
According to McCarthy, pigs are mostly insensitive to temperature except in their snouts. In Minnesota, where McCarthy was from, it gets pretty cold, of course. But because pigs mainly sense temperature with their snouts, as long as their snouts are warm, they believe they’re warm all over. So when a pig gets cold, McCarthy said, it will try to warm itself up by sticking its snout between the hind legs of another pig.
According to McCarthy, it’s not unheard-of to see whole herds of swine forming a kind of daisy chain, each with its nose up the backside of the one in front of it. And if there’s an unexpected hard freeze, an unfortunate pig farmer might come out the next morning to find his entire herd frozen to death in a circle.
McCarthy shared this somewhat indelicate information as a metaphor for the behavior of politicians, but it also strikes me as highly applicable to the way financial regulators and executives have been behaving lately.
Take, for instance, Federal Reserve Chairman Ben Bernanke. Throughout the first half of this year, Bernanke insisted that the U.S. economy was not in a recession and stood a fair chance of avoiding one. While he acknowledged that the economy was weak and the financial system vulnerable because of mortgage-related problems, he expressed confidence that the Fed’s cuts in its key interest rate would be enough to prevent an actual economic decline.
We know now, of course, that Bernanke was wrong. According to the National Bureau of Economic Research, the private nonprofit research organization that is the quasi-official authority on economic cycles, the U.S. entered a recession a year ago this month.
What’s more, a lot of people have known that all along; even an armchair economist like me. Back on April 23, I wrote in my blog for The Post and Courier that “it would appear likely that we’ll look back at the fourth quarter of 2007 as the beginning of this recession.”
But Bernanke – who holds his job as Fed chairman because he’s regarded as one of the nation’s top economists – continued to insist that there was no recession and that a recession could, in fact, be avoided.
There are only two possible reasons why Bernanke kept saying those things: Either he’s an incompetent economist or he was being deliberately deceptive.
I’d probably opt for the latter explanation, because there does seem to be a kind of traditional belief in the financial community that denial of negative conditions will somehow make those conditions go away. (The real estate community took a somewhat similar approach early in the ongoing collapse of that market.) And there’s also the Straussian belief, widespread in the Bush administration, that deception of the citizenry is a valid policy tool.
However, it doesn’t matter which explanation you prefer. Either way, it’s clear that we have no good reason to trust Bernanke as a steward of our economy and financial system.
Bernanke’s partner in the ongoing economic Tweedledum and Tweedledee act, Henry Paulson, will be leaving office in January as part of the turnover of the White House to Barack Obama’s team. But Bernanke’s 4-year term as chairman of the Fed doesn’t expire until January 2010, and his 14-year (!) term on the Fed’s board will last until 2020.
The totally inadequate response of Bernanke and Paulson to the current economic and financial problems is reason enough to want them both gone. But now that we have clear, decisive evidence of Bernanke’s unreliability even on the level of Economics 101, it’s imperative that he be replaced as rapidly as possible.
Bernanke should do the honorable thing and resign, now. And if he won’t do that, then the new administration and the new Congress should do whatever is necessary to dismiss him for incompetence. All he has done is try to keep financial executives’ noses warm, but the economic temperature is still dropping.
Tuesday, November 11, 2008
Reasonably Irrational
I’ve been trying for some time to heap scorn on one of the central tenets of orthodox economics, namely the concept of the “rational investor.” Anyone who follows the markets can see quite clearly that investors behave irrationally at times, or have we forgotten the dot-com bubble? But the theory remains firmly in place, not because economists are stupid or because they’re deliberately trying to mislead people, but because the whole structure of mainstream economic theory would collapse without it.
Put simply, economists believe that economies and markets function efficiently because people naturally choose the courses of action that are most likely to give them the greatest benefit. In this obviously naive belief, economists are clinging to the ideas of those theorists of the so-called Age of Reason, such as John Locke and Adam Smith, who laid the foundations of our modern political and economic systems. For Locke and Smith and their like-minded contemporaries, “reason” alone is sufficient to guide all human life and unlock all the mysteries of existence, while “unreason” is all bad and a great impediment to our progress as individuals and as a society.
In particular, the thinkers of the 18th-century “Enlightenment” – many of them Deists, including a number of the founding fathers of the United States – identified “unreason” with traditional and “emotional” forms of religion. After all, they were keenly aware of the violent upheavals of the 16th and 17th centuries, when partisans on both sides of the Reformation engaged in repeated and vicious wars to promote or defend their theological positions.
These cutting-edge 18th-century opinions still hold sway with a large number of contemporary thinkers. Richard Dawkins, for example, in “The God Delusion,” voices the opinion that religious belief persists in our time mainly because of bad parenting (i.e., parents teaching their children religion), and if only we could rid ourselves of this irrational belief in the supernatural, the world would quickly enjoy unprecedented peace and harmony.
The main problem with this whole line of thought is that it takes into account only a small part of the human psyche while denying and devaluing the rest.
This was already the response of the Romantic movement, which followed close on the heels of the Enlightenment and celebrated the emotions and fantasies that had been swept out of the tidy Neoclassical worldview of Locke and Smith. The Romantics restored “irrationality” to a place of value and usefulness, perhaps even giving it too high an estimation; these swings of the pendulum do tend to carry to extremes.
It’s a bit ironic that the rationalists of the Age of Reason looked to ancient philosophy for support for their arguments, because the ancients actually had a much more balanced view of human psychology. In particular, Plato and his followers clearly divided the psyche into an irrational and a rational part, and though they did argue that the rational soul should rule the individual psyche, they contended that the psyche as a whole should aim to serve a higher, super-rational level of being. (To be technical, this “higher level” is called nous in Greek and is translated generally as “spirit” or “intellect,” depending on the inclinations of the translator; neither term really works very well, in my opinion.)
There are many, I’m sure, who will find it absurd to accord any value to irrationality. But consider: Are our sense-perceptions rational? Of course not; they simply report the facts of our environment to our emotions and our thinking. What about instincts? No, but they're pretty useful in keeping us from starving to death and so on.
What about emotions? Well, as Carl Jung pointed out, there is in fact a kind of emotional logic, which is why he defined "feeling" as a "rational function": We can rate and rank and judge things according to how they make us feel, good or bad, better or worse. And that kind of evaluation seems pretty important to our well-being. But in our modern worldview, dominated by the belief that “rationality” consists entirely of verbal or numerical logic, it doesn’t make the cut.
And let’s not forget the importance of irrationality in creativity, in making breakthroughs. Logical analysis just breaks things down or connects one existing thing to another; it doesn’t produce anything new.
However, ignoring or denying the existence or importance of these things doesn’t make them go away; instead, it simply sweeps them under the mental rug, into the unconscious – something else a lot of contemporary thinkers like to pretend is nonexistent. And from their lurking-place in our mental shadow, they can feed on our basic appetites and drives, and grow large and powerful enough to dominate us now and then, causing all sorts of embarrassing problems and bloody conflicts.
In addition, there’s a tendency toward the thoroughly unproven and frankly rather smug belief that “we” – that is, the intellectual inheritors of the Western (specifically, the Northwestern European) worldview – are the only really rational people, while “they” – all those mostly darker people in the rest of the world – are irrational (“medieval,” “emotionally volatile,” “politically immature,” etc. etc.) and therefore in need of our benevolent (of course) guidance (or the firm hand of a dictator chosen by us).
It scarcely needs to be said, but I’ll state that I don’t think “we” are as rational as some of us like to believe, nor are “they” as irrational. And in any case, I think we need to practice irrationality to some extent. You might say that the problem isn’t that we’re irrational, it’s that we just aren’t very good at it.
Labels: Adam Smith, Age of Reason, colonialism, Deism, economics, irrational, Jung, plato, rationalism, Western worldview
Sunday, October 26, 2008
Loose Morals
So far in this blog, I’ve done a lot of ranting about the role of randomness in the prevailing Western worldview and the resulting lack of “focus,” as defined in my first posting here. As I’ve indicated, I think it’s a wrong view of things, an inaccurate description, account or narrative.
But what difference does it make? What does it matter if scientists and economists and so on are working from a faulty conception of the overall cause and meaning of the cosmos? As long as they get the lower-level details right, and the electricity still makes my lights work and I can still click a link and look at stuff on the Internet, is there any reason to care about overarching theoretical stuff that may not be provable anyway?
Well, certainly in the case of economics, we’re seeing what happens when a wrong theory holds sway: Vast sums of money vanish in the blink of an eye, people lose their jobs, and political consequences follow.
In physics, there’s apparently some possibility that we might see even more disastrous results in a few months, when CERN’s big new particle collider, the Large Hadron Collider, is fired up again, after it blew a fuse – a faulty electrical connection, to be more precise – on the first try several weeks ago. Some physicists have expressed concerns that when their colleagues start smashing tiny bits of matter together, it might possibly cause the end of the world. Others scoff at that idea, though; I guess we’ll find out who’s right eventually.
However, I think we’re already living every day with almost equally disastrous results from this materialist-atomist worldview, because it leaves us with no “higher good,” no center-of-the-universe, no focus. What we’re left with is an absurdist value-neutral universe in which every action is pretty much as valid as any other. If, as Nietzsche proclaimed, “God is dead,” then by what standard do we judge our own or others’ words and actions?
The answer for Nietzsche and many others: the individual will, or what a lot of people might prefer to call the personal ego. From that perspective, “good” is what’s good for me, “bad” is what’s bad for me.
Amazingly enough, this position is in fact the stance of orthodox economics, though in that discipline the concept is sugar-coated with the notion of “rational agency,” which in essence claims that people (or at least those who make economic decisions such as whether to buy or sell stuff) act out of what some refer to as “enlightened self-interest.” That is, people are generally aware that they need to keep in mind the effect of their decisions on other people; for example, if you’re stealing food from others, you need to leave them enough so they don’t starve to death, if you want to be able to keep stealing from them.
But as we’ve seen in the banking industry, some people don’t get that part of the theory; instead, in their egotistical greed, they’re willing to burn their own house down to keep the fires lit. “Rationality” wouldn’t seem to have much to do with it, except in the sense that some of them were able to find plausible-sounding rationalizations for what they were doing.
Now, I’m not aligning with those upholders of religious orthodoxy who decry “situational ethics” and “moral relativism.” I think any ethics that doesn’t vary somewhat depending on the situation is too limited to be valid, and I think all morality is relative – relative to the true, final good.
I don’t agree, either, with those who claim it’s possible to establish a valid ethical system on a purely materialistic-scientistic basis. Any moral system that posits the “highest good” as some physical thing – prosperity, social order, the pursuit of scientific knowledge – will lead eventually to immoral results. For example, if you suppose that the highest human good is social order, you’ll inevitably end up making utilitarian compromises, seeking “the greatest good for the greatest number,” which means some “lesser number” will be hauled off to prison whenever it’s convenient, without violating your moral rules.
As for “scientific knowledge” as a “highest good,” it sounds nice and noble, but of course in the real world, research gets done when someone – the Pentagon, the pharmaceutical companies, the cigarette makers, etc. etc. – is willing to fund it.
So is there a way to find the true “highest good,” and to do so objectively, without recourse to traditional authority, such as religious dogma? I believe there is, and I’ll go into detail in a posting in the next few days.
Right now, I’d like to make a brief comment about reader comments. I’m delighted anytime anyone wants to leave a comment here. I’ve set it up so you don’t have to register or anything like that. I do say things from time to time that I think are fairly provocative, and I don’t mind anyone disagreeing or criticizing or challenging any of it. However, I won’t allow obscenity, libel or hate speech. So please feel free to critique, but please be grown-up about it.
Labels: economics, ethics, physics, science, utilitarianism