
Regulation and Productivity

Overregulation Is Making the United States Increasingly Non-Competitive
by John Hospers

People don’t enjoy having their lives regulated, whether they are children  rebelling against parental commands or adults whose actions are subject to  legislation by government. Still, don’t we need regulators with coercive power  such as only government has? What would happen if everyone could, without  penalty, forge checks, violate contracts, dump poisonous wastes into the soil,  and manufacture cars that are accident-prone? The market sometimes regulates  itself, but not always: people will often profit by causing injury and damage to  others.

The problem is that the watchdogs themselves are imperfect. They are  vulnerable to bribery and corruption, and most of all, operate with gross  inefficiency. Moreover, those who are entrusted with positions as watchdogs  often have an inordinate desire to increase their own powers. Regulating others  often gives them more satisfaction than their income does, and they spare no  effort to keep on increasing their own regulatory powers. And often nobody  watches the watchers.

I shall present three examples, deliberately taken from a diverse array of  activities, to illustrate how this problem affects the business community.

1. Environmental Regulation

Not many people set out to make their natural environment dangerous for human  habitation, or desire to render entire species of living things extinct. Laws  are enacted to inhibit those whose actions have this effect. Today, however,  regulations have become so all-encompassing that no business and no landowner  could long survive if all the regulations now on the books were strictly  enforced. For example, there are countless underlings in the Departments of  Interior and Agriculture who are empowered to say to farmers, “That mud puddle  in your back field is hereby declared a wetland,” thus making it no longer  permissible for farmers to cultivate such land although they still continue to  pay taxes on it.

Thousands of letters were sent out in 1993 from the Bureau of Reclamation of  the Department of the Interior, informing the recipients that the Bureau  intended to look for endangered species on their land. What if the landowner  refused to permit such inspection? Then, since the absence of endangered species  could not be confirmed by inspection, uncultivated parcels would be labeled as  habitat for endangered species.

What happens if a piece of land is declared a habitat? Strict controls on use  then come into play. “When the U.S. Fish and Wildlife Service had designated a  habitat-study zone, one family lost $60,000 worth of production a year.”[1] Since the zone is off  limits to crops, a farmer cannot replant there. Moreover, banks no longer will  make loans to buy such properties because they are aware that the buyer will not  be able to use the land for planting crops.

Congress passes a law; the “beef” in the law is the enabling clause, which permits the regulatory agency to make whatever regulations it deems necessary and proper to implement the law. Those who are subjected to the regulations must obey every one, however trivial or burdensome, or else receive large fines or even jail sentences. Usually the act is applied by its enforcers beyond the scope of what was envisioned when the law was passed. Already every landowner is subject to the intricately detailed provisions of the Clean Air Act, the Clean Water Act, the Safe Drinking Water Act, the Endangered Species Act, the National Environmental Policy Act, and so on. Landowners are drowning under a flood of regulations whose only benefit may be to the regulators, keeping them in well-paid positions at the expense of the taxpayers.

The mission of the National Biological Survey is “to catalog everything that walks, crawls, swims, or flies around this country.” To do this its agents must be able to enter every parcel of land in the United States—not every decade, as with the census, but on a continuous basis. “Landowners fear that the net effect will be to transfer de facto control of thousands more acres to the federal government.”[2]

2. Housing Regulations

In past decades, prior to the massive interference of the federal government, inexpensive housing was far less of a problem than it is today. Cheap rooms could be had, for a dollar or two a week, with no particular amenities and perhaps a bathroom down the hall shared by several tenants.[3] But in most American cities these “flop-house” facilities were torn down: “We can’t have people living like that.” The government demolished the buildings and built new ones at much higher cost. Most of those who had previously occupied these buildings could not afford the new ones.

To limit the cost to tenants, rent control measures were initiated, but of  course such controls only prevented new housing from being built, and massive  shortages developed. Who wants to risk losing money on real estate in New York  City? Landlords who can sell do so at a loss and get out.

But rent control is only the most notorious form of regulation. In most  states it is illegal to refuse to rent a room or apartment to someone because he  or she is a welfare recipient: the ultimate threat of the renter whose every  whim is not satisfied is “I’ll report you to the Welfare Board and then you’ll  never be able to use your buildings for rental again!” It is a pervasive desire  of landlords not to rent to welfare recipients; in general, owners say, they  have little sense of responsibility; they are “all rights and no  responsibilities.” Many tend to be slovenly and messy in their personal habits;  they demand privileges not in the contract; they leave lighted cigarettes where  there are no ashtrays, and leave the flushing of toilets to lesser beings.  Landlords do what they can to avoid renting to them, but if they say “I’m  evicting this person because she has dirty habits” they will be told “No, you’re  trying to evict her because she’s on welfare, and that won’t work.”

New regulations are constantly introduced to make ownership of rental  property more burdensome. Every door (in some states) must be equipped with a  large metal rod on a spring so that it will automatically close in case of fire.  (This costs about $50 per door.) With new regulations being continuously  enacted, the landlord’s margin of profit, already precarious, often disappears  entirely. Moreover, it profits the tenants to break some pipes or destroy some  electrical fixtures because they don’t have to pay rent until these are  repaired.[4]

Meanwhile a new state law (in Minnesota, for example) specifies that if the owner does not pay his entire property tax in the year it is due, the entire property can be confiscated the following year. (What happens if the owner has a bad year? The government confiscates the property, and may operate it at a loss, payable by taxpayers.)

3. Mining

In a recent Roper public opinion poll, people were asked their opinion of  each industry. Of 222 industries, mining ranked next to last; only tobacco fared  worse.[5] But mining was,  and is, more heavily regulated in the United States than in any other industrial  nation.

Mitsubishi Corporation of Japan decided to build a new copper smelter in  Texas City, Texas. Japanese officials were assured by state and federal  officials that all the relevant permits would be issued in 12 to 18 months. The  first application was submitted in June of 1989. Then came three years of  conflict among environmental groups, permitting agencies, and company  management. Air-and-water discharge permits had to be obtained; the U.S. Army  Corps of Engineers had to issue its own permit; and an assortment of permits  from state, county, and city agencies were also required—more than thirty in  all. The Army Corps of Engineers promised a decision within sixty days, but  waited 21 months.

Exhausted by the attrition, Mitsubishi finally cancelled the project. The new chairman of the Texas Water Commission said that when the company’s permit came up for review in four years he would demand zero discharge of waste water—technically a virtually impossible demand. The air discharge permit from the Texas Air Control Board would take most of a year; building the plant would take another two years, and less than a year after that the company would be faced with the zero-discharge requirement. For these reasons Mitsubishi abandoned the project in March of 1992. The company decided instead to build the identical copper smelter in Japan, where all the required permits were obtained in 14 days and the plant was built in 17 months. The president of Key Metals and Minerals Engineering Corporation, Dr. Thomas Mackey, wrote, “This action ended a marvelous opportunity for the U.S. to acquire a minimum-pollution energy-efficient modern copper smelter which would have been strategically located on the Gulf of Mexico’s coast . . . .”[6]

As a result of this and numerous similar incidents, Japan is ahead of the  United States in the development of mining technology. For many years the United  States was a net exporter of copper. Today the United States has been surpassed  in copper production by Chile. Gradually we are becoming non-competitive.

In 1992 Congress passed a bill which may seem trivial by itself but, taken together with a mass of similar ones, is a significant straw in the wind for the future of mining in America. As a result of the new legislation, whenever your company buys an electric motor you are now required to buy “the most efficient” one: 96 percent efficiency is now mandated, whereas the earlier requirement was 94 percent. So what, one might say; what difference does 2 percent make? The catch is that the motor must be 96 percent efficient when operating at full speed. The 96 percent efficient motor is more efficient at full speed, but it has less starting torque. In fact a conveyor belt could never get started with the newly required motor. But since the 94 percent efficient motor is no longer permitted, users must now go from a 96 percent efficient motor of 100 horsepower to one of 200 horsepower, just to get the motor started.

Once the 200-horsepower motor is running, it doesn’t require all that extra power; it can easily do with 100. But since the 94 percent efficient 100-horsepower motor that would do the job is now outlawed, it is necessary to use the 200. The extra energy is wasted, but no other option exists that is not illegal. By contrast, Japan can still use the 94 percent efficient motor. American equipment will be less efficient and more expensive, thanks to many laws such as this one.
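To put rough numbers on the claim, consider the following sketch. It is purely illustrative and not drawn from the article: it assumes a conveyor that needs 100 horsepower of mechanical output, takes the outlawed motor at its rated 94 percent efficiency, and assumes (hypothetically) that the oversized 200-horsepower replacement falls to about 90 percent efficiency when running at half load.

```python
# Hypothetical arithmetic only: the efficiency figures are illustrative
# assumptions, not data from the article.
load_hp = 100.0  # mechanical power the conveyor actually needs

# Correctly sized 100-horsepower motor at its rated 94% full-load efficiency
input_old = load_hp / 0.94

# Oversized 200-horsepower motor, assumed to drop to 90% efficiency at half load
input_new = load_hp / 0.90

print(f"94% motor, correctly sized: {input_old:.1f} hp of electrical input")
print(f"Oversized 'high-efficiency' motor: {input_new:.1f} hp of electrical input")
print(f"Extra input for the same job: {input_new - input_old:.1f} hp")
```

On these assumed figures, the legally mandated arrangement draws several horsepower more input for exactly the same job, which is the kind of waste the next paragraph describes.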

The new law does not save energy—it requires industry to waste energy. It  does its bit to make the United States non-competitive. It is assisting the  gradual process of de-industrializing America.

Conclusion

Regulation—actually, more suitably called “prohibition”—of limited scope is necessary to prevent people from harming other people, that is, when one person or group would otherwise violate the rights of others. But the vast majority of today’s regulations are not of this kind; they could better be called regulation for regulation’s sake. It is these that are eroding America’s industrial base and making the United States increasingly non-competitive in the world economy.

It was not always so: America today would be unrecognizable to those who lived here a century ago, thanks to the labor and ingenuity of many thousands of productive individuals—inventors, manufacturers, merchants, farmers, and countless others employed by them and associated with them. But in the last half century an opposing force has gathered momentum, threatening to bring these productive advances gradually to a halt. The conflict is between those who have created this vast array of goods and services, and those whose aim is to control the creators. Will the economies of other nations, not as burdened as ours by harassing regulations, replace the United States as the economic leaders of tomorrow? At present it is far from clear what the outcome will be.

Notes

1. Jeff A. Taylor, “Species Argument,” Reason, January 1994, p. 53.
2. Ibid., p. 52.
3. See William Tucker, The Excluded Americans: Homelessness and Housing Policies, Regnery Gateway, 1990, Chapter 4.
4. Albert Lee, Slum-lord, Arlington House, 1975.
5. Engineering and Mining Journal, December 1993, p. 14.
6. Ibid., p. 16-B.


Conversations With Ayn Rand Part 1

by John Hospers

From time to time I had heard Ayn Rand’s name. I had seen a few printed comments on The Fountainhead, but had never read it myself. I had read numerous reviews — mostly unfavorable — of Atlas Shrugged, and determined to make up my own mind by reading it when I was less busy. A cousin in Iowa wrote to me, “If you don’t read anything else this year, read Ayn Rand’s Atlas Shrugged.” I wrote her that I would do so as soon as I had finished writing my ethics book, Human Conduct. (Had I but known, I would have interrupted the writing of this book to read the new novel. But I had no idea then of its relevance to ethics.) The writing took every hour I could spare from classes. But before I had a chance to read Atlas, I read the announcement that Ayn Rand herself would address the student body of Brooklyn College, on “Faith and Force: The Destroyers of the Modern World.”

It was April 1960. I looked forward eagerly to hearing her. Little did I know how much the course of my life would be changed.

I had no substantial disagreement with the lecture, though I would not have come at the subject the same way. I made some notes about assertions that required qualification or should be stated less strongly, though I did not as yet appreciate the context in which her remarks were set.

When I spoke with her afterward and invited her to lunch at once, she accepted without hesitation. Nathan and Barbara Branden, who had brought her, returned to Manhattan. Ayn graciously consented to reserve an hour for discussion with me. That was at 12:30. We were still sitting in a booth at the restaurant at 5:30.

I have some (but far from total) recollection of our discussion. What I remember most vividly were her friendliness, her directness, her passionate intensity. She was totally serious, totally dedicated to ideas. Her dark eyes looked right through you, as if to scan every weakness. I remember that quite early on she said that she could provide a solution to every ethical problem. I was more than usually interested in this assertion.

I presented her with a problem that had recently occurred to me. A father is told by his physician that he has two choices with regard to his small daughter: If she has a serious operation on her leg, she will suffer much pain, but there is a 50 percent chance that eventually she will be able to walk normally; but if she does not have the operation, she will suffer no more pain but one foot will never grow, and she will be on crutches all her life. What should he decide? She admitted at once that she couldn’t answer that one — it represented no choice between principles, only a choice between applications of the same principle (one I would later identify as “rational egoism”).

The solution would depend on certain details resulting from our incomplete knowledge of the situation, rather than on the elaboration of a principle. Recognizing this, I accepted her answer. But that only brought another to my mind: If you are driving and, on rounding a bend, have a choice between hitting a human being or a dog, you would presumably spare the human being. But if the choice were between hitting a stranger and hitting your dog, what should you do? Surely you have more interest in preserving your dog than in preserving a person you have never met; and you would grieve more for the dog if it were killed, and so on.

This, she granted at once, was very difficult. There was indeed a conflict of principles here. On a scale of value, a human being is above a dog, for human beings embody many valuable features that dogs do not. On the other hand, on the scale of my value, my dog is more important. I thought she would say without qualification that I should save my own dog, but she didn’t. Was it that certain things should be done, and certain values achieved, regardless of whether they are conducive to my long-range self-interest? Or is it somehow to be made out that in the long run, all things considered, the saving of the stranger will be more to my interest (“no man is an island”), although it may not seem so to me at the moment? If she gave an answer, it was far from clear to me at the time.

But she gave me instant credit for “thinking of ingenious examples.” She did this many times during the course of our developing friendship.

We agreed to meet again at some unspecified future date. Meanwhile, I bought a copy of Atlas Shrugged and started to work through it. I would teach till mid-afternoon, work on my book most of the evening, and read Atlas as long as I could before retiring in the wee hours. I was so excited by it that only a great resolve to go against my inclinations, and an unwillingness to be sleepy the next day, kept me from reading it straight through.

About two weeks went by. I had finished Atlas (comments on it below). I received in the mail an invitation to attend one of the NBI lectures, the one in a series of 20 on aesthetics. I accepted gladly. It was probably the wrong lecture for me to begin with. Had I been asked to attend, for example, the economics lecture, I would have found it a revelation. Economics was virgin territory for me then. But aesthetics was the area where I had done most of my work, including my doctoral dissertation (later published as a book entitled Meaning and Truth in the Arts). I found a lot to criticize in the lecture, even though I found myself in general agreement with principal points in Rand’s aesthetic.

It was the examples that riled me most. I did not like to see Picasso and Faulkner (to take just two examples) relegated to the scrap-heap. Faulkner was no special favorite of mine, but I had a high opinion of his literary artistry and spoke in his defense. I was almost shouted down by members of the audience who apparently considered my action some kind of treason. Hugo and Dostoyevsky were favorites of Rand’s, and mine as well; but we came to loggerheads on Tolstoy. I mentioned in the discussion period that I thought Tolstoy was the keenest observer of details of nature and human behavior who ever wrote, and his ability to provide a rich and vivid impression through the selection of details was probably unequaled in fiction. Ayn responded that the plot in War and Peace was quite disconnected, with events not leading “inevitably or probably” into each other — which I granted was often true in this enormous saga. But I thought that individual scenes, such as Prince Andrey’s encounter with Napoleon, were tremendously vivid and uniquely moving.

After the lecture, I was invited to Ayn’s apartment. Nathan and Barbara were there for a while, but when they left Ayn noticed my copy of Atlas. She saw the notes I had written in the margins — comments for my own future reference, not intended for others to see. Ayn offered at once to exchange my earmarked copy for a new copy, inscribed to me. How could I refuse? “I didn’t necessarily comment on the most important parts,” I said; “I just marked what struck me or appealed to me for one reason or another, often highly personal.” She said that this didn’t matter, she wanted to see what I liked. And she put my copy aside for future reference.

She was in her best mood — more than friendly, full of enthusiasm and radiating benevolence. Before discussing the ideas in Atlas, she wanted to get my impressions of its aesthetic quality. I spent several hours going over this with her. I told her how impressed I was by its intricate structure, with a critical plot development in each of the ten chapters of each part, and a mini-climax at the end of each of the three main parts. I praised the development of the plot from one chapter to the next, the “rising action” as it proceeded from chapter to chapter, the richness accumulating like a snowball always gathering more snow on its downhill course. I showed by examples how a scene that would have been out of place earlier was perfect later, with further developments having intervened. I mentioned how the scenes were a combination of inevitability (given what went before) and surprise when they did occur. I extolled the clarity and vividness of the writing, and how I loved especially the total purposiveness of the work, proceeding without irrelevance like a coiled spring, constantly striving toward a goal. I also praised it as a mystery story — clues being dropped here and there, with rising tension resulting (where were the men going who kept disappearing from the scene?); and I praised the discovery of the motor at Starnesville, the discovery of why it had been abandoned, the whole story of Starnesville as told by the tramp on the train that was heading for its doom in the Colorado tunnel — the action rising to almost unbearable heights of suspense, while at the same time it served a philosophical purpose: how thrilling, how right, how perfectly it worked into the structure and texture of the novel. I mentioned that in other philosophical novels, like Thomas Mann’s The Magic Mountain, the philosophy was not integrated into the narrative and “stuck out like a sore thumb,” but that in her book they were perfectly integrated; a fusion, not merely a mixture.

She was radiant. I had not expected such a glowing reaction, though I knew that authors enjoy hearing praise of their work. I just assumed that she was getting this from all directions, and that my comments just added a minute amount to the existing pile. I learned only much later that she hardly got such comments at all: that people commenting on her work were either harshly critical, not understanding what she was doing or coming from vastly opposed premises; or they simply sang empty praises, uttering syrupy remarks with nothing for her intellect to bite on. Apparently I had appreciated the very qualities she had endeavored to put into her work. She seemed warmly grateful that I had discussed them at such length with her.

It was after 2 a.m., and we agreed to meet again at her apartment two weeks later.

At our next meeting I resumed the discussion of Atlas. Rearden was my favorite character, because he grows and develops through the pages. I thought her style was clear and eloquent, and more than eloquent in memorable passages like the initial run of the train through the Colorado mountains. But I thought that the parts that sparkled the most, and were the most vibrant with energy, were those in which there was a direct confrontation of ideas, as in Francisco’s encounters with Rearden, the dialogues involving James Taggart, and Francisco’s remarks about money. This was powerful presentation of ideas and high drama at the same time.

I could see the point of having characters with no defects, such as Galt and Dagny, but though there was a philosophic purpose in this I thought it detracted from the characterizations, which in Galt’s case most readers perceived as somewhat unreal. Nor could I fault her decision to make everything end well, though I found the “tragic” parts (such as Wet Nurse’s death) more effective in tapping the emotions. We had some disagreement about “acceptable types of fiction.” I had no objection to “gutter realism” in which a slice of low-life is portrayed, as in Zola’s novels, nor did I demand that the end-effect be inspiring and never depressing, as long as fidelity to human nature was not sacrificed. I admired, for example, Theodore Dreiser’s An American Tragedy and similar works of “naturalistic fiction” for which she had no use at all.

I had nothing but high admiration for Atlas as a paean to economic freedom. I had never thought much about the effect of government intervention in the economy, and I was totally convinced by her descriptions of this. Her economic message in the book hit me like a ton of bricks.

Nor did it take much for me to be convinced by most of her ethical tenets in the book, such as the admiration of independence and integrity, and pride in personal achievement. As a product of a Dutch colony in Iowa in which these virtues were instilled from one’s earliest years, I could resonate to all of this without difficulty. I especially enjoyed her attack on tired cliches like money being the root of all evil. I also shared her denunciation of altruism, if altruism was defined not as generosity (which I considered a fine thing) but as forsaking one’s own interests in order to pursue the interests of others. I hadn’t appreciated how much “love of others” could be appealed to in order to justify the major crimes of history.

She was amused when I told her the “parable of the concert ticket,” then circulating in philosophic discussions: A is given a concert ticket and wants to go to the concert, but being an altruist he gives his ticket to B, who also wants to go. But B is also an altruist, and is equally committed to forsaking what he wants in order to give to others, so B gives his ticket to C. And so on, until just before the concert the ticket goes to someone who doesn’t care for the concert and doesn’t even bother to go.

Other aspects of her ideas in Atlas would come out in future discussions. The philosophic tenets presented in Galt’s speech, for example, were partially (never entirely) chewed over in discussions much later. These things came to the fore in our discussions as the spirit moved. I shall reserve any description of metaphysical and epistemological issues for the second half of this memoir, although in historical fact these discussions were interspersed among our other conversations right from the beginning.

Early in our next meeting we agreed that Garbo was the greatest of the film actresses — an embodiment of intelligence, sensuality, and sensitivity — though Dietrich came in for some discussion, as did Marilyn Monroe, whom Ayn admired not as a sex symbol but as a vulnerable child projecting innocence and vulnerability. This, Ayn thought (and I agreed), was really the secret of her wide appeal.

We lingered fondly on works of art that had meant a great deal to us. We compared notes on plays, films, paintings, and musical compositions. When she said that her favorite dramatist was Schiller, I regretted that I had not known her in time to take her to see Schiller’s Maria Stuart, the best performance of a play (starring Irene Worth and Eva Le Gallienne) I had ever seen. It would have been great to introduce Ayn to that experience, to savor the work together.

The following week I did take her to see the full-evening Martha Graham dance Clytemnestra. She was very perceptive about what was going on, though unfamiliar with the medium of modern dance. She liked the dance more than the music, as did I. Frank was ill at the time, and she would take care to make dinner for him before we left, and would rush back afterward to make sure he was all right. Her solicitude for him was touching. But when she made sure he was in satisfactory condition, she returned to the living room and we resumed our conversation.

“Who is your favorite movie director?” was one of the questions she asked, presumably to sound me out as to where my likes and dislikes lay. “Fritz Lang,” I told her at once. She was instantly suspicious. “How did you know?” she said, frowning.

I was puzzled, then grasped what her suspicion was. “I didn’t know,” I said. I told her how as an adolescent in Iowa I had haunted the theater to see Fury, about a mob attacking a courthouse to lynch a man who turned out to be innocent (Spencer Tracy). I told her how I admired most of all Lang’s work Hangmen Also Die, about the World War II occupation of Czechoslovakia: its structural complexity — wheels within wheels, just like Atlas — and how impressed aesthetically I was whenever little hints were dropped here and there and apparently forgotten, but then picked up later when they turned out to be essential to the resolution. She sensed my enthusiasm, and her warmth and vivacity increased as I related to her (as if it were new to her) various hints dropped in Atlas that were picked up and used later on. Apparently her suspicion, that someone had told me who was her favorite director, had vanished. Indeed, in an unexpected burst of warmth, she exclaimed, “Then I love you in the true philosophical sense.” I was too surprised and flattered by this compliment to question what the “true philosophical sense” was.

I found it incomprehensible that she didn’t much like Shakespeare. But I could not disagree with her judgment when I asked her who she thought was the greatest prose artist of the twentieth century. She said “Isak Dinesen.” She didn’t like Dinesen’s sense of life, but thought her a superlative stylist — a judgment in which I concurred. On a subsequent occasion when I brought a copy of Out of Africa and read her a page from it, she was positively glowing. She disliked Dinesen’s pessimism, but loved the economy of means and the always-just-right word selection. When Ayn and I both admired the same work, and compared our reactions to it and the reasons for our admiration — that was a high point of our friendship. During these conversations the rest of the world was left far behind; nothing mattered but our experiences of these works of art. We held them up to the light, slowly rotating them to exhibit their various facets, like precious jewels. Ayn was all aglow when our reactions struck common ground: she was no jaded critic, but had the spontaneous enthusiasm of a little girl, unspoiled by the terminology of sophistication. Even today I treasure these moments, and can hardly think of them without inducing the tear-ducts to flow just a little.

We did get into a bit of a flap about Thomas Wolfe. I had grown up on his novels, and there were passages of his poetic prose that had become so close to me that I had them virtually memorized. I brought a copy of his Of Time and the River one evening and read aloud to Ayn, Nathan and Barbara a passage of about five pages — a part of the description of the young man (Eugene Gant), having left his native North Carolina for the first time, reflecting on his chaotic childhood as the train is pounding away all night through the hills and forests, propelling him forward toward the unknown (his first year at Harvard). I empathized with so much in the passage that I waxed quite emotional in the delivery of it.

When I had finished, Ayn proceeded to decimate it bit by bit. How could I possibly care for such drivel? It was anti-conceptual; it was mystical; it was flowery and overlong. I do not remember the details of the criticism (then as on many other occasions, I wished I had had a tape recorder with me). I remember that they all seemed to be valid points, and I was somewhat ashamed that my emotional reactions did not jibe with these rational ones. But I defended my favorable verdict on the passage with the observation that Wolfe has a tremendous evocative power, the power to generate very intense emotions by drawing on haunting memories of days past and setting them in the context of the present experience.

And then Barbara came to my aid. She said, very simply, “Wolfe is beautiful music.” And suddenly it struck me how true this was. I thought of Walter Pater, who said that all great art approaches the condition of music; and how Wolfe is as near as American literature has yet come to creating literary music.

Some of her other preferences I found surprising, almost unbelievable. I could see why she liked Salvador Dali, though I couldn’t see why she preferred him to Picasso. (My own favorite painters were the post-Impressionists — Cezanne, Gauguin, Van Gogh. She had no use for non-representational painting, though I liked Mondrian a lot — and I tried vainly to convince her that a line could be expressive even though that line was no part of a represented person or object.) I was most surprised of all by her musical evaluations. Of the classical composers, she preferred Rachmaninoff and Tchaikovsky, and not much else. I liked them too — I had none of the anti-Romantic bias that was then fashionable — but I was astounded that she didn’t care for Beethoven or Brahms, and that she didn’t like Bach at all. Bach and Handel were my favorites, though almost as much as these I liked certain pre-Bach composers such as Ockeghem, William Byrd, De Lassus, Victoria — none of whom she had heard of. I would bring records to her and play parts of them, but her tastes never changed. When she wanted an inspiring musical theme to introduce her new weekly radio program on the Columbia University station, I played for her some candidates: Purcell’s Trumpet Voluntary, the prelude to Wagner’s Meistersinger, Handel’s Dettingen Te Deum, and the introduction to the march from Berlioz’s The Trojans. Of all the pieces prior to the 19th century, she said, “These represent a static universe,” and cared to hear no more. So in spite of all my efforts, the final verdict was still Rachmaninoff. (Were these the composers she heard most during her girlhood in Russia, I wondered, and did they for that reason make the most powerful impression on her? I brought up to her the difference between differing preferences and differing evaluations. But she stuck to the view that her giving Rachmaninoff the number one place among composers was not merely preference but an “objective” evaluation — though, she added, in the case of music she couldn’t prove that the evaluation was the right one.)

We discussed the objective vs. the subjective in art. I suggested to her that a traditional Aristotelian canon such as organic unity was objective in the sense that the unity is actually to be found in the work (though it may need some pointing out), and that an indication of this was that the criterion had survived with variations for over  2,000 years. On the other hand, I said, there are times when it is less appropriate to say “That’s good” than to say “I like it.” For example, I tend to like massive works — Michelangelo’s Sistine Chapel, Bach’s B-Minor Mass. She, on the other hand, despite having written Atlas Shrugged, tended to like works small. She once showed me her study, where she had written the last half of Atlas. It was terribly cramped and small, but that was what she felt comfortable with — “infinite riches in a little room,” I told her. But the room would have given me claustrophobia within an hour.

This was the honeymoon period. There had been no major tensions between us on any issue. I did not have any idea how quickly her ire could rise. I thought we could discuss any subject as dispassionately as we were now discussing the arts.

She kept inviting me back. For many months I was at her apartment about once every two weeks. We would meet around 8 p.m., and usually agree on a cutoff time of midnight. But when midnight came we were always engrossed in a discussion we didn’t want to terminate, and the result was that I seldom left the apartment before 4 a.m. Occasionally we would talk all night, after which she would prepare breakfast for me and I would drive off to Brooklyn in the early hours of the morning.

Whenever I took her out to dinner, she made a point of returning the favor. She and Frank would typically take me to a Russian restaurant. She had no appetite for small talk. Even when I was trying to extricate the car from a tight parking place in front of her apartment, she would be raising philosophical issues. Seated in the restaurant, she would radiate benevolence, but she didn’t go in for jokes or humor — most of which escaped her completely. But once in a great while she would laugh like a schoolgirl. When I told her the tired joke about the two behaviorist psychologists meeting one another, the one saying to the other “You are fine; how am I?” she could hardly stop laughing. Apparently the joke exposed in condensed form the heart of a discarded (or eminently discardable) theory. Frank too was caught up in the humor of it. I came to value and respect him more and more — not as an arguer (he couldn’t do it, he left that department to her) but as a warm, benevolent human being with all the right instincts, and a largely unappreciated (at that time) artistic ability. I have nothing but good memories of him.

At Ayn’s suggestion I bought a copy of Henry Hazlitt’s Economics in One Lesson and it transformed my entire thinking about economics (not that I had done much thinking about it before). She gave me a copy of von Mises’ Socialism and I devoured that also. (She explained to me that she would not autograph gifts of books, if those books had been written by others.) Here I was the student and she the teacher. Though the conversation always turned to ethical implications, Ayn was not bothered if I asked her purely economic questions. I may have been the only person who learned free-enterprise economics personally from Ayn Rand. Much of her political philosophy had already come through to me in reading Atlas, but the conversations with her amplified it enormously. I had never given enough thought to political philosophy, and my conception of it (in relation to ethics) could have been summarized much as follows:

We each have different sets of desires, often conflicting with one another. We have to put a limit on our desires because, if followed out in action, they often get in each other’s way.

In traffic, we need rules of the road: you can’t drive on the wrong side of the road, you can’t pass cars on hills, you can’t exceed a certain speed, etc.

In life, we also need “rules of the road.” We have to refrain from doing certain things to one another, such as robbery and murder. So we need (1) moral principles, for people to obey voluntarily, and (2) laws, for people to be required to obey even if they don’t choose to do so voluntarily.

Not everyone will agree about what these rules should be. Should the rules prohibit adultery? abortion? deception or fraud? negligence? Should mentally incompetent people be excused from obeying them? And so on.

We can try to have the rules changed, but once a law is in force we should usually obey it. If everyone disobeyed laws when they felt like it, or even when they disapproved of the law, there would be much more chaos and less predictability in human relations, and all of us would be much less secure than we are now.

As readers well know, Ayn did not fundamentally disagree with most of these tenets. But she came at the whole enterprise in a very different way, much more precise than mine, and cutting lots of important ice in a variety of places.

When I first mentioned to her that I thought the government should do this or that, enact such-and-such a law, she would remind me that the government acts through coercion or threat of coercion: that if you want the government to tax other people for your pet project, you are in effect holding them up with a gun and forcing them to act in accordance with your wishes. You don’t wield the gun, but the government agent wields it on your behalf. And that’s all right if the government just protects you against aggression (retaliatory use of force), but not if it is to initiate aggression against others in order to achieve your ends. By the same token, why can’t it initiate aggression (e.g. forcibly raise taxes) to promote someone else’s ends at the expense of yours? If you can use force against A to make A support your favored project, why can’t A use force against you to make you an unwilling subsidizer of A’s project? It was all so obvious when pointed out, but I had never thought about it in that way before.

I had never formulated to myself Ayn’s precept, “No man should be a non-voluntary mortgage on the life of another.” But government helping one person at the expense of another is (Ayn reminded me) an obvious violation of this rule. If A’s life can forcibly be enslaved to fulfill B’s ends, why can’t B’s life be enslaved to fulfill A’s ends? And then it becomes a matter of who is strongest, or has the biggest gang.

I found Ayn most insightful of all on the topic of rights. (I later came to admire her paper “Man’s Rights” more than any other, though it was not yet written at the time of our discussions.) I had read much on that topic, but Ayn’s way of laying out the subject struck the jugular in a way that nothing else did. And gradually I came to treat more and more aspects of ethics and political philosophy under the rubric of rights. It also drew my thoughts toward a different magnetic pole: previously, my first question in evaluating a proposed law was “Whom does it benefit and whom does it hurt?” whereas Ayn’s first question was “Does it violate anyone’s rights?”

I had not thought of the American Constitution before as a distinctive rights-protector — protecting the rights of individuals against their encroachment by other individuals and (most of all) the government itself. And the rights defended in the Constitution and the Bill of Rights, she pointed out, were all of the kind that I called negative rights — rights which demand only from others the duty of forbearance, or noninterference. The positive rights, such as “welfare rights,” all demanded as duties some positive action, such as using part of your paycheck to pay for government projects which are supposedly for the benefit of others. Such subsidies of course violated her voluntarism principle (no one should be a non-voluntary mortgage . . . ). In time I supplemented this with another argument, that only the negative rights are consistently universalizable (applicable to everyone). That is: “I have a right to speak freely” can hold true no matter how many people there are, but “I have a right to part of your income” can hold true only when there are enough other people in society to provide it. If there are not enough givers and too many takers, the principle becomes impossible to apply. Ayn’s input was like a gust of fresh air on a subject (political philosophy) which I had previously considered too dull to pursue — at least the current literature was, if not the subject itself. Prior to knowing Ayn, I was not very happy with any theory on the subject that I knew about. I had realized that in a civilized society you can’t let persons do what they want with their lives (such as nothing at all) and at the same time assure them that all their basic needs will be taken care of, courtesy of the state — for where would the state get the wherewithal to supply these needs if many people remained idle or didn’t (or couldn’t) contribute to it? But I had not resolved the matter in my own mind, nor had I thought of the issue systematically until I was hit with a huge blast of clearly enunciated political philosophy from Ayn Rand.

Gathering diverse data into a neat system had always been exciting to me, and the Randian political philosophy stimulated me to consider the subject seriously for the first time. At the same time, I was skeptical about the acceptability of any system, particularly a neat and elegant one, and was always looking for exceptions to test the system. If truth could be obtained only by sacrificing neatness and elegance, then they would have to be sacrificed. I was worried, for example, about the welfare problem. I could see that once the government got hold of tax money for this purpose, it was an invitation to graft and corruption, and that people are not as careful with other people’s money as they are with their own. And it might indeed be true that in a free unregulated economy there would be such abundance that there would be little or no need for welfare, because private charity would bridge the gap. But I simply could not make myself be sure of this. I was not sure that people’s charitable impulses would be expressed in sufficient quantity at the needed time and place. I thought of children living in grisly slum conditions, fatherless and largely untended. The fact (if it was a fact) that at some future time when the economy would be free and far more prosperous than now, such people would not be in need thanks to private charity, was no help to them now — the help they needed was immediate, and the children’s situation was not their own fault. And I was quite sure that some parents would always be so lazy or incompetent that they could not (or sometimes would not) hold any job at all, no matter how prosperous the economy — the general prosperity would simply pass them by.

I was even more convinced of the need for universal education. Without it, many children with high potential would not have the benefits of education, and their talents would simply be wasted — don’t they all deserve a chance? I was all in favor of competing private schools (rather than a government-run educational system), but I wanted to make sure that private benevolence would get to the right place at the right time and in sufficient amounts. I found myself more sure of the need for universal educational opportunity than I was of a political theory in which education was no concern of the state. I agonized over this. Ayn never assented to the view that private charity was “guaranteed to be sufficient.” The recipient had no right to receive what was not freely given, and if not enough was freely given, that was unfortunate but not immoral; what would be immoral would be to force the giver to give (which would be robbery). Once you start nibbling away at a principle by making exceptions, you will be led to make further exceptions, and finally the whole principle will go up in smoke.

Why could Ayn rest comfortably with this, while I could not? The marvelous passage in Atlas Shrugged beginning “Stand on an empty stretch of soil in a wilderness unexplored by men and ask yourself what manner of survival you would achieve . . .” kept hammering through my mind. If you penalize those who make life economically bearable for the rest of mankind, what hope is there for future improvement? It is not only impractical, but immoral, to kill the goose that lays the golden eggs. At the same time, here are the horribly deprived children of the ghetto, finding themselves in a situation not of their own making from which they could not extricate themselves without help. I was unhappy, even ashamed, that I could not resolve this burning issue to my own satisfaction.

I would keep speaking of needs that could not be met through private charity — at least that was my fear. I would speak of the homeless and starving of the world. Each day’s headlines would call attention to more instances of this, usually in Africa or Asia. At last I think Ayn lost patience with me. Instead of agonizing over this, she said, I ought to take steps to ensure a free market in those countries. There is no greater creator of prosperity than the market. She was not against charity, she said. If a needy person came to her door, she would not say no. When she said this, I replied, “What of the thousands of people who can’t come to your door, because they’re too far away, too sick, too crippled, or are small orphaned children?” She then told me again somewhat brusquely that I was looking at the issue from the wrong end. I was viewing it from the point of view of the needy; I should look at it instead from the point of view of the producers of wealth — all charity would have to come from the surplus of their production (here she referred me to Isabel Paterson’s The God of the Machine). If production was not sufficient, these people would have to do without in any case. Charity must come from their surplus — and not a surplus wrung from them by coercive taxation, but whatever surplus they voluntarily chose to allot for this purpose. And then she described how an industrialist could do much more good by keeping his company solvent and his employees on the payroll than by selling it and giving the proceeds to charity.

And unless I came up with some new ideas on this subject, she indicated, she considered the subject closed, not to be brought up again. But the subject kept coming into our conversations, even though only peripherally. I remember, for example, describing to her the situation of a person who contracts a disease that requires thousands of dollars each month in medical costs, which he can’t afford, and which insurance companies won’t take on. “It’s not his fault that he contracted the disease,” I said. “And neither is it anyone else’s fault,” Ayn retorted. I did not pursue the subject, but I remember reflecting that from the fact that it’s nobody’s fault nothing follows as to who should pay. I could often tell from her tone of voice that she was on the edge of anger, which would break out if I pursued the issue. For the sake of future discussions, I would decide to drop the issue this time around.

On another occasion I mentioned the inequality in the educational system, which did not confer as much time or money on children from the slums, or on those who could learn in time but could not keep up with the rest. “And what about the geniuses?” she asked — the ultra-bright children who could go ahead much faster, but were kept back by the mediocrities. One genius, a Newton or a Pasteur, could improve the lot of all humanity, but many of them, she thought, had been stifled by the educational system catering to the dull-witted. I quoted to her once Anatole France’s statement that the rich have as much right as the poor to sleep under bridges. “And who built the bridges?” she shot back at me like a bullet. Nothing aroused her ire faster than quotable quotes from liberals and leftists.

I invited her one day to teach my ethics class at Brooklyn College, and she accepted at once. The students were impressed, but it would have taken much longer than an hour to make her line of thought come home to them. On another occasion she visited my graduate ethics seminar, at which she made some apt comment about the emotive theory of ethics (which we were then discussing). She expressed some surprise that I let my students take just about any position they chose. I did point out logical fallacies and inconsistencies, and tried to bring out the hidden presuppositions of views which I thought they accepted too hastily, but I was far from anxious in class to get them to believe whatever I myself believed. I could see that Ayn was less tolerant of deviant beliefs; I explained to her that I was more concerned with how they came to believe what they did.

I told her that I thought the great danger was to accept a view, even a true view, for an inadequate reason, or for the wrong reason, or no reason at all — or as an article of faith, because of a teacher’s magnetic personality. Such faiths, I said, could be adopted one day and discarded the next when another guru came along. Once students make their degree of conviction proportional to the actual evidence for a belief, they can be trusted to arrive at true beliefs themselves. It is the method more than the content that (I suggested) has to be taught — which was just what the American educational system was not doing.

She agreed, of course, that one should not accept beliefs on faith — though surely, I thought, she knew that many of her disciples came to espouse her views largely because of her personal magnetism. At any rate, Ayn wanted to guide them to “correct beliefs” more than I did, so as to be sure that they ended up in the right place.

We discussed many aspects of private property. Her view that all property, including roads, should be private was new to me, and fascinating. I remained a bit skeptical about roads, for it seemed to me that, like oceans, they are primarily ways to get from one place to another, and I didn’t think these should be in the hands of a private party who might be vindictive against certain persons or groups. The considerations that justified private ownership of houses and land did not seem to me to justify the private ownership of roads and navigable waters.

But our main disagreement occurred when I mentioned a car trip I had taken into the South when, as a student at Columbia University, I had been a fellow passenger with a black student. The moment we entered the South, there was no hotel or motel, and very few restaurants, that would accept him. I considered this grossly unjust; so did Ayn — an example of collectivism at its worst (racism being a particularly crude form of collectivism).  Our disagreement came when I said that motels should be required to serve persons regardless of race. But she held to her view that motels are private property and people should be able to admit whomever they choose on their own property. True, blacks were as entitled as whites to build motels, and then serve only blacks if they so chose. But the issue was academic — in view of history, and the economic status of most blacks, there just weren’t enough black property-owners in the South to make this a viable option. Again, I would make an exception to a principle in order to correct an injustice. And Ayn, perhaps seeing better than I did where this might lead, declined to make the exception.

I remember another argument we had, concerning censorship. Only government, she said, could be said to censor. I brought up the case of the Catholic Church censoring a book or film. She insisted that this was not censorship. A cardinal or pope may threaten excommunication for reading the book, but if one doesn’t like it one can leave the church that imposes such restrictions. The church can’t take away your citizenship or put you in prison. The government, by contrast, can do these things.

The question was whether these differences were sufficient to entitle us to say that it is censorship in the government case but not in the church case. One could slice that either way, I suggested. But suppose that I grant that the government can censor a film and the church can’t (i.e. what the church does isn’t censorship). What then of the following example? A book is published exposing the practices of certain drug companies and pharmaceutical houses. The drug companies don’t like this, but of course they can’t arrest anyone for buying the book. So they pay the publisher X thousands of dollars to withdraw the book permanently from circulation. The book is then as effectively stifled as if the government had banned it. Is that not censorship? No, not by Rand’s definition. Yet it has exactly the same effect as government censorship; would it really be false, or even unreasonable, to say that the book had been censored? Ayn opposed all government censorship, but she had no objection to the voluntary agreement between the publisher and the drug company.

One other aspect of political philosophy that seemed to bother Ayn as well as me was the problem of imperfect governments. A government that uses force only in retaliation against its initiation by others is entitled to our support. But every government in the world violates this principle (that force may be used only in retaliation). Even the act of collecting taxes is the initiation of force against citizens.

Under what circumstances then is a citizen obliged to do what his government decrees? What if the law says that you can’t use physical force to restrain the person who is in the process of stealing your car (you can’t commit a crime against a person to correct a crime against property)? That is the law in the United States; but suppose you don’t agree with that law. Must you obey it anyway? More serious still, what if the government itself is a rights-violator? Ayn would not say that the government of the U.S.S.R deserves our allegiance, or that we have a moral duty to obey it (e.g., to report our friends who criticize the government). But the government of the United States differs only in degree from such a government. Should we obey only those laws that do not violate the retaliatory force principle (that is, only laws in which the government is exercising its proper function, the retaliatory use of force against those who have initiated it, such as murderers and muggers)? But then are we free to ignore all the others, such as laws prohibiting polluting someone else’s property (or is pollution to be called a case of the initiation of force?)? It seems as if the phrase “initiation of force” isn’t very clear, and its application to cases far from obvious.

Suppose you head the government of Spain and the Basques rebel, seeking independence. Should you suppress the revolt or not? One view would be that you should suppress it in order to restore law and order, which after all is what government is all about — you can’t be expected to live in a state of civil insurrection. On the other hand, if you think the Basques have been dealt a bad hand for these many years, you will think their cause a just one, and if Spain suppresses the revolt then Spain is initiating force against those who only want their freedom. (And the same with Northern Ireland, etc.) I suggested that what you will call initiation and retaliation will depend on your sympathies. You will put down the rebellion if you think the Spanish are in the right; if you think they are not, you will encourage the rebellion in the cause of freedom (and perhaps argue that they are only retaliating against the past aggressions of Spain, in keeping them part of Spain when they wanted only to be independent). Let’s accept the non-initiation of force principle, I said. How to apply it in particular cases is very, very sticky. Your country may have started the war, but if you are a soldier and another soldier comes at you with a bayonet, you will retaliate (preventatively?) even though your country, or its government, had initiated the conflict.

What justifies government, I wondered, in raising an army and doing other things connected with national defense? Government, she said, is the delegated agent of the individual to act in his or her self-defense. (She described all this in her paper “The Nature of Government,” but that had not yet been written at the time of our discussions. Neither had any of her non-fiction works other than a very few short papers such as “Notes on the History of American Free Enterprise” and “The Objectivist Ethics.”)

But this worried me. What about people who don’t want the government to act for them in such a capacity — either they don’t trust the government to do this, or for some other reason don’t desire the government to act as their agent? Ayn’s view (as I remember it) was that the government protects them whether they want the protection or not. (For example, it protects insane people although the insane people can’t give their consent.)

I was also concerned about how such delegation occurred. I don’t remember delegating my right of self-defense to government or indeed to any other person or institution. No contract was signed, nor was there, apparently, even an implicit agreement. But then there was a discussion of what constituted implicit agreement. John Locke, I said, held that continued residence implies consent, but surely this is mistaken — did continued residence in the U.S.S.R. imply consent to that government? Like so many other issues, we played around with this one for a while without coming to any definite conclusion.

Ayn and I had very different attitudes toward nature. I liked vacations in the mountains, swimming in lakes, tramping through the woods. She cared for none of these things. The city was man’s triumphant achievement; it was not nature but man’s changes on the face of nature in which she reveled. She had (I gathered) broken Frank’s heart by insisting on the move to New York City from their estate in the San Fernando Valley, where Frank had been in his element. But she had had enough of nature. She spoke movingly to me of Russian villages in which anything manmade was treasured. She spoke of having to walk, as a child, with her parents, through the Russian countryside from Leningrad to Odessa, to live with their uncle and escape starvation (her father had been classified as a capitalist by the Bolsheviks, and left to starve with his family in Leningrad). “Why should I help to pay for public beaches?” she once said. “I don’t care about the beach.”

I liked fresh fruit for dessert, and tried to avoid pastries. She, on the contrary, loved pastries; perhaps the fresh fruits reminded her too much of the wild nature of which she had had her fill in Russia. She tempted me with pastries when she and Frank took me to a restaurant, and I of course gave in and devoured as much pastry as she did.

Other than the details just mentioned, she seldom referred to her early years in Russia. She preferred to discuss principles rather than specifics. But when I mentioned tyrannies and dictators, her voice would become hard and unrelenting. She almost sputtered in indignation at the mention of Khrushchev, who was then at the helm in the USSR. I suggested that there had been some improvement there since Stalin, and that people were being invited to write letters of complaint to newspapers, for example about pollution and industrial inefficiency. “So that they can smoke these people out and then arrest them!” she spat out, from as deep a reserve of anger as I had ever heard in her.

She may not have known much about psychology — and she admitted as much — but when it came to the psychology of tyrants, she was a master sleuth of human motivations. She knew, as if from inside, how tyrants think. And her voice, it seemed to me, contained the grim but unspoken residue of years of hurt, disappointment, and anger in being victimized by tyrannical governments and their incompetent and uncaring bureaucracies. (She specifically instructed me to read Ludwig von Mises’s little book Bureaucracy to see why bureaucracies always worked badly, and I did.)

I did not have the unpleasant associations with the wide open spaces that she did. I was concerned with conservation of natural resources, including wildlife, and worried about the deterioration of the soil and the extinction of species. I was concerned too about human overpopulation of the globe and its effect on nature, the animal kingdom, and man himself. She did not seem to share my concern. Nature was merely a backdrop for man. As for overpopulation, she was all for population expansion. She mentioned the vast stretches of Nevada and Wyoming, largely empty of human beings; the United States could double its population and still not be crowded. A capitalist economy could do all this and more. I did not deny that it could, but wondered how all these added people in the wastes of Nevada would make a living, and how they would get enough water, and what room would be left for wild animals and plants if the human race filled up all the cracks.

But expressing these worries to her struck no responsive chord; this was a vein that could not be tapped. The most vividly expressed concerns on my part evoked in her only a kind of incomprehension. Of course one could put this the other way round: that she could find in me no responsive chord by which to move me to the realization that these concerns were of no human importance.

I mentioned to her once that I thought the Europeans who settled America were in some respects more barbaric than the Indians they replaced: they robbed the Indians of their land, they decimated them with guns and smallpox, and robbed them of their food by wantonly killing their buffalo. What made the whites triumph, I opined, was not the superiority of their intellect or even the superiority of their political philosophy, but the superiority of their technology, specifically firearms. We had guns and the Indians didn’t; that was what defeated them, I said.

Native Americans were not among Ayn’s concerns. The greatness of the political ideal of the Founding Fathers overrode all the rest in her view. Not that she wanted Indians exterminated, of course — she wanted them to be a part of a nation operating on the principles of the American Constitution, citizens, voters, entrepreneurs if they chose to be. A proper government would have had a place for all races on equal terms. The shame that I, a descendant of some of these European intruders, felt at what my ancestors had done apparently was not felt by her. What should have been done if the Indian wanted no part of the white man’s government was a topic she never addressed; nor did she address whether, if the Indian had claimed all of America as his own because he had been here first, that claim should have been honored. That America had a functioning Constitution limiting the power of government and promoting individual liberty — this, in her view, was such an extreme rarity in the history of nations, and such a unique event on this planet, as to justify whatever trouble it cost. The view of the white man as an interloper on another’s domain was strange indeed to one for whom America had been a beacon of light in a dark world, and had meant the saving of her spirit and her very life.

On a visit to my parental home in Iowa I stopped to visit a colleague who had just returned from Peru. I had given Ayn my phone number in Iowa, and sure enough, she phoned. I remember asking her on the phone what she would say about the situation in Peru, where a few landowners (descendants of the Spanish conquistadors) owned almost all the land, leaving the native Indians little or nothing. Ayn remarked that if they didn’t use all the land themselves, but let it lie fallow as I described, they could make a lot more money renting it out to the native Indians, and in the course of time the Indians with their earnings could buy portions of it back, so as to own it once again. But that won’t work, I said — the Spanish purposely let the land lie fallow (some of the most fertile land in the nation), as a matter of pride, to show others that they don’t need to cultivate it for profit. Thus the Indians can’t even sharecrop any of it, and are forced to settle further up into the mountains on land whose soil is too thin to withstand the plow. I suggested that under such conditions a government policy of land redistribution was called for.

Such a torrent of abusive language against compulsory redistribution then came over the wire that my parents could hear it across the room. I could hardly get a word in. I had no idea that mention of compulsory redistribution would ignite such venom. I explained why I thought it was usually a bad policy, but added that under the conditions described it would probably be desirable, as when MacArthur did it in postwar Japan. But she would not hear of it. Dinner had been set on the table, and I motioned my parents to go on eating without me. But they didn’t, and by the time Ayn’s telephone tirade was over, half an hour later, the dinner was cold.

It was pleasant indeed to be invited to Ayn’s apartment to meet Mr. and Mrs. Henry Hazlitt and Mr. and Mrs. Ludwig von Mises. There wasn’t much shop talk, but it was wonderful to meet them and to socialize with them. (I later met with Henry Hazlitt numerous times in connection with his forthcoming book The Foundations of Morality.) I felt honored to be invited to join this distinguished company. I also enjoyed several luncheon meetings with Alan Greenspan.

I learned a good deal of economics from my conversations with Ayn. But once I put my foot in it. She was explaining why, if some industry were to be deregulated, the businessman would have to be given fair warning: without it, he would be unable to make the rational calculations he would need to make at the time.

I said nothing in response on that occasion. But a few weeks later, when she exclaimed that the New York taxicab medallions should be abolished at once, I said “But consider the taxi driver who has bought a medallion for $25,000 just before their abolition. He would lose that whole amount. Shouldn’t the taxi driver be given an interim period also for making his own rational calculations?”

She saw the point. “You bastard!” she exclaimed, and flounced out of the room to prepare tea. I could hear the cups clattering in the kitchen, and Frank trying to pour oil on troubled waters. When she returned to the living room she had partially regained her equanimity, but was still curt and tense.

I learned from that incident that it didn’t pay to be confrontational with her. If I saw or suspected some inconsistency, I would point it out in calm and even tones, as if it were “no big deal.” That way, she would often accept the correction and go on. To expose the inconsistency bluntly and nakedly would only infuriate her, and then there would be no more calm and even discussion that evening. I did not enjoy experiencing her fury; it was as if sunlight had suddenly been replaced by a thunderstorm. A freezing chill would then descend on the room, enough to make me shiver even in the warmth of summer. No, it wasn’t worth it. So what, if a few fallacies went unreported? Better to resume the conversation on an even keel, continue a calm exchange of views, and spare oneself the wrath of the almighty, than which nothing is more fearful.

At the same time, she was an inspiration to me. It was inspiring to talk with someone to whom ideas so vitally mattered. By presenting intellectual challenges she set my intellectual fires crackling in a new way. And she was largely responsible for renewing my spirits. I never got bored with teaching — I always enjoyed contact with students — but I had become discouraged about its results. A class ends, I seldom hear from the students again, and a new crop comes in with all the same errors and unquestioned prejudices and assumptions as the one before. I suppose this was to be expected, but I was often discouraged by the lack of improvement. Doubtless I could have noticed some if I had been able to follow the members of the class after they had had my courses. And as for changing the world from its ignorance and lethargy, there seemed little hope of this occurring; all the combined efforts of high school and college teachers seemed to do little to prevent wars or create happiness or even ease the human situation very much.

So I was surprised when Ayn said, “Yours is the most important profession in the world.”

I responded, “Important, but not very influential.”

“That’s where you’re wrong,” she said. “You deal in ideas, and ideas rule the world.” (I seldom quote Ayn directly, and do so only when I clearly remember exactly what she said.)

I objected rather lamely that I didn’t see any ideas molding the world, in fact that the world seemed quite indifferent to ideas. But she persisted that it was indeed ideas that ruled the world — and that if good ideas did not come to the fore, bad ones would rule instead. Nature abhors a vacuum, and it is when good ideas are not taught that a Hitler or a Lenin can come in, filling the vacuum, trying to justify the use of force (for example) against entire classes of victims, when even a modest amount of teaching about human rights would have shifted the battle of ideas and perhaps carried the day. She reiterated that it was ideas — specifically the ideas underlying the American Revolution — that had created the greatness of America. Prosperity had been a consequence of the adoption of these ideas; it occurred when physical labor was animated by an economic theory by which the work could be productive.

We came back to the subject many times, and I began to notice a new energy in my teaching, a new bounce in my attitude, as if the intellectual life was not fruitless after all, and as if I might even make a bit of real difference in the world. Not much in the whole scheme of things, to be sure; but later, when ex-students would say to me, “My whole life has been changed by your course,” or “Something you said at the end of your lecture one day years ago changed me forever,” the words not only buoyed me up, but made me aware of a fearsome responsibility. I don’t know whether I ever communicated to Ayn this gradual change in my professional attitude. In a way, she had saved my life. I wondered, much later, whether she ever knew this.

She did not take kindly to any recommended change in her writing, not even a single word. I was strongly in sympathy with this. Even if a word was appropriate in what it meant, it might not fit into the rhythm of the sentence or the idiom of the passage. But there was one occasion on which she gave way to me nonetheless. She showed me the typescript of her forthcoming introduction to Victor Hugo’s novel Ninety-Three. I then proceeded to read certain passages of it aloud to her. By this means, I convinced her that some passages were unidiomatic, and that certain words hindered the ambience rather than helping it. She went along with all my recommended changes. “Boy, do you have a feeling for words,” she said glowingly as she made the changes.

She was convinced that on my forthcoming trip to California I should call on her Hollywood producer, Hal Wallis. “He’s a movie producer,” I said; “I would have nothing to say to him. And he’d be about as interested in me as in a hole in the ground.”

Not so, she said. She said I had no idea what an intellectual inferiority complex these people have. “To have a philosopher come to them would be an honor to them,” she insisted.

But I had no idea what I would say if I did go; I would probably stand there with a mouthful of teeth. (And I never did follow her suggestion.) “Well, maybe I could write the script for the movie Atlas Shrugged,” I said, more than half in jest.

But at once she put her foot down, though in good humor. “Nathaniel Branden is going to write the script for Atlas Shrugged,” she said decisively, and that was that.

She reserved her best-chosen curse words for her philosophical arch-enemy, Immanuel Kant. She considered him the ultimate altruist and collectivist. Though not a Kantian, I did not share her extreme view of him. I invited her to read his book on philosophy of law, with its defense of individual rights, and certain sections of his Metaphysics of Morals in which he discussed duties to oneself. But it was all in vain. She insisted that these were only incidental details, but that the main thrust of Kant’s philosophy was profoundly evil. I did not consider him more altruistic than Christianity, and in some ways less so.

I did get her to acknowledge agreement, I think, with the second formulation of Kant’s Categorical Imperative, “Treat every person as an end, not as a means,” even though I tended to believe that the implications of this precept for ethical egoism might be ominous. And I told her that I thought she was also Kantian in her insistence on acting on principle (even though she and he didn’t share the same principles). I even thought that she shared some of his emphasis on universalizability: that if something is wrong for you to do it is also wrong for others (in similar circumstances), and that before acting one should consider the rule implied in one’s actions as if it were to become a universal rule of human conduct. She would praise impartiality of judgment as strongly as any Kantian. Sometimes, when we were discussing another view, such as existentialism, I would twit her, saying “You’re too Kantian to accept that, Ayn,” and she would smile and sometimes incline her head a bit, as if to admit the point before going on with the discussion.

The more I thought about it, the more I was convinced that the most fundamental distinction in practical ethics was between individualism and collectivism. Consider the American Civil War, I said. Assuming that it played a decisive role in eliminating slavery, wasn’t the result worth the loss of half a million lives? Yet it may well not have been worth it to the men who were drafted into the army to fight that war. The fact that it “helped the group” (the collective) may not have been much comfort to them.

Or consider the American Revolutionary War. It produced an enormous benefit, the founding of a free America, and was the most nearly bloodless of all major revolutions. Yet was it “worth it” to those who shed their blood fighting in the cause of independence? If you look at the group as a whole, the group was better off because those wars were fought; we’re glad that somebody did it. But if you look at the individuals, it was a case of some individuals sacrificing their lives so that others could live in freedom and prosperity.

Ayn’s response was that no human life should be sacrificed against that person’s will. If a person believes a cause to be worth it, such as freedom from slavery or oppression, then he may willingly sacrifice his life for that cause; but no one should be forced to do so. The sacrifices must be made voluntarily.

But are you enlisting voluntarily if you do it because you’ll be drafted anyway later? I wondered. Perhaps voluntariness is a matter of degree. And what if the Germans are invading France and the Germans draft all their young men and the French don’t? Then the French would be overrun and perhaps enslaved. To escape this fate, France institutes the draft. But this example didn’t deter Ayn. Then France is overrun, she said. (The principle of voluntariness must not be violated.) And maybe the prospect that this was going to happen would be sufficient to make most Frenchmen voluntarily enlist.

But then, I suggested, there is another problem: what is meant by “voluntary”?

You think about doing something, you deliberate, then do it. Nobody forces you or pressures you. Let’s take this as a paradigm case of voluntary action. On the other hand, someone with a loaded gun at your back says to you, “Your money or your life,” and you surrender your wallet. This is a case of coercion, and ordinarily we’d say you don’t give up your wallet voluntarily.

OK, now the problems begin. What exactly distinguishes these cases? Some say that a voluntary act is one of which one can say that just before it one could have done otherwise. Thus the patellar reflex and other reflex actions are not voluntary; you can’t prevent the response.

But all our everyday actions are by that definition voluntary, including our response to the gunman: we could have, just before surrendering the wallet, decided not to surrender it. That was within our power. (Indeed, some would say, “Under the circumstances, you voluntarily chose to give up your money.”) The result of using this definition is that practically all our acts are voluntary, even the robber example used as a paradigm case of not being voluntary. So, I said, let’s take another criterion for voluntariness. With the gunman you can still choose, but your choices are limited by his actions. (You can choose to give your life rather than your money, whereas without his intervention you would have kept both.) The gunman limits your choices. But so does the employer when he fires an employee, or lays him off because the factory is losing money. The employee’s choices are now more limited, limited by the employer’s actions.

But has the employer coerced him? Some would say yes, though he didn’t threaten the employee’s life as in the gunman case. Others would say no, he only limits the employee’s choices. Indeed, the rainfall that prevents you from going to the picnic also limits your choices as to what to do that day. Our choices are limited hundreds of times a day — limited by a wide variety of conditions, human and non-human. (Our options are never limitless in any case.) So that definition won’t distinguish our two paradigm cases from each other; there is something in both cases to limit our choices.

Let’s try another, I persisted: an act is voluntary if it’s not forced. But now what exactly is the import of the verb “force”? Did he force you to give up your wallet, since you could have said no? Is the child whose parents say to him “Kill your pet dog or we’ll never feed you again” forced to kill his dog? Are you ever 100 percent forced, except when you are physically overpowered and literally can’t do anything else?

But very few acts are forced in this sense. When we say “He forced her to go with him,” we need not mean that he physically overpowered her, but rather that he threatened her or even that he “knew what buttons to push” to get her to do what he wanted. Shall we say in that case that she did his bidding voluntarily? No matter which definition we employ, there are cases that seem to slip between the cracks. Thus, saying “He did it voluntarily” doesn’t convey as clear a piece of information as most people think it does.

I concluded that when people say “He did it voluntarily” they usually have no idea of the complexities of meaning that can be plausibly attached to that word; they have no idea which fork in the road they would choose in deciding which meaning of several to take. They just blurt out the word. And that, I suggested, is what philosophical analysis is all about — by suggestion and example (“Would you say this is a case of X? No? Then perhaps that one is?” and so on) to draw out the meaning behind the words — to pierce the veil of words so as to get hold of those meanings. But the words constantly obscure this, often in a bewilderingly complex way. Yet it’s important if we are to keep from blurting out some quick and easy verbal formula. It’s not easy, and takes a lot of practice; as Brahms said of his second piano concerto, “It’s not a piece for little girls.”

But there it is: the difficulties are there, not only for “voluntary” but for “free” and “caused” and “responsible” and “intentional” (to take a few from just one area of philosophy). These are especially dense philosophical thickets, which require lots of thankless untangling. Most people haven’t the heart or the will to go through with it. I fear my little lecture was pretty much lost on Ayn. Her philosophical aspirations lay in an entirely different area. And in time the tension between these approaches to doing philosophy was probably what marked the beginning of the end for us.

(Originally published in Liberty magazine, 1987)

When most people talked philosophy with Ayn Rand, the relationship was student to teacher. But with Rand and John Hospers, it was philosopher to philosopher.

Conversations With Ayn Rand Part 2

by John Hospers

Ayn occasionally expressed some disquiet (perhaps resentment) that she was not recognized as a philosopher by the contemporary philosophical community. In spite of long philosophical passages in Atlas Shrugged, philosophers had never taken note of her views, and her philosophizing in Atlas had largely fallen on deaf ears in the academic community.

I told her that philosophical discussion goes on almost entirely in philosophical journals. What about philosophical books? she asked. “Yours is a philosophical book,” I said, “but it is a novel. It’s not that philosophers don’t read novels—though a lot of them don’t—but they don’t consider it their professional duty to do so.” Besides, I added, she had acquired a right-wing image in the popular press, and that is a position that most academicians are strongly opposed to. There were a few well-placed curses from Ayn about the prejudices of the “liberal establishment.”

I told her that if she wanted to become known in philosophical circles, she should write a piece or two and submit it to the Journal of Philosophy or the Philosophical Review or the Review of Metaphysics. After its publication, I said, it would be studied, commented on, and probably criticized. She would then respond to these criticisms, which again would evoke more from others, and at that point, I said, “I guarantee that you will be known as a philosopher.” But she never did this. She did not want to enter the arena of public give-and-take with them. She wanted them to come to her. What she wanted of philosophers, other than recognition, is not easy to say. I am sure she would have cursed them soundly if they offered criticisms. Even a mild criticism would often send her to the stratosphere in anger.

At the same time, I must add, she would often tolerate criticism, even revel in responding to it, if (1) it was given “in the right spirit” (the vibes had to be non-hostile) and (2) it was sort of “on the right track”—the sort of thing that could be said by someone who was “on his way to the truth” but hadn’t yet arrived there; then she would “correct him” painstakingly and in detail.

I sometimes pondered how people could approach so differently the enterprise of philosophy. I thought of the composers Igor Stravinsky and Richard Strauss; each occupies a high place in contemporary music, but neither could tolerate the other’s musical idiom. Similarly, was it just a difference of style among philosophers? Surely not. Each comes to philosophy as a satisfaction for a felt need. I had been “burned” early on by over-eager philosophic generalizations, and I was weary of systems in which different philosophers said opposed things, with no apparent way of resolving the issues in favor of the one or the other. I had come to the conceptual-analysis route as a way of resolving (or sometimes dissolving) problems that had long haunted me. Ayn had aimed instead at a “final philosophical synthesis,” and regardless of its strengths or weaknesses, that is what she had to present to the world.

Human beings are distinguished from all other creatures by the power of choice. I agreed with Ayn about this—we know that the dog scratches at the door but we don’t know that he chose to do it (nor do we know that he didn’t). But I tended to disagree with Ayn about some of the things that (according to her) we choose. Do we really choose “to think, or not to think”? I for one (I said) don’t remember making such a choice. I would often think about things, perhaps because I am a questioning sort of person and don’t usually take things on faith. Yes, often when confronted by a specific problem, I have said “I’ll think about it.” But when my first acts of thinking occurred I no more chose “to think or not to think” than I chose “to be or not to be.” But more than that, I considered the scope of human choice to be much more limited than she did. Some limitations we would both agree on: a dunce can’t choose to be a genius, and a crippled person can’t choose to walk (he can only choose to try, unsuccessfully). Without practice a person can’t choose to do shorthand or typing at 60 words a minute. Neither can a person, just by choosing (or even by choosing and trying), extricate himself from situations that have been years abuilding. An obsessive-compulsive cannot just stop doing whatever he obsessively has been doing for years, such as putting the key in the lock three times and then tapping the floor three times (or whatever his ritual is). And if a teenager ran away from home to escape alcoholic parents and now has lived on the city streets for two years, she can’t just suddenly “straighten out” and become a normal citizen—the gutter-instincts (survival by any means) are just too strong by now. And so on for thousands of cases in which we may unthinkingly believe people could have chosen to do what we want them to do.

At this point in my diatribe Ayn reminded me that people do escape from the slums, that with determination they overcome seemingly impossible odds and sometimes become leaders in society. Prepared for this observation, I granted that it was true; but the fact that one person, A, can do this, doesn’t show that other persons, B, C, and D, can also do it. Each of them acts under somewhat different conditions from A.

They have one common denominator, slum upbringing; but some had the love and trust of their parents, and the wherewithal to prepare them to surmount adversities, and others did not; some had father-figures with whom they could identify; and so on. (If a person tries hard enough, he will succeed; but what is meant by “hard enough”? Would you call it “hard enough” if he did not succeed? Doesn’t the statement come to the tautology “If you try till you succeed, you’ll succeed”?)

Anyway, all this preparatory conversation was so much chaff in the wind, for Ayn hit me with the charge that I was sure she would come up with sooner or later. “You don’t believe in freedom at all, you are a determinist.”

I knew what dense philosophical thicket lay in wait here, with vague and overlapping meanings of crucial terms like “free,” “determined,” and “caused.” I hesitated even to embark on it. One must come at the issue from so many different aspects, breaking one stone and then another along the way—and most people lack the tenacity to go through it all; they want quick and easy solutions, so that they can repeat certain verbal formulas and convince themselves that they have the problem mastered. So I began simply: “Determinism is just universal causation. Everything that happens has some cause or other, that’s the core meaning of ‘determinism’ (to which other meanings have sometimes become attached). The causes may be matter or mind, spirits or God—all that determinism says is that everything has a cause, even if we never find out what all the causes are.” This was determinism in its most neutral, vanilla-flavored sense, without the punch it was supposed to pack, for there was nothing in my formulation that made it incompatible with freedom, yet that supposed incompatibility was the main feature which led many people to oppose it.

Of course, I continued, if everything is caused, events in human life are caused too. Every decision you or I make is caused. But so what? I decide to rake the leaves because I think the lawn looks unsightly. So what’s so hostile to freedom in that? Would it be better if I causelessly raked the lawn?

But of course, no matter how many actions are caused by decisions (or other things going on in the mind), ultimately these events in the mind are caused by things that take place in the world outside the mind. They may be hereditary factors or factors in the environment, all very complex indeed, but if my decisions are caused, so are the factors that caused them, and so on back. And over the hereditary and early environmental factors I had no control at all. So am I really free?

Once the term “free” is raised, more clarification is called for. (I discussed this with Ayn at much greater length than I have indicated here.) The word “free,” I began, does have a use; it does describe something. Ordinarily we say that I am free when I am not coerced, when no one has forced me to act as I do; I act as a result of my own choice, unforced and unconstrained by others. If she marries him because she wants to, she does so freely, but if she is dragged to the altar she is forced. This is a rough-and-ready distinction that everyone understands and uses. Does determinism (I said) really deny this? Determinism says “My act is caused”; freedom says “I caused my act.” The difference is between the active and the passive voice. Ayn started to object, but I went on. Sure, you can find causal antecedents of human actions in the brain, in the environment, in parental influences—in such complex causation as this there are antecedents to be found all over the place. Most of the factors, however, we don’t know at all, such as what makes one person make this decision and another person in the same circumstances make a different decision. In the human realm we are very far from having established determinism as we have done in physics and astronomy, where we can predict an eclipse to the split-second a hundred years ahead. Determinism asserts the universality of causes in the human realm without having gone nearly as far toward proving it there as has been done in the natural sciences.

Ayn expressed the belief that in the area of human choices, there are indeed causes, but that a person in so acting is self-caused (causa sui). I expressed doubt as to what this could mean. If something is caused, isn’t it caused by something else, something other than itself? How could my decision cause itself? Cause has to do with origination, and how could the origin of choice X be choice X itself? We can say, truly, that I caused my choices—that I, a complex set of actual and dispositional characteristics, caused this act of choosing to occur—but that is not the same as saying that X caused X. I was not able to see causa sui as anything but a desperate attempt to escape “the dilemma of determinism.”

At any rate, what I wanted to make crystal clear to Ayn was that the “principle of determinism” (or Causal Principle), that everything that occurs has a cause, is not merely a statement (true or false) about nature’s workings; I tried to give her a sense that it had a much more complex and ambivalent epistemological status than that, which rendered labels like “true” and “false” extremely dubious. I tried to make the epistemological point very simply. Suppose a chemistry student gets some quite unexpected results when he repeats a laboratory experiment. He then reports to his teacher that the same effects don’t always arise from the same cause: he set up the experiment exactly the same both times, yet got different results (an orange precipitate in the first case, none in the second). Conditions C produced result E-1 the first time and E-2 the second time—different effects from the same cause! Yet his teacher wouldn’t tolerate this for a moment. Maybe he had some evidence that the C’s weren’t the same—he might find an impurity in the liquid the second time that wasn’t there the first. But more usually he had no evidence at all—there was a difference in the E’s, he reasoned, so there had to be a difference in the C’s. And we would say this whether we know it or not, whether we ever discover it or not.

And so on in general, I said. If after repeated trials we discover the cause of something, we say that confirms the Causal Principle even more; but if after repeated trials we fail to discover the cause, we don’t say it had no cause, but only (and always) that it’s there but we haven’t discovered it yet. Isn’t this a remarkable asymmetry? Isn’t this very peculiar—a principle that discoveries confirm but no discoveries can disconfirm? A principle that parades as a truth about the world, yet is apparently immune to refutation by discoveries about the world? What does this show? Isn’t there “something funny going on” here? Aren’t we trying to run with the hare and hunt with the hounds? Isn’t this asymmetry a ground for suspicion?

I was not sure whether Ayn followed the direction in which I was pointing, but I went on. I suggested that the much-vaunted Causal Principle was not a statement about the world at all—not like “All birds fly,” which can be disconfirmed by finding a few ostriches. That which can be confirmed by experience but not disconfirmed by experience is not a statement about the world. It might be an a priori truth, like the Law of Identity, not subject to, and not requiring, confirmation by experience. But I could not think it a priori because it made claims about nature which, I suggested, could only be confirmed by observing nature—which can’t be done from one’s armchair. Instead, I suggested that it was a kind of scientific rule-of-the-game (“heuristic maxim”) that has stood us in good stead because in using it in the past we have found lots of causes, but one which we don’t permit to be disconfirmed, for there is nothing we would count as disconfirming it. It’s a rule, the following of which has pragmatic value—it helps us to find more causes; but since it isn’t falsifiable it doesn’t count as an empirical statement, which is what it would be if it were like “All birds fly” or “All bodies gravitate.”

Something may look like a plain and simple statement about the world, the only question about it being “Is it true or false?” But what looks like a statement needn’t be a statement, and perhaps this one isn’t—instead maybe it’s a rule that we use to guide our future scientific activities, or an expression of faith in some ultimate uniformity of nature. And if it has that status, then our talk about the Principle of Determinism being true or false is mistaken from the outset. We have been misled into thinking it has any such simple true-false status.

I could not expect Ayn or anyone else to grasp the import of this at once: to someone who has spent most of a lifetime asking “Is it true or is it false?” it is disorienting and mind-blowing to be told that this distinction may not be applicable to the question at hand. One has to see how this approach can be applied to other philosophical problems (not just determinism), and how it clarifies or dissolves those problems rather than leaving them forever intractable. But to appreciate all this requires much more one-on-one philosophizing than I had done with Ayn. I had high hopes that we might yet do it. But whether it was the defects of my presentation or her disinclination to think outside the traditional categories with which she had operated for many years, I was never able to get far with her on this—it remained terra incognita to her, and her responses seldom indicated that she had grasped the true import of what I had said. It seemed to me that she failed to appreciate the subtle shifts of meaning of crucial terms that often occur midway in a discussion, and result in total confusion unless the shifts are pointed out when they arise. She seemed to have a number of ideas packaged together under the heading she called “determinism” and assumed that the term retained the same meaning in its various contexts of use (a common enough error). One example that I particularly remember is that she would say that according to determinism a person never could do other than he did; and that if exactly the same circumstances were to arise again (according to determinism), the same result would occur. “And if the same thing didn’t recur,” I said, “then you’d conclude, without further evidence, that some factor in the circumstances leading up to it was different this time. And you would say it,” I insisted, “as an a priori assumption, without any independent evidence that any of the conditions were different.” I remember using this analogy: A says “All swans are white,” and B replies that there are black swans in Australia; to which A replies, “If they’re not white, they’re not swans.”

I tried to open up to her the logic of the word “could.” I said that “could” is an ability word: when someone says “You couldn’t have done otherwise,” this charge invites the retort, “Not even if I wanted to?” And of course if I had wanted to I would have done something different—I would have continued reading the paper instead of going to the kitchen. My wanting to do X instead of Y could well be the deciding factor that caused me to do X instead of Y. So, I said, it isn’t true that I couldn’t have done Y; I would have done Y if I had wanted to.

But the next step, of course, was “According to determinism, you couldn’t have wanted anything other than you did.” But what, I said, does “couldn’t” mean in this sentence? That I wouldn’t have wanted anything else even if I had wanted to? No? If not, then what does “could” mean in this sentence? I suggested that it would be preferable to say that if exactly the same conditions were repeated the same event would have happened—and then show the unprovability of that statement because of the impossibility of tracking down all the conditions.

Ayn was impatient with such subtleties. When we recapitulated, she would always return to the position that if you are a determinist you believe that nothing could have happened except what did happen. And once again I would inquire what “could” might mean in that sentence—and we would start on the merry-go-round once again. Of course, I went on, there are (as usual) other senses of “could” as well, not specifically applying to human action. We may say that when you let go of this pencil it could not fly upwards, that it could not do anything but go downwards in accordance with the law of gravity. But that is only to say that the downward motion of the pencil is the one that accords with laws of nature. That is, if you assume certain laws of physics, then the pencil could not (logically could not) have moved in any other way. The “could” here is a logical “could” (not an empirical one) expressing the logical connection between statements—statements of the laws of nature, statements about the mass and volume of the pencil, and a third statement (the conclusion) about the behavior of the pencil. We can say that granted certain premises, this behavior could not have been other than it was. (But, I added, saying that the pencil could not have behaved otherwise is already a departure from the central meaning of “could,” which has to do with ability.)

I never made much progress with her on determinism, but when we talked one evening about a specific kind of causation—extra-sensory perception—I evoked in her an unexpectedly vigorous response. I do not remember how the subject arose, and I didn’t even consider it a philosophical area of discussion, but I was describing to her Soal and Bateman’s book Experiments in Parapsychology. I explained that out of thousands of tries, a few people made very good subjects; they were able to state with considerable accuracy truths that (as far as we knew) were discoverable only by sense-perception, but which they could not have known through sense-perception.

A man was sealed into a room evening after evening, and there was no possible communication between this room and another room three doors away—there were scientists who averred that there was no way a person in Room 1 could convey information to someone in Room 4. In one of these sealed-off rooms, cards were pulled from a deck at the rate of one per minute. Every minute a bell would ring, at which moment a card would be pulled from a deck in one room and the subject in the other room would write on a piece of paper which card he thought it was. There were five different kinds of cards (apple, elephant, etc.) and thus one chance out of five of guessing correctly. Getting the correct result slightly above chance (20 percent) for a time wasn’t particularly noteworthy, but getting results like 40 percent correct over 100,000 attempts was quite remarkable, the chances against this being some trillions to one. Yet several subjects were reported to have done just that, and no one knew how. Ayn looked skeptical but allowed me to proceed.
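To give a sense of how long those odds are, here is a minimal sketch of the arithmetic, using only the figures reported above (100,000 guesses, a one-in-five chance of a hit on each, a 40 percent hit rate) and a standard normal approximation to the binomial distribution; the code is purely illustrative and is not drawn from the original studies:

```python
import math

# Figures as reported above (illustrative only): 100,000 guesses,
# five card types (a 1-in-5 chance per guess), and 40 percent hits.
n, p, hits = 100_000, 0.2, 40_000

# Mean and standard deviation of the number of hits expected by chance.
mean = n * p                       # 20,000 expected hits
sd = math.sqrt(n * p * (1 - p))    # about 126.5

# How many standard deviations above chance is a score of 40,000 hits?
z = (hits - mean) / sd             # roughly 158

# Normal tail estimate: P(Z > z) is about exp(-z**2 / 2) / (z * sqrt(2*pi)).
# The value underflows double precision, so compute its base-10 logarithm.
log10_p = (-z ** 2 / 2 - math.log(z * math.sqrt(2 * math.pi))) / math.log(10)

print(f"standard deviations above chance: {z:.1f}")
print(f"chance probability: roughly 10^{log10_p:.0f}")
```

On this rough estimate the chance probability comes out around one in 10^5400, so "some trillions to one" is, if anything, a considerable understatement of how unlikely such a score would be by lucky guessing alone.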

Moreover, I went on, the subjects had improved with practice. From a fifth they had gone gradually to a quarter and even to a third. No one could figure out how they got the ability to do this. They themselves didn’t know: they weren’t aware at the time that they were guessing correctly, they just “put down the first thing that popped into their heads.” And then the rules of the game were changed—”You will now write down the card that was being pulled last night at this point in the sequence”—and their achievements vanished (went down to chance), but came up again with practice to the previous fraction.

And then, most curious of all, the rules were changed once more: “You will write down the card that is going to be pulled at this point in the sequence tomorrow evening.” Again the results went down to chance, but again with practice the record gradually improved. But the implications of it shocked me: How could they possibly know the future? What if between tonight and tomorrow night the entire building burned down? And so on.

Ayn was now taken quite aback, and thought I should give no credence to any of this. It implied reverse causality, she said, and that was impossible—something at a later time causing something at an earlier time. I agreed that reverse causality was impossible—such as the rain tomorrow helping the crops grow today. But I didn’t think the example involved reverse causality but only precognition. We all predict the future, I said, usually with some evidence; what made this case peculiar was the ability of the person to make a correct prediction again and again without apparently having any evidence whatever. (At least there was nothing known to science that we would call evidence.) That was what I found different about this kind of case, and I couldn’t think of any explanation.

Ayn was quite shocked that I would take any of this “mystery-mongering” seriously. (It was hard to convey briefly the import of entire books on the subject, and the extraordinary lengths to which people had gone to make sure there was no sensory route by which A could have known B.) Didn’t I know that reality does not work in that way? Perhaps so, I said—and I added I didn’t much care whether reality does work in that way or not—but whether it does or doesn’t is not something we can know by just pontificating about it from our armchairs: we have to go the difficult route of empirical investigation to find out whether people can know truths about the universe that are not mediated through sense-organs. One cannot know this a priori, I claimed; one has to go the more difficult route of checking it all out in detail. But I gathered that she considered this all a matter of necessity—that it was necessarily the case that nature doesn’t work in this way. She was more disturbed about my permissiveness on this subject than I had thought she would be. Instead of saying that nature can’t work in this way, the question for me was whether in fact it does; if it does, then it won’t do to say that it can’t.

For me, the question of what caused what is entirely a contingent matter, on which we can make judgments only in the light of observation of the world. But it dawned on me that Ayn didn’t accept the distinction between necessary and contingent at all. For her, it seemed (though I never got it in just these words) every statement that is true is necessarily true. “Doesn’t everything that happens have to happen?” she once asked me.

I replied that one would first have to inquire about the meaning of the phrase “have to.” In most locutions, “have to” involves a command or order—“I have to be in by midnight.” When one says that events in nature, such as a comet entering the earth’s atmosphere, have to happen, it sounds first off as if this event is being commanded, perhaps by God. But this is surely not what most people mean when they say it. Perhaps we mean that if one accepts certain laws of nature (concerning gravitation, mass, velocity), and if one grants certain initial conditions (Comet X is in such-and-such a position at such-and-such a time), then Comet X must be at another place at a specific other time. (Not that the comet must—but that the statement—the conclusion—logically must be true if the premises are true. The “must” is about the relation between statements, not about phenomena in nature.) When I say that if I let go of this pencil it must fall, doubtless I am saying that the statement that it does (or will) follows from certain laws of nature plus initial conditions. But it would be clearer if I just said that the pencil will fall.

There are many uses of “must” and “have to” (I took her through several more) and I told Ayn that I thought she was telescoping several disparate uses of the term “must” into one, without distinguishing among them, and that this might be why she was led to make such a statement as “whatever happens must (has to) happen.” (If you take it quite literally, I said, it seems like a more extreme fatalism than any view I have ever countenanced.)

Ayn usually let me take the initiative in deciding what subjects we should discuss. The conversations described in this paper reflect largely my choice of topics—these were the things about which I was interested in sounding her out. I reflected later that in this respect I had probably made a mistake. Only occasionally did we get around to discussing topics that were central to her philosophy. That is why some topics central to her are largely absent from these pages. Her papers on these subjects had yet to be written.

“A is A” is, I insisted, a tautology, but an important one: every time a person is guilty of a logical inconsistency he is saying A and then in the next breath not-A. Thus “A is A” is something of which we need to remind ourselves constantly. But it is not, I said, an empirical statement: we don’t have to go around examining cats to discover whether they are cats. (We might have to examine this creature to discover whether it is a cat.)

But, I said, statements of what causes what, such as “Friction causes heat,” are empirical statements; we can only know by perceiving the world whether they are true. How, I wondered, can the Law of Causality be merely an application of the Law of Identity? You could manipulate the Law of Identity forever and never squeeze out anything as specific as a single causal statement.

But (I went on) I could see how such a confusion might be generated. A tautology can easily look like something else. “A thing acts in accordance with its nature” might be one example. This might be taken as an instance of the Law of Identity: if a creature of type X acts in accordance with laws A, B, C, and this creature doesn’t do that, then it isn’t an X. If dogs bark and growl and this creature hisses and meows, it isn’t a dog; that is, we wouldn’t call anything a dog that did this. So we can plausibly classify statements about what we call “a thing’s nature” as special cases of the Law of Identity. But this, I insisted, tells us nothing about the world, only about how we are using words like “dog” and “cat.”

What is a thing’s “nature” supposed to be anyway? I went on. Is a thing’s nature its definition? Some might say yes: it’s the nature of water to be two parts hydrogen and one part oxygen. But one might also answer no: it’s the nature of water, one might say, to flow downwards, and this is no part of any (usual) definition of “water.” It wouldn’t even be true if atmospheric pressure were ever so much less than on earth (it might evaporate and not flow). So to answer the question, we have to know what the person means by talking about a thing’s nature. Often, I suggested, when we talk about a thing’s nature we are talking about a set of dispositional traits: thus, “It is the nature of cats to prowl”—yet so far as I know the tendency to prowl is not listed in the definition of “cat.” Or, when we say “I used to think his lying was just a quirk, but now I think it’s his nature,” we are saying that his tendency to lie is a more fundamental trait than we had previously thought.

I could see that Ayn was getting bored, so I summarized the moral of the tale: that statements about “X’s nature” sound simple and easy, but that under this linguistic simplicity lies a morass of vagueness, which comes out only gradually as we explicate one case after another in which we actually use the expression. I seemed unable to convey to Ayn any sense of this; and yet, it seemed to me, what was wrong with the usual philosophic formulations, including hers, couldn’t be appreciated without going through the detailed “digging” required to turn up these disparate meanings, and their confusion with one another from which the errors flow. Philosophic formulas, I said, merely give us “philosophy on the cheap.”

It was inevitable that sooner or later we would get to the subject of definition. I never had an opportunity to present my views on this systematically, from the ground up. I had done this in some detail in my book Introduction to Philosophical Analysis, in the long 100-page introductory chapter entitled “Words and the World.” I gave her a copy of the book and encouraged her to read the relevant chapter. But she never did; I was disappointed by this, for I had thought we could use this material at least as a starting place for discussion, but in time I realized that she read almost no philosophy at all. And I was amazed how much philosophy she could generate “on her own steam,” without consulting any sources.

She began by insisting that one should search for true definitions, and I responded that definitions were neither true nor false. But it shortly turned out that I was talking about definitions of words and phrases, and she was talking about definitions of things (entities in the world) or, sometimes, concepts of those things. But I expressed ignorance as to what the phrase “the definition of a thing” meant. (We also discussed “definition of concepts,” examining the differences between words and concepts.)

I suggested that there were no true or false definitions. “The word ‘symphony’ once referred to any orchestral composition, without voice, in four movements,” I said. “Then, as in Beethoven’s 9th, voices would sometimes be introduced and the work would still be called a symphony, so that was no longer a defining feature. Then in the 20th century came one-movement symphonies, such as Sibelius’ 7th, so the four-movement requirement fell out. What happened was that the word ‘symphony’ was no longer used to describe what it had described before. But there is no true or false definition of ‘symphony.'”

A simple case to the contrary, Ayn said, was that H2O is a true definition of water; if someone said water was HO or H2SO4, he would be mistaken.

I responded that I saw nothing but confusion in this. “It depends on what you mean in the first place by the word ‘water.’ If by ‘water’ you mean H2O, then of course ‘Water is H2O’ is true because you’ve already defined water to mean that. All you get that way is ‘H2O is H2O,’ a simple tautology. But of course you might not already mean that by the word ‘water’—early man surely did not. He meant the liquid that flows in streams and rivers. In that meaning, it is true that water is H2O—that is, the liquid in streams and rivers has the chemical formula H2O. That is a true statement about water—an empirically true statement, not a definition. Once you are clear what you mean by the word, the issue is resolved.”

Ayn alleged that man is a rational animal, and that this is a true definition. It is true, in other words, that that’s what man is. I replied that it all depends what you mean by “man” in that sentence. As a rule we employ a biological definition of man—man is a creature with two legs, two arms, walks upright, etc.; that’s how we identify creatures as human without knowing anything more about them than our senses present to us. Now, the creature that fulfills that biological requirement is also a rational animal (that is, has rational potentialities, even if unfulfilled)—that is a true statement: not a definition, but a statement about the creatures identified by the first (biological) definition. (Of course, again, if by “man” you already mean “rational animal,” then it’s a sheer tautology.) We could say, I suggested, that man is a laughing animal, or an aesthetic animal (the only creature that enjoys works of art), a volitional animal (the only creature capable of choice), and perhaps several others. But, as Ayn aptly pointed out, these features are less fundamental. If we were not rational animals we would not be able to comprehend works of art or see the point of jokes; the rationality explains the other characteristics, not vice versa. I assented to this; but I insisted that my point still held, that if “man” is already defined as a rational animal, the statement that man is a rational animal is a tautology (merely an example of A is A); whereas if “man” is defined biologically, as we ordinarily do, then the statement that man is a rational animal is true, but not a definition. A stipulative definition, I said, merely tells others how we’re going to use a word (“I’ll use this noise to mean so-and-so”), and a stipulation isn’t a true statement, just a proposal to use a noise in a certain way. A reportive definition is a report of what a word is used to mean in a language-group. Thus, “A father is a male parent” is a report (in this case a true one) of what the word “father” is used to mean in the English language. And finally, if you already mean by “father” a male parent, the definition of “father” as male parent is presupposed, and the statement “A father is a male parent” comes to “A male parent is a male parent,” another instance of “A is A.” Confusion comes only if we get these scrambled together.

Is “Steel is an alloy of iron” a true definition of steel? No, I said, it is a definition of “steel” if that is what you choose to mean by the word “steel.” It is also a true report about how users of the English language use the word “steel,” and as such it is a true reportive definition. And if you already mean “alloy of iron” by the word “steel,” then again you have a tautology, Steel is steel, A is A. It seemed to me that these distinctions clear up the question. In every case we define words and phrases, and we describe things (using the words or phrases).

Whales were once thought to be fish. When it was discovered that they were mammals, wasn’t this a discovery of the true definition of whales? The discovery (an empirical one), I said, was that those creatures which we called “whales” (on the basis of their shape, size, and general appearance) also had the feature of being mammalian. We then changed (or biologists did) the definition of the word so as to include being mammalian as a defining feature; biological classification on the basis of mammal, reptile, etc., had already long been in place; so after the discovery nothing that looked like a whale but was a fish would have been called a whale. The re-definition of the term was simply an adaptation to existing methods of biological classification. But the discovery, that these creatures were mammals, was an empirical one, like the discovery that some nebulae are actually galaxies.

This is one of the issues that seemed so obvious to me that I did not see how anyone could think otherwise. That is why I tended not even to remember opposing remarks as long as they were not clear to me. Rather than misreport what Ayn said, I have chosen not to say anything about her remarks: what I said is very clear to me, what she said is not. At the time being described, Rand’s non-fiction works, including Introduction to Objectivist Epistemology, had not yet been written. I would like to think that our discussions helped motivate her to write some of these non-fiction works. At the time of our discussions she was writing very little. Time was on her hands, and perhaps that was one reason for inviting me back. She vehemently denied the validity of certain distinctions, like analytic vs. synthetic and a priori vs. a posteriori. Both were Kantian distinctions, and her hatred of Kant may have played a part in the rejection; but more likely her rejection of the distinctions played a part in her hatred of Kant.

Already at the time of our discussions there was critical talk in philosophic circles about the analytic-synthetic distinction. Is it analytic to say that all green things are extended? Quine had asked, and concluded that the failure to provide a satisfactory answer was due to the unclarity of the term “analytic,” not to any defects in “green” or “extended.” But the examples I used were of the very simplest sort: “All A is A” is analytic, I said (it’s another formulation of the Law of Identity), and “All A is B” is not. “Lions are lions” is analytic and “Lions are fierce” is not—to determine that you have to observe lions. And the same for a priori: you don’t have to go to the next room to discover whether the cat is a cat, but you do have to in order to find out whether the cat is lying on the bed there.

Why did Ayn deny a distinction that seemed to me so obvious—perhaps not for far-out cases like colors being extended, but for ordinary “A is A” type cases? She seemed to think, as Leibniz had done for different reasons, that the distinctions do not apply because all the statements are really in the same bag. All the features of lions, whether now known or not, are really a part of their definition. All statements about X follow from X’s definition—that seemed to be the view.

But I did not see how this could be so. That this table is a solid object does follow from (or is contained in) the definition of a table. But that we are now sitting at this table does not. Nothing in any definition of a table known to me could possibly tell us whether it is true that we are now sitting at the table.

Perhaps the issue has a different focus: This would not be the egg that it is if it had not been laid by this hen, and I would not be the person I am if I had not been born to the specific parents I had. True—but would I also have to have the characteristic of having been born at the moment that I was? If I had been born a day earlier (to the same parents etc.), wouldn’t it still have been me? True, it wouldn’t have been me if the birth had taken place in ancient Greece—the parents wouldn’t have been the same, etc. But would one really be prepared to say that all features of me are defining, including the mole on my cheek and the fact that a bee had just stung
me? I saw nothing but endless confusion in that way of trying to deny the difference between necessary and contingent statements. I tried using some examples, of the kind that made my students catch on to the distinction most quickly. That this flower is red, that there are six of them on this plant, that such plants exist at all—these are contingent statements, they depend on the way the world is, which can’t be known a priori; that 2 + 2 = 4, that the angles of a triangle equal 180 degrees, that if A is larger than B then B is smaller than A—these are necessary truths, I tried to explain, even if one doesn’t accept the analytic-synthetic distinction.

Or again, with regard to possibility and impossibility: I can’t jump 20 feet high, but I (logically) might, and if I claimed to do so my statement would be false, but there would be no contradiction in it. But if I claimed to have gone backward in time, and disappeared from 1961 to 2500 B.C. (and what could that mean?), and actually helped the Egyptians build the pyramids—this, I said, was a logical impossibility, because contradictions would be involved in asserting it: I would be saying that (for example) the pyramid-building occurred without me (I wasn’t born yet) and also that I participated in it (by “going back” in 1961 to 2500 B.C.); and that there were, let’s say, 5,368 persons building the pyramids and (with the new addition of myself) there were 5,369—but there (logically) couldn’t have been both 5,368 and other than 5,368. And so on. She granted the impossibility in the second case, but perhaps not for the reason I mentioned. To her all impossibility was of one stripe, and she did not admit the distinction between logical and empirical possibility.

I stated a problem (or pseudo-problem) which seemed to fascinate my students: “How do you know that you and I are seeing the same color? True, we both pass the color-blindness tests, and you say you see green when you look at the tree, just as I do, but how do I know you aren’t the victim of a ‘reversed spectrum,’ for example, that you regularly see red where I see green and vice versa, but of course you call it green like everyone else, since that’s the word you’ve been taught to use in describing the color of trees? But perhaps if I could see what you’re seeing, I’d call it red, or something else. After all, how do I know?” Maybe the outcome has no practical import, but it’s a nice theoretical question anyway—the sort of thing that science seems unable to answer.

I cannot say that Ayn was fascinated by this question. She regarded it as rather trivial. But she heard me out. I suggested that you can (usually, perhaps always) get to what a questioner means by his question, if he can tell you what sort of thing would satisfy him as an answer—what precisely does he want to know? Now consider these possibilities (I said): (1) Suppose it were technically possible, as one day it may be, to connect one person’s eyes and optic nerve with another person’s brain. You could, then, quite literally see through the other person’s eyes; and then you would know whether the leaves looked the same color to you as they did when you looked through your own eyes. You’d be able to compare what you saw with your former eyes with what you saw through your new eyes. Perhaps when you did this you would say, “They still look the same to me,” and that would settle the question; or you might say “They don’t look as they used to at all,” and that too would settle the question.

But of course (I pursued) one may object that this won’t do. (2) Exchanging eyes isn’t enough, runs the objection. The interpretation of these visual data takes place in the brain. To settle the issue, I would not only have to have your eyes, I’d have to have your brain (or at least a part of it). But now we run into what’s called the problem of personal identity. If my brain were put into your body and vice versa (assuming this to be as technically possible as exchanging eyes) would it still be me? Would it still be me, with all my brain’s memory-traces now inside your head? Here we run into a problem that’s more than a technical problem; what is it that constitutes one’s self,
if not one’s perceptions, dispositions, and memories? How can I exchange brains with you and still be me? Thus, if this second alternative is the one demanded to resolve the problem, then unlike the first alternative, it can’t be solved: the conditions demanded for the solution are self-contradictory.

Ayn wasn’t very impressed with all this. She didn’t consider the issue to be of any importance in the first place. She was temperamentally unsympathetic to this way of doing philosophy. And she had no patience with the distinctions I used in order to arrive at a solution. For her it was a non-solution to a non-problem.

In spite of her lack of concern for shifts of meaning in a word or phrase, I had to be very careful what terms I used in her presence; for some terms, if I used them, would trigger in her an instant conclusion that was quite foreign to anything I meant. When I mentioned that a theory in science can be accepted or rejected on pragmatic grounds—as a device for explaining the most by means of the least—she would hear the term “pragmatic” and accuse me of being a pragmatist. And then I would explain at some length that I was not a pragmatist in any sense that she probably had in mind—for example, I did not hold that the truth of a statement had anything to do with its utility. I only used the term within a definite context, with a meaning defined within that context—and one should not jump to the conclusion “You’re a pragmatist,” for I wouldn’t even know what she meant by the term in that sentence.

For a person who was always insisting on “iron-clad definitions,” I found her linguistic habits quite sloppy. I was aware that Rome wasn’t built in a day and that she had not grown up in a tradition in which sensitivity to these matters was considered important—one just strode over the issues in seven-league boots (my characterization, not hers). Still, philosophic outcomes depend so much on just such subtleties that I became discouraged when after many hours of discussion she
showed no more awareness of where I was really coming from than she had when we started.

I had no problems with her ignorance of modern logic or physics (such as Heisenberg’s principle), but when the very issues she raised required a finely honed instrument to grapple with them insightfully, and she seemed quite unaware of what that instrument could do, and remained so as time went on, I gradually became as discouraged with her as she was impatient with me.

Somewhere she had picked up the idea that philosophers in the twentieth century were skeptical about the existence of an “external world” (tables, trees, stars, etc.). I told her that skeptical arguments in this area were still extensively examined, in the tradition of Hume, but that no one so far as I knew had any actual doubts about the existence of the chair they were sitting on, and so on. But that, she said, was the mistake: they don’t doubt it in practice but they do in theory—they don’t practice what they preach. I explained that when skeptical arguments occur, as in Hume, they have to be met, in an attempt to make theory accord with practice; one can’t just assume that “common sense” is always right. I explained a similar situation in Zeno’s paradoxes, and Parmenides’ attempt to deny the reality of motion. I said there were lots of problems about the relation of the world to the senses by means of which we perceive it. I did mention, almost incidentally, an attempt to prove that we know the existence of the external world for certain, namely by Prof. Norman Malcolm in his essay “The Verification Argument” (in Max Black’s anthology, Philosophical Analysis). Instantly she picked up on this, inquiring about Malcolm as a possible ally. She wanted to know more about him and even to invite him to New York for a personal meeting. She did not read his article, or anything else by him, but I outlined the rather complex argument of the article for her in two typed pages, trying to state his premises accurately and show how they yielded his conclusions. She expressed gratitude to me for doing this. But, she wondered, why should a person go to such lengths to defend a thesis that was so obvious? I realized that to Ayn the existence of the physical world was axiomatic and didn’t require defense, and told her that she would probably find no particular ally in Malcolm, who was most interested (in the essay) in exploring the implications of terms like “verification” and “certainty.” At any rate, there the matter dropped. She took my word as to what his arguments were, and as far as I know she never read anything to enlighten her further on the issue.

We discussed many other philosophical issues, often in a brief and fragmentary way, before concentrating on something else. I omit here those issues of which I could not now give an accurate account from memory. In many cases I remember more clearly what I said than what she said. Her non-fiction works had yet to be written, and what I endeavor to record here is what she and I said then, not what we might have said later. Moreover, most of my readers will probably be acquainted with her position on various issues, but unacquainted with mine; and I want to provide some conception, however brief and unsystematic, of where I was coming from on the issues we discussed.

When we discussed metaphysical and epistemological issues, a certain tension between us would very gradually and almost imperceptibly arise. I could usually avoid an unpleasant scene by attributing (correctly) the view being discussed to some actual philosopher, living or dead, and then she could curse the philosopher in question and take the heat off me. It’s not that I wanted to avoid responsibility for the view, but I wanted to avoid unpleasant scenes, which only impeded the progress of our discussions, and achieved no worthwhile end that I could think of. But it was clear that I was not “giving in” to her brand of metaphysics, and equally clear that my methods of what I liked to call philosophical clarification were falling on arid ground in the present case. I became somewhat discouraged, especially since she seldom acknowledged an error and seemed less interested in learning than in defending prepared positions. Moreover, what seemed like a blinding philosophical light to me would be a total dud to her, and her highly abstract philosophical pronouncements often seemed to me confused, unclear, or false, effective though they might be as banners for enlisting the philosophically unwashed.

Meanwhile, several incidents occurred that distressed me. There was a professor at a midwestern university who had been denied tenure some months earlier, for saying that he wouldn’t mind too much if his daughter slept around a bit before she decided on whom to mate with for life. The faculty was up in arms against the university administration for terminating him, and started a nation-wide petition on his behalf. I had also signed a petition requesting that he not be terminated.

When I showed Ayn the letter to which I had responded on his behalf, she saw my name on the letterhead and urged me strongly to dissociate myself from any attempt to defend him. He should not have referred to his daughter publicly in that way, she said. I asked her whether she really thought he should be denied tenure just on account of having said what he did. And Ayn’s reply stunned me: he should have been terminated from his job, she said, even if he’d had tenure. Knowing all that tenure means to someone who has worked for years to earn it, I found her reply shocking and astonishing.

Newsweek wrote a terribly unfair piece about Ayn. I responded to it by letter, trying to answer its charges point by point. I gave Ayn a copy of my letter. Newsweek never published it, but that, said Ayn, made no difference; what mattered was that I had come to her defense by writing it and responding to the false charges.

Not long after, New York University’s philosopher Sidney Hook attacked her in print, and she wanted me to take him on as well. Knowing Sidney, I was disinclined to do this. He already knew about my acquaintance with Ayn, but we had never discussed it further (I hardly ever saw him). Should I now condemn him publicly and destroy a long-standing friendship? I knew that this friendship would be at an end if I condemned him.

Ayn was sure that nothing less than a public condemnation was required to prove to him how much I was devoted to “intellectual objectivity.” But she had very little conception of the manners and morals of professional academicians—they can get along well and even be friends, while disagreeing strongly with one another on rather fundamental issues. The philosophic arena was one for the friendly exchange of diverse ideas. But for her, it was a battlefield in which one must endlessly put one’s life on the line. I was not willing to risk years of occasional friendly communion with Sidney by condemning him publicly, even if I thought he was mistaken in some of his allegations.

But for Ayn this was a betrayal. It almost cost us our friendship. In the end she attributed my attitude to the misfortune of having been brainwashed by the academic establishment, at least with regard to their code of etiquette.

I once mentioned to her my friendship with Isabel Hungerland, a distinguished aesthetician from Berkeley with whom I would discuss issues at philosophical conventions. Ayn inquired what her politics were. “As far as I know, she’s a liberal,” I said. “What!” exclaimed Ayn, “a friend of yours—a liberal?”

I realized then that I was expected, once I knew Ayn, to sacrifice the friendship of all persons with political (and other) views opposed to hers. Not that I would have to—I was supposed to want to. It was immoral of me to continue to deal with such people. With many of them, as with Isabel, I had a kind of relaxed, laid-back relationship, never talking politics at all from one year to the next, and often not knowing what their political views were. But now I was supposed to excommunicate them all. “If thine hand offend thee, cut it off.” I was not willing to plant a flag on a new terrain and thereby disavow my allegiance to all other views, and I deeply resented Ayn’s attempt to steer me in that direction—or should I say, her assumption that I would “of course” do such a thing.

It wasn’t that I would have been unwilling to declare where I stood, if I had been totally convinced and was prepared to defend it. I try not to back off of commitments. But my whole way of coming at philosophy was quite different from hers, and in spite of various attempts I don’t think she ever understood mine. With her, it was as if she were developing a Euclidean geometry from a set of axioms; I, on the contrary, was the gadfly who kept puncturing the axioms or finding their meaning (in some cases) to be vague or confused. As a result of this I was convinced that “the high priori road” was not the way to go in philosophy; I was sure that a careful, step-by-step, case-by-case approach, frustrating though it might be in the work required and the time needed to get anywhere with it, was the only road to progress. This wearied her, bored her, and ultimately repelled her. The more time elapsed, the more the vise tightened. I could see it happening; I hated and dreaded it; but knowing her personality, I saw no way to stop it. I was sure that something unpleasant would happen sooner or later. The more time she expended on you, the more dedication and devotion she demanded. After she had (in her view) dispelled objections to her views, she would tolerate no more of them.

Any hint of thinking as one formerly had, any suggestion that one had backtracked or still believed some of the things one had assented to previously, was greeted with indignation, impatience, and anger. She did not espouse a religious faith, but it was surely the emotional equivalent of one.

When I was authorized by the American Society for Aesthetics to ask Ayn to give a twenty-minute talk at their annual meeting, which would take place this time in Boston the last weekend of October 1962, I passed on the offer to her at once. She accepted, with the provision that I be her commentator (all papers were required to be followed by a response from a commentator). She thought that I would understand her views better than those who had no previous acquaintance with them. I consented.

And so it was that on the last Friday night of October 1962, she gave her newly-written paper “Art and Sense of Life” (now included in The Romantic Manifesto). In general I agreed with it; but a commentator cannot simply say “That was a fine paper” and then sit down. He must say things, if not openly critical, at least challengingly exegetical. I did this—I spoke from brief notes and have only a limited recollection of the points I made. (Perhaps I repressed it because of what happened shortly thereafter.) I was trying to bring out certain implications of her talk. I did not intend to be nasty. My fellow professors at the conference thought I had been very gentle with her. But when Ayn responded in great anger, I could see that she thought I had betrayed her. She lashed out savagely, something I had seen her do before but never with me as the target. Her savagery sowed the seeds of her own destruction with that audience.

When her colleague Nathaniel Branden and I had a walk in the hall immediately following this exchange, there was no hint of the excommunication to come. But after the evening’s events were concluded, and by previous invitation I went to Ayn and her husband Frank’s suite in the hotel, I saw that I was being snubbed by everyone from Ayn on down. The word had gone out that I was to be (in Amish terminology) “shunned.” Frank smiled at me, as if in pain, but he was the only one. When I sensed this, I went back to my room. I was now officially excommunicated. I had not so much as been informed in advance. It was all over. In the wink of an eye. So now a two-and-a-half-year friendship was at an end. It had come with such suddenness, I couldn’t quite handle it at first. The long evenings with Ayn were now a thing of the past. I was now the one to feel a sense of betrayal.

But my pain was not unmixed with relief. The pressure had been mounting, and certain tensions between us had been increasing steadily. Being forced to choose between friendship and truth as I saw it (even if I saw it mistakenly) was not my way of conducting intellectual life. I would sooner or later have had to escape from the vise, I reflected. Perhaps it was better this way, with an outside
event precipitating the break. Sooner or later, probably sooner, I would have been too explicitly frank or honest, and she would have had an angry showdown with me, and that would have been that. Or so I told myself. At any rate, along with the pain and the desolation, I felt a sense of release from an increasing oppressiveness, which had been inexorably tightening.

At dinner earlier that evening, when the radio announcer said that Kennedy would not call off his blockade of Cuba even at the risk of nuclear war, Ayn had said, “Good!” Privately I wondered whether she had also said “Good” in connection with the break in our relations. Perhaps she merely reflected with regret that the years of her efforts on my behalf had been largely wasted.

At any rate, that night was the last time I ever saw her. But I heard her voice once after that. In the late summer of 1968, not long before the Big Break, Nathan phoned me in California and said, “I want to put you on the line to someone.” The conversation with Ayn was very brief. “I understand that you are presenting my philosophy to your classes,” she said. I replied that I was—I considered Ayn’s views in several of my courses, without thereby implying that I did so with total agreement. She seemed gratified, and wondered how I was, and then turned the telephone back to Nathan.

I thought of her endlessly over the years. Her enthusiasm for ideas, her intensity, her unfailing bluntness and those piercing eyes—the image of these things was never far away from me, especially when I assigned some of her essays in my classes and discussed them with students point by point. But I never regretted that I had not been enveloped further in the web of intellectually stifling allegiances and entanglements, the route I had seen so many of her disciples go.

In the next few years, as her non-fiction essays appeared, I read them avidly and made many notes and comments in the margins—points to raise with her, questions to ask her. But of course I never got to ask them. And then, almost twenty years after my expulsion, I heard on the radio that she had died. I felt, even after all these years, a devastating sense of loss. It was hard to stay in control during my talk at the memorial service for her in Barnsdall Park in Los Angeles.

How often, on visiting New York, I had almost stopped at her apartment building. No, I thought, her friendships are broken but her enmities last. It wouldn’t be any good. And surely she had treated me pretty shabbily. But I thought of her, up there in that apartment, without Frank now, and I wanted to be mesmerized by those piercing eyes once again, and have another all-night discussion as in the old days. I never got up the courage to take that step. It would probably have been useless.  The occasion is past, and the past is gone forever. That, I thought to myself with a certain grim irony, is at least one necessary proposition to which she would have given her assent.

(Originally published in Liberty magazine, 1987)

In our last issue, John Hospers related what it was like to talk philosophy with Ayn Rand. Now, in the conclusion to his memoir, he details some of their philosophical differences and relates the inevitable falling out between the philosopher and the visionary.

A Critique of Faith

by Dr. John Hospers

1. Religious faith

I devote the opening section of this essay to a brief summary of Sam Harris’ book The End of Faith, with some deletions and a few additions of my own.

When I say to you, a trusted friend, “I have faith in you,” I am relying on my past experience of your character and disposition to make a statement about my present attitude toward you. Many professions of faith, however, are not of this kind: they express a present attitude which has little or no basis in fact. When we read, for example, that water has been turned into wine, or that a person already dead has come back to life, we have no such basis in our past experience; indeed, what is alleged is something contrary to our experience of how the world works; it is “pure faith” in the absence of any evidence to sustain the belief. Many of the ancient Greeks believed that there were numerous gods—Zeus on Mt. Olympus ruling the earth, Poseidon ruling the seas, Pluto ruling the underworld, and so on. There were many forms of polytheism, as well as various forms of monotheism such as belief in the Old Testament god Yahweh. There is no empirical evidence that would enable us to determine which of them, if any, is true; belief in them is entirely a matter of faith. We have only the words in a supposedly sacred text. (We have independent evidence for the existence of Jesus, but not of Noah or Moses or Abraham.)

Not only have we no way to verify any of these beliefs, but there is an added problem: many of them contradict one another, so these beliefs cannot all be true. Zeus cannot be king of the gods if Zoroaster also is; nor can there be one and only one god if there are also numerous gods. If a belief is true, another belief that contradicts it cannot also be true. Aristotle’s Law of Non-contradiction holds, regardless of the field of discourse in which we are engaged.

Even within the same religious text, there are alleged truths that contradict one another.  The god of the Old Testament is seen and heard:  he talks with Adam and Eve in the cool of the evening.  But God, we are also told, is eternal and invisible.  The infant Jesus was taken into Egypt, but (according to another Gospel) was not taken into Egypt.  God is the author of all things, and thus also of evil, but he is, we are also told, not the author of evil; Satan is.

How can people believe these mutually contradictory statements? (1) Sometimes, I think, the belief rests on some ambiguity: it is true if you take it in one sense but not if you take it in another: Jesus was a man who was born in Bethlehem of Judea and died like the rest of us, but also he was God who existed “from all eternity” and “before the foundation of the world”. This certainly seems like a contradiction, but some theologians have attempted to work out ways in which it is not. (2) Most believers, however, fail to notice these discrepancies because they don’t really bother to read the passages in question. They mouth the lines as part of a religious liturgy, but the repetition of the words has become almost automatic: they do not think them through or try to connect them with other passages with which they are at odds.

Nor do they try to relate them to their everyday experience, as they do when talking about themselves or what goes on in their familiar world. They believe, or at least they do not doubt, that (perhaps in their own lifetime) Jesus will return to earth “on the clouds of heaven” to bring “the legions of the saved” into eternal paradise with him. Yet if they were actually to see a robed figure appearing to them out of the sky and swooping earthward, they would probably be as surprised as anyone else. They do not doubt, either, that the resurrection of Jesus was genuine: they do not cite, as their preachers do, numerous religious authorities who proclaim to them that Jesus’ resurrection is just as certainly true as the existence of the church in which they are sitting; they don’t think about these religious authorities, they just believe on faith that somehow after they die they will live again.

What is it that prompts people to entertain such beliefs and continue to hold them throughout a lifetime even in the face of contrary experience? Some say it is “hope, grounded in the promises of Scripture”; others, that it is hope entertained in desperation; others, that it is “believing something you know darned well isn’t so”. For the most part, believes Harris, it is the psychological difficulty or inability to face reality, the fact that “this is it” and death ends our mortal existence. People find life unbearable without belief in a hereafter, particularly when life has not dealt kindly with them and they have nothing to live for in the here and now. Consider parents whose six-year-old daughter has just died of a fatal disease: they desperately want to see her again, and what buoys them up is the faith that they will one day be with her again.

At this point I could wish that the author had been more explicit about what the content of their belief is supposed to be:  The parents believe they will see their daughter again, be with her, and love her.  For how long?  Presumably forever.   If the parents will not see her until they reach heaven in sixty years, will she still be a small daughter at that time?  That is how the grieving parents imagine it:  they do not imagine her as a grown woman, and certainly not as an old woman some years later (and certainly not as one who in the course of time dies).   It’s their little girl, now—years later they might not feel so strongly about it any more.  Also, would she still look the same as she did here—surely not as she did when ravaged by the disease?  Would she still have those fits of coughing or sneezing as she used to, or that little limp, and the inability to digest certain foods? Or would she have no defects whatever, not even the peculiarities of personality which irritated some people and endeared her to others?   Surely the parents would imagine her as having the characteristics they liked or approved of (not quite the same thing!).  And would she coexist in heaven alongside a younger sister who had not yet been born when this one died?  And what would their relations be with each other: warmth, familiarity, a bit of strangeness perhaps?

One could speculate forever about how such things should be imagined, or exactly what there would be to imagine. (Harris does not venture so far.) In any case, the grieving parents don’t try to imagine the future situation (happiness with their daughter in heaven) in any specific detail. It is enough that they see her again (for how long? Forever? Might they not tire of it eventually?). Never mind such details as to how such things are possible, or apparent obstacles like the Law of Non-contradiction, which they have never heard of anyway. Their primary wish is to be happy again, which they find impossible without her. It would seem that in such a situation one doesn’t adjust one’s feelings to the facts (don’t we all think we should?) but one adjusts the facts to one’s feelings—a Randian recipe for psychological disaster.

2. Faith and morality

The above is a summary and critique of a world-view based on faith, which Harris presents in The End of Faith. The author, however, also delves somewhat summarily into moral philosophy, or at any rate into moral pronouncements. What apparently unites these pronouncements is the view, shared by most people at least in the West, that pain and suffering are evil and should be avoided unless such pain and suffering lead to greater happiness or fulfillment. He repeatedly condemns the Crusades and the Inquisition as the wanton infliction of suffering. Also condemned are a large number of Biblical commands and prohibitions: “What, after all, is the punishment for taking the Lord’s name in vain? It happens to be death (Leviticus 24:16). What is the punishment for working on the Sabbath? Also death (Exodus 31:15). What is the punishment for cursing one’s father and mother? Death again (Exodus 21:17). What is the punishment for adultery? You’re catching on (Leviticus 20:10).” (page 115)

Moreover, the details of such punishment are often spelled out, though modern believers have only a limited visualization of them. “If your brother, the son of your father or of your mother, or the spouse whom you embrace, or your most intimate friend, tries to secretly seduce you, saying, ‘Let us go and serve other gods,’ unknown to you or your ancestors before you, gods of the peoples surrounding you, whether near you or far away, anywhere throughout the world, you must not consent, you must not listen to him; you must show him no pity, you must not spare him or conceal his guilt. No, you must kill him, your hand must strike the first blow in putting him to death and the hands of the rest of the people following. You must stone him to death, since he has tried to divert you from Yahweh your God (Deuteronomy 13:7-11)” (page 18)

Most people today, however, do not read such passages, or even know that they exist.  They are somewhat embarrassed if they happen to come across them, but if they are committed to believing that the entire Bible is the Word of God, they dare not openly reject such passages—since they are apparently “stuck with them,” they simply ignore them or “pay them no heed.” But they cannot reject them outright if their eternal salvation depends on acceptance of the entire Bible.

The author does condemn torture and killing in all their forms (including capital punishment), including the practices of the Nazi, Soviet, and Chinese communist regimes. But the main target of his condemnation is none of these; it is current Islamo-fascism as manifested especially in Saudi Arabia and Iran. Fundamentalist Muslims differ from their Soviet predecessors in at least one important respect: the Soviets were deterred by the fear of nuclear annihilation. Today’s Islamo-fascists are not deterred by threats of death: by killing unbelievers, they are promised a blissful hereafter for themselves.

Pacifism, says Harris, is an unwillingness to die, combined with a willingness to let others die at the pleasure of the world’s thugs. Islamo-fascists exhibit, by contrast, a willingness to die, combined with a commitment to making every unbeliever die.

Such is the ultimate result of accepting religious views based solely on faith.

Harris reserves the term “moderate Christians” for believers in Christianity who don’t take their faith very seriously. “Moderate Muslims”, however many of them there are, do not take theirs seriously either. The fate of the world in the twenty-first century, he concludes, may hinge on how many moderate Muslims there will be in the coming years.

I must say that I find that conclusion extremely plausible.

 

Libertarianism and Legal Paternalism

by John Hospers
Department of Philosophy,
University of Southern California

In his book Principles of Morals and Legislation, the eighteenth-century philosopher and legal theorist Jeremy Bentham divided all laws into three kinds: (1) laws designed to protect you from harm caused by other people; (2) laws designed to protect you from harm caused by yourself; and (3) laws requiring you to help and assist others. Bentham held that only laws of the first kind were legitimate; and in general libertarians would agree with him.

The third class of laws, sometimes called “good Samaritan” laws, is greatly on the increase today, and its principal examples are not laws requiring you to assist persons in trouble (such as accident victims), although these too are on the increase, but rather laws, both Congressional and bureaucratic, having to do with income redistribution, such as welfare and food stamps and programs for the disadvantaged. Bentham argued persuasively against these laws as well; but he also condemned laws of the second kind, and it is these I propose to discuss in this paper. Legislation designed to protect people from themselves is called “paternal legislation,” and the view that such laws are legitimate and ought to be passed is called “legal paternalism.”

I

Legal moralism is the view that the entire nation should be governed by one morality and/or religion, with dissent from the official view being punishable as a crime. Examples of legal moralism are the rule of the Catholic Church prior to the Reformation and Iran under the Ayatollah Khomeini.

Legal paternalism is the view that the law should, at least sometimes, require people to act (a) against their will and (b) for their own good, in that way protecting them from the undesirable consequences of their own actions.

The term derives from the Latin “pater” (father): just as a kind father protects his children against harm and danger, pulling the child away from the speeding car or from the precipice down which he is about to fall, so the State should protect its citizens, not only against harm inflicted on them by other citizens, but also against harm which they might inflict on themselves.

Thus, according to legal paternalists, the State should prohibit drugs because otherwise people might take them, and even if the danger is only to their own health or life the State should protect such values for them if they are too foolish or incompetent to do so for themselves. Or again, the State should protect people from their own profligacy by forced savings, such as social security.

Libertarians, of course, are vigorously anti-paternalistic, believing as they do that people should absorb the consequences of their own actions, and that in any case the State has no right to legislate what people should do as long as their actions harm no one else. The concept of “harm” is admittedly vague: some people would say, for example, that a teacher is harming their children more by teaching them anti-Christian doctrines than by injuring their physical bodies, and if such people had their way they would impose not only legal paternalism but a whole system of legal moralism. Most Christians today, however, aware of what would happen if each moral or religious sect tried to impose its views on everyone in this way, would resort to persuasion rather than to force, and however evil they might find certain teachings to be they would stop short of wanting them declared illegal. But disagreement about what constitutes harm continues: some consider X-rated movies harmful, others say the same about nude beaches, and still others would make the same assertion about certain theories of education. Yet most of those who say this (in the case of education, at least, often with good reason) would stop short of saying that those who inflict this alleged harm should be subject to civil or criminal prosecution. “Harm” is usually construed by libertarians, in accordance with their own political philosophy, to include (a) bodily injury, such as assault and battery, (b) damage to or theft of property, and (c) violation of contract; and accordingly it is only these that libertarians usually seek to prohibit by law.

Even libertarians are not, however, opposed as a rule to all paternalism. There are several groups of people on behalf of whom some degree of paternalistic action would be considered proper.

1. Infants and children. Infants cannot take care of themselves at all, and children cannot in many ways. Children do make decisions, but lacking experience they often fail to comprehend the consequences of their own proposed actions. Views on children’s rights are a hotbed of current controversy; but there is probably no parent who has not at some time used coercion in order to prevent some harm to the child or bring about some good.

A degree of paternalism concerning children is also embodied in the legal system: for example, if parents demonstrably abuse their children, the State takes the children out of the parents’ custody for the children’s own good, even if such action may not be in accord with the children’s own wishes at the time. The rationale of this is that the parents have proved themselves to be unfit custodians of the children’s rights.

2. The senile. When an elderly couple can no longer take care of themselves but refuse to leave their home, and when they consistently refuse to pay the utility bills and the heat and light are cut off, it is customary for a near relative to obtain power of attorney from the court in order to pay the bills and perhaps conduct other business transactions on behalf of the parents even if the parents are unwilling, in order to protect the parents from the consequences of their own actions. Though there has been little discussion of this, it is probable that most libertarians would go along with a degree of paternalism in such cases; at least it would bespeak a certain crassness to say, “If they’re so stupid or forgetful as not to pay their utility bills, let them freeze!” Our ordinary assumption is that people are able to estimate to some extent the probable consequences of their own actions, and this assumption is unjustified in the case of senility, just as it sometimes is in the case of children.

3. The mentally incompetent (a wider class than “the insane”). This is hardly a clear-cut group, but there are many people who are quite unable to function in the world and quite as unable to fend for themselves as are young children. In most states people are at least temporarily institutionalized when they are “in imminent danger of harming themselves or others.”

Libertarians in general are opposed to the compulsory institutionalization of persons who have committed no legal crimes; but it is not clear that all libertarians would be committed to opposing the non-voluntary incarceration of a knife-wielding psychotic in an aggressive phase when he was bent on killing the children in the neighborhood. Others might approve a person’s compulsory incarceration if he was a danger to himself, or even if he was simply unable to function, e.g., to know how to find food or shelter even if he had the money in his pocket.

But let us leave these groups aside for the moment. What about “ordinary normal adults”? At least, one would think, we should be totally opposed to any paternalism with respect to them. “Neither one person, nor any number of persons,” wrote John Stuart Mill in On Liberty, “is warranted in saying to another human creature of ripe years, that he shall not do with his life for his own benefit what he chooses to do with it. . . . The only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. He cannot rightfully be compelled to do or forbear because it will be better for him to do so, because it will make him happier, because, in the opinion of others, to do so would be wise, or even right.”

Mill, a disciple of Bentham, was a utilitarian, and based his ethical conclusions on whatever was for “the greatest good of society”. But it is doubtful whether he could justify his strong anti-paternalism on utilitarian grounds. It may be that forcing motorcyclists to wear helmets for their own protection produces in its total consequences more good, e.g., more total happiness and less unhappiness, than the policy of not forcing them, particularly if there are lots of careless riders. It may even be that the policy of having parents arrange marriages produces less unhappiness than having young people (especially when they are emotionally immature) decide these matters for themselves; yet Mill would have them decide for themselves, even make their own mistakes and hopefully profit from them. In fact Mill, not in his Utilitarianism but in On Liberty, bases his anti-paternalistic stand on quite different considerations. “There is a part of the life of every person who has come to years of discretion, within which the individuality of that person ought to reign uncontrolled either by any other person or by the public collectively.” And again, from On Liberty, “A man’s mode of laying out his existence is the best, not because it is best in itself, but because it is his own mode. . . . It is the privilege and proper condition of a human being, arrived at the maturity of his faculties, to use and interpret experience in his own way.” Mill here is “saying something about what it means to be a person, an autonomous agent. It is because coercing a person for his own good denies this status as an independent entity that Mill objects to it so strongly and in such absolute terms. To be able to choose is a good that is independent of the wisdom of what is chosen.”

The question I now want to ask is: Are libertarians committed to being one hundred percent anti-paternalistic, leaving aside the groups described in the previous section?

We are sometimes paternalistic with non-deranged adults, and believe ourselves to be quite justified in being so. A friend or spouse says to you, “Be sure to get me up at 7 o’clock; my job depends on it. Force me if you have to. No matter what I say at the time, get me up.” If you do so, contrary to the person’s wishes at the time, do you as a libertarian feel guilt and remorse? No, because even though forcing him to get up at that time is contrary to his wishes as of that moment, it is in accord with his long-term goals for himself. We are in a position in which we have to sacrifice either his short-term goal (staying asleep) or his long-term goal (keeping his job), and we consider it preferable to honor his long-term goal.

The attendant at a hospital force-feeds a patient who needs nourishment in order to live but refuses to take it. Should the libertarian say “If he doesn’t want food, it’s wrong to force him to take it,” thus letting him die?

Surely not. What we will do (or at the very least, may permissibly do) is to go counter to his present desires, which may last a day or a week, in order to fulfill his long-term desire (which was constant prior to his present illness), which was to remain alive. When the patient has recovered he may thank us for force-feeding him: “It saved my life.” If this happened, would the libertarian still say that the force-feeding was wrong? Even if we have no independent evidence at the time that the patient’s attitude was pro-life, we may tentatively infer this from the fact that he has already lived this long, and are justified in having a presumption that he wishes to live. If he is grateful to us for saving his life, this alone justifies our previous action; and if he still wants to die after his recovery, he is still alive to make that choice, and there remain many ways in which he can undertake to bring about his own death if he so chooses. Some decisions, once made, are extremely far-reaching, or dangerous, or irreversible, sometimes all three at once, as in the present case.

When this is so, we act paternalistically on the person’s behalf, so that he can live to freely choose another day.

IV

It is one thing to be justified in doing X; it is another thing to require everyone to do X by law. Is there any justification at all for legal paternalism?

Mill himself thought there were occasions when legal paternalism was justified. He held, for example, that a contract by which a person agrees to sell himself into perpetual slavery should be null and void, as indeed it would be declared by virtually any court in the Western world. But why, if a person signs such a contract, should anyone interfere with it? “The reason for not interfering, unless for the sake of others, with a person’s voluntary acts,” wrote Mill in On Liberty, “is consideration for his liberty. . . . By selling himself for a slave, he abdicates his liberty; he foregoes any future use of it beyond that single act. He therefore defeats, in his own case, the very purpose which is the justification of allowing him to dispose of himself. . . . The principle of freedom cannot require that he should be free not to be free. It is not freedom to be allowed to alienate his freedom.” The reason for not honoring such a contract is the need to preserve the liberty of the person to make future choices. Paternalism is justified at time t1 in order to preserve a wider range of freedom for that individual at times t2, t3, . . ., tn.

Perhaps this example is extreme, or at any rate unique. Let us return then to our more mundane example, the law (which exists in all the states of the United States except four) requiring motorcyclists to wear helmets for their own protection. But “for their own protection” is not the only reason why such laws have been passed. It is also for the protection of others, thus falling under the heading of impure paternalism rather than pure paternalism. (A law is purely paternalistic if it is solely for the individual’s protection; it is impurely paternalistic when it is partly for that reason and partly for other reasons.) Without a helmet, a motorcyclist involved in an accident is liable to get a permanent head injury, and under present welfare and disability laws he would be a permanent ward of the state, perhaps living on for decades at taxpayer expense. The Supreme Court of Rhode Island a few years ago upheld the helmet requirement on the ground that it was “not persuaded that the legislature is powerless to prohibit individuals from pursuing a course of conduct which could conceivably result in their becoming public charges.”

Committing suicide is commonly a criminal offense. (You can be killed for doing it.) Even unsuccessful attempts are punishable. Yet if your life is your own, haven’t you the right to take it whenever you wish? What right has the State to command you not to take it? None, we say. Yet the State orders its policemen, when a person tries to kill himself by jumping in the river, to do their best to rescue the would-be suicide provided they can do so without “substantial risk” to their own lives. Is there any justification at all for this rule? I believe that such a rule could be defended, for the kind of reason already given: by forcibly preventing a person from taking his life at time t1, the rescuer enables that person to make his own choice later, whereas the person’s death would put an end to all future choices.

Perhaps the person was in a depressed state of mind which would pass, if he lived; perhaps he was confused, or drugged, or deranged; the policeman is in no position to know when he sees the man jump. It is better to assume that in the long run the man wants to live, than to assume that his continuing and steady disposition (at times t1, t2, . . ., tn) would be to die. If one assumes that his attempt is only a temporary aberration, and acts accordingly, the rewards may be great; whereas if it is not merely a temporary aberration, but an abiding disposition, then the man will still be alive to make a choice for death at a later time.

Paternalism in such a case represents a kind of wager made by the person acting paternalistically on another’s behalf: “I’ll wager that the long-run trend of your desires is contrary to your apparent wish at the present moment, so I will act to preserve your long-term wish even if it means denying your present, and hopefully temporary, one.” In some cases it may even be justifiable, as in the case of teen-age marriages, to have an enforced waiting period: when the consequences of the act would be far-reaching and possibly catastrophic, it may be better to make the person wait or hesitate even if he doesn’t wish to at the time, just as one makes the person get up even if he doesn’t want to at the time.

An impulsive suicide leap would have far-reaching and irreversible consequences, so isn’t one justified in erring, if at all, on the side of caution? If the weeks go by and the person is still deeply depressed and refuses advice or therapy, then he can, with Marcus Aurelius, weigh the pros and cons carefully and still decide, “The room is smoky, so I leave it.”

Rather than adopt the simplistic conclusion that all paternalistic action is wrong, I shall adopt a more moderate one: the greater the degree to which a person’s action (or a proposed action, or a thought-of action) is voluntary, the less are other persons (or institutions, especially the law) justified in behaving paternalistically toward that person. But the key word here is “voluntary.” The popular conception of voluntariness, shared by most libertarians and embedded in most libertarian literature, seems to me only to skim the surface of the concept: it holds that voluntariness simply means non-coercion. As long as you’ve not been coerced, on this view, your decision is voluntary.

But in my view much more than this is required.

1. Freedom from coercion and pressure. It is true, of course, that when coercion occurs the decision is not voluntary. But even here there are degrees. The limiting case of coercion is one in which, for example, someone stronger than you are forces your fingers around the trigger of the gun; you resist, but without success. In that case it isn’t your act at all, but the act of the person who forced you. Still, you were coerced. More typically, coercion consists not of overt physical action but of the threat of it: “If you don’t hand over your wallet, I’ll shoot.” Unlike the first case, in threat cases there is a choice: you can surrender your life instead of (or probably in addition to) your wallet. But it isn’t much of a choice, and handing over the wallet isn’t the choice we would have made except for the coercion; we were made to do something we would not voluntarily have done.

Threats, too, are a matter of degree. Threat of loss of life is more serious than threat of injury; threat of injury is (usually) more serious than threat of loss of employment; and a threat by your mother-in-law to move if you don’t do what she asks is still less of a threat; indeed, it may be not a threat at all, but rather its opposite, an inducement. Many libertarians are willing to call it coercion only if there is physical harm or the threat of physical harm, but in my opinion this is much too narrow. A threat of loss of a job may not be much of a threat if you can easily obtain another; but if no others are obtainable within a hundred miles, or if your special skill is not one for which there is any longer much demand, or if you would have to move your whole family to another state, the threat of loss of a job could be very serious. In any case it’s not a job you would voluntarily have left; you would not have quit it but for the coercion (and it is coercion, threatening the means by which you live, differing only in degree from a threat to life or limb).

Indeed, any kind of pressure put on you interferes with the voluntariness of your decision. The warden says, “If you don’t cooperate with us by joining the group therapy sessions, we’ll put you in the hole for two weeks.” Surely this compromises the voluntariness of the prisoner’s decision.

Someone puts pressure on you to make a decision hastily when you wouldn’t have made it without the pressure; while this may not be comparable to loss of life or limb, it may seriously compromise the voluntariness of your decision. It may be that laws against duelling are justified because, if duelling were legally permitted, many people would feel great pressure to preserve their “macho” image by never turning down a challenge, and thus they would be (not exactly forced, but) pressured, perhaps with enormous sociological pressure, into entering a duel time after time even though they would prefer not to, and would refrain but for the pressure. It is not an outright case of coercion, but there is a continuum between coercion and pressure, and when the pressure is of the kind I have described, an individual will be relieved and gratified, and will in the long run fulfill his life-plan much more in accordance with his own wishes, if the practice is prohibited by law. (Remember the film The Duellists, in which this kind of pressure ruins the protagonist’s whole life. How different is that from killing him outright?) There is a certain paternalistic wisdom in the remark of that eminent philosopher Groucho Marx in one of his films, when he wakes from a faint and says, “Force some brandy down my throat!”

Any influence, whether pressure or outright coercion, which keeps the decision from “filtering through your mind,” and thus triggers it with only partial cooperation, or none, from your untrammeled decision-making faculties, tends to inhibit the full voluntariness of the decision. But freedom from coercion and pressure is only one of the conditions requisite for voluntary action.

2. Informed and Educated Consent. The decision must be informed, based on the facts relevant to the case, and purged of false information. If the merchant sells you what he says is a real diamond when it’s actually glass, and you pay the price of a diamond, your decision to pay is not voluntary: “You wouldn’t have paid that much voluntarily,” we say, at least not for a piece of glass. It’s not that you were coerced, or even pressured; you were defrauded, that is, you were fed false information in making your decision.

Fraud is only one special case. You think you are drinking water: it was water you asked for, and your host at the party brought a clear liquid that looked like water; only it contained poison. Even though no pressure was placed upon you, it is not reasonable to hold that you are voluntarily drinking poison. Drinking the poison is not, in these circumstances, a voluntary act; drinking water would have been, but that is not what you are doing. Or: you start to walk across a bridge, not knowing that the bridge has collapsed further along (you can’t see it through the fog). You know that if it has collapsed you will likely fall to your death, but you don’t know that it has collapsed. Since your aim is to cross the bridge and not to commit suicide, your action, based on misinformation, is not voluntary. If a man really thought that when he jumped out of the 20th-floor window he would float through the air, would his jumping to his death still be voluntary?

When a patient consents to participate in a medical experiment (he’s not threatened, not pressured), but some of the possible serious consequences or unpleasant side-effects of the experimental drug have been concealed from him, one would not say that he consented voluntarily to take the drug. There must not only be uncoerced consent; there must be informed consent. Because his consent is not informed, it is not fully voluntary. How informed must it be to be “really informed”? The general formula is: he must be told all the relevant facts prior to making his decision. But this too turns out to be a matter of degree: one could go on forever citing medical facts which might turn out to be relevant; can one ever be quite sure one has reached an end of citing such facts? Even if the physician or researcher has cited all the facts he knows, there may still be others he doesn’t know which are highly relevant to the patient’s decision, even to his life or death. It would seem, then, that a patient can have “informed consent” but not “fully informed consent.” If full (complete) information is required for voluntariness, the patient’s consent must always be something less than fully voluntary. But once again, this is a matter of degree.

When prisoners, or patients in mental hospitals, are encouraged to offer themselves as experimental guinea pigs, it is highly probable that there is, lurking in the background if not in the foreground, some external pressure (punishment if you don’t, reward if you do). But in addition to this, it is seldom indeed that the patient is told even all the relevant information that the physician knows; what happens is more like “How would you guys like to join us in an interesting experiment? It won’t take much of your time,” and so on. Thus the consent fails of voluntariness on both counts.

It would hardly be an overstatement to say that the consent of children to participate in such an experiment can never be wholly voluntary, and that “voluntary consent,” though it may be required in such a case, can never be given. Even if the child could reel off all the information an unusually loquacious physician has presented regarding the new medication, the child is not in a position to appreciate the force of that information. How many children can really understand the full force of a simple statement like “There’s a 50-50 chance that you’ll die”? Children can make all kinds of confident assertions, wagers, and challenges, not knowing fully what they really mean. When the twelve-year-old is offered some L.S.D., with the invitation “It’ll give you a wonderful high,” he may accept it eagerly, just as a baby might play with a stick of dynamite or a loaded gun. For this reason, contrary to what some libertarians apparently believe, all such invitations by others should be prohibited by law, for the child’s protection. The child cannot give informed consent, much less “educated consent,” and those who would take advantage of the child’s incapacity should be met with the full force of the criminal law. To say of the child that “after all he gave his consent” would be ludicrous if its consequences were not so tragic.

3. Healthy Psychological State. I believe that there is a third condition that must be fulfilled as well. A person may not be under coercion or outside pressure, and he may be fully informed of the relevant facts of the case, and yet he may make his decision in what I can only describe as an unsatisfactory (or irrational, depending on what that term is taken to mean) psychological state. A person may be mentally deranged; but, short of this extreme, he may be in a daze, or drugged, or in an acute state of grief or depression, or simply confused. Ordinarily when a person is in such a state he can hardly be described as “fully informed,” and so his action would fail of voluntariness by the second criterion. But there may well be occasions when he is not pressured and all of the facts are clearly before him, and yet he is in no position to make a decision such as he would make if he were not in such a psychological state. A person in a state of depression might be quite lucid as to the facts, yet a recital of ordinarily horrifying facts, such as his own imminent death or the extinction of the entire world, may well not move him to any kind of action or response.

I do not wish to say that any decision we might label as unwise shows that the person is in such an “abnormal” psychological state; people can certainly act voluntarily and yet foolishly. I only wish to suggest that when a person is in such a mental state as I have indicated, his decisions should not be described as fully voluntary. A psychotic in a highly manic phase may jump out of a second-story window, quite without coercion and in full possession of information as to the probable effects of his action. It is primarily because of the mental state of such a person, not because of pressure or lack of information, that we hesitate to describe his actions as fully voluntary.

In discussing human action, libertarians place very great emphasis upon voluntariness. But in my opinion most libertarians conceive it too thinly. “If he was forced, he hasn’t acted voluntarily”; this much libertarians all assent to. But too often they fail to see that voluntariness is not as simple as that; once it is clear that no coercion or pressure has been applied, the action may yet fail of being voluntary. I have argued that the simplistic conception of voluntariness not only fails to do justice to the concept, but is often highly unfortunate in its effects. And I have argued that voluntariness, like so many other concepts, is not a yes-or-no concept but a matter of degree: not only does coercion-pressure itself encompass a broad spectrum of influences, from the application of force at one end to the exertion of subtle psychological pressure at the other, but even when no external pressure has been applied, an act may be only incompletely voluntary because of its failure to meet the other two conditions.

VI

Whenever I have offered remarks in defense of paternalism in the previous pages, the paternalistic action was to be taken in order to help a person achieve his own goals. The man wants to get up at 7 a.m. to keep his job, and by going against his 6 a.m. command we are helping him achieve what he himself (though not at that moment) wants. If a person’s suicidal impulse is transitory, we help him achieve his long-term goals (which all, of course, presuppose life) by not letting him kill himself now. Even when laws prohibiting duelling were defended, it was on the assumption that a life freed of this curse is what the person who is constantly being challenged to duels really wants for himself.
But there is also paternalism which thwarts the person’s long-term goals.

Laws limiting the number of hours per week a person may work are often defended as protecting that person; but what if the person doesn’t want any such thing? What if the person wants to work extra long hours this year in order to have money to start a possibly lucrative business next year?

“But,” one may say, “surely laws or actions that thwart the person’s own goals can’t be paternalistic at all, because part of the definition of paternalistic action is that it’s for the person’s own good.” Yes, but there’s the rub: what is for the person’s good may not be the same as what he wants (even in the long run). Suppose that what would be for his good is to develop his talents so as to have a fulfilling life, but that all he wants is to be a bum. Or suppose he is a drug addict, and all he wants for himself, even over a life-span, is a state of drug-soaked euphoria (he doesn’t mind if his life is short, as long as it is, by his own standards, sweet). Even if we believe, and even if we believe truly, that such a life does not serve his good (we think of the wasted talents and of what he might have achieved and enjoyed if he had not, on our view, thrown away his life), we are nevertheless faced with the fact that what we want for him is not the same as what he wants for himself.

Any kind of paternalism which consists of our acting against his will to achieve our goals for him, rather than our acting against his (present) will to achieve his own goals (assuming, of course, that he is sufficiently mature to have them), is the kind of paternalism which I believe libertarians should condemn. Libertarians have condemned all paternalism without recognizing its two distinct forms, one of which may sometimes be acceptable and the other not.

Once it is clear that our goals for a person do not coincide with his goals for himself, and once we have used reason and possibly persuasion to convince him (never force), and he still sticks to his own, then as libertarians we must conclude: “It’s his life, and I don’t own it. I may sometimes use coercion against his will to promote his own ends, but I may never use coercion against his will to promote my ends. From my point of view, and perhaps even in some cosmic perspective, my ideals for him are better than his own. But his have the unique distinguishing feature that they are his; and as such, I have no right to interfere forcibly with them.” Here, as libertarians, we can stand pat. It is, after all, just another application of Kant’s Second Moral Law: that we should always treat others as ends in themselves, never as means toward our own ends.

NOTES

1. See, for example, James Ratcliffe, ed., The Good Samaritan and the Law (New York: Anchor Doubleday Books, 1966).

2. See Joel Feinberg, Social Philosophy (Englewood Cliffs, N.J.: Prentice-Hall paperback, Foundations of Philosophy series, 1973), chaps. 2 and 3.

3. Gerald Dworkin, “Paternalism,” The Monist 56, no. 1.

4. See John Hospers, “Some Problems Concerning Punishment and the Use of Force,” Reason (November 1972 and January 1973).

Journal of Libertarian Studies, Vol. IV, No. 3 (Summer 1980)