Author Archive for Family Wealth Wisdom

Accumulation of Assets – Only Half a Plan

By Charles Griffin,
CLU, ChFC
President of Family Wealth and Wisdom, Inc.

Try climbing up a mountain only to find that you can’t make it down. Sounds silly, I know, but it happens regularly. To begin with, most people who attempt to scale Mount Everest (about 72 percent) fail to reach the top.

Unfortunately, we fail at our retirement objectives even more abysmally. Of 100 people who start working at age 25, by age 65 only 4 percent or fewer have put away adequate capital for retirement! That means that 96 percent failed to reach the top of their own personal retirement mountain! How about you? Are you heading for a financial disaster like most people?

Alas, even worse than failing to amass a retirement nest egg is that the vast majority of us who do accumulate some assets for retirement wind up losing a huge portion, sometimes well over half of what we’ve accumulated, because we don’t know how to get back down the mountain with our assets intact. This happens at a time when we are tired of sacrificing to make a living. We have worked for years only to see large portions of our wealth erode or be taken away from us by the government, banks, financial gurus, brokers, insurance companies, and others. At this point in our lives we are oftentimes ill equipped to get back down the financial mountain. Everyone who has made a living from us has headed for the hills. Those who haven’t have no idea how to advise us on avoiding the financial pitfalls we are about to face, pitfalls more onerous than those we encountered on the climb up. (Some of these pitfalls include inflation, taxation, market risk, access to and proper use of assets, control of assets, the costs of maintaining assets, medical needs, estate preservation, gifting, charitable giving, proper stewardship, etc.)

We’re not properly set up to avoid the pitfalls, both inherent and planned (some subtle, some not so subtle), that we will encounter in our retirement years. These tragedies, while extremely common, can be avoided. The sooner we acknowledge their existence and come to grips with them, the better we and our loved ones will fare.

Avoidance starts with proper preparation. It is never too late to start planning and trying to improve your present position. For instance, did you know that up to 85 percent of your Social Security income can be (and most likely will be) taxed again? That’s right! If you’re receiving $20,000 a year in Social Security retirement benefits, you can be taxed an additional $4,250 per year, each and every year, for the remainder of your retirement (in a 25 percent bracket). So, over 24 years of retirement, you can pay over $100,000 in taxes that you don’t need to pay, simply because you didn’t know how not to. You are not going to be told how to get down from your financial Mount Everest by the government, the entity that wants all of your money anyway.
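For readers who want to check the arithmetic, here is a minimal sketch; it assumes, as the example above does, the full 85 percent inclusion, a flat 25 percent bracket, and a 24-year retirement, and it is an illustration, not tax advice:

    # Illustrative arithmetic from the example above -- not tax advice.
    annual_benefit = 20_000   # yearly Social Security benefit ($)
    taxable_share = 0.85      # assumes the maximum 85% of benefits is taxable
    marginal_rate = 0.25      # assumes a flat 25% tax bracket
    years = 24                # assumed length of retirement

    annual_tax = annual_benefit * taxable_share * marginal_rate
    lifetime_tax = annual_tax * years

    print(f"Extra tax per year: ${annual_tax:,.0f}")     # $4,250
    print(f"Over {years} years: ${lifetime_tax:,.0f}")   # $102,000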

Consider this fact. All along our career’s journey we’ve thought of ourselves as common, simply because most of our friends were in the same socio-economic class as we were. In reality, it turns out that we are not very common at all. Those of us whose household’s adjusted gross income is $66,000 are actually in the upper 25 percent of all income earners in the country. Those whose adjusted gross income is $112,000 are in the top 10 percent. Do you feel rich, or just ordinary and common? The top 5 percent of income earners begins with those whose adjusted gross income is $155,000, while the top 1 percent begins with those earning $345,000.

So, does this mean that your income is relatively common or uncommon? Do you think that the advice given to the masses, to common income earners, focuses on wealth transfers due to taxation? Of course not! So why do we continue to do the same common things that most everyone else is doing? While it’s true that we don’t know what we don’t know, we don’t need to stay in financial bondage any longer than it takes to become educated.

We at Family Wealth & Wisdom, Inc. are here to educate those who want to be able to do more with what they have.  Call us for your free personal interview.  We look forward to hearing from you soon.

The Fall of Communism in Virginia

By Murray N. Rothbard

Mises Daily: Thursday, February 23, 2012

[Conceived in Liberty (1975)]

In fact, the Virginia colony was not doing very well in drawing off England’s surplus poor. Besides transporting vagrants and criminals to Virginia, the London Company and the City of London agreed to transport poor children from London to Virginia. However, the poorest refused the proffered boon and the company moved to obtain warrants to force the children to migrate. It seemed, indeed, that the Virginia colony, failing also to return profits to the company investors, was becoming a failure on every count.

The survival of the Virginia colony hung, in fact, for years by a hair-breadth. The colonists were not accustomed to the labor required of a pioneer, and malaria decimated the settlers. Of the 104 colonists who reached Virginia in May 1607, only 30 were still alive by that fall, and a similar death rate prevailed among new arrivals for many years. As late as 1616, only 350 colonists remained of a grand total of over 1,600 immigrants.

One major reason for the survival of this distressed colony was the changes that the company agreed to make in its social structure. The bulk of the colonists had been under “indenture” contracts, and were in servitude to the company for seven years in exchange for passage money and maintenance during the period, and sometimes for the prospect of a little land at the end of their term of service. The contract was called an indenture because it was originally written in duplicate on a large sheet — the two halves separated by a jagged line called an “indent.” While it is true that the original contract was generally voluntary, it is also true that a free society does not enforce even temporary voluntary slave contracts, since it must allow for a person to be able to change his mind, and for the inalienability of a person’s control over his will and his body. While a man’s property is alienable and may be transferred from one person to another, a person’s will is not; the creditor in a free society may enforce the collection of payment for money he may have advanced (in this case, passage and maintenance money), but he may not continue to enforce slave labor, however temporary it may be. Furthermore, many of the indentures were compulsory and not voluntary — for example, those involving political prisoners, imprisoned debtors, and kidnapped children of the English lower classes. The children were kidnapped by professional “spirits” or “crimps” and sold to the colonists.

In the concrete conditions of the colony, slavery, as always, robbed the individual of his incentive to work and save, and thereby endangered the survival of the settlement. The new charter granted in 1609 by the Crown to the company (now called the Virginia Company) added to the incentives of the individual colonists by providing that every settler above the age of ten be given one share of stock in the company. At the end of seven years, each person was promised a grant of 100 acres of land, and a share of assets of the company in proportion to the shares of stock held. The new charter also granted the company more independence, and more responsibility to its stockholders, by providing that all vacancies in the governing Royal Council be filled by the company, which would thus eventually assume control. The charter of 1609 also stored up trouble for the future by adding wildly to the grant of land to the Virginia Company. The original charter had sensibly confined the grant to the coastal area (to 100 miles inland) — the extent of English sovereignty on the continent. But the 1609 charter grandiosely extended the Virginia Company “from sea to sea,” that is, westward to the Pacific. Furthermore, its wording was so vague as to make it unclear whether the extension was westward or northwestward — not an academic point, but a prolific source of conflict later on. The charter of 1612 added the island of Bermuda to the vast Virginia domain, but this was soon farmed out to a subsidiary corporation.

The incentives provided by the charter of 1609, however, were still only future promises. The colony was still being run on “communist” principles — each person contributed the fruit of his labor according to his ability to a common storehouse run by the company, and from this common store each received produce according to his need. And this was a communism not voluntarily contracted by the colonists themselves, but imposed upon them by their master, the Virginia Company, the receiver of the arbitrary land grant for the territory.

The result of this communism was what we might expect: each individual gained only a negligible amount of goods from his own exertions — since the fruit of all these went into the common store — and hence had little incentive to work, or to exercise initiative or ingenuity under the difficult conditions in Virginia. And this lack of incentive was doubly reinforced by the fact that the colonist was assured, regardless of how much or how well he worked, of an equal share of goods from the common store. Under such conditions, with the motor of incentive gone from each individual, even the menace of death and starvation for the group as a whole — and even a veritable reign of terror by the governors — could not provide the necessary spur for each particular man.

The communism was only an aspect of the harshness of the laws and the government suffered by the colony. Absolute power of life and death over the colonists was often held by one or two councillors of the company. Thus, Captain John Smith, the only surviving Royal Council member in the winter of 1609, read his absolute powers to the colonists once a week. “There are no more Councils to protect or curb my endeavors,” he thundered, and every violator of his decrees could “assuredly expect his due punishment.” Sir Thomas Gates, appointed governor of Virginia in 1609, was instructed by the company to “proceed by martial law … as of most dispatch and tenor and fittest for this government [of Virginia].” Accordingly, Gates established a code of military discipline over the colony in May 1610. The code ordered strict religious observance, among other things. Some 20 “crimes” were punishable by death, including such practices as trading with Indians without a license, killing cattle and poultry without a license, escape from the colony, and persistent refusal to attend church. One of the most heinous acts was apparently running away from this virtual prison to the supposedly savage Indian natives; captured runaway colonists were executed by hanging, shooting, burning, or being broken on the wheel. It is no wonder that Gates’s instructions took the precaution of providing him with a bodyguard to protect him from the wrath of his subjects; for, as the succeeding governor wrote in the following year, the colony was “full of mutiny and treasonable inhabitants.”

The directors of the Virginia Company decided, unfortunately, that the cure for the grave ailments of the colony was not less but even more discipline. Accordingly, they sent Sir Thomas Dale to be governor and ruler of the colony. Dale increased the severity of the laws in June 1611. Dale’s Laws — “the Laws Divine, Moral and Martial” — became justly notorious: They provided, for example, that every man and woman in the colony be forced to attend divine service (Anglican) twice a day or be severely punished. For the first absence, the culprit was to go without food; for the second, to be publicly whipped; and for the third, to be forced to work in the galleys for six months. This was not all. Every person was compelled to satisfy the Anglican minister of his religious soundness, and to place himself under the minister’s instructions; neglect of this duty was punished by public whipping each day of the neglect. No other offense was more criminal than any criticism of the Thirty-nine Articles of the Church of England: torture and death were the lot of any who persisted in open criticism. This stringent repression reflected the growing movement in England, of Puritans and other Dissenters, to reform, or to win acceptance alongside, the established Church of England. Dale’s Laws also provided

That no man speak impiously … against the holy and blessed Trinity … or against the known Articles of the Christian faith, upon pain of death.…

That no man shall use any traitorous words against His Majesty’s person, or royal authority, upon pain of death.…

No man … shall dare to detract, slander, calumniate or utter unseemly speeches, either against Council or against Committees, Assistants … etc. First offense to be whipped three times; second offense to be sent to galleys; third offense — death.

Offenses such as obtaining food from the Indians, stealing food, and attempting to return to England were punishable by death and torture. Lesser offenses were punished by whipping or by slavery in irons for a number of years. Governor Dale’s major constructive act was to begin slightly the process of dissolution of communism in the Virginia colony; to stimulate individual self-interest, he granted three acres of land, and the fruits thereof, to each of the old settlers.

Dale’s successor, Captain Samuel Argall, a relative of Sir Thomas Smith, arrived in 1617, and found such increased laxity during the interim administration of Captain George Yeardley that he did not hesitate to reimpose Dale’s Laws. Argall ordered every person to go to church Sundays and holidays or suffer torture and “be a slave the week following.” He also imposed forced labor more severely.

Fortunately for the success of the Virginia colony, the Virginia Company came into the hands of the Puritans in London. Sir Thomas Smith was ousted in 1619 and his post as treasurer of the company was assumed by Sir Edwin Sandys, a Puritan leader in the House of Commons who had prepared the draft of the amended charter of 1609. Sandys, one of the great leaders of the liberal dissent in Parliament, had helped to draw up the remonstrance against the conduct of James I in relation to the king’s first Parliament. Sir Edwin had urged that all prisoners have benefit of counsel; had advocated freedom of trade and opposed monopolies and feudalism; had favored religious toleration; and generally had espoused the grievances of the people against the Crown. For Virginia, Sandys wanted to abandon the single company plantation and to encourage private plantations, the ready acquisition of land, and speedy settlement.

The relatively liberal Puritans removed and attempted to arrest Argall, and sent Sir George Yeardley to Virginia as governor. Yeardley at once proceeded to reform the despotic laws of the colony. He substituted a much milder code in November 1618 (called by the colonists “The Great Charter”): everyone was still forced to attend Church of England services, but only twice each Sunday, and the penalty for absence was now reduced to the relatively innocuous three shillings for each offense. Yeardley also increased to 50 acres the allotment of land to each settler, thereby speeding the dissolution of communism, and also beginning the process of transferring land from the company to the individual settler who had occupied and worked it. Furthermore, land that had been promised to the settlers after a seven-year term was now allotted to them immediately.

The colonists themselves testified to the splendid effects of the Yeardley reforms, in a declaration of 1624. The reforms

gave such encouragement to every person here that all of them followed their particular labors with singular alacrity and industry, so that … within the space of three years, our country flourished with many new erected Plantations.… The plenty of these times likewise was such that all men generally were sufficiently furnished with corn, and many also had plenty of cattle, swine, poultry, and other good provisions to nourish them.

In his Great Charter, Yeardley also brought to the colonists the first representative institution in America. The governor established a General Assembly, which consisted of six councillors appointed by the company, and burgesses elected by the freemen of the colony. Two burgesses were to be elected from each of 11 “plantations”: 4 “general plantations,” denoting subsettlements that had been made in Virginia; and 7 private or “particular” plantations, also known as “hundreds.” The 4 general plantations, or subsettlements, each governed locally by its key town or “city,” were the City of Henrico, Charles City, James City (the capital), and the Borough of Kecoughtan, soon renamed Elizabeth City. The Assembly was to meet at least annually, make laws, and serve as the highest court of justice. The governor, however, had veto power over the Assembly, and the company’s edicts continued to be binding on the colony.

The first Assembly met at Jamestown on July 30, 1619, and it was this Assembly that ratified the repeal of Dale’s Laws and substituted the milder set. The introduction of representation thus went hand in hand with the new policy of liberalizing the laws; it was part and parcel of the relaxation of the previous company tyranny.

 

Murray N. Rothbard (1926–1995) was dean of the Austrian School. He was an economist, economic historian, and libertarian political philosopher.

Postwar Rent Controls

Mises Daily

by Robert L. Schuettinger and Eamonn F. Butler

This article is excerpted from Forty Centuries of Wage and Price Controls: How Not to Fight Inflation (1978).

The rent that a landlord charges for his accommodation is merely an instance of a price for a commodity, like all other prices for all other commodities. And like all other prices and all other commodities, rents have been a prime target for government restrictions. The postwar experience with rent control has been particularly revealing in regard to the adequacy of controls in general.

Governments have three main reasons for imposing rent control. The first is the fear that those who can pay will get all the housing and the poor will be left in the cold. The second is that landlords benefit too much from rents that can be indefinitely raised. The third is that a rise in rents is a form of inflation, and so should not be allowed.

The Housing Record of San Francisco

In a particularly penetrating article Milton Friedman and George Stigler examined the housing record of San Francisco.[1] After the earthquake of April 18, 1906, the heart of the city was utterly destroyed by fire. Some 225,000 people were homeless. “Yet,” say the authors, “when one turns to the San Francisco Chronicle of May 24, 1906 — the first available issue after the earthquake — there is not a single mention of a housing shortage! The classified advertisements listed 64 offers (some for more than one dwelling) of flats and houses for rent, and 19 of houses for sale, against five advertisements of flats or houses wanted. Then and thereafter a considerable number of all types of accommodation except hotel rooms were offered for rent.”

In 1906, San Francisco allowed the free market mechanism to allocate accommodation, allowing rents to find their own level after the disaster. Even so, there was a great deal of low-cost accommodation available in San Francisco at that time. (Friedman and Stigler quote the 1906 advertisement “Six-room house and bath, with 2 additional rooms in basement having fire-places, nicely furnished; fine piano; … $45.”)

To take another example of housing shortage, in 1946 the population of San Francisco had increased by about a third in six years as people migrated westward. The problem was much less severe than the 1906 shortage, at least on paper. But the newspapers billed the shortage as “the most critical problem facing California.” Advertisements for apartments to rent ran at about one-sixteenth of the 1906 level, while advertisements of houses for sale were up threefold. In 1906 after the earthquake, there was 1 “wanted for rent” for every 10 “houses or apartments for rent”; in 1946, there were 375 “wanted for rent” for every 10 “for rent.”

Why the disparity? Because in 1906, rents in San Francisco were free to rise. In 1946, the use of higher rents to ration housing had been made illegal by the imposition of rent ceilings.

And what of the arguments against the allocation of housing by price? The first is very questionable: as Friedman and Stigler observe, “At all times during the acute shortage in 1906, inexpensive flats and houses were available.” The second is misleading. Of course landlords do benefit from a shortage like that of 1906. But the ultimate solution of a housing shortage must be the construction of new property, and nobody will construct new houses for rent if he is denied, through rent control, an attractive return on his money. As for the third argument, that high rents are a form of inflation, it must be observed that one does not keep prices down in an economy merely by taking commodities off the market, as rent controls do.

Scandinavian Houses

Rent control was introduced in Sweden in 1942 as an “emergency” and temporary regulation. At least until the end of the Socialist government in 1976, it was still in effect. The wartime housing shortage, far from receding when the emergency passed, became much worse over the period that the controls operated.

In Sweden, the record of rent control speaks for itself. Says Sven Rydenfelt:[2]

To the economist, it seems self-evident that a price control like the Swedish rent control must lead to a demand surplus, that is, a housing shortage. For a long period the general public was more inclined to believe that the shortage was a result of the abnormal situation created by the war, and this even in a non-participating country like Sweden.… This opinion does not, however, accord with the evidence … that the shortage during the war years was insignificant compared with that after the war. It was only in the post-war years that the housing shortage assumed such proportions that it became Sweden’s most serious social problem.

The main demand-effect of Swedish rent controls has been to draw a huge number of single people — who would be more inclined to live with their families were rents allowed to rise — into the housing market.

Professor Frank Knight commented[3] on the phenomenon: “If educated people can’t or won’t see that fixing a price below the market level inevitably creates a ‘shortage’ … it is hard to believe in the usefulness of telling them anything whatever in this field of discourse.”

Rent Controls in Britain

Rent controls were first introduced in Britain in December 1915, prohibiting landlords from charging rents higher than those charged in August 1914, when the Great War broke out on the Continent of Europe. After the war, controls were relaxed to some extent, but new controls were imposed on September 1, 1939.

The economic effects of rent controls were (as Professor Paish notes)[4] inadequate maintenance, reduction in mobility, and fewer houses to let.

In recent years the situation has become more and more complicated. In the first place, public housing has become so heavily relied upon as a means of meeting the inevitable shortage that followed controls that some 42 percent of the population of the United Kingdom now live in publicly-owned housing; in Scotland, the proportion is 48 percent. Minimal rents are charged to these tenants. Private landlords are thus forced to compete with an ultra-low-cost housing alternative, which explains why the waiting lists for public housing are so long (a normal wait is several years) and why landlords are withdrawing their property from the market.

Another series of restrictions derives from the government’s attempts to tidy up the undesirable effects of the restrictions themselves. When landlords cannot extract an adequate profit margin from their properties, they let the buildings deteriorate, attempt to squeeze more tenants into the same building, and try to find ways around the rent restrictions. Hence the phenomenon of “Rachmanism,” which hit Britain in the early ’60s. To deal with these effects, it was thought necessary to introduce a new Rent Act in 1965, which gave security of tenure to many tenants. At the same time, rents of property not covered by the earlier regulations were “regulated” — that is, a “fair rent” was fixed and could be appealed from time to time by the landlord or the tenant. Since 1972, nearly all unfurnished rented property has been put under this rent-regulation mechanism. And what has happened? The Francis Committee on the Rent Acts[5] published a table showing that the number of unfurnished vacancies advertised in the London Weekly Advertiser fell from 767 to 66 in the seven years up to 1970. In the same period, the number of furnished vacancies increased from 855 to 1,290. “It is strange,” is the cynical comment of F. G. Pennance,[6] “that the Francis Committee forebore to draw the obvious conclusion — that rent regulation had affected supply.” And it had affected it for the worse. British landlords have become very reluctant to put their property up for rent, because the security of tenure offered to their customers means that they will often have difficulty in reclaiming the house. Nobody could estimate the number of landlords — especially small-scale operators — who have been driven from the housing market.

Other Effects of Rent Control

Many other severe side effects of rent control, beyond the mere shortage of housing, are easily seen.

The first is that controls, originally designed to help the poorer tenants, have now precipitated a situation in which many landlords are in fact poorer than their own tenants. For example, Dr. Willys Knight found in his study of Lansing, Michigan, that the median income of tenants was greater than the median income of landlords.[7] While the difference might be due to the effect of age (landlords are older and hence many of them have no income except rent), the encouragement of this difference does not seem to be a sensible way to solve our housing shortages. B. Bruce-Briggs, an urbanologist with the Hudson Institute, asserts flatly that “From the first, rent control [in New York] was actually a subsidy to the working and middle classes … partially levied on the very poor.”[8]

The second is that artificially low rents lead to misallocation of housing resources. Tenants have no need to move into smaller apartments to reduce their rents, because rents are already low. Similarly, homeowners with one or two spare rooms that they would otherwise rent out to single persons do not enter the housing market, because they do not receive a sufficient return to justify the expense of repairs and redecoration. These and other effects generate a situation in which many individuals and families are homeless, while perfectly good accommodation is withdrawn from the market.

With the lack of money in the housing market comes lack of adequate facilities. In his 1942 essay on French rent controls, de Jouvenel[9] observed that controls meant that middle-class apartments with three or four reception rooms frequently cost about $2 a month. “Rent seldom rises above 4 per cent of any income,” he commented. “Frequently it is less than 1 per cent.” In Paris at that time there were about 16,000 buildings in such disrepair that they could only be demolished. And 82 percent of Parisians had no bath or shower; more than half had outside lavatories, and a fifth had no running water. The owners, who were not in a financial position to keep up their own buildings, could hardly be blamed.

As the capital stock deteriorates, slums are born. Since no economic incentive exists for owners to repair run-down properties in declining areas, the blight spreads. As the blight spreads, more and more buildings become uninhabitable. The effect of rent controls is ultimately to remove once-habitable dwellings from the housing stock.

The Decline of New York

These effects can be seen to have contributed to the demise of one city in particular, namely New York. Federal rent control went into effect there in November 1943 and the state took over its administration in May 1950. In 1962, the city became the administrator: in so doing, it made a stick to beat itself. The damage done by controls “cannot be properly assessed in dollars-and-cents alone. As even the hapless officials responsible now reluctantly concede, rent control is costing the City of New York, through abandonment and ultimate destruction, upwards of 30,000 dwelling units annually.”[10]

Social effects in New York are both severe — as the crumbling tenements make clear — and subtle: by setting tenant against landlord, rent control fans the flames of social hatred and class warfare in a city once known as the nation’s melting pot.

Urbanologist B. Bruce-Briggs has concisely summed up the consequences of rent control in New York City. He notes that:

Rent control reduces mobility by encouraging tenants to stay put. It also encourages people to occupy more space than they otherwise would. It offers the landlord incentives not to provide adequate services; he must be forced to do so by law, leading to endless litigation. (In 1975 nearly a half-million cases went to New York’s Housing Court.) Rent control must be administered, at a cost of $13 million to New York City and State. It creates unimaginable costs for tenants and landlords in time and administrative fees. It has resulted in massive tax delinquency.[11]

Economic effects on the city itself are far-reaching. “Maximum Base Rents” are rarely increased by more than 8.5 percent per annum. In contrast, according to Barron’s[12], “taxes and labor are rising at well over 10 percent per year, while in the past 18 months, the price of fuel oil, a ponderable part of total operating costs, has soared by 2000 percent. Small wonder that more and more buildings are being run at a loss, while tax delinquency, once largely confined to one or two rotten boroughs, has spread far and wide.” Real estate tax delinquencies for fiscal year 1974–75 were estimated at $220 million, up from $148 million and $122 million in the two preceding years.
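To see how quickly that gap compounds, consider a small illustrative sketch; the 8.5 percent rent cap and the 10 percent cost growth are the figures quoted above, while the ten-year horizon is a hypothetical assumption:

    # Illustrative compounding of the squeeze described above (hypothetical horizon).
    rent_cap, cost_growth = 0.085, 0.10   # 8.5% max rent rise vs. >10% cost growth
    rent_index = cost_index = 100.0
    for year in range(10):
        rent_index *= 1 + rent_cap
        cost_index *= 1 + cost_growth
    print(f"After 10 years: rents {rent_index:.0f}, costs {cost_index:.0f}")
    # After 10 years: rents 226, costs 259 -- costs outgrow capped rents by ~15%.

Even at the legal maximum, controlled rents fall steadily behind operating costs, which is why more and more buildings end up run at a loss.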

When government agencies insist that their policies have kept down the cost of accommodation, it seems fair to ask that costs such as these be taken into consideration. Rent controls in the postwar period, like most price and wage restrictions, have turned out to be an expensive failure.

Robert Schuettinger was educated in a one-room schoolhouse in Charlotte, Vermont. He later studied under Nobel Laureate and Mises Institute founding board member F.A. Hayek at the Committee on Social Thought of the University of Chicago, where he also edited the New Individualist Review with Ralph Raico. See Robert L. Schuettinger’s article archives.

Eamonn Butler is director of the Adam Smith Institute. See his website. See Eamonn F. Butler’s article archives.

This article is excerpted from Forty Centuries of Wage and Price Controls: How Not to Fight Inflation (1978), chapter 12, “Postwar Rent Controls.”

Notes

[1] M. Friedman and G. Stigler, “Roofs or Ceilings? The Current Housing Problem,” Popular Essays on Current Problems, Vol. 1, No. 2 (Irvington-on-Hudson: Foundation for Economic Education, 1946).

[2] Sven Rydenfelt, “Rent Controls Thirty Years On,” Verdict on Rent Control (London: Institute of Economic Affairs, 1972). See also a Canadian version of this valuable work entitled Rent Control–A Popular Paradox (Vancouver, British Columbia: The Fraser Institute, 1975).

[3] Frank H. Knight, “Truth and Relevance at Bay,” American Economic Review, December 1949, p. 274.

[4] F. W. Paish, “The Economics of Rent Restriction,” Lloyds Bank Review, April 1950.

[5] Report of the Committee on the Rent Acts, Cmnd. 4609, H.M.S.O. London, 1971, p. 8.

[6] Introduction to Verdict on Rent Control, op. cit.

[7] W. Knight, Postwar Rent Control Controversy, Research Paper 23 (Atlanta: Georgia State College School of Business, 1962).

[8] B. Bruce-Briggs, “Rent Control Must Go,” The New York Times Magazine, April 18, 1976. There are of course many anecdotal “horror stories” to illustrate this all too common phenomenon (the sad fact that many social programs designed to help the poor do the exact opposite in practice). The author offers his own pet example:

My favorite freeloader is a good friend who earns more than $30,000 a year. He has possession of a rent-controlled apartment in Yorkville that he rarely uses, since he prefers to live in his mistress’s penthouse in the East 50’s. He would be a fool to give up the rent-controlled apartment — they may split — so he uses it as an occasional office or pied-à-terre, lends it to friends, or occasionally subleases it for a short period of time. When asked to justify this, he will look you straight in the eye and tell you about the needs of the old people in rent-controlled apartments on his block.

[9] Bertrand de Jouvenel, “No Vacancies” (Irvington-on-Hudson: Foundation for Economic Education, 1962).

[10] Barron’s, October 27, 1975, p. 7.

[11] B. Bruce-Briggs, op. cit.

[12] Barron’s, op. cit.

USPS: The Cursed Carriers

Mises Daily

By Brian Anderson

From the original conception of the United States Postal Service in the 1700s to the technologically advanced market of today, the words that enumerated to Congress the power to “establish Post Offices and post Roads” have never been more than a waste of ink.

In Uncle Sam, the Monopoly Man, William C. Wooldridge explains well the historical patterns of failure within the United States Postal Service (USPS):

More than a decade before Parson Weems immortalized the cherry tree, the United States Post Office was losing money. For most of the years since Postmaster General Thomas Osborne reported the first deficit to President George Washington, it has continued to lose money, receiving all the while less critical attention than the cherry tree it antedates. Yet the stars in their courses do not ineluctably dictate a government postal monopoly.

Early Americans saw these failures each day, and many became actors working for change.

The United States has a long and healthy history of entrepreneurial disobedience. You can easily say that the individualistic rejection of force in the marketplace was one of the only real mechanisms of “checks and balances” that actually worked against government.

So opposed were people to these government-run postal services in the 1800s that a natural order kicked in where no jury would even think of convicting the private agencies — an “underground nullification,” if you will. One of the first private American express firms was founded by William F. Harnden in 1839.

His business became very successful in the public eye, and the postmaster general, realizing that the competition hurt government revenue, began an investigation into its workings. Harnden wrote, in a letter to a Philadelphia business partner,

Receive nothing mailable. You will have no small number of Post Office spies at your heels. They will watch you very close. See that they have their trouble for their pains.

Since the service provided by Harnden’s firm was classified more as “package protection” and less as “package shipment,” however, there wasn’t too much the government could do.

Eventually Harnden contacted Henry Wells, whose connection with Daniel Drew, a well-known steamboat magnate and competitor to Cornelius Vanderbilt, allowed for a network expansion between shippers. Henry Wells — with George E. Pomeroy and Crawford Livingston — hoped to be acknowledged as a legal alternative to the USPS, and offered to carry mail for a mere 20 percent of the government’s then-current rate. The bureaucrats rejected the proposal but were subsequently forced to lower their own prices for fear of a popular backlash against such detrimental protectionism.

Three years after the meeting between Harnden and Wells, individualist anarchist Lysander Spooner came up with his own plan to compete against the USPS through the creation of the American Letter Mail Company. Unlike his predecessors, Spooner didn’t pretend to be in compliance with the government’s postal monopoly. He made two arguments:

  1. the Constitution didn’t openly ban private carriers from voluntarily serving customers, and
  2. he’d keep delivering the mail even if it was deemed illegal.

Spooner elaborated on the first point through his fervent pamphlet entitled The Unconstitutionality of the Laws of Congress, Prohibiting Private Mails. He writes,

If Congress cannot carry the letters of individuals as cheaply as individuals would do it, there is no propriety in their carrying them at all.…

By the old articles of Confederation, it was declared that “the United States, in Congress assembled, shall have the sole and exclusive right and power of establishing and regulating post-offices from one State to another throughout all the United States.”

When the constitution came to be adopted, this phraseology was altered, and the words “sole and exclusive” were omitted. This alteration … must certainly have been intentional — and it clearly indicates that the framers of the constitution did not intend to give to Congress, under the constitution, the same “exclusive” power, that had been possessed by the Congress of the Confederation.

But this obviously purposeful alteration in wording didn’t work well enough to convince the court system that his voluntary, high-quality business provided a legitimate service in the economy. In the words of Peter Schiff, “The government’s going to do what it wants to do, and the courts are going to support that. The courts don’t care.”

Luckily for us, Spooner continued to successfully and cheaply deliver mail for seven years before the US government shut down his operation. The 12¢ stamps sold by the USPS were no match for Spooner’s 3¢ stamps, so the US government, in order to oppose the inevitable, officially declared that all city streets were to be deemed post roads, available only to the USPS in letter delivery. (The disobedience led to Spooner’s more radical work 23 years later, No Treason: the Constitution of No Authority, which argued for the Constitution’s invalidity as a legal contract.)

Skip ahead to the 1900s and we see that the price of a first-class stamp increased 633 percent in only 27 years, compounded by a 10 percent decrease in delivery speed over 15 years. One would assume that, with the invention of basic email in the late 20th century, carriers would feel a stronger need to match the quick pace allowed by technology; the government didn’t. In a policy analysis for the Cato Institute, James Bovard finds,

In 1969 it required 1.5 days on average to deliver a first-class letter. By 1982 the average first-class letter required 1.65 days for delivery, and by 1987, 1.72 days. In the quarter of 1990 before the new standards were implemented, the average had increased to 1.80 days. In the quarter after the new standards began to be implemented, the average rose to 1.83 days — a 1.7 percent increase that makes current average delivery 22 percent slower than 1969 delivery.
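As a quick check, the percentages follow directly from Bovard’s quoted averages; the delivery times below are his figures, and the calculation merely verifies them:

    # Verifying the percentages implied by Bovard's delivery-time averages.
    before, after = 1.80, 1.83   # avg. delivery days around the 1990 standards change
    baseline_1969 = 1.5
    increase = (after / before - 1) * 100        # ~1.7% slowdown after the change
    vs_1969 = (after / baseline_1969 - 1) * 100  # ~22% slower than 1969 delivery
    print(f"{increase:.1f}% and {vs_1969:.0f}%")  # 1.7% and 22%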

And now we see the USPS in its saddest state yet. The 2000s have wrecked its only recognizable foundations. Nearly universal access is available to various kinds of communication, so it isn’t surprising that letter carrying is becoming obsolete. I don’t expect many carriers to continue business as usual, but it seems to me as if the government hasn’t even realized that we’re living in a new age.

For the past five years in a row, the USPS has had a negative net income in the billions, with a record loss of $8.5 billion in 2010, up a full $4.7 billion from 2009 alone. These failings led to the organization’s recent placement on the Government Accountability Office’s list of high-risk institutions. Meanwhile, private agencies like FedEx and United Parcel Service are growing fantastically each year, even with the current restrictions set against them.

Last year the USPS announced the closure of nearly 3,700 post offices across the United States in one last attempt to salvage its reputation, but the $200 million it will save is insignificant next to its deficits. Not surprisingly, the postal-workers union isn’t making it easy. While labor costs represent only 32 percent and 53 percent of expenses for FedEx and United Parcel Service, respectively, they represent an astounding 80 percent for the USPS. So why isn’t this crippling cost structure being addressed?

Murray Rothbard writes in Man, Economy, and State:

The inefficiencies of government operation are compounded by several other factors. As we have seen, a government enterprise competing in an industry can usually drive out private owners, since the government can subsidize itself in many ways and supply itself with unlimited funds when desired. Thus, it has little incentive to be efficient. In cases where it cannot compete even under these conditions, it can arrogate to itself a compulsory monopoly, driving out competitors by force.

And we clearly see this phenomenon in the case of the post office. Instead of facing the real issue at hand, executives at the USPS are focusing on $50 million in “stolen items” (a mere 0.5 percent of the 2010 annual deficit) that they’d like thieves to please return. We can only hope that the post office’s decision to eliminate next-day delivery for first-class mail will infuriate people to the point of paying attention to the root conflict.

It’s pathetic that a government-enforced monopoly continues to lose money.

In every facet of its business, the USPS has either been a failure from the get-go or has had its value swept under the rug by newer and quicker channels of communication. In either case, to echo Spooner, it is unfit to exist. Congressional action needs to strip down and cease the enforcement of every last private-express statute in the legal code.

Neither snow nor rain nor heat nor gloom, only privatization can keep these couriers from being replaced by real choices in the free market.

 

Brian Anderson is an undergraduate majoring in the biological sciences. He lives in the southeastern United States.

Inflation Targeting Hits the Wall

By Antony P. Mueller

www.Mises.org

The financial-market crisis is not over but has grown into a vicious sovereign-debt crisis. Nevertheless, monetary policy makers of the major economies go on practicing the same sort of policy that led to the crisis. Following the model of inflation targeting, they continue to disregard the quantity of money and the amount and kind of credit creation. As they did before, central bankers cut interest rates as low as they can go. Few seem to remember that the monetary-policy concept of inflation targeting was adopted with the promise that low and stable inflation rates would produce financial and economic stability. Reality has not confirmed this assurance. On the contrary, inflation targeting was instrumental in bringing about the current financial crisis.

What Is Inflation Targeting?

A central bank that pursues an inflation-targeting monetary policy raises the policy interest rate (which in the case of the United States is the federal-funds rate) when the current price-inflation rate tends to move beyond the target, and reduces the policy interest rate when the rate tends to fall below the target range. Operationally, the inflation rate is the target variable of this approach, while the policy interest rate serves as the instrument variable. Unlike in monetarism, the monetary aggregates play only a secondary role, or no role at all, in the inflation-targeting model.

The monetary-policy model of inflation targeting can be expanded into the so-called Taylor rule to include the output gap and thus to encompass economic-policy goals such as economic growth and employment. Unlike the original Taylor rule, however, inflation targeting in practice has ignored the growth of money and credit and has selected the current official price-inflation rate as its sole standard. Particularly in phases when the unemployment rate was above the acceptable level, low readings of the consumer price index have served as a justification for bringing interest rates down to excessively low levels. In many parts of the world where monetary policy employs an inflation-targeting framework, it has become the rule to ignore the expansion of the monetary aggregates and to install extremely low interest rates. Inflation targeting has led monetary authorities to ignore not only money and credit growth but also asset prices, along with other variables such as the exchange rate. By the rationale of inflation targeting, monetary policy has become blunt and ignorant by design. In this respect a repetition of an earlier failure has occurred, just at a time when the future head of the Federal Reserve felt sure he could promise that “we won’t do it again” and let the US economy fall into depression. As in other areas of policy, the only lessons monetary policy has learned from history are the wrong ones.
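To make the reference concrete, here is a minimal sketch of the original Taylor (1993) rule; the 0.5 coefficients and the 2 percent equilibrium real rate are Taylor’s published values, while the sample inputs are hypothetical:

    def taylor_rate(inflation, target=2.0, real_rate=2.0, output_gap=0.0):
        """Policy rate (%) under the original Taylor (1993) rule.
        All arguments are in percent; output_gap is the percent
        deviation of output from potential."""
        return real_rate + inflation + 0.5 * (inflation - target) + 0.5 * output_gap

    print(taylor_rate(2.0))                   # 4.0 -- inflation on target, neutral rate
    print(taylor_rate(4.0))                   # 7.0 -- inflation above target, rule tightens
    print(taylor_rate(0.5, output_gap=-3.0))  # 0.25 -- low inflation plus slack, deep cuts

Note that money and credit aggregates appear nowhere in the formula, which is precisely the blind spot the article criticizes.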

As was revealed by the recent release of the transcripts of the meetings of the Fed in 2006, central banking is a fraudulent institution where the men at the top act like deliberate ignoramuses. As the transcripts flagrantly show, it was not lack of information that made the authorities ignore reality but a deficiency of comprehension and an almost childish faith in model constructions.

An Earlier Episode of Inflation Targeting

Inflation targeting is not new. Its basic idea was conceived by the American economist Irving Fisher (1867–1947). The Fed implemented a rudimentary form of inflation targeting shortly after it became operative in 1914 and explicitly practiced a policy of “stabilizing the price level” in the 1920s, in the decade before the inception of the Great Depression.

The 1920s marked a period of rapid accumulation of debt that until 1929 was accompanied by a rise of wealth due to a stock-market and housing boom. The collapse of the market ushered the economy into the Great Depression, which lasted over a decade.

During the 1920s the US monetary authorities seemed little concerned with credit expansion because their main focus was the “price level” — a statistical construct that Fisher had also promoted. Noticing that the price level was “stable,” the Federal Reserve felt no need to change course or to become preoccupied with what was going on. The Roaring Twenties were in fact exuberant times — albeit not for agriculture. It was industry that celebrated the new era, and most of all the decade was one more heyday for Wall Street, after the financial bonanza that World War I had delivered to the financial sector.

The focus on price inflation had induced the monetary authorities to ignore credit growth and the expansion of money, as well as to disregard the productivity gains of the US economy in this period. The Fed felt vindicated for letting the monetary aggregates expand as long as the price level remained relatively stable. No consideration was given to the notion that with productivity advances the price level should decline, as had been the case when the United States was still on a full gold standard and the quantity of money was relatively constant. In the 1920s, fixated on the price level, the monetary authorities did not hold the quantity of money constant, which would have meant deflation, but instead allowed an expansion of the money supply, because there seemed to be no reason for concern as long as the price level stayed stable.

What happened in the 1920s was a misguided reaction by monetary policy makers to the widening divergence between the agricultural and industrial sectors of the US economy. While agriculture fell into depression shortly after World War I, US industry experienced a monetary-induced boom. On average, the price level appeared steady, although its stability was the result of a leveling out between a deflationary depression in the agricultural sector and an inflationary boom in the industrial sector.

The Current Crisis

The latest episode of a megaboom occurred in the 1990s when, as in the 1920s, there was a stock-market bubble in combination with a massive increase in the indebtedness of consumers for housing and other items. Central bankers did not pay much attention to the money supply and remained sanguine throughout the period that led up to the current crisis. The mantra of monetary policy was that, as long as the price level was relatively stable and only moderate price-inflation rates were registered, interest rates could fall as low as they would, and the money supply could grow without restraint, as large as the demand for money seemed to warrant.

There was a series of severe shocks in the 1990s, as well as in the decades before and after. Yet up to the outbreak of the current crisis, all of the preceding calamities could be overcome, so it seemed, with the simple tools of bailing out the creditors and expanding the money supply. Inflation targeting consequently entailed a pervasive policy of bailouts and thus laid the basis of a financial culture of moral hazard.

In 2007 financial markets suddenly began to freeze; the flow of money in the interbank market came to a standstill. It was as if a cardiac infarction had hit the heart of the financial markets. Albeit shocked, monetary policy makers demonstrated full confidence that a proper amount of liquidity injection would make the markets move again, and that soon, in their naïve conviction, the economy would recover to full bloom. Yet doom set in when the old recipe no longer worked. Despite massive injections of liquidity, markets recovered only slightly, and in 2008 a wave of defaults of financial institutions occurred. In August 2011, the United States came close to default when Congress was reluctant to raise the statutory debt limit. Shortly thereafter the global financial crisis deteriorated into the European sovereign-debt crisis. Greece came close to bankruptcy, and contagion hit Spain, Portugal, and Italy.

By early 2012, monetary policy has reached a stage where it is almost completely paralyzed. With interest rates close to zero in the major economies of the world, it is only through gargantuan liquidity injections that the financial system is propped up. Through its “zero interest rate policy” (ZIRP), through purchases of assets of dubious quality from financial institutions (alongside the Treasury’s Troubled Asset Relief Program, TARP), and through ever more liquidity pumped into the market by “quantitative easing” (QE), the Federal Reserve has expanded its balance sheet to unprecedented proportions. The real or imagined assumption that the financial system is on the verge of complete collapse has brought about massive government bailouts and stimulus programs that have resulted in rising fiscal deficits and unsustainable public-debt burdens. Deflation has become the ultimate scare of governments and the dreadful nightmare of central bankers.

Fear of Deflation

It is largely forgotten that the spectacular economic rise of Britain, of parts of the European continent, and of the United States over the nearly one hundred years before the outbreak of World War I was characterized by moderate deflation, particularly from the latter half of the 19th century, when productivity increases began to accelerate. The price level fell in the expanding economy because the money supply was linked to the gold stock, and the gold stock was relatively constant. The deflationary period was marked by an ascendancy of prosperity brought about by a financial environment of stable interest rates, moderately declining long-term prices, and rising real wages. Letting good deflation happen put a brake on excessive economic growth. A fixed stock of base money prevented excess on the upside, and thus it automatically provided a safeguard against excess on the downside. A stable stock of base money did not imply a strictly fixed amount of liquidity, because an adaptable velocity of money provides a range of flexibility.

The outbreak of World War I marks the end of the deflationary period and the beginning of the inflationary age. When the last obstacle to a fully discretionary hold on money was removed and the Federal Reserve gained unrestricted power to produce as much money as it wanted, a new chapter in monetary history began. With the abandonment of the last remnants of the gold standard in the Smithsonian Agreement, monetary policy no longer had any anchor other than some kind of policy concept. After the slashing of what was left of a monetary anchor in 1971, the monetary base of the US dollar began to rise and has since swelled into an avalanche of money.

Now the escalation of government debt to exorbitant heights practically prohibits central banks from raising interest rates should inflation emerge more prominently. As of now, the additional liquidity that the major central banks have created serves mainly to let the banking sector refinance itself. Monetary policy has become a vehicle for the bailout of a whole sector of the economy. By coming to the rescue of the financial sector, central banks have delivered even more monetary dynamite. The world is at a crossroads. The chance of getting out of the fix without great pain is much smaller than that of suffering either a hyperinflation followed by economic depression or an immediate crash into the abyss of a deflationary depression. Monetary policy has reached a dead end. Once again the magic formula of an interventionist monetary policy has hit the wall.

Conclusion

Inasmuch as central banks dominate the discourse about monetary policy, there is almost no debate about the thesis that inflation targeting is not only defective in guaranteeing monetary stability but also provided the conditions for the current financial crisis to happen. The episode that was praised as the Great Moderation was a great delusion, which has become the nightmare of a long stagnation. There is a vital need to establish a sound monetary system. Its consequence would be moderate deflation and the avoidance of extreme booms and busts. The main barrier against sound money is neither intellectual nor practical but political. The resistance comes from the public sector, because the chief casualty of an institutional change to sound money would be the modern inflated government, along with its warmongers, debt pushers, and all the rest of the spin doctors of deceitful promises who form part of this kingdom.

 

Antony Mueller is a German-born economist who teaches at the Federal University of Sergipe (UFS) in Brazil. He is an adjunct scholar of the Mises Institute and the founder of The Continental Economics Institute. He maintains the blog Cash and Currencies. Send him mail. See Antony P. Mueller’s article archives.

How Deflationary Forces Will Be Turned into Inflation

By Thorsten Polleit

www.mises.org

I.

The ongoing financial and economic crisis has not only stoked fears that it will end in inflation — as central banks will print up ever-greater amounts of money — but it has also given rise to a diametrically opposed concern: namely, that of deflation.

For instance, in December 2011 Christine Lagarde, head of the International Monetary Fund (IMF), warned that the world might risk sliding into a 1930s-style slump, such as the Great Depression.

This episode was characterized by bank failures worldwide and a shrinking of the money supply (that is, deflation), which in turn led to falling prices across the board, sharply falling production, and drastically rising unemployment.

In today’s fiat-money regime — which contrasts with the gold-exchange standard that was in place in many countries at that time — the possibility of deflation appears fairly small indeed.[1]

This becomes obvious if one takes a look at the workings of today’s fiat-money system, a system in which the money supply can actually be increased at any point in time in any amount deemed politically desirable.

II.

Commercial banks need two ingredients to produce additional bank-circulation credit, through which the fiat-money supply is increased: central-bank money and equity capital.

Central-bank money is a “monopoly product,” produced by the central bank, typically by lending it to commercial banks.

Equity capital comes from investors who are willing to invest their money in commercial banks, thereby becoming owners of the banks.

Banks need central-bank money for three reasons. First, they have to hold a certain percentage of their liabilities vis-à-vis nonbanks in central-bank money; these are the so-called minimum reserves.

Second, banks need central-bank money for making payments in the interbank market. And third, banks keep central-bank money for meeting the cash drain, caused by clients demanding a cash payout of their deposits.

If, for instance, the minimum reserve rate for demand deposits is 2 percent, the banking sector as a whole can produce $50 of credit and fiat money with each $1 of central-bank money (that is, 1 divided by 0.02).

Government regulation requires commercial banks to back their “risky assets” (such as loans and securities) with a “minimum” of equity capital. If, for instance, the minimum capital requirement is 8 percent, a bank can produce $12.50 of credit and money (that is, 1 divided by 0.08) with a given $1 of equity capital.

If the risk weighting of risky assets is, say, 25 percent rather than 100 percent, a bank can produce credit and fiat money in the amount of $50 (that is, $12.50 times 1 divided by 0.25). By the same token, a loss of $1 of equity then requires a bank to reduce its credit and money supply by $50.

Against this backdrop we find that the lower the minimum reserve ratio is, the more credit and fiat money the banking sector can produce with a given unit of central-bank money. And the lower the capital requirement and the risk weightings are, the higher the leverage banks can build up with a given amount of equity capital.
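To make the multiplier arithmetic of the preceding paragraphs explicit, here is a minimal Python sketch. The function names are ours, and the 2 percent, 8 percent, and 25 percent figures are the article’s illustrative numbers rather than actual regulatory constants.

```python
# Illustrative multiplier arithmetic from the paragraphs above.

def credit_from_reserves(central_bank_money, min_reserve_ratio):
    """Maximum credit the banking sector can pyramid on a given
    amount of central-bank money."""
    return central_bank_money / min_reserve_ratio

def credit_from_equity(equity, capital_requirement, risk_weight=1.0):
    """Maximum credit a bank can back with a given amount of equity,
    given the capital requirement and the risk weighting of its assets."""
    return equity / (capital_requirement * risk_weight)

print(credit_from_reserves(1, 0.02))      # 50.0  -> $50 per $1 of central-bank money
print(credit_from_equity(1, 0.08))        # 12.5  -> $12.50 per $1 of equity
print(credit_from_equity(1, 0.08, 0.25))  # 50.0  -> $50 at a 25% risk weight
```

Read in reverse, the same arithmetic shows why a $1 loss of equity can force a $50 contraction of credit and money, as noted above.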

III.

From early 1960 up to the end of 2007, US banks’ credit and money multipliers (which denote the amount of credit and money banks can produce with $1 in central-bank money) increased drastically — thanks to a continual rise in central-bank money, ever-lower reserve requirements, and readily available bank-equity capital.

For instance, with a central-bank money supply of $1, banks produced around $211 of bank credit in August 2008. This compares with less than $20 seen in the early 1960s.


In “fighting” the credit crisis, the US Federal Reserve increased US banks’ (excess) reserves drastically from late summer 2008 onward. As banks did not use these funds (in full) to produce additional credit and fiat-money balances, however, the credit and money multipliers collapsed.

The collapse of the multipliers conveys an important message: commercial banks are no longer willing, or no longer in a position, to produce additional credit and fiat money in the way they did in the precrisis period.

This finding can be explained by three factors. First, banks’ equity capital has become scarce due to losses (for instance, write-offs and loan defaults) incurred in the crisis.

Second, banks are no longer willing to keep high credit risks on their balance sheets. And third, banks’ stock valuations have become fairly depressed, making raising additional equity a costly undertaking for the owners of the banks (in terms of the dilution effect).

For instance, in the euro area, bank stock prices fell by around 76 percent from the beginning of 2007 to the beginning of 2012 — unmistakably signaling investors’ lack of confidence in the viability of many banks’ business models. In the United States, bank stock price declines amounted to slightly more than 50 percent.


The latest developments suggest that banks are about to start scaling back (at least for now) their risky assets in relation to their prevailing capital base. In other words, banks are adopting a strategy of “deleveraging” and “derisking.”

Such a strategy can be put into practice, for instance, by refraining from rolling over maturing loans granted to corporations, consumers, and public-sector entities — something that, in a fiat-money system, will result in a decline in the outstanding credit and fiat-money supply.

Alternatively, banks can reduce their risks by selling off assets so far recorded on their balance sheets. This would also reduce the outstanding money supply — as the buyers of bank assets would pay with existing demand deposits, which are thereby literally “destroyed.”

Should investors in bank capital become unwilling to keep their money exposed to existing credit risks, let alone to assume new ones, the fiat-money system, which has been highly inflationary in recent decades, would turn deflationary.

IV.

The chances of a full-fledged deflation (that is, a decline in the money stock) are fairly small in a fiat-money system, however, first and foremost because deflation runs counter to the vital interests of government and its close associates.

This group includes, for instance, those who are directly employed by government, receive business from government, and invest their lifetime savings in government bonds.

What is more, mainstream economists, who undeniably exert an important influence on public opinion, keep saying that deflation, because of its allegedly devastating economic and political costs, has to be avoided by all means.

The political incentive structure, combined with the antideflation economic mindset, paves the way for a policy of counteracting any shrinking of the fiat-money supply with all instruments available.

And the shrinking of the fiat-money supply can indeed be prevented, for in a fiat-money regime the central bank can increase the money supply at any time in any amount deemed politically desirable.

Even in the case in which the commercial banking sector keeps refraining from lending to the private sector and government, the central bank can increase the money supply through various measures.

By monetizing outstanding or newly issued debt, or by purchasing foreign currency, the outstanding money stock can be prevented from shrinking — “or by ‘printing’ money and somehow distributing it to the public as a transfer payment.”[2]

In an effort to “fight” government and bank defaults, major central banks around the world have already started expanding their balance sheets. They purchase government bonds against issuing new central-bank money and provide commercial banks with any amount of central-bank money required to keep them afloat.


The fiat-money supply can presumably be increased most conveniently through a policy of monetizing outstanding as well as newly issued government debt — as there will be strong public support for keeping governments liquid.

Governments would then be the very apparatus through which the bulk of the new money trickles into the economy — which also means a drastic expansion of government’s share in the economy at the expense of individual liberty.

However, in such an environment the money supply will most likely not only be prevented from shrinking but will be expanded even further — as increasing the money supply will most likely be seen as appropriate for lowering borrowers’ real debt burden.

The growing influence of government over the money supply therefore runs the risk of causing even greater misuse of the printing press going forward, a danger that has all too often been confirmed by governments’ meddling in monetary affairs.

Thus, any deflationary forces in the ongoing demise of the fiat-money system will most likely prove to be temporary in nature. In fact, the real threat is and remains high, even very high, inflation.

Thorsten Polleit is Honorary Professor at the Frankfurt School of Finance & Management. Send him mail. See Thorsten Polleit’s article archives.

Notes

[1] See in this context, for instance, Kemmerer, E. W. (1944), Gold and the Gold Standard: The Story of Gold Money, Past, Present and Future, 1st ed., McGraw-Hill Book Company, Inc., New York and London, esp. p. 118; also Rothbard, M. N. (2008 [1963]), What Has Government Done to Our Money?, Ludwig von Mises Institute, Auburn, Alabama, esp. p. 90.

[2] Johnson, K., Small, D., and Tryon, R. (1999), “Monetary Policy and Price Stability,” Board of Governors of the Federal Reserve System, International Finance Discussion Papers, Number 641, July, p. 29.

How Mises Rebuilt Economics

By Hans-Hermann Hoppe

What is the logical status of typical economic propositions such as the law of marginal utility (that whenever the supply of a good whose units are regarded as of equal serviceability by a person increases by one additional unit, the value attached to this unit must decrease as it can only be employed as a means for the attainment of a goal that is considered less valuable than the least valuable goal previously satisfied by a unit of this good) or of the quantity theory of money (that whenever the quantity of money is increased while the demand for money to be held in cash reserve on hand is unchanged, the purchasing power of money will fall)?

In formulating his answer to this question, Ludwig von Mises faced a double challenge. On the one hand, there was the answer offered by modern empiricism. The Vienna Ludwig von Mises knew was in fact one of the early centers of the empiricist movement: a movement which was then on the verge of establishing itself as the dominant academic philosophy of the Western world for several decades, and which to this very day shapes the image that an overwhelming majority of economists have of their own discipline.[1]

Empiricism considers nature and the natural sciences as its model. According to empiricism, the aforementioned examples of economic propositions have the same logical status as laws of nature: Like laws of nature they state hypothetical relationships between two or more events, essentially in the form of if-then statements. And like hypotheses of the natural sciences, the propositions of economics require continual testing vis-à-vis experience. A proposition regarding the relationship between economic events can never be validated once and for all with certainty. Instead, it is forever subject to the outcome of contingent, future experiences. Such experience might confirm the hypothesis. But this would not prove the hypothesis to be true, since the economic proposition would have used general terms (in philosophical terminology: universals) in its description of the related events, and thus would apply to an indefinite number of cases or instances, thereby always leaving room for possibly falsifying future experiences. All a confirmation would prove is that the hypothesis had not yet turned out wrong. On the other hand, the experience might falsify the hypothesis. This would surely prove that something was wrong with the hypothesis as it stood. But it would not prove that the hypothesized relationship between the specified events could never be observed. It would merely show that considering and controlling in one’s observations only what up to now had been actually accounted for and controlled, the relationship had not yet shown up. It cannot be ruled out, however, that it might show up as soon as some other circumstances have been controlled.

The attitude that this philosophy fuels and that has indeed become characteristic of most contemporary economists and their way of conducting their business is one of skepticism: the motto being “nothing can be known with certainty to be impossible in the realm of economic phenomena.” Even more precisely, since empiricism conceives of economic phenomena as objective data, extending in space and subject to quantifiable measurement — in strict analogy to the phenomena of the natural sciences — the peculiar skepticism of the empiricist economist may be described as that of a social engineer who will not guarantee anything.[2]

The other challenge came from the side of the historicist school. Indeed, during Mises’s life in Austria and Switzerland, the historicist philosophy was the prevailing ideology of the German-speaking universities and their establishment. With the upsurge of empiricism this former prominence has been reduced considerably. But over roughly the last decade, historicism has regained momentum in Western academia. Today it is with us everywhere under the names of hermeneutics, rhetoric, deconstructionism, and epistemological anarchism.[3]

For historicism, and most conspicuously for its contemporary versions, the model is not nature but a literary text. Economic phenomena, according to the historicist doctrine, are not objective magnitudes that can be measured. Instead, they are subjective expressions and interpretations unfolding in history to be understood and interpreted by the economist just as a literary text unfolds before and is interpreted by its reader. As subjective creations, the sequence of their events follows no objective law. Nothing in the literary text, and nothing in the sequence of historical expressions and interpretations is governed by constant relations. Of course, certain literary texts actually exist, and so do certain sequences of historical events. But this by no means implies that anything had to happen in the order it did. It simply occurred. In the same way, however, as one can always invent different literary stories, history and the sequence of historical events, too, might have happened in an entirely different way. Moreover, according to historicism, and particularly visible in its modern hermeneutical version, the formation of these always contingently related human expressions and their interpretations is also not constrained by any objective law. In literary production anything can be expressed or interpreted concerning everything; and, along the same line, historical and economic events are whatever someone expresses or interprets them to be, and their description by the historian and economist is then whatever he expresses or interprets these past subjective events to have been.

The attitude that historicist philosophy generates is one of relativism. Its motto is “everything is possible.” Unconstrained by any objective law, for the historicist-hermeneutician history and economics, along with literary criticism, are matters of esthetics. And accordingly, his output takes on the form of disquisitions on what someone feels about what he feels was felt by somebody else — a literary form which we are only too familiar with, in particular in such fields as sociology and political science.[4]

I trust that one senses intuitively that something is seriously amiss in both the empiricist as well as the historicist philosophies. Their epistemological accounts do not even seem to fit their own self-chosen models: nature on the one hand and literary texts on the other. And in any case, regarding economic propositions such as the law of marginal utility or the quantity theory of money their accounts seem to be simply wrong. The law of marginal utility certainly does not strike one as a hypothetical law subject forever for its validation to confirming or disconfirming experiences popping up here or there. And to conceive of the phenomena talked about in the law as quantifiable magnitudes seems to be nothing but ridiculous. Nor does the historicist interpretation seem to be any better. To think that the relationship between the events referred to in the quantity theory of money can be undone if one only wished to do so seems absurd. And the idea appears no less absurd that concepts such as money, demand for money, and purchasing power are formed without any objective constraints and refer merely to whimsical subjective creations. Instead, contrary to the empiricist doctrine, both examples of economic propositions appear to be logically true and to refer to events which are subjective in nature. And contrary to historicism, it would seem that what they state, then, could not possibly be undone in all of history and would contain conceptual distinctions which, while referring to subjective events, were nonetheless objectively constrained, and would incorporate universally valid knowledge.

Like most of the better known economists before him, Mises shares these intuitions.[5] Yet in quest of the foundation of economics, Mises goes beyond intuition. He takes on the challenge posed by empiricism and historicism in order to reconstruct systematically the basis on which these intuitions can be understood as correct and justified. He thereby does not want to help bring about a new discipline of economics. But in explaining what formerly had only been grasped intuitively, Mises goes far beyond what had ever been done before. In reconstructing the rational foundations of the economists’ intuitions, he assures us of the proper path for any future development in economics and safeguards us against systematic intellectual error.

Empiricism and historicism, Mises notes at the outset of his reconstruction, are self-contradictory doctrines.[6] The empiricist notion that all events, natural or economic, are only hypothetically related is contradicted by the message of this very basic empiricist proposition itself: For if this proposition were regarded as itself being merely hypothetically true, i.e., a hypothetically true proposition regarding hypothetically true propositions, it would not even qualify as an epistemological pronouncement. For it would then provide no justification whatsoever for the claim that economic propositions are not, and cannot be, categorically, or a priori true, as our intuition informs us they are. If, however, the basic empiricist premise were assumed to be categorically true itself, i.e., if we assume that one could say something a priori true about the way events are related, then this would belie its very own thesis that empirical knowledge must invariably be hypothetical knowledge, thus making room for a discipline such as economics claiming to produce a priori valid empirical knowledge. Further, the empiricist thesis that economic phenomena must be conceived of as observable and measurable magnitudes — analogous to those of the natural sciences — is rendered inconclusive, too, on its own account: For, obviously, empiricism wants to provide us with meaningful empirical knowledge when it informs us that our economic concepts are grounded in observations. And yet, the concepts of observation and measurement themselves, which empiricism must employ in claiming what it does, are both obviously not derived from observational experience in the sense that concepts such as hens and eggs or apples and pears are. One cannot observe someone making an observation or measurement. Rather, one must first understand what observations and measurements are in order to then be able to interpret certain observable phenomena as the making of an observation or the taking of a measurement. Thus, contrary to its own doctrine, empiricism is compelled to admit that there is empirical knowledge which is based on understanding — just as according to our intuitions economic propositions claim to be based on understanding — rather than on observations.[7]

And regarding historicism, its self-contradictions are no less manifest. For if, as historicism claims, historical and economic events — which it conceives of as sequences of subjectively understood rather than observed events — are not governed by any constant, time-invariant relations, then this very proposition also cannot claim to say anything constantly true about history and economics. Instead, it would be a proposition with, so to speak, a fleeting truth value: it may be true now, if we wish it so, yet possibly false a moment later, in case we do not, with no one ever knowing anything about whether we do or do not. Yet, if this were the status of the basic historicist premise, it, too, would obviously not qualify as an epistemology. Historicism would not have given us any reason why we should believe any of it. If, however, the basic proposition of historicism were assumed to be invariantly true, then such a proposition about the constant nature of historical and economic phenomena would contradict its own doctrine denying any such constants. Furthermore, the historicist’s — and even more so its modern heir, the hermeneutician’s — claim that historical and economic events are mere subjective creations, unconstrained by any objective factors, is proven false by the very statement making it. For evidently, a historicist must assume this very statement to be meaningful and true; he must presume to say something specific about something, rather than merely uttering meaningless sounds like abracadabra. Yet if this is the case, then, clearly, his statement must be assumed to be constrained by something outside the realm of arbitrary subjective creations. Of course, I can say what the historicist says in English, German, or Chinese, or in any other language I wish, in so far as historic and economic expressions and interpretations may well be regarded as mere subjective creations. But whatever I say in whatever language I choose must be assumed to be constrained by some underlying propositional meaning of my statement, which is the same for any language, and exists completely independent of whatever the peculiar linguistic form may be in which it is expressed. And contrary to historicist belief, the existence of such a constraint is not such that one could possibly dispose of it at will. Rather, it is objective in that we can understand it to be the logically necessary presupposition for saying anything meaningful at all, as opposed to merely producing meaningless sounds. The historicist could not claim to say anything if it were not for the fact that his expressions and interpretations are actually constrained by laws of logic as the very presupposition of meaningful statements as such.[8]

With such a refutation of empiricism and historicism, Mises notes, the claims of rationalist philosophy are successfully reestablished, and the case is made for the possibility of a priori true statements, as those of economics seem to be. Indeed, Mises explicitly regards his own epistemological investigations as the continuation of the work of Western rationalist philosophy. With Leibniz and Kant he stands opposite the tradition of Locke and Hume.[9] He sides with Leibniz when he answers Locke’s famous dictum “nothing is in the intellect that has not previously been in the senses” with his equally famous one “except the intellect itself.” And he recognizes his task as a philosopher of economics as strictly analogous to Kant’s task as a philosopher of pure reason, i.e., of epistemology. Like Kant, Mises wants to demonstrate the existence of true a priori synthetic propositions, or propositions whose truth values can be definitely established, even though in order to do so the means of formal logic are insufficient and observations are unnecessary.

Our criticism of empiricism and historicism has proved the general rationalist claim. It has proved that we do indeed possess knowledge which is not derived from observation and yet is constrained by objective laws. In fact, our refutation of empiricism and historicism contains such a priori synthetic knowledge. Yet what about the constructive task of showing that the propositions of economics — such as the law of marginal utility and the quantity theory of money — qualify as this type of knowledge? In order to do so, Mises notes, in accordance with the strictures traditionally formulated by rationalist philosophers, economic propositions must fulfill two requirements. First, it must be possible to demonstrate that they are not derived from observational evidence, for observational evidence can only reveal things as they happen to be; there is nothing in it that would indicate why things must be the way they are. Instead, economic propositions must be shown to be grounded in reflective cognition, in our understanding of ourselves as knowing subjects. And second, this reflective understanding must yield certain propositions as self-evident material axioms. This is not to say that such axioms would have to be self-evident in a psychological sense, that is, that one would have to be immediately aware of them or that their truth would depend on a psychological feeling of conviction. On the contrary, like Kant before him, Mises very much stresses the fact that it is usually much more painstaking to discover such axioms than it is to discover some observational truth such as that the leaves of trees are green or that I am 6 feet 2 inches tall.[10] Rather, what makes them self-evident material axioms is the fact that no one can deny their validity without self-contradiction, because in attempting to deny them one already presupposes their validity.

Mises points out that both requirements are fulfilled by what he terms the axiom of action, i.e., the proposition that humans act, that they display intentional behavior.[11] Obviously, this axiom is not derived from observation — there are only bodily movements to be observed but no such thing as actions — but stems instead from reflective understanding. And this understanding is indeed of a self-evident proposition, for its truth cannot be denied, since the denial would itself have to be categorized as an action. But is this not just plain trivial? And what has economics got to do with this? Of course, it had previously been recognized that economic concepts such as prices, costs, production, money, credit, etc., had something to do with the fact that there were acting people. But that, and how, all of economics could be grounded in and reconstructed from such a trivial proposition is certainly anything but clear. It is one of Mises’s greatest achievements to have shown precisely this: that there are insights implied in this psychologically trivial axiom of action which are not themselves psychologically self-evident, and that it is these insights which provide the foundation for the theorems of economics as true a priori synthetic propositions.

It is certainly not psychologically evident that with every action an actor pursues a goal; and that whatever the goal may be, the fact that it was pursued by an actor reveals that he must have placed a relatively higher value on it than on any other goal of action that he could think of at the start of his action. It is not evident that in order to achieve his most highly valued goal an actor must interfere or decide not to interfere — which, of course, is also an intentional interference — at an earlier point in time in order to produce a later result; nor is it obvious that such interferences invariably imply the employment of some scarce means — at least those of the actor’s body, its standing room, and the time absorbed by the action. It is not self-evident that these means, then, must also have value for an actor — a value derived from that of the goal — because the actor must regard their employment as necessary in order to effectively achieve the goal; and that actions can only be performed sequentially, always involving a choice, i.e., taking up that one course of action which at some given time promises the most highly valued results to the actor and excluding at the same time the pursual of other, less highly valued goals. It is not automatically clear that as a consequence of having to choose and give preference to one goal over another — of not being able to realize all goals simultaneously — each and every action implies the incurrence of costs, i.e., forsaking the value attached to the most highly ranking alternative goal that cannot be realized or whose realization must be deferred, because the means necessary to attain it are bound up in the production of another, even more highly valued goal. And lastly, it is not evident that at its starting point every goal of action must be considered worth more to the actor than its cost and capable of yielding a profit, i.e., a result whose value is ranked higher than that of the foregone opportunity, and yet that every action is also invariably threatened by the possibility of a loss if an actor finds, in retrospect, that contrary to his expectations the actually achieved result in fact has a lower value than the relinquished alternative would have had.

All of these categories which we know to be the very heart of economics — values, ends, means, choice, preference, cost, profit and loss — are implied in the axiom of action. Like the axiom itself, they are not derived from observation. Rather, that one is able to interpret observations in terms of such categories requires that one already knows what it means to act. No one who is not an actor could ever understand them, as they are not “given,” ready to be observed, but observational experience is cast in these terms as it is construed by an actor. And while they and their interrelations were not obviously implied in the action axiom, once it has been made explicit that they are implied, and how, one no longer has any difficulty recognizing them as being a priori true in the same sense as the axiom itself is. For any attempt to disprove the validity of what Mises has reconstructed as implied in the very concept of action would have to be aimed at a goal, requiring means, excluding other courses of action, incurring costs, subjecting the actor to the possibility of achieving or not achieving the desired goal and so leading to a profit or a loss. Thus, it is manifestly impossible to ever dispute or falsify the validity of Mises’s insights. In fact, a situation in which the categories of action would cease to have a real existence could itself never be observed or spoken of, since to make an observation and to speak are themselves actions.

All true economic propositions, and this is what praxeology is all about and what Mises’s great insight consists of, can be deduced by means of formal logic from this incontestably true material knowledge regarding the meaning of action and its categories. More precisely, all true economic theorems consist of

  1. an understanding of the meaning of action,
  2. a situation or situational change — assumed to be given or identified as being given — and described in terms of action-categories, and
  3. a logical deduction of the consequences — again in terms of such categories — which are to result for an actor from this situation or situational change.

The law of marginal utility, for instance,[12] follows from our indisputable knowledge of the fact that every actor always prefers what satisfies him more over what satisfies him less, plus the assumption that he is faced with an increase, by one additional unit, in the supply of a good (a scarce means) whose units he regards as of equal serviceability. From this it follows with logical necessity that this additional unit can then only be employed as a means for the removal of an uneasiness that is deemed less urgent than the least valuable goal previously satisfied by a unit of such a good. Provided there is no flaw in the process of deduction, the conclusions that economic theorizing yields, in the case of the law of marginal utility no less than in that of any other economic proposition, must be valid a priori. These propositions’ validity ultimately goes back to nothing but the indisputable axiom of action. To think, as empiricism does, that these propositions require continual empirical testing for their validation is absurd, and a sign of outright intellectual confusion. And it is no less absurd and confused to believe, as historicism does, that economics has nothing to say about constant and invariable relations but merely deals with historically accidental events. To say so meaningfully is to prove such a statement wrong, as saying anything meaningful at all already presupposes acting and a knowledge of the meaning of the categories of action.
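As a purely illustrative aside (the example, the ends, and the numbers below are ours, not Hoppe’s or Mises’s), the deduction can be mirrored in a few lines of Python: rank the ends a good can serve, allocate each successive unit to the most urgent end still unsatisfied, and the value attached to the marginal unit necessarily falls.

```python
# Toy sketch of the law of marginal utility: each additional
# interchangeable unit of a good is devoted to the most highly valued
# end not yet satisfied, so the value of the marginal unit must fall.

ranked_ends = [               # ends ordered from most to least urgent
    ("drinking", 10),
    ("cooking", 7),
    ("washing", 4),
    ("watering flowers", 2),
]

def marginal_value(units_owned):
    """Value of the last unit acquired: it serves the highest-ranked
    end left unsatisfied after the earlier units were allocated."""
    if units_owned < 1 or units_owned > len(ranked_ends):
        return None           # no unit, or no ranked end left to serve
    return ranked_ends[units_owned - 1][1]

for n, (end, _) in enumerate(ranked_ends, start=1):
    print(f"unit {n} serves {end!r}; marginal value = {marginal_value(n)}")
```

Running the loop prints a strictly decreasing sequence of marginal values (10, 7, 4, 2), which is just the law restated: the fourth pail of water can only go to watering the flowers, the least urgent of the ranked ends.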

Hans-Hermann Hoppe, an Austrian School economist and anarchocapitalist philosopher, is professor emeritus of economics at UNLV, a distinguished fellow with the Ludwig von Mises Institute, and founder and president of The Property and Freedom Society. Send him mail. See Hans-Hermann Hoppe’s article archives.

Notes

[1] On the Vienna Circle see V. Kraft, Der Wiener Kreis (Vienna: Springer, 1968); for empiricist-positivist interpretations of economics see such representative works as Terence W. Hutchison, The Significance and Basic Postulates of Economic Theory [Hutchison, an adherent of the Popperian variant of empiricism, has since become much less enthusiastic about the prospects of a Popperized economics — see, for instance, his Knowledge and Ignorance in Economics — yet he still sees no alternative but to cling to Popper’s falsificationism anyway.]; Milton Friedman, “The Methodology of Positive Economics,” in idem, Essays in Positive Economics; Mark Blaug, The Methodology of Economics; a positivist account by a participant in Mises’s Privatseminar in Vienna is F. Kaufmann, Methodology of the Social Sciences; the dominance of empiricism in economics is documented by the fact that there is probably not a single textbook that does not explicitly classify economics as — what else? — an empirical (a posteriori) science.

[2] On the relativistic consequences of empiricism-positivism see also Hoppe, A Theory of Socialism and Capitalism (Boston: Kluwer Academic Publishers, 1989), chapter 6; idem, “The Intellectual Cover for Socialism.”

[3] See Ludwig von Mises, The Historical Setting of the Austrian School of Economics (Auburn, Ala.: Ludwig von Mises Institute, 1984); idem, Erinnerungen (Stuttgart: Gustav Fischer, 1978); idem, Theory and History, chapter 10; Murray N. Rothbard, Ludwig von Mises: Scholar, Creator, Hero (Auburn, Ala.: Ludwig von Mises Institute, 1988); for a critical survey of historicist ideas see also Karl Popper, The Poverty of Historicism; for a representative of the older version of a historicist interpretation of economics see Werner Sombart, Die drei Nationalökonomien (Munich: Duncker & Humblot, 1930); for the modern, hermeneutical twist see Donald McCloskey, The Rhetoric of Economics (Madison: University of Wisconsin Press, 1985); Ludwig Lachmann, “From Mises to Shackle: An Essay on Austrian Economics and the Kaleidic Society,” Journal of Economic Literature (1976).

[4] On the extreme relativism of historicism-hermeneutics see Hoppe, “In Defense of Extreme Rationalism”; Murray N. Rothbard, “The Hermeneutical Invasion of Philosophy and Economics,” Review of Austrian Economics (1988); Henry Veatch, “Deconstruction in Philosophy: Has Rorty Made it the Denouement of Contemporary Analytical Philosophy,” Review of Metaphysics (1985); Jonathan Barnes, “A Kind of Integrity” Austrian Economics Newsletter (Summer 1987); David Gordon, Hermeneutics vs. Austrian Economics (Auburn, Ala.: Ludwig von Mises Institute, Occasional Paper Series, 1987); for a brilliant critique of contemporary sociology see St. Andreski, Social Science as Sorcery (New York: St. Martin’s Press, 1973).

[5] Regarding the epistemological views of such predecessors as J. B. Say, Nassau W. Senior, J. E. Cairnes, John Stuart Mill, Carl Menger, and Friedrich von Wieser see Ludwig von Mises, Epistemological Problems of Economics, pp. 17–23; also Murray N. Rothbard, “Praxeology: The Methodology of Austrian Economics,” in Edwin Dolan, ed., The Foundations of Modern Austrian Economics (Kansas City: Sheed and Ward, 1976).

[6] In addition to Mises’s works cited at the outset of this chapter and the literature mentioned in note 40, see Murray N. Rothbard, Individualism and the Philosophy of the Social Sciences (San Francisco: Cato Institute, 1979); for a splendid philosophical critique of empiricist economics see Hollis and Nell, Rational Economic Man; as particularly valuable general defenses of rationalism as against empiricism and relativism — without reference to economics, however, — see Blanshard, Reason and Analysis; Kambartel, Erfahrung und Struktur.

[7] For an elaborate defense of epistemological dualism see also Apel, Transformation der Philosophie, 2 vols, and Habermas, Zur Logik der Sozialwissenschaften.

[8] See on this in particular Hoppe, “In Defense of Extreme Rationalism.”

[9] See Mises, The Ultimate Foundation of Economic Science, p. 12.

[10] See Kant, Kritik der reinen Vernunft, p. 45; Mises, Human Action, p. 38.

[11] On the following see in particular Mises, Human Action, chapter 4; Murray N. Rothbard, Man, Economy, and State (Los Angeles: Nash, 1962), chapter 1.

[12] On the law of marginal utility see Mises, Human Action, pp. 119–27 and Rothbard, Man, Economy, and State, pp. 268–71.

Defending the Gunslinger

by Gareth Brickman
From misesdaily.org

“Gunslinger” typically refers to the men of the Old West who gained a reputation for being skilled and dangerous with a gun. Among the ranks of these popular legends are outlaws and lawmen alike, although we know today that the line was often blurred, hence the enduring reputation of the gunslinger as an unpredictable and wily opportunist.

For the purposes of this article, however, I use “gunslinger” to refer to an armed man who offers the service of his presence and skills to protect a client’s material interests.

The Old West was no country for unarmed men, if Hollywood is to be believed. All manner of disputes, from accusations of cattle rustling and cardsharping to infidelity and the simple bar brawl, would be settled in the street with a guns-drawn duel. Reality, however, was far more banal: the “Wild” West was as orderly as contemporary society, if not more so. The myth of anarchic chaos prevailing during this period of history most likely arises from popular legends surrounding the gunslingers of folklore rather than from the way societies really functioned in places where state power was diluted or nonexistent.

In the Old West it was not out of place for most men to be heavily armed at all times. But even if most men were armed and state law enforcement existed, why would there be a demand for gunslingers as providers of security and protection? The gunslinger provides solutions to several problems:

First, he is willing to absorb the risks his employer would take trying to protect his material interests personally. He is more than a mere scarecrow; he is actively placing himself in the way of physical harm as a service to others. Yet the common revulsion against self-interest and the profit motive means he will not be called a hero for doing so, unlike his badge-bearing contemporaries in government.

Second, he is a specialist. The gunslinger, as his name implies, will probably be more skilled and accurate with a firearm than you are. From experience he will be more aware of, and more prepared to respond to, potentially dangerous people or situations. He spends his time focusing on security so his employer doesn’t have to. In these ways the gunslinger’s presence provides his employer with the peace of mind to focus his energies and attentions on other more desirable and productive endeavors.

Finally, the gunslinger actually acts in part for the broader, law-abiding community. By deterring criminal actions, or by confronting and halting criminal activity, he defends the interests of the community as a whole at no additional expense to the people within it. Conversely, the government can only attempt to provide such services by forcefully requisitioning resources from the community.

But is there a place for gunslingers in modern society?

Few countries and places in the world have gun laws relaxed enough even to allow their own citizens to own weapons, let alone brandish them in public. Fortunately, there is still private provision of security services, even in countries where the police are exceptionally well funded and crime rates are relatively low. Clearly, then, the market not only demands private security provision but actually needs it to function.

The common reasons given for the failure of socialized policing to protect citizens and property from criminals effectively, and to apprehend said criminals, are a dire lack of resources, training, and personnel. More funds and voluntary help from the public are constantly proposed as the means of fighting the scourge of crime.

Yet these same labor and resource factors hardly seem to hamper the provision of services by the private security industry. In the United States, well over a million people are employed by it, and the industry has been growing steadily since the 1980s. By contrast, there are approximately 800,000 government police officers, and their pay scales are more than double those of their private-sector counterparts. Lack of resources and trained personnel is a hollow excuse for why government policing is a failure. We must look at the allocation of these factors to discover the problem.

The Austrian School of economics explains that one of the fundamental reasons why socialized provision of goods and services cannot compare with market provision is that governments are simply not capable of efficiently coordinating and allocating capital, labor, and resources.

Governments lack the incentives of market providers to control costs, meet the needs of clients, and keep track of what the competition is doing in order to maintain profitability. Failure to manage these matters in the private sector leads to losses and, ultimately, insolvency.

Governmental bureaucracies, on the other hand, are prone to waste and graft and have only political edicts as an incentive to be efficient. The operators of socialized agencies have only their superiors’ requirements to meet, not the consumers’, and never have to concern themselves with competition, profitability, or staying within budget under the threat of insolvency.

In short, because the revenue of a bureaucracy is a matter of political allocation rather than customer allocation, the incentives simply do not exist for government providers to perform nearly as efficiently and successfully as market providers.

Private providers offer a myriad of services and personnel to cater to the needs of clients, and they must always be mindful of the quality of their service and the effectiveness of their pricing. From armed guards escorting valuables, to unarmed guards patrolling property, to armed responders who will reach your location within minutes in times of need, the private provision of protection is a vast network of specialized services that the government, even in wealthy nations, can scarcely compete with. And that is one of the secrets of its success.

As Ludwig von Mises relates, “The only source from which an entrepreneur’s profits stem is his ability to anticipate better than other people the future demand of the consumers.” It is simply amazing, then, that despite a competitor with vastly superior resources and a collective presupposition that protection is a primary role of government, the private sector manages not only to outperform but constantly to outwit and outcompete the most formidable of competitors.

But where does all this leave the lone gunslinger? Things are looking up for him. Following the 2008 recession, employment in the private security industry has picked up again. The police state, having expanded ostensibly to thwart specters like drugs and terrorism, can no longer adequately meet the market’s needs for protection of person and property. And now that many indebted state and local governments are facing budget constraints, services are being cut back, basic policing among them. While government cops spend their time chasing ghosts and babysitting unarmed protesters, the market for protection slowly moves back into more capable hands.

In the classic Charles Portis western novel True Grit, a character observes, “The civilized arts of commerce do not flourish there.” He was referring to territories outside the authority of the government. Contrary to this opinion, it is a miracle that commerce flourishes anywhere near where the government has influence, and it is thanks in part to the likes of the gunslinger that property rights are respected and the gears of commerce are able to turn.

Gareth Brickman writes for the Ludwig von Mises Institute South Africa.

Emersonian Individualism

By Allen Mendenhall
From Mises Daily


Ralph Waldo Emerson is politically elusive. He’s so elusive that thinkers from various schools and with various agendas have appropriated his ideas to validate some activity or another. Harold Bloom once wrote, “In the United States, we continue to have Emersonians of the Left (the post-Pragmatist Richard Rorty) and of the Right (a swarm of libertarian Republicans, who exalt President Bush the Second).”[1] We’ll have to excuse Bloom’s ignorance of political movements and signifiers — libertarians who exalt President Bush, really? — and focus instead on Bloom’s point that Emerson’s influence is evident in a wide array of contemporary thinkers and causes.

Bloom is right that what “matters most about Emerson is that he is the theologian of the American religion of Self-Reliance.”[2] Indeed, the essay “Self-Reliance” remains the most cited of Emerson’s works, and American politicians and intellectuals selectively recycle ideas of self-reliance in the service of often disparate goals.

Emerson doesn’t use the term “individualism” in “Self-Reliance,” which was published in 1841, when the term “individualism” was just beginning to gain traction. Tocqueville unintentionally popularized the signifier “individualism” with the publication of Democracy in America. He used a French term that had no counterpart in English. Translators of Tocqueville labored over this French term because its signification wasn’t part of the English lexicon. Emerson’s first mention of “individualism” was not until 1843.

It is clear, though, that Emerson’s notion of self-reliance was tied to what later would be called “individualism.” Emerson’s individualism was so radical that it bordered on self-deification. Only through personal will could one realize the majesty of God. Nature for Emerson was like the handwriting of God, and individuals with a poetical sense — those who had the desire and capability to “read” nature — could understand nature’s universal, divine teachings.

Lakes, streams, meadows, forests — these and other phenomena were, according to Emerson, sources of mental and spiritual pleasure or unity. They were what allowed one to become “part and parcel with God,” if only one had or could become a “transparent eyeball.” “Nothing at last is sacred,” Emerson said, “but the integrity of your own mind.” That’s because a person’s intellect translates shapes and forms into spiritual insights.

We cannot judge Emerson exclusively on the basis of his actions. Emerson didn’t always seem self-reliant or individualistic. His politics, to the extent that they are knowable, could not be called libertarian. We’re better off judging Emerson on the basis of his words, which could be called libertarian, even if they endow individualism with a religiosity that would make some people uncomfortable.

Emerson suggests in “Self-Reliance” that the spontaneous expression of thought or feeling is more in keeping with personal will, and hence with the natural world as constituted by human faculties, than that which is passively assumed or accepted as right or good, or that which conforms to social norms. Emerson’s individualism or self-reliance exalted human intuition, which precedes reflection, and it privileged the will over the intellect. Feeling and sensation are antecedent to reason, and Emerson believed that they registered moral truths more important than anything cognition could summon forth.

Emerson’s transcendentalism was, as George Santayana pointed out in 1911, a method conducive to the 19th-century American mindset.[3] As a relatively new nation seeking to define itself, America was split between two mentalities, or two sources of what Santayana called the “genteel tradition”: Calvinism and transcendentalism.

The American philosophical tradition somehow managed to reconcile these seeming dualities. On the one hand, Calvinism taught that the self was bad, that man was depraved by nature and saved only by the grace of God. On the other hand, transcendentalism taught that the self was good, that man was equipped with creative faculties that could divine the presence of God in the world. The Calvinist distrusted impulses and urges as sprung from an inner evil. The transcendentalist trusted impulses and urges as moral intuition preceding society’s baseless judgments and prevailing conventions.

What these two philosophies had in common was an abiding awareness of sensation and perception: a belief that the human mind registers external data in meaningful and potentially spiritual ways. The Calvinist notion of limited disclosure — that God reveals his glory through the natural world — played into the transcendentalists’ conviction that the natural world supplied instruments for piecing together divinity.

The problem for Santayana is that transcendentalism was just a method, a way of tapping into one’s poetical sense. What one did after that was unclear. Santayana thought that transcendentalism was the right method, but he felt that Emerson didn’t use that method to instruct us in practical living. Transcendentalism was a means to an end, but not an end itself.

According to Santayana, Emerson “had no system” because he merely “opened his eyes on the world every morning with a fresh sincerity, marking how things seemed to him then, or what they suggested to his spontaneous fancy.”[4] Emerson did not seek to group all senses and impressions into a synthetic whole. Nor did he suggest a politics toward which senses and impressions ought to lead. Santayana stops short of accusing Emerson of advancing an “anything-goes” metaphysics. But Santayana does suggest that Emerson failed to advance a set of principles; instead, Emerson gave us a technique for arriving at a set of principles. Emerson provided transportation, but gave no direction. This shortcoming — if it is a shortcoming — might explain why Bloom speaks of the “paradox of Emerson’s influence,” namely, that “Peace Marchers and Bushians alike are Emerson’s heirs in his dialectics of power.”[5]

For Emerson, human will is paramount. It moves the intellect to create. It is immediate, not mediate. In other words, it is the sense or subjectivity that is not yet processed by the human mind. We ought to trust the integrity of will and intuition and avoid the dictates and decorum of society.

“Society,” Emerson says, “everywhere is in conspiracy against the manhood of every one of its members.” Society corrupts the purity of the will by forcing individuals to second-guess their impulses and to look to others for moral guidance. Against this socialization, Emerson declares, “Whoso would be a man, must be a nonconformist.”

Emerson’s nonconformist ethic opposed habits of thinking, which society influenced but did not determine. Emerson famously stated that a foolish consistency is the hobgoblin of little minds. What he meant, I think, is that humans ought to improve themselves by tapping into intuitive truths. Nature, with her figures, forms, and outlines, provides images that the individual can harness to create beauty and energize the self. Beauty therefore does not exist in the world; rather, the human mind makes beauty out of the externalities it has internalized. Beauty, accordingly, resides within us, but only after we create it.

Here we see something similar to Ayn Rand’s Objectivism stripped of its appeals to divinity. Rand believed that reality existed apart from the thinking subject, that the thinking subject employs reason and logic to make sense of experience and perception, and that the self or will is instrumental in generating meaning from the phenomenal world.

Like Emerson, who did not want to deny the self by sacrificing it to social criteria for moral rightness or propriety, Rand believed that the self was the basis of ethics. The moral purpose of the individual, for her, entailed the rational pursuit of self-interest and happiness. This pursuit is possible only in certain systems of human organization, and the one Rand deemed most suitable for human flourishing was capitalism (which arguably is not a system but a result of spontaneous orders or a framework enabling spontaneous orders). In capitalism, art prospers because human creativity prospers; capitalism enables beauty, images, and shapes that help us to refine our metaphysics and to represent “the real.”

Even Ludwig von Mises seems to have been influenced, if not directly by Emerson, then by those who were influenced by Emerson. Mises criticizes the “doctrines of universalism, conceptual realism, holism, collectivism, and some representatives of Gestaltpsychologie” for maintaining that “society is an entity living its own life, independent of and separate from the lives of the various individuals.”[6] When Mises criticizes universalism and collectivism as “systems of theocratic government,”[7] he turns to William James, himself an Emersonian and one who influenced Henry Hazlitt.[8] James supplies Mises with an argument for distinguishing religion from theocracy, and Mises seems to support James’s notion of religion as, in Mises’s words, “a purely personal and individual relation between man and a holy, mysterious, and awe-inspiring divine Reality.”[9] Although Mises never cites Emerson in Human Action, Mises does trope Emerson by discussing the “Creative Genius,” the man “whose deeds and ideas cut out new paths for mankind.”[10]

Art and beauty have the potential to stimulate sensation and emotion; they have the potential to substantiate the extraordinary powers of human intellect. Just as Rand believed in the heroism of the individual, so Emerson believed that a self-reliant mind with a poetical sense could not only trust his impressions about the external world, but also act upon that trust. That does not mean that the individual is necessarily unbounded, only that the individual establishes his own boundaries and sets his own priorities.

Emerson and Rand celebrate the ability of the human mind to create beauty, to generate meaning, to produce tangibles from intangibles, and to construct realities based on which and because of which we are prepared to act. This function of the imagination — is it too much to call it genius? — is not realized by everyone. Some go through life without self-examination and without questioning their surroundings or envisioning new surroundings, new possibilities, and new ways of thinking. These individuals lack or repress imagination and creativity. Even writers like Walt Whitman never demonstrate the powers of selfhood, the sheer strength of human will.

Whitman obstructed the will to make himself receptive to everything and everyone. He buried the will beneath a mountain of abstractions and random experiences. Santayana explains that in Whitman “democracy is carried into psychology and morals” insofar as the “various sights, moods, and emotions are given each one vote; they are declared to be all free and equal, and the innumerable commonplace moments of life are suffered to speak like the others.”[11] The slave driver is as much a part of Whitman as the slave.

Whitman never distinguishes between good and bad, right and wrong, practical and impractical, reality and fancy. He never discriminates. He becomes, in Santayana’s words, an “unintellectual,” “lazy,” and “self-indulgent” pantheist because he merely internalizes all things, accords them equal weight, refuses to challenge their validity or viability and so expresses poetry that is presentist and value-free, so much so that it degenerates into gushes of arbitrary feeling.[12]

Emersonian individualism is not arbitrary in this sense. It is purposeful. It differentiates and distinguishes between people and groups, good and evil, referents that are conducive to poetry and referents that are not. Whitman delighted in popularity. Emerson delighted in standing apart from others. “It is easy in the world to live after the world’s opinion,” Emerson once said, adding, “it is easy in solitude to live after our own; but the great man is he who in the midst of the crowd keeps with perfect sweetness the independence of solitude.”

If we take Emerson at his word, he does not seem to care whether he is misunderstood. Indeed, he submits that Pythagoras, Copernicus, Galileo, and Newton were misunderstood. “Is it so bad, then, to be misunderstood?” Emerson asked, and then answered, “To be great is to be misunderstood.”

Emerson is still misunderstood, but his influence on American thought is unmistakable. He refused to tacitly accept inherited and imported orthodoxies, although he was bent on validating traditional notions of truth using new methods. Those who have inveighed against Emerson too often misconstrue or misrepresent his nuanced philosophy.

Emerson is not easy to understand. His texts demand many rereadings. His essays experimented with new techniques for clarifying old ideas, to which he gave exhilarating expression in the vocabularies of individualism and self-reliance. Perhaps the most telling legacy of this winsome philosopher is that so many people claim that “Emerson was one of us.” The term “us” suggests that there’s still more to learn from Emerson, that the ethic of self-reliance continues to struggle against presumptions and habits of thinking. To say that Emerson is “one of us” is to miss the points Emerson made. One ought to read Emerson not because one is told to do so, but because one wills oneself to do so.

Allen Mendenhall is a PhD student in English at Auburn University. He’s a writer, attorney, and adjunct law professor. He holds a BA from Furman University, MA and JD from West Virginia University, and LLM from Temple University Beasley School of Law. Visit his website at AllenMendenhall.com.

Notes

[1] Harold Bloom, Where Shall Wisdom Be Found? (Riverhead Books, 2004), p. 190.

[2] Bloom at 190.

[3] See George Santayana, “The Genteel Tradition,” in The Genteel Tradition in American Philosophy and Character and Opinion in the United States, edited by James Seaton (Yale University Press, 2009), p. 9.

[4] Santayana at 9.

[5] Bloom at 198.

[6] Ludwig von Mises, Human Action, The Scholar’s Edition (Auburn, AL: Ludwig von Mises Institute, 1998) at 145.

[7] Mises at 150-51.

[8] See Allen Mendenhall, “Henry Hazlitt, Literary Critic,” Mises Daily, June 6, 2011.

[9] Mises at 156.

[10] Mises at 138.

[11] Santayana at 12.

[12] Santayana at 12-13.

How The Stimulus Racket Works

By Charlie Virgo
From Mises Daily

Have you ever noticed that the failed policies of politicians never really seem to be brought to light? How is it that despite their obvious shortcomings, the same policies are implemented time and time again? These interventions rarely have the promised effects, but they are somehow still deemed a success. In his book The Vision of the Anointed, Thomas Sowell explains the process by which politicians and their supporters are able to either create or take advantage of crises in order to increase their involvement in society. I thought it would be worthwhile to review this pattern as it applies to a more recent issue: the stimulus and bailout packages.

Starting with President Bush and continuing into President Obama’s term, the US government has paid out more than $11 trillion in stimulus money. Remarkably, there are many who believe that amount still wasn’t sufficient. This article will show that both presidents believed additional government intervention would solve our economic problems, not realizing that it was exactly that type of thinking that had led to the problems in the first place. The stimulus package has already passed through Dr. Sowell’s pattern of failure, making it an excellent example for review. In still more recent episodes, such as the debt-ceiling increase, we are already seeing the early phases of the pattern. It is important that we understand this pattern so that we can identify and oppose measures that are contrary to the principles of true economic progress.

Dr. Sowell has divided the pattern of policy failure into four stages: the “crisis,” the “solution,” the results, and the response. He defines them in this way:

Stage One: The “Crisis.” Some situation exists, whose negative aspects the anointed propose to eliminate. Such a situation is routinely characterized as a “crisis,” even though evidence is seldom asked or given to show how the situation at hand is either uniquely bad or threatening to get worse.

Stage Two: The “Solution.” Policies to end the “crisis” are advocated by the anointed, who say that these policies will lead to beneficial result A. Critics say that these policies will lead to detrimental result Z. The anointed dismiss these critical claims as absurd and “simplistic,” if not outright dishonest.

Stage Three: The Results. The policies are instituted and lead to detrimental result Z.

Stage Four: The Response. Those who attribute detrimental result Z to the policies instituted are dismissed as “simplistic” for ignoring the “complexities” involved, as “many factors” went into determining the outcome. No burden of proof whatever is put on those who had so confidently predicted improvement. Indeed, it is often asserted that things would have been even worse were it not for the wonderful programs that mitigated the inevitable damage from other factors.

Stage One: The “Crisis”

President Bush introduced his bailout legislation on the grounds that “without immediate action by Congress, America could slip into a financial panic.” Banks and other lending institutions possessed “toxic assets” and were either unable or unwilling to lend. President Bush believed that purchasing the assets and providing capital to the troubled banks would steady the economy. In his official address he explained, “In the short term, this will free up banks to resume the flow of credit to American families and businesses, and this will help our economy grow.” He claimed to support free enterprise, but then allowed banks to avoid the consequences of their bad investments.

In the case of the stimulus, the “crisis” was twofold: a high unemployment level and a lack of consumption. Neither of these situations actually constituted a crisis, however. In fact, they can be explained as necessary steps toward economic recovery. As the Austrian business-cycle theory explains, the higher unemployment rate is mostly due to malinvestment caused by the previous years of credit expansion. When the credit stream dried up, inefficient and unnecessary businesses were forced to close shop. Barring intervention, the capital that the bankrupt firms held would be freed up and the remaining companies would then be able to expand their operations, leading to hiring.

In this way we can see that unemployment is a byproduct of the healing process for the economy, as we correct what was wrong with it. Focusing on the unemployment rate is putting the cart before the horse. There was never any need for action on the part of the government; everything would have sorted itself out naturally. President Obama didn’t see it that way, however. In a February 2009 op-ed defending the stimulus plan, he wrote,

By now, it’s clear to everyone that we have inherited an economic crisis as deep and dire as any since the days of the Great Depression.…

And if nothing is done, this recession might linger for years. Our economy will lose 5 million more jobs. Unemployment will approach double digits. Our nation will sink deeper into a crisis that, at some point, we may not be able to reverse.

He wasn’t alone in this belief, either. He was merely echoing the words President Bush had spoken six months earlier. Many people believed that if swift action wasn’t taken, the economy was going to tank indefinitely. These were the same people who hadn’t seen the recession coming in the first place, but that point never seems to have been brought up. To solve the “crisis,” the “experts” insisted that a stimulus would cure all of our woes. The fact that another stimulus was even being discussed is evidence that President Bush’s stimulus and bailout had failed, but again, that seemingly wasn’t a point worth mentioning at the time.

Stage Two: The “Solution”

With such dire descriptions of the economy, one might think the goals for the stimulus would be tempered. President Obama defended his $787 billion stimulus in the February 2009 op-ed, saying:

We will create or save more than 3 million jobs over the next two years, provide immediate tax relief to 95 percent of American workers, ignite spending by businesses and consumers alike, and take steps to strengthen our country for years to come.…

It’s a strategy that will be implemented with unprecedented transparency and accountability, so Americans know where their tax dollars are going and how they are being spent. (emphasis added)

To his credit, President Obama was more specific about his goals than President Bush was, but were they realistic goals? In addition to these promises from President Obama, his top economic advisors guaranteed that the stimulus plan would prevent unemployment from reaching 8 percent. The stimulus plan was to be a cure-all for the economy. Fortunately, there were voices of reason ringing out as well. The Mises Institute put together a “Bailout Reader” collecting articles on the flaws in the stimulus plan. Opponents of the stimulus and bailout plans were criticized as old-fashioned or ignorant. This is part of the pattern of failure: if you disagree with the leftists and their “solution,” you are labeled as simple. Just read the comments section of any article regarding the tea party to see what I mean. Or we can go back to the president’s piece:

In recent days, there have been misguided criticisms of this plan that echo the failed theories that helped lead us into this crisis — the notion that tax cuts alone will solve all our problems; that we can meet our enormous tests with half-steps and piecemeal measures …

In other words, the stimulus plan is perfect and therefore any criticisms of it must be misguided echoes of failed theories.

Stage Three: The Results

Before analyzing the results of the stimulus plan, it is worth reviewing the goals that were set for it:

  1. Create 5 million jobs,
  2. Keep unemployment from rising above 8 percent,
  3. Provide immediate tax relief to 95 percent of American workers,
  4. Ignite business and consumer spending,
  5. Strengthen the economy for years to come, and
  6. Deliver unprecedented accountability and transparency.

So how did the stimulus plan do?

During President Obama’s first year in office, 4.2 million jobs were actually lost. This is more an indictment of President Bush’s policies, though, because President Obama’s ideas still hadn’t been fully enacted. In the two years since the stimulus plan passed, however, only 722,200 jobs have been created. In other words, there is a net job loss of 3.5 million within the time frame that President Obama himself established. Obama’s guarantee regarding the unemployment rate was given in January 2009. Unemployment data from the period show the rate had been increasing steadily since early 2008 and was quickly approaching 8 percent by early 2009. It didn’t take long for the 8 percent promise to be broken, though, and the unemployment rate has now spent longer in the 9 percent range than it did in the 8 percent range. The most recent data put the rate at 9.1 percent.
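
For readers who want to check this arithmetic, here is a minimal sketch in Python; the job figures are the ones cited above, and the last line anticipates the “8.5 million off” figure discussed under Stage Four.

    # Job arithmetic, using the figures cited in this article.
    jobs_lost_first_year = 4_200_000    # jobs lost during Obama's first year
    jobs_created_after = 722_200        # jobs created in the two years after the stimulus
    net_change = jobs_created_after - jobs_lost_first_year
    print(f"Net job change: {net_change:,}")        # -3,477,800, roughly -3.5 million

    jobs_target = 5_000_000             # the "create 5 million jobs" goal
    shortfall = jobs_target - net_change
    print(f"Shortfall vs. target: {shortfall:,}")   # 8,477,800, roughly 8.5 million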

Of the $787 billion (later recalculated at $830 billion), 35 percent was dedicated to tax cuts. To the president’s credit, 94.3 percent of working Americans saw a decrease in their taxes due to the stimulus plan. This is a faux tax cut, however, since the money that would have been collected in taxes is instead being raised through borrowing. And how will that borrowing be repaid? Through taxes, of course. While this sounds suspiciously like having your cake and eating it too, our purpose here is only to evaluate whether the stated goal was met, and in this case it was.
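
To see why the deferral matters, a rough sketch helps; the $787 billion total and the 35 percent share come from this article, while the interest rate and repayment horizon are hypothetical assumptions chosen purely for illustration.

    # A debt-financed tax cut defers taxes rather than eliminating them.
    # Totals are from the article; the borrowing terms are assumed.
    stimulus_total = 787e9              # stimulus total, in dollars
    tax_cut_share = 0.35                # share dedicated to tax cuts
    tax_cut = stimulus_total * tax_cut_share
    rate, years = 0.03, 10              # hypothetical interest rate and horizon
    future_bill = tax_cut * (1 + rate) ** years
    print(f"Tax cut today:   ${tax_cut / 1e9:,.0f} billion")      # about $275 billion
    print(f"Repaid in taxes: ${future_bill / 1e9:,.0f} billion")  # about $370 billion

On these assumptions, the deferred bill exceeds the headline cut by roughly a third, which is the sense in which the cut is “faux.”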

Consistent with the Keynesian belief that the government should step in if national consumption dips, it was important for the Obama administration to point out that consumer and business spending would increase under the plan. The stimulus would act as a “spark plug” to “jump-start” the economy. Once again, however, we see that the exact opposite happened. Instead of increasing spending, the stimulus actually increased saving. Legitimate savings are, of course, the true path to recovery, so it’s not a bad thing that the savings rate is rising. Again, though, my purpose is not to debate whether savings are good or bad, but to ask whether the stimulus supporters’ predictions came true.

So far we’ve seen that of four goals, only one was reached. As we review President Obama’s pledge to “strengthen our country for years to come,” the tally will decrease further to one of five. The recent debt-ceiling debates have highlighted just how fragile our economy is. Even after the debt ceiling was raised, all three major rating agencies (Standard & Poor’s, Moody’s, and Fitch) warned of a downgrade if serious efforts weren’t made to reduce the deficit. Standard & Poor’s acted on that warning, downgrading the United States for the first time in its history.


Staying true to the pattern of failure, however, White House officials blamed “flawed math” for the downgrade, not the $14.5 trillion in debt. The fact that there are record numbers of people receiving food stamps certainly isn’t indicative of a strengthening economy either. Passing the stimulus plan didn’t strengthen our economy; it simply deadened the country’s nerves to numbers that we hadn’t seen before. The stimulus plan was the beginning of a spending spree that still hasn’t ended. Thus, it’s difficult to understand how the stimulus has strengthened our economy for years to come.

Last but not least, President Obama promised that there would be “unprecedented transparency and accountability” while the stimulus dole was being issued. However, Senators John McCain and Tom Coburn issued a report in August 2010 highlighting 100 stimulus projects that surely wouldn’t have been approved in a truly transparent government. The projects range from studies on the effects of cocaine on monkeys ($71,623 granted) to replacing a five-year-old, 0.25-mile sidewalk to bring it into compliance with disability requirements ($90,000 granted). Even if we assumed that the projects were needed, are the price tags really indicative of the projects’ value?

Delivering the transparency he promised in his campaign has been difficult for President Obama. A recent report on the issue found that “to date only 1 percent of 500,000 meetings from the president’s first eight months have been released, and thousands of known visitors (including lobbyists) are missing from the lists.” Transparency is promised because transparency sounds good. Unfortunately, it is just one more stimulus goal that never came to fruition.

Stage Four: The Response

The final tally of the official goals achieved is one out of six. It is worth noting that of the five failed goals, three were never even close. The jobs target was a full 8.5 million off, the US credit rating has been downgraded, and only 1 percent of White House visitor logs have been released.

To any normal person, these would be sufficient reasons to acknowledge the mistake and move on. Politicians and pundits are unable to do that, however, and many actually believe the stimulus was a success! This is possible, of course, because the original goals are no longer being considered. Harold Meyerson, for example, explains that the stimulus was a success because “it manifestly did arrest the slide.”

Arresting the slide was not the point of the stimulus. If we were only concerned with arresting the slide, the stimulus wouldn’t have needed to be so large. The stimulus was pegged at $787 billion because of the goals that President Obama laid out. Imagine a parent gives his son $20 so he can go to a steakhouse with his friends. Instead, the son goes to the grocery store and sees a cup of ramen on sale for 59 cents. Since the dinner money came from his dad, he ignores the price and spends the entire $20 on the cup of ramen. When his father asks why he didn’t spend the $20 on a steak, the son just replies, “What’s it matter? I still got dinner, didn’t I?”

This is the same logic that Meyerson is using. But paying $20 for ramen is not the same thing as paying $20 for a steak, even if they both count as dinner. We paid $787 billion for a specific set of goals, and only one of them was achieved. Obama also resorted to this logic: “Now, without relitigating the past, I’m absolutely convinced, and the vast majority of economists are convinced, that the steps we took in the Recovery Act saved millions of people their jobs or created a whole bunch of jobs.” Whether he is convinced of it or not doesn’t really matter, because the numbers are readily available, and the numbers show that the stimulus plan was a colossal failure.

Conclusion

I realize that the majority of this article focuses on statements and policies related to President Obama. In no way should this be construed as a statement of support for President Bush’s policies. President Hoover laid the groundwork for President Roosevelt’s New Deal, and I feel that the same thing has happened with Bush and Obama. The fact that the economy was still damaged enough to allow President Obama to pass his newest deal is a clear indication that President Bush’s stimulus and bailout failed.


Some may argue that the promises made by President Bush and President Obama were simply political rhetoric, and that they shouldn’t be taken too seriously. They might have a point if that rhetoric hadn’t been followed up with dangerous legislation. The bailout and the stimulus plans were destined to fail from the beginning. They allowed companies to avoid the consequences of their actions, circumventing a critical component of capitalism and a market economy. By removing the consequences, we have given corporations a green light to do whatever they wish.

Both President Bush and President Obama believed their interventions would heal the economy. You simply can’t fix a problem by doing what caused it in the first place. It wasn’t the first time the government attempted to replace the market, and unfortunately it probably won’t be the last. By recognizing Dr. Sowell’s pattern of failure, however, we become better equipped to anticipate and recognize the opposition’s plan. Knowing this, we can educate those around us on the true effects of the policy and the proper course of action.

 

Charlie Virgo is a finance major at the University of Phoenix. His introduction to economics came from reading Economics in One Lesson by Henry Hazlitt.