Homo Economicus Paradox (Part I)

Homo Economicus Paradox Series


  1. Introduction
  2. Decision recurses

In my last dispatch, I laid out a plan to explore how the application of consensus economic theory to public policy has gone astray. Today, I'll begin tackling a fundamental tenet of economic thinking: the rational investor. Economists rarely admit this assumption is troublesome, the late Herbert Simon being the notable exception. Simon's writings on bounded rationality are the motivation for this series. Most prominent economists remain dogmatic in their contention that humans can be modeled as rational agents. Barring a near-complete relaxation of what 'rational' even means, it's rubbish.

The title refers to the fictitious species Homo Economicus, the economist’s shorthand for the rational investor. Homo Economicus is a perfectly rational operator, i.e., her trades and investments are always based upon the soundest of logic (given currently available information). Let's first examine two common attacks on this hypothesis:

  1. information is not disseminated in a perfectly uniform manner
  2. humans are incessantly clasped by manias, panics, and poor statistical reasoning

Information propagates irregularly; naturally evolving incentive structures favor its undemocratic flow. Humans have recognized that raw data and information have intrinsic value since at least the dawn of reason. Moreover, the price each party would pay differs, depending on how actionable the information would be to them (perceived utility). Parties with more skin in a certain game are willing to pay for better, faster access to information (David Ricardo's bond market hijinks surrounding the Battle of Waterloo are a classic example, as are high-frequency traders paying high fees for low-latency access to exchanges so they can front-run others' trades). Asymmetric access to information is a given.

When the stakes are particularly high, poisoning the public wells with misdirection and outright deception can be sufficiently advantageous to warrant the risk. While economic models can be constructed that tolerate deception, it is no small task. Deception isn't easily reducible to a metric; there are myriad ways, both direct and convoluted, to deceive or mislead.

As for point (2), emotionality and irrationality in economic decision making are expounded most eloquently by Kindleberger and Chancellor. From a cognitive perspective, Kahneman offers us some insight into why humans behave in a manner at odds with math and statistics. There's no use belaboring the point; this is well-trodden ground. We're not 'rational' in the strict sense that most of us would consider its definition. What it truly means to be rational is a topic I must save for another day.

While I’m sympathetic to these age-old arguments, they are superfluous. By fixating on human nature, we ignore elemental aspects of reality. We must spelunk deeper to understand what it means for humans—and ultimately machines—to be rational. This will require a lengthier exposition of mathematical and physical limits than is reasonable for one post. Consequently, we'll explore these limits, and work toward a more meaningful definition of 'rational', over a series of posts.


Going beyond the well-worn, there is a new-old idea that will force a reevaluation of economic forecasting in coming years. Attributable to a great 18th century mind, Laplace, it was lost to the footnotes of history until recently. I'm referring to the attacks leveled by Ole Peters and Nobel Physics Laureate Murray Gell-Mann. Their contentions rest on an interesting property of dynamics: ergodicity. It's a deep topic, so I will only summarize it here.

Consider a coin-flip game where each flip costs you 10% of your wealth, but a win pays back 20%; each flip therefore nets out to either down 10% or up 10%, depending on the outcome. After each flip, you expect a 50% chance of being up 10% and a 50% chance of being down 10%. Most people, when trying to understand how they will fare, look at the average outcome of a single flip, then extrapolate to many flips. From the perspective of averages, this game looks like an exact break-even.

Yet the game, when repeated many times, has a counterintuitive outcome. Given the average outcome of a single game, most assume that repeating the game indefinitely will similarly average out to no net change in wealth. Fire up a computer simulation, however, and the results are surprising: in the fullness of time, wealth always goes to zero.
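A minimal simulation makes this concrete. (This is a sketch; the seed and flip count are arbitrary choices for illustration.)

```python
import random

random.seed(42)  # arbitrary seed, for reproducibility

wealth = 1.0
for _ in range(10_000):
    # heads: up 10%; tails: down 10%
    wealth *= 1.1 if random.random() < 0.5 else 0.9

print(wealth)  # vanishingly close to zero
```

Try any seed you like; the final wealth is always a rounding error away from zero.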

Illustration of one player's wealth taking a meandering path as it decays towards destitution. Note: if we were to zoom in on the first 100 plays, there is a time when the player is significantly in the black (this is a semi-log chart). As Nassim Taleb often cautions, beware attributing to talent that which may be mere luck. If we had extrapolated from those first 100 plays, we could easily have built a false narrative of a comeback story. This narrative is false in two senses: 1) we would be ascribing a narrative to a wholly random process, and 2) we would be applying bell curve statistics to a non-ergodic observable.

As anyone who has lost money in the stock market knows, recovering from a major loss is considerably harder than you initially expected. This asymmetry in our expectations could be avoided by thinking in terms of logarithms rather than percentages, but I digress.

If you're adept at algebra, the reason why the 10%-up/10%-down game is a loser should be pretty obvious. (Non-mathematicians can safely skip this section.) According to the average-based rationale I explained earlier, you would expect the values after a win and a loss to average out to your prior wealth. Hence, your wealth after winning or losing the game has to be $11/10$ or $9/10$ of your original wealth, respectively ($\frac{11/10 + 9/10}{2} = 1$, meaning that—on average—you break even after each game).

Since we know this game leads to total losses, clearly there is something wrong with the average-based rationale. The answer lies in how we calculate winnings. We're multiplying those fractions together—not adding them—when determining net winnings. For example, if we win three times, and lose three times, we'd calculate our new wealth as follows:

$ Wealth_{new} = Wealth_{old} \times \left(\frac{11}{10}\right)^3 \times \left(\frac{9}{10}\right)^3 $

Assuming a fair coin, as the number of flips grows extremely large, there will be an equal number of $11/10$ and $9/10$ terms in the product (as heads and tails outcomes will balance out in the fullness of time). When the exponents are the same, an equivalent way to express this is to group each win-loss pair together:

$ Wealth_{new} = Wealth_{old} \times \left(\frac{11}{10} \times \frac{9}{10}\right)^3 $

But hold on a second, you may object: those terms are not reciprocals. Indeed:

$ Wealth_{new} = Wealth_{old} \times \left(\frac{99}{100}\right)^3 $

If instead of three heads and three tails, we flipped a trillion heads and a trillion tails, we do not get a number anywhere close to 1, but $\left(\frac{99}{100}\right)^{1,000,000,000,000}$ instead. That number is very, very far from the $1$ we would need for our old wealth and new wealth to be the same; our wealth after playing two trillion times is, in fact, extremely close to zero. Thus, we're surely not breaking even on each round, despite what the averages say about each individual game.
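It takes far fewer than a trillion win-loss pairs for those $99/100$ factors to do their damage (a quick sketch; the exponents are arbitrary):

```python
# each win-loss pair multiplies wealth by 99/100
for n in (10, 100, 1000):
    print(n, 0.99 ** n)

# 0.99**1000 is already about 4.3e-5; at a trillion pairs the
# result underflows to exactly 0.0 in double-precision floats
print(0.99 ** 1_000_000_000_000)
```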

In reality, if we want to break even under continuous game play, then the win and loss multipliers would have to be reciprocals, e.g., $10/9$ and $9/10$ of our wealth, not the $11/10$ and $9/10$ that average-based reasoning would predict. In other words, each win has to not only break even, but also fully undo—under multiplication, not addition—the loss from the other half of play.
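Exact rational arithmetic confirms that only reciprocal multipliers break even (a sketch using Python's `fractions` module):

```python
from fractions import Fraction

# one win-loss pair under each payoff scheme
naive = Fraction(11, 10) * Fraction(9, 10)  # average-based payoffs: 99/100, a loser
fair = Fraction(10, 9) * Fraction(9, 10)    # reciprocal payoffs: exactly 1

print(naive, fair)  # → 99/100 1
```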


The digression into ergodicity probably feels unrelated. It is, unfortunately, an illustration of a common logical error in economic reasoning. Much of modern economics is based upon the assumption that the average outcome of a single game in isolation can be extrapolated to repeated play. As the above example illustrates, this is faulty reasoning. Another way to put this: the Kelly Criterion is correct. As evolution impels us all, the primacy of living another day applies equally to the preservation of capital.
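The Kelly view amounts to judging a repeated multiplicative game by its expected log growth rather than its expected value. Applied to the game above (a sketch, using the payoffs already derived):

```python
from math import exp, log

p_win, up, down = 0.5, 1.1, 0.9  # the 10%-up/10%-down game

# expected value of a single flip: exactly break-even
expected_value = p_win * up + (1 - p_win) * down

# expected log growth: negative, so repeated play grinds wealth to zero
expected_log_growth = p_win * log(up) + (1 - p_win) * log(down)

print(expected_value)            # 1.0
print(exp(expected_log_growth))  # the typical per-flip growth factor, below 1
```

The typical per-flip growth factor works out to $\sqrt{99/100} \approx 0.995$, which is the per-round signature of the decay we simulated earlier.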

Investing, like life, is a repeated game. Our ultimate goal is to end as in-the-money as possible. Evaluating a strategy on the basis of averages ('expected value' if you want to get technical) is to ignore that outcomes of economic activity are almost never shaped like bell curves. Wealth evolves differently through time than averages would suggest.

Your best individual strategy is not likely to be defined by the aggregate statistics of an "optimal" strategy. Switching our frame of reference from bell-curve statistics to our own random walk through time is difficult. This is where market quants apply analytical techniques with fancy names, such as Monte Carlo simulation. My point, in contrast, is that we can simplify our outlook merely by discarding the faulty reasoning based on averages.
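A small Monte Carlo experiment shows the two frames of reference diverging directly: the ensemble average of wealth across many players hovers near 1, while the typical (median) player decays. (A sketch; the population size, horizon, and seed are arbitrary.)

```python
import random
from statistics import mean, median

random.seed(0)  # arbitrary seed, for reproducibility

def play(flips: int) -> float:
    """Final wealth of one player after the given number of flips."""
    wealth = 1.0
    for _ in range(flips):
        wealth *= 1.1 if random.random() < 0.5 else 0.9
    return wealth

population = [play(100) for _ in range(20_000)]

print(mean(population))    # near 1.0: the ensemble average breaks even
print(median(population))  # well below 1.0: the typical player loses
```

The gap between the two numbers is the non-ergodicity: the average over many players at a fixed time does not match the fate of one player through time.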

In repeated games, as in real life, staying alive is sine qua non. This is why Charlie Munger stresses the importance of not making stupid decisions. As Nassim Taleb would say, we should decide how to proceed via negativa. Trying to predict the optimal path forward is a fool's errand. Yet, predicting—and subsequently avoiding—bad outcomes is often tractable.

For the record, while I am roasting the decision theory you may have learned in business school, I am not impugning capitalism. Regardless of your stance on modern economics, it is a fundamental truth that investment is an exceptionally efficient mechanism by which people with a surfeit of money and a dearth of time or ideas barter with those who have a surplus of ideas and time, but scarce capital. Winning the game is no mere formula; life is too complex for simple statistical measures to be our arbiters.

The idea that we should want to apply optimization techniques at the outset of every economic decision is unhelpful at best. We are Bayesian learners: we continuously refine our probabilistic understanding of the world around us via collection and analysis of data. Much of this is—and as I'll argue in subsequent posts, must be—subconscious. We learn as situations unfold before our senses. The most rational strategy in the world is to embrace uncertainty—with faith that in due time our course of action can be adjusted as present becomes past, and clarity of circumstances consequently refines our understanding.

While I fully agree with the arguments presented above, I think there is some value in ignoring them for the moment. Instead, we'll move forward under the assumption that economic systems are built from a field of rational actors, all observing economic variables that have the ergodic property—a proof by contradiction. With that plan in mind, I'll close out this first dispatch.

Part II