Archive for March, 2009

Consumption is exogenous to Capitalism

Monday, March 30th, 2009

“I don’t know which is worse… that everyone has his price, or that the price is always so low.”
— Hobbes

The capitalist framework is quite general: capitalist theories dictate when one should invest, but they are completely agnostic as to what, or when, one should consume.

The idea of consumption is exogenous to capitalism and can be defined to be anything. For investment to take place, all that is required is a sufficiently low discount rate. For example, let’s say you want to save starving children in Africa. If you value feeding 10000 starving children in a year at least as much as you value feeding 1000 starving children now, your discount rate is at most 900%, and you should invest in anything with a greater than 900% annual return. That particular hurdle rate would be almost impossible to clear, but most altruistic goals have much lower discount rates.
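To make the arithmetic concrete, here is a minimal sketch in Python, using only the hypothetical numbers from the example above (the 1200% candidate return is likewise made up):

    # Minimal sketch of the discount-rate arithmetic in the example above.
    # All numbers are the hypothetical ones from the text, not a real charity model.

    children_now = 1000      # children you could feed immediately
    children_later = 10000   # children you could feed in one year

    # The discount rate at which you are indifferent between the two options:
    # children_later = children_now * (1 + r)  =>  r = children_later / children_now - 1
    indifference_rate = children_later / children_now - 1
    print(f"implied annual discount rate: {indifference_rate:.0%}")   # 900%

    # Any investment clearing that hurdle is worth taking before giving.
    candidate_return = 12.0   # a (hypothetical) 1200% annual return
    print("invest first" if candidate_return > indifference_rate else "give now")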

Extra credit: does morality have a discount rate?

A little knowledge can be a dangerous thing

Saturday, March 28th, 2009

On Monday’s EconTalk, Taleb said that knowledge was harmful. Here’s my take.

Formalisms are born in academia, where they establish boundaries within which solutions are sought. Those boundaries make it possible for ideas to engage, by directing people to compete on the same problems. Boundaries are necessary for focus; they make it easier to share and hybridize ideas. The real game (of life) is broken up into sub-games in which academics play for status and respect.

The sub-games of academia have independent intellectual histories and traditions. Within each sub-game, there is enough competition that solutions are dependable and valid. However, to actually use an academic result in the real world, one has to make sure the sub-game itself is set up correctly. Sub-game incorrectness takes longer to catch – paradigms have inertia. Assumptions are often accepted by players to make the competition a good game with clear winners, not so much to reflect reality.

Alas, the academic quantification of risk in the finance sub-game has falsely enthralled us all, when in fact it is a weak shadow of real world uncertainty.

The Instrumentality of Risk Aversion

Saturday, March 28th, 2009

In decision theory, risk aversion is a terminal goal

In decision theory, risk aversion is represented by concavity in the utility function. Concave functions bend downwards, so a straight line drawn between any two points on the curve lies below the curve, and any weighted average of two function values is smaller than the function evaluated at the same weighted average of the two points.

[Figure: definition of a concave function]

Px*f(x) + Py*f(y) < f(Px*x + Py*y), where Px + Py = 1 and Px, Py > 0

For a given average outcome, the expected utility is higher when the possible outcomes are less spread out. Maximizing expected utility therefore makes the agent prefer a smaller variance in outcomes. This is how decision theory generates risk-averse behavior.
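As a quick numeric check, here is a small Python sketch assuming a concave utility u(x) = sqrt(x) and a 50/50 gamble between 0 and 100; the numbers are only for illustration:

    # Numeric check of the concavity inequality above, with an assumed
    # concave utility u(x) = sqrt(x) and a 50/50 lottery between 0 and 100.
    import math

    u = math.sqrt
    p, x, y = 0.5, 0.0, 100.0

    expected_utility = p * u(x) + (1 - p) * u(y)       # utility of the gamble
    utility_of_expectation = u(p * x + (1 - p) * y)    # utility of the sure average

    print(expected_utility)        # 5.0
    print(utility_of_expectation)  # ~7.07

    # Same mean outcome, but the sure thing has higher utility:
    assert expected_utility < utility_of_expectation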

Presented in this way, the desire to reduce variance is fundamental, an end in itself, and not subject to justification.

Risk aversion seems to be instrumental in reality

When I consider a risky choice, however, I don’t undergo anything like the process of utility maximization. If I am highly uncertain, I may

  1. delay the decision and acquire more information
  2. avoid the choice to avoid complicating the context of future decisions
  3. avoid the choice because of the fear that uncertainty makes me susceptible to fraud, since other agents have freedom in the uncertain space.

These reasons all treat uncertainty as an indicator of incomplete information: a cue to spend more on information, or an opportunity to avoid future information costs. That response is excluded a priori if you assume information costs are zero, as canonical economic theory does.
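As a toy illustration of the first option, here is what the comparison looks like in Python once information is allowed to have a price. All payoffs, probabilities, and costs below are invented:

    # A toy value-of-information calculation, illustrating option 1 above.
    # All numbers are made up for illustration.

    p_good = 0.5            # prior probability the risky project succeeds
    payoff_good = 100.0
    payoff_bad = -80.0
    info_cost = 5.0         # price of a signal that reveals the true state

    ev_act_now = p_good * payoff_good + (1 - p_good) * payoff_bad         # 10.0

    # With perfect information you invest only in the good state:
    ev_with_info = p_good * payoff_good + (1 - p_good) * 0.0 - info_cost  # 45.0

    print(ev_act_now, ev_with_info)
    # The "risk-averse looking" behavior (not acting yet) is just the
    # instrumentally rational choice of paying for information first.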

Presented in this way, the desire to reduce uncertainty is instrumental, a means of bettering the expected outcome, not a goal in itself.

Conclusion

Risk aversion’s relationship to the management of ignorance cannot be denied. Characterizing risk aversion as the maximization of concave utility functions misses this point. Exogeneity is useful for compartmentalizing and communicating theories, but in making ignorance exogenous you prevent an agent from choosing to learn more or to avoid the cost of future learning. Under that restriction, the model will not display a realistic reaction to risk.


Applied Abstraction – the terminal / instrumental dichotomy

Saturday, March 28th, 2009

Note: I use the words “value” and “goal” interchangeably in this post.

Terminal values are pursued for their own sake. Instrumental values are pursued as a means to attaining terminal values, or other instrumental values closer to the terminal values.

For example, if I had the ultimate goal of maximizing profit, I might break that up into the instrumental goals of increasing revenue and decreasing cost. Depending on the situation, those two instrumental goals can be broken up into even more specific instrumental sub-goals.

Instrumental sub-goals help reduce repetition in thought. Conditions often change in ways that change the actions an instrumental sub-goal demands, without changing the validity of how the terminal goal is divided into sub-goals.

Irrationality is often caused by holding on to instrumental sub-goals even though they no longer serve your terminal goal. This is what Eliezer calls cached thoughts.
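A contrived Python sketch of the profit example, with invented numbers, showing how a cached sub-goal goes stale when conditions change:

    # Contrived sketch of a cached instrumental sub-goal, with invented numbers.
    # Terminal goal: maximize profit = revenue - cost. Instrumental sub-goals:
    # "raise revenue" or "cut cost"; which one serves the terminal goal better
    # depends on current conditions.

    def best_subgoal(conditions):
        # Re-derive the instrumental sub-goal from the terminal goal each time.
        if conditions["revenue_headroom"] > conditions["cost_slack"]:
            return "raise revenue"
        return "cut cost"

    old_conditions = {"revenue_headroom": 2, "cost_slack": 10}
    new_conditions = {"revenue_headroom": 15, "cost_slack": 1}

    cached = best_subgoal(old_conditions)   # "cut cost"
    print("cached sub-goal:", cached)
    print("what the terminal goal now demands:", best_subgoal(new_conditions))
    # Keeping the cached sub-goal under the new conditions is irrational in
    # exactly the sense described above.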

Stockholder Perks

Friday, March 27th, 2009

This came in the mail today:

[Images: 2008brkreport, 2008brkmeeting]

Applied Abstraction – the qualitative / quantitative dichotomy

Sunday, March 15th, 2009

Nominal – numbers are either the same or not

Ordinal – if different, one number is bigger than the other

Interval – differences in numbers can be compared

Ratio – zero is a significant point

The properties described are nested – nominality is needed for ordinality, which in turn is needed for the interval property, which in turn is needed for the ratio property.

Anything at the interval level or above is considered quantitative. Most qualitative measures serve only a nominal function, i.e. telling whether two things are the same or not.
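One rough way to see the nesting is to map each level to the comparisons it licenses; the Python below is an illustration, not a formal treatment:

    # Rough mapping from measurement level to the comparisons it licenses.
    # The nesting described above shows up as each level inheriting the
    # operations of the levels before it.

    a, b = 30.0, 10.0   # two measurements

    # Nominal: same or not
    print(a == b)       # valid for any level

    # Ordinal: one is bigger
    print(a > b)        # needs at least ordinal data

    # Interval: differences are meaningful
    print(a - b)        # needs at least interval data (e.g. Celsius)

    # Ratio: zero is meaningful, so ratios make sense
    print(a / b)        # needs ratio data (e.g. Kelvin, length, money)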

More formally: http://en.wikipedia.org/wiki/Level_of_measurement [HT: Soo Mei]

Applied Abstraction – the transient / steady state dichotomy

Sunday, March 15th, 2009

The language for this one comes out of physics. It is a general phenomenon seen in many places. In chemistry the words used are kinetics / thermodynamics.

Equations of Motion – When we understand what causes change in a system, we are able to predict what the state of the system will be in the future given its state now. The rules which dictate the change are called equations of motion, or EOM.

Information Loss – Sometimes, EOMs leak information. This means that where it was possible to distinguish between two possible worlds at time t1, you find them indistinguishable at some later time t2. Funnels are a good example – when you use a funnel you don’t have to care exactly where the liquid starts out, because it all ends up at the same narrow end.

Transience / Steady State – When an EOM discards all the information it has, no matter where you start out you will end up at the same place. Most of the time, however, EOMs don’t lose all the information, and the movements which are subject to rubbing out, or friction, are called transient movements, while whatever persists in the long run is called the steady state.
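A minimal numeric illustration in Python: a made-up discrete EOM that leaks information, so different starting states converge to the same steady state:

    # A made-up discrete "equation of motion" that leaks information.
    # x_{t+1} = 0.5 * x_t + 1 contracts toward the fixed point x = 2, so
    # different starting states become indistinguishable; the approach is
    # the transient, the fixed point is the steady state.

    def step(x):
        return 0.5 * x + 1.0

    for x0 in (0.0, 10.0, -6.0):
        x = x0
        for _ in range(30):
            x = step(x)
        print(f"start {x0:>5}: after 30 steps x = {x:.6f}")   # all ~2.0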

Applied Abstraction – The Dichotomy Series

Wednesday, March 4th, 2009

This is an index for an upcoming series of posts on the various dichotomies used to apply abstraction to modeling and problem solving. The list below will gradually be replaced with links.

Each pair of terms is hopefully balanced, being specific to the same degree as the other. The different pairs are not exactly orthogonal, but are hopefully orthogonal enough for the collection as a whole to serve as a typology.

I finally get Fractional Reserve Banking

Wednesday, March 4th, 2009

The MIT alumni club in New York organizes an event called finance brunch, through which this article was forwarded to me last weekend:

http://www.truthsetsusfree.com/ModernMoneyMechanics.pdf

There is a blow-by-blow account inside on the mechanics of the balance sheet when a bank makes loans and takes deposits.

The basic idea is this: ΔAssets = ΔLiabilities is always true because it is an accounting identity. However, liquid assets = liquid liabilities is not always true.

When a bank lends out money, it merely writes numbers into its ledger. The (asset~liability) pair created is (loan~checking account). When the borrower or a depositor withdraws currency, however, it is (reserves~checking account) that is deducted. The reserve requirement is the required ratio of reserves to checking accounts, basically a requirement that banks always be able to pay cash for a certain fraction of their checking account balances.
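Here is a toy Python version of those mechanics. The class and the numbers are invented for illustration, not taken from the article:

    # Toy version of the balance-sheet mechanics described above. Names and
    # numbers are made up; the point is only that a loan creates a matching
    # (asset, liability) pair, while a cash withdrawal draws down reserves.

    class ToyBank:
        def __init__(self, reserves):
            self.reserves = reserves   # asset: vault cash / central bank balance
            self.loans = 0.0           # asset: claims on borrowers
            self.checking = 0.0        # liability: depositors' checking balances

        def make_loan(self, amount):
            # Lending just writes two numbers: (loan ~ checking account).
            self.loans += amount
            self.checking += amount

        def withdraw_cash(self, amount):
            # Withdrawal deducts (reserves ~ checking account).
            self.reserves -= amount
            self.checking -= amount

        def reserve_ratio(self):
            return self.reserves / self.checking if self.checking else float("inf")

    bank = ToyBank(reserves=100.0)
    bank.make_loan(900.0)
    print(bank.reserve_ratio())    # ~0.111: the ledger still balances,
    bank.withdraw_cash(50.0)       # but liquidity is a separate constraint
    print(bank.reserve_ratio())    # ~0.059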

Paying for your Economics / Computational assumptions

Wednesday, March 4th, 2009

In a digital computer, wires have one of two states, 0 or 1. This is not natural, however – many possible voltages can be put on the wires, but only a small subset of these ever occurs inside a digital computer: those voltage values which can be unambiguously interpreted as either 0 or 1. This is a design goal – the components of a digital computer work in such a way as to maintain the interpretability of the wire voltages, and also to maintain the interpretability of the transistor physics as logic gates.

This idealization of the computer’s voltage states is used to standardize the components and the component interactions. That standard behavior then forms the basis for logical induction: we can design circuits which as a whole have too many possible states to test exhaustively, yet can be known to be correct by virtue of the correct interactions between their parts. The whole can be known to be correct because the parts are correct. There are many devices that can be interpreted as logic gates – different implementations of the same idea, including domino tiles, water channels, the Game of Life, and wooden blocks. The reason we choose silicon is that silicon devices maintain interpretability even at high frequencies, and can be made very small.
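A small Python sketch of the two layers: a voltage is interpreted as a bit by thresholding, and gates built on that interpretation compose into a circuit whose correctness follows from its parts. The threshold values are illustrative, not any particular logic family’s spec:

    # Sketch of the two layers described above: thresholding turns an analog
    # voltage into a bit, and gates built on that interpretation compose into
    # circuits whose correctness follows from the correctness of the parts.
    # Threshold values are illustrative only.

    def to_bit(voltage, low=0.8, high=2.0):
        if voltage <= low:
            return 0
        if voltage >= high:
            return 1
        raise ValueError("voltage in the forbidden zone: not interpretable as a bit")

    def nand(a, b):
        return 0 if (a and b) else 1

    # XOR built only from NAND gates; correct because each part is correct.
    def xor(a, b):
        c = nand(a, b)
        return nand(nand(a, c), nand(b, c))

    print([xor(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 0]
    print(to_bit(3.1), to_bit(0.2))                      # 1 0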

Economists have theories about how economic processes work, and many of these theories start with assumptions like “no transaction cost” and “no externality”. These don’t have to be taken as passive observations about reality. Rather, they should be taken as conditions worth bringing about, because of their utility in market processes.

One case which interests me greatly is the government intervention used to create the neutrality of money. Brad DeLong describes Keynes’s General Theory as trying to make Say’s Law true.