Archive for October, 2008

Bad Questions

Tuesday, October 28th, 2008

Ideas derive meaning from the expectations they generate. I successfully transfer an idea to you when the idea leads you to generate the same set of expectations it generates in me. Whether you agree is a different issue; you can understand but disagree. In fact, understanding is a prerequisite to disagreeing – to constantly disagree due to misunderstanding is a waste of time.

Evolution, Economics, and Normative Standards

Sunday, October 26th, 2008

I used to be uncomfortable with how circular Darwinian evolution seemed. It had seemed like, by defining fitness as whatever survives, I could reduce Darwinian evolution to a truism. This is indeed possible if you don't delve into the implementation details: the necessity of genotype-phenotype correspondence, and the mapping of genotypic stability to phenotypic fitness-function proximity. I wonder how many people out there have felt like they understand evolution without seeing how it could possibly be wrong, i.e. without the ability to identify the counterfactuals that the idea eliminates. I think this is because the power of evolution as an optimizing process brings it close enough to the optimizer that is our intelligence that we cannot decisively outpredict it in every scenario – i.e. we cannot always predict what evolution will come up with, even though we outperform it on most problems.

In the same way, I've been struck by how circular standard economic theory is. The argument for allowing all voluntary transactions is this: if a transaction is voluntary, with both participants agreeing to it, it must improve both of their well-being. This seemed like an overwhelmingly powerful argument to me at first glance – so powerful that I realized my understanding of it must be wrong.

The conditions for the correctness of this argument are important, because economics is a process of social computation and optimization powerful enough to defeat our ability to predict its results – we must know that it is correct by proof, not by enumerating all possible scenarios.

I later learned about externalities – not all transactions are voluntary. At one extreme, you have armed robberies, where the transaction can clearly be inefficient because the robber could have less use for an item than the robbee. At the next level, you have the standard tragedy of the commons. At the other extreme, I shift my attention away from the word "voluntary" and toward the word "participant". What does it mean to be a participant in a transaction?

There are two senses in which a person is a participant in a voluntary economic transaction: first, his well-being is affected by it; second, he has veto power over it. Every transaction for which these two senses identify the same set of people improves well-being. The problem is that for my well-being to be affected by a transaction, all that is necessary is for me to know about it. I can choose to be unhappy over many transactions controlled by others.

If I witness a person overpaying for a product in a store, it makes me unhappy, yet that is clearly not considered an externality in conventional economics texts. Likewise, people resent income inequality, and yet that is not considered an externality in the conventional sense either. The externality concept clearly contains hidden normative assumptions which I have yet to understand.

To Be Continued…

On Blogging

Sunday, October 26th, 2008

Ben Casnocha describes Seth Godin on blogging:

Seth says that blogging is not about the size of your audience but the “meta cognition” of thinking about what you’re going to say and then saying it, having to explain yourself. Tom says that blogging is the most important professional thing he’s done in the last 15 years. Note: Seth says “blogging is free.” Not true. Blogging takes time. Time is money.

Ah, meta-cognition. No wonder I've had difficulty blogging. I've been suspicious of all things meta for a while now, ever since deciding that fundamental understanding was not as relevant as it felt, reading GEB and realizing how unstable introspection was liable to be, and then coming across Yudkowsky's treatises-for-the-purpose-of-epistemic-health (here) and realizing how many of those mistakes I had made myself.

With meta-levels as depth, my searches have been too deep and not wide enough. As I have opened my eyes to the evidence, I have realized how little the type of depth I favored corresponded to a real reduction in decisional uncertainty. Deep thinking can still be good, but not deep thinking of this sort. Bad assumptions and bad recursive structure, in other words. Depth is not a virtue in and of itself; it can only be one if the assumptions and algorithms followed are sound. To understand is to compress, and to compress is to refine the mapping and make better use of the primitives in the representation of a given reality. To do that, one has to understand both reality and the primitives. Doing so separates specification from implementation – not every thought a brain thinks is about itself.

Make these!

Tuesday, October 14th, 2008

Lumbar support straps are not common as it is – I'd have to try one out to find out why – but I would really like to see one combined with a simple massage device, perhaps for use on public transportation.

When SMSing multiple people, I wish there were a "Reply to All" function. This would be relatively easy to implement in the same manner as conference calling services: each participant SMSes the same number and extension, and thereafter receives all messages from that cluster. Having a web-accessible trail for such a service would be a bonus.
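As a rough sketch of that fan-out idea – the gateway number, extensions, and send_sms function below are all hypothetical stand-ins for whatever a real carrier API would provide – the routing logic could look like this in Python:

groups = {}  # (service number, extension) -> set of participant phone numbers

def send_sms(to, text):
    # Placeholder for whatever carrier/gateway call would actually deliver a text.
    print(f"-> {to}: {text}")

def handle_inbound(sender, service_number, extension, text):
    # Join the sender to the cluster, then fan the message out to everyone else.
    cluster = groups.setdefault((service_number, extension), set())
    cluster.add(sender)
    for member in cluster:
        if member != sender:
            send_sms(member, f"[{extension}] {sender}: {text}")

# Two people texting the same number and extension form a cluster:
handle_inbound("+15550001", "+15559999", "42", "Lunch at noon?")
handle_inbound("+15550002", "+15559999", "42", "Sure.")

Keying everything on the (number, extension) pair is also what would make the web-accessible trail straightforward to log.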

Learning from Repetition

Monday, October 13th, 2008

Gerald Sussman teaches a class at MIT called Structure and Interpretation of Classical Mechanics, where students study classical mechanics with the aid of powerful Scheme notation.

Using Scheme has the benefit of exposing just how impressionistic popular calculus notation is. For example, the fact that partial derivatives operate on argument positions of a function emerges straight from the notation – making it much easier to understand the distinction between taking a derivative and evaluating it at a point or on a plane, and clarifying the distinction between a variable and a number.
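To make this concrete – in Python rather than the Scheme/scmutils system the course actually uses, with D and partial as illustrative stand-ins for the real operators – a derivative operator maps a function to a function, and a partial derivative selects an argument position rather than a variable name:

def D(f, h=1e-6):
    # Taking the derivative returns a *function*, not a number.
    def df(x):
        return (f(x + h) - f(x - h)) / (2 * h)
    return df

def partial(i, h=1e-6):
    # Returns an operator that differentiates with respect to argument position i.
    def op(f):
        def df(*args):
            up, down = list(args), list(args)
            up[i] += h
            down[i] -= h
            return (f(*up) - f(*down)) / (2 * h)
        return df
    return op

def square(x):
    return x * x

dsquare = D(square)          # the derivative: still a function
print(dsquare(3.0))          # evaluating it at a point: ~6.0

def L(t, q, v):
    return 0.5 * v * v - q   # a toy Lagrangian in argument positions (t, q, v)

print(partial(2)(L)(0.0, 1.0, 2.0))  # derivative at position 2 (v): ~2.0

Nothing here refers to a variable named v – only to position 2 – which is exactly the distinction the notation forces you to confront.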

The crucial ingredient is the compiler which processes the custom notation – an evaluator which is uniform across space and time. Even though the compiler is effectively a black box, to the extent that we are certain of its spatial and temporal invariance and of its simplicity (i.e. that it is incapable of willfully conspiring against you, or of distinguishing between non-special variable names), it provides an effective learning environment.

In a way, this is a specific demonstration of a more general principle: it is only possible to learn about the world to the extent that it repeats. The usual assumptions are those of uniformity over space and time. Such assumptions can turn out to be wrong, of course – the Lucas Critique warns against using historical data without properly conditioning the observed correlations on hidden contexts / structural variables.