Trading Strategies

Everyone has their own pet strategy for making money on Wall Street, but so far, I haven’t seen any systematic analysis of which ones actually work. People, as often as not, just guess at random and hope for the best. To help fill this gaping void in our knowledge, I propose a competition in trading strategies. Proposed rules:

- Any publicly traded asset is fair game. This includes stocks, bonds, commodities, derivatives, etc.

- You cannot borrow more money to invest; you’re stuck with what you have at the start. Leverage is cheating, and it’s also dangerous (margin calls were a primary cause of the Great Depression).

- The winning strategy is the one that makes the most money from a starting asset base, over a set time period, using historical data.

- There should be different categories for different starting asset bases ($100K, $1M, $10M, etc.) and different time scales (2 years, 5 years, etc.).

- Strategies may not use hindsight. For this reason, naming specific companies or assets (e.g., GE stock or IBM bonds) is not allowed. Absolute times (e.g., buy tech stock in 1996 and sell in 2000) are also not allowed.

- Strategies must be sane enough to be describable in a dozen or so pages of text, understandable by anyone reasonably well versed in economics. Those pages should contain just the details of how to make money; the count does not include citations, references to prior work, justification, descriptions of the research process, and so on.

I will personally fund such a competition with $250, so long as the final rules are reasonably similar to those proposed above. Any other takers, or rules I should include?
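As a purely illustrative sketch of how entries might be scored under these rules, here is a minimal no-leverage backtester. The strategy interface, the pandas price table, and the cash/short checks are my assumptions, not part of the proposal:

```python
import pandas as pd

def score_strategy(strategy, prices: pd.DataFrame, starting_cash: float) -> float:
    """Replay a strategy against historical prices and return its final wealth.

    `strategy` is assumed to be a function mapping
    (date, prices_so_far, holdings, cash) -> {asset: shares to buy (+) or sell (-)}.
    """
    cash = starting_cash
    holdings = {asset: 0.0 for asset in prices.columns}

    for date, row in prices.iterrows():
        orders = strategy(date, prices.loc[:date], dict(holdings), cash)
        for asset, qty in orders.items():
            cost = qty * row[asset]
            if cost > cash or holdings[asset] + qty < 0:
                # no borrowed money, no short positions: leverage is cheating
                raise ValueError("strategy tried to use leverage")
            cash -= cost
            holdings[asset] += qty

    # mark the remaining positions to market at the final prices
    final = prices.iloc[-1]
    return cash + sum(qty * final[asset] for asset, qty in holdings.items())
```

The different categories would then just be different choices of `starting_cash` and of the historical window covered by `prices`.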

EDIT: I am aware that many traders put a lot of emphasis on “instinct” or having a “feel for the market”. Large-scale markets developed far too recently for evolution to have given us specialized adaptations to them, so our gut feelings about markets are likely to be wildly inaccurate.

Hedge Funds

“The public’s out there throwing darts at a board, kid, I don’t throw darts at a board; I bet on sure things.” – Gordon Gekko

Hedge funds, by and large, are an obscenely profitable industry; worldwide, hedge funds now manage around US $3 trillion in capital. A hedge fund can easily make as much money in a single quarter as a traditional investment would in an entire year. For now, hedge funds are unavailable to those of us with less than US $1 million in liquid assets, due to SEC regulations; sooner or later, someone is going to figure out how to allow the middle class to invest in these funds, but that’s another topic.

Hedge funds make most of their money, in general, by finding imbalances in the market and then exploiting them. Finding market imbalances requires some work, but it’s not impossible or even exceptionally difficult. If no market imbalances existed, no publicly observable variable would have any correlation with the future price of an investment, which is obviously absurd; conversely, every such correlation is a potentially exploitable imbalance.
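As a toy illustration of that last point, here is one way to check whether some publicly observable signal is correlated with future returns. The signal, the horizon, and the data are placeholders of my own; any real imbalance would of course need far more careful testing:

```python
import pandas as pd

def signal_vs_future_return(prices: pd.Series, signal: pd.Series,
                            horizon_days: int = 20) -> float:
    """Correlation between today's value of a publicly observable signal and
    the return over the following `horizon_days` trading days."""
    future_return = prices.shift(-horizon_days) / prices - 1.0
    # a reliably nonzero correlation is a candidate imbalance to exploit
    return signal.corr(future_return)
```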

The main reason most investors don’t beat the market is the failure to look for such imbalances, not the failure to find them. Most people, and even most brokers, pick investments using a hodgepodge of inconsistent heuristics which are never written down and never tested. Human instinct has been shown to be horrendously wrong on a large number of objective tests, so this is no surprise to anyone familiar with cognitive psychology, but few investors have seriously studied the underlying research.

Once an imbalance is identified, it can be exploited through the other primary instrument of hedge funds: massive leverage. The public is now at least somewhat familiar with the concept of buying on margin, as many middle-class Americans own stock, and margin calls were identified as a primary cause of the Depression. Leverage in general is less well known, but it amounts to the same thing: borrowing several dollars for each dollar of capital, investing the total, and siphoning off the difference between the investment return and the interest rate. Leverage can increase returns, but it also increases risk, as you can wind up losing more money than you started with. Hedge funds try to reduce this risk by diversifying across many different investments, but it isn’t a perfect system, and many funds do go bust.
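For concreteness, here is the arithmetic of leverage as a small sketch. The 10:1 ratio and the rates are arbitrary numbers chosen for illustration, not figures from any actual fund:

```python
def return_on_capital(asset_return: float, borrow_rate: float, leverage: float) -> float:
    """Return on the fund's own capital when it borrows (leverage - 1) dollars per
    dollar of capital, invests the total, and pays interest on the borrowed part."""
    return leverage * asset_return - (leverage - 1) * borrow_rate

# At 10:1 leverage, a 7% asset return over a 5% borrowing cost becomes a 25% return...
print(return_on_capital(0.07, 0.05, 10))   # ~0.25
# ...while a 5% loss on the assets wipes out 95% of the capital.
print(return_on_capital(-0.05, 0.05, 10))  # ~-0.95
```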

Conservation of Ranking

Suppose that you want to rank the members of a set X = {x1, x2, x3, x4} relative to each other. You can, say, assign x1 = 40, x2 = 30, x3 = 20, and x4 = 10 by some measurement metric Y. Or, you can assign x1 = 80, x2 = 60, x3 = 40, and x4 = 20. Or you can assign x1 = 4, x2 = 3….

No matter which one you choose, as long as the assignments differ only by a positive scale factor, the elements of the set stay in the same positions relative to each other (which is all we care about, by assumption). In math-speak, the ranking given by the measurement metric Y is invariant under positive linear maps, since the important properties don’t change when you multiply by, or add, arbitrary constants. It’s generally inconvenient to carry huge constants around, so people usually renormalize Y so that the total (or the maximum) is some neat number, such as 1 or 100.
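A minimal sketch of both points, using the numbers from the example above (the code itself is just an illustration of mine, not part of the original argument):

```python
# Ranking is unchanged by any positive linear map y -> a*y + b with a > 0,
# and renormalizing just rescales the scores so they total some neat number.
scores = {"x1": 40, "x2": 30, "x3": 20, "x4": 10}
rescaled = {k: 2 * v for k, v in scores.items()}   # the 80/60/40/20 assignment

def ranking(d):
    return sorted(d, key=d.get, reverse=True)      # keys ordered best-to-worst

assert ranking(scores) == ranking(rescaled)        # same order either way

total = sum(scores.values())
normalized = {k: v / total for k, v in scores.items()}   # 0.4, 0.3, 0.2, 0.1
assert abs(sum(normalized.values()) - 1.0) < 1e-9        # renormalized to 1
```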

Humans, however, aren’t born with this kind of math built in, and so you can pull tricks by failing to renormalize. Very few people will notice if your percentages add up to 105%. Without quantitative analysis, it’s even worse, as you can use qualifiers (“important”, “big”, “profitable”, “useful”, etc.) to stop people from renormalizing without setting off alarm bells. This is a very, very old trick; the best way to counter it, generally, is to quantify the metric and then make sure it’s renormalized during every step. Some cases where this comes in handy:

- Probability. For reasons of mathematical sanity, probabilities are always renormalized to sum to 1, although there are some cases where you can get away with other numbers (e.g., Bayes’ Theorem still gives the same posterior if you multiply all the priors by 100, since the common factor cancels out; see the sketch after this list). Nevertheless, quacks worldwide still fail to renormalize, claiming after the fact to have predicted every possible result with high confidence.

- Utility. Utility functions are invariant under positive linear maps, and they can generally be renormalized to whatever you like (finite numbers are usually necessary, as explained here). Trying to make your life wonderful by assigning a high utility to everything is the same mistake as an amateur economist failing to account for opportunity costs.

- Priority. If you generalize priority from a simple preference ranking to the quantity of resources allocated, it should be invariant and renormalized in the same way, since the total pool of resources is fixed. I am still stunned by how many managers insist that every task is extremely important; calling everything top priority is just another failure to renormalize.

- Grades of all varieties. Failure to renormalize here is better known as grade inflation. Note that renormalization is not bell-curve grading; it corresponds to, say, the interchangeability of GPAs on a 4.0 scale and averages on a 100-point scale.

- Competitions of all varieties. Professional sports are generally immune to renormalization failures, as there can obviously be only ten teams in the top ten. However, the phenomenon is rampant in school sports, thanks to the “self-esteem” culture.
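To make the Bayes’ Theorem aside in the probability item concrete, here is a minimal sketch (the hypotheses and numbers are invented): multiplying all of the unnormalized priors by the same factor cancels out once the posterior is renormalized.

```python
def posterior(priors, likelihoods):
    """Posterior over hypotheses from (possibly unnormalized) priors and
    the likelihoods P(evidence | hypothesis)."""
    unnormalized = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]   # renormalize so the posterior sums to 1

likelihoods = [0.8, 0.1]                    # P(evidence | H1), P(evidence | H2)
print(posterior([0.3, 0.7], likelihoods))   # [0.774..., 0.225...]
print(posterior([30, 70], likelihoods))     # same answer: the factor of 100 cancels
```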

FAI Knowledge Survey

The problem of recruiting FAI researchers has been discussed in greater detail elsewhere, but at the moment we don’t even have a clear idea of how easy or difficult recruiting will be. Therefore, I suggest that SIAI or a team of volunteers draw up a dozen or so questions, with known, well-defined answers, such that being able to solve them is an obvious prerequisite to doing useful FAI work. Such a survey could be distributed extremely easily, at little to no cost, using already-existing networks such as SL4.

The point of this should not be to judge the hireability of any one specific individual, which also depends on ethics and ability to work in teams, among many other factors. It should, however, be useful in determining how widely distributed background knowledge is; hopefully, it should also tell us which fields are not well-covered by existing literature and which may need further exposition.