# Kelly Criterion - In English

This is a follow-up to the original Kelly Criterion post, which was a bit hard to digest. It explains the theory in much easier terms for those of us who are not as math inclined.

**Kelly made simpler**

Erik Reuter 6/30/97

The thread called "Kelly made simple" was not simple enough for me. Whatever David D'Aquin was demonstrating went totally past me. Most of the problem seemed to be terminology and definitions. So, I've read a Kelly FAQ to learn some of the jargon and assumptions, and here I will summarize my understanding. I think I understand the basics, but if I say something wrong, I hope someone will correct me. This is mostly a check to make sure I understood what I read, so please feel free to comment or elaborate on anything I say.

Consider a gambling game where you start with a bankroll B and have a probability p of winning V units and a probability (1 - p) of losing 1 unit.

The expected value of the win per unit bet is

W = p (V) + (1 - p)(-1) = p (V + 1) - 1
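As a quick sanity check, the expected win per unit bet can be computed directly; the numbers below are purely illustrative, not from the post:

```python
# Expected win per unit bet, W = p (V + 1) - 1, for a hypothetical game:
# win V = 2 units with probability p = 0.4, lose 1 unit otherwise.
p, V = 0.4, 2.0
W = p * (V + 1) - 1
print(W)  # ≈ 0.2, a positive edge of about 0.2 units per unit bet
```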

and if you bet a fraction f of the bankroll N times, then the expected value of the ending bankroll is

E[f, W, N, B] = B (1 + f W) ^ N , 0 <= f <= 1
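The closed form B (1 + f W) ^ N can be checked against the exact distribution of outcomes: after N bets with k wins, the bankroll is B (1 + f V)^k (1 - f)^(N - k), and k is binomially distributed. A sketch with made-up numbers:

```python
from math import comb

# Compare B (1 + f W)^N with the expectation taken over the exact
# Binomial(N, p) distribution of the number of wins k.
B, p, V, f, N = 100.0, 0.6, 1.0, 0.25, 20
W = p * (V + 1) - 1

exact = sum(comb(N, k) * p**k * (1 - p)**(N - k)
            * B * (1 + f * V)**k * (1 - f)**(N - k)
            for k in range(N + 1))
closed_form = B * (1 + f * W)**N
print(exact, closed_form)  # the two agree
```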

Assuming W > 0, it seems natural to ask how much to bet on each play of the game, given that N, W and B are known. Clearly, to maximize E[f] we should choose f = 1, that is, bet the entire bankroll each time. With f = 1, you will usually go broke for moderate to large values of N, unless p is close to 1. But on the rare occasions that you don't go broke, you win an enormous amount, and those rare huge wins offset the frequent ruin, so the expected value of the final bankroll is maximized at f = 1.
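The trade-off is easy to see numerically: with f = 1 you survive only if you win every one of the N bets, so the survival probability is p^N, while the expected final bankroll still grows as B (1 + W)^N. A sketch with illustrative numbers:

```python
# With f = 1, one loss wipes out the bankroll, so only a run of N straight
# wins survives; the expectation is carried entirely by that rare outcome.
B, p, V, N = 100.0, 0.6, 1.0, 25
W = p * (V + 1) - 1
survival = p**N            # probability of never going broke
expected = B * (1 + W)**N  # expected final bankroll at f = 1
print(survival, expected)  # tiny survival chance, yet a large expectation
```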

For some people, maximizing E[f] is not the most desirable goal. Kelly instead considered a (somewhat arbitrary) utility function whose expected value is to be maximized:

u[x] = Log[x]

where x is the bankroll and u[x] is the utility of the bankroll. With this utility function, as the bankroll grows large the marginal increase in utility diminishes (you reach a region of diminishing returns). As the bankroll decreases to near 0, each small decrease in bankroll becomes a great loss in utility.

The expected value of u[B] for one play of the gambling game is

K[f, V, p, B] = p Log[B (1 + f V)] + (1 - p) Log[B (1 - f)] = p Log[1 + f V] + (1 - p) Log[1 - f] + Log[B]

We can find the maximum expected value of utility u[B], that is, maximum K[f], by taking the derivative of K[f] with respect to f, setting it equal to zero, and solving for f (the check that it is indeed a maximum and not a minimum or saddle point is left for the reader!).

K'[f_max] = 0 = p V / (1 + f V) - (1 - p) / (1 - f)

Cross-multiplying gives p V (1 - f) = (1 - p)(1 + f V), and solving for f yields

f_max = ( p (V + 1) - 1 ) / V = W / V

So, the fraction of the bankroll to bet which maximizes the expected value of the logarithm of the bankroll is equal to the expected win of a unit bet divided by the payoff for a unit bet.
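A brute-force check (hypothetical numbers) that f = W / V maximizes the per-bet expected log-growth p Log[1 + f V] + (1 - p) Log[1 - f]:

```python
from math import log

# Scan a grid of betting fractions and confirm the maximizer of the
# expected log-growth sits at the Kelly fraction f = W / V.
p, V = 0.55, 1.0
W = p * (V + 1) - 1
f_kelly = W / V  # here 0.10

def g(f):
    return p * log(1 + f * V) + (1 - p) * log(1 - f)

grid = [i / 1000 for i in range(999)]  # f in [0, 0.998]
best = max(grid, key=g)
print(f_kelly, best)  # best lands within one grid step of W / V
```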

David D'Aquin demonstrated in his post that for a few numerical examples,

B (1 + f V) ^ (p N) (1 - f) ^ ((1 - p) N)

was maximized by this choice of f. It can easily be shown that this is indeed the case by differentiating and solving. I think he claims this is the median value of the distribution of final bankrolls, but I don't know how to show this algebraically. I also don't understand the relevance of this value, whether it is the median or not, to Kelly betting. Perhaps he could explain.
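One way to see the median claim, if I have it right: after N bets with k wins, the final bankroll is B (1 + f V)^k (1 - f)^(N - k), which is increasing in k, so the median bankroll corresponds to the median number of wins. For a Binomial(N, p) count with p N an integer, that median is exactly p N, which recovers the expression above. A sketch with illustrative numbers:

```python
from math import comb

# Median final bankroll: the bankroll is monotone in the number of wins k,
# so the median bankroll comes from the median of k ~ Binomial(N, p).
B, p, V, N = 100.0, 0.6, 1.0, 50
W = p * (V + 1) - 1
f = W / V  # Kelly fraction

pmf = [comb(N, k) * p**k * (1 - p)**(N - k) for k in range(N + 1)]
cdf, k_med = 0.0, 0
for k, q in enumerate(pmf):
    cdf += q
    if cdf >= 0.5:
        k_med = k
        break

median_bankroll = B * (1 + f * V)**k_med * (1 - f)**(N - k_med)
typical = B * (1 + f * V)**(p * N) * (1 - f)**(N - p * N)
print(k_med, median_bankroll, typical)  # k_med = p N here, so they match
```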

Erik Reuter