
Posted: Sat Feb 15, 2014 5:40 pm Post subject: A quantitative introduction to the Kelly criterion

Part I -- Expected Value vs Expected Growth

A question I'm often asked is how exactly expected value differs from expected growth. The difference is somewhat subtle but understanding it is essential to risk management in general and the Kelly criterion in particular.

The question frequently arises (as it did here in the next-to-last paragraph) in the context of the idea that betting one's entire bankroll implies -100% bankroll growth. (That's 100% bankroll shrinkage -- a bankroll that shrinks to $0.) What's more, if you bet your entire bankroll in one go, 100% bankroll shrinkage is implied regardless of both the probability of the bet winning (as long as it wins less than 100% of the time) and the odds paid out on the bet (as long as the odds are less than infinity).

Think about that for a moment, because it's an important point: if you wager your entire bankroll every time you bet, then you should expect your bankroll eventually to shrink to zero.

Well, at least it should be important, but the truth is it doesn't really get us any closer to understanding what exactly bankroll growth is and how it differs from expected value. We'll get back to this later.

Let's start with a brief review of expected value. I described it in relative depth here. And I quote:

Originally Posted by Ganchrow
The notion of expectation is central to probability and statistics and may be thought of as an average with an extra syllable. If you were to flip a coin 10 times then you could expect it would land on heads 5 times and you could expect it would land on tails 5 times. In reality of course the coin’s not always going to land on heads exactly 5 times out of 10 (in fact it would only do so about 24.6% of the time), but if you were to repeat the experiment (flipping a coin ten times) many, many times over then on average it would land on heads 5 times each trial.

The same thought process is also applicable to sports. If the Yankees can be expected to win a particular game 60% of the time, then this would mean that if the exact same game were repeated under the exact same conditions across many, many parallel universes, we would expect the Yankees to win 60% of those encounters.

So let’s say you bet $1 straight up that the Yankees are going to win that game. Now that’s quite obviously a good bet. But just how “good” is it?

That's where expectations come in with sports betting. If you made the same bet in each of those parallel universes you'd win $1 60% of the time, and lose $1 40% of the time. Now let's say that there are actually 1,000,000 of these universes. Exactly how much money would you make? Well, in 600,000 of those universes you'd make $1 for a total of $600,000, and in the remaining 400,000 of those universes you'd lose $1 in each game for a total of $400,000. So you'd receive $600,000 and would pay out $400,000, meaning that your total profit would be $200,000. Winning $200,000 across 1,000,000 games means on average you would have won $200,000 / 1,000,000 games = $0.20 per game.

Now of course 1,000,000 is just a made up number in this context. There aren't really 999,999 other universes where we could make such a bet. This bet can only be made once. But that doesn't actually matter in the world of statistics. Whether you can make this bet only one time or you can make it multiple times, the expectation per game is precisely the same, namely $0.20 per game (a 20% return on each $1 wagered).

So to summarize, the expected value of a bet is the amount we would receive on average if we were to repeat the exact same bet a very large number of times. As such, the expected value of a bet is a metric by which one might judge the relative attractiveness of that bet. If one bet has an expected value of 5% (meaning that for every $10,000 we bet we would expect to win $500) and another has an expected value of 10% (meaning that for every $10,000 we bet we would expect to win $1,000), then one would tend to prefer the latter bet to the former.

But there's a bit of a difficulty here -- namely, expected value alone says nothing about the relative likelihoods of the outcomes involved. For example, a $10,000 bet on a 0.0000000000000000000000000000000000001% likelihood event paying out at +110,000,000,000,000,000,000,000,000,000,000,000,000,000 odds corresponds to an expected value of 10% (+$1,000). But who among us would be willing to essentially throw away $10,000 on such a long shot? To put it in perspective, you'd be about 1,870 times more likely to win the New Jersey State Lottery five times in a row than to win this particular bet. Does it really matter that if by some fluke of nature you actually did win, you'd have an unfathomably huge amount of money? If you're like most people, the answer is probably not.

So now here's the difficulty ... there's no way whatsoever to account for this very real phenomenon of preferences by appealing to the theory of expected value alone.

(Enter stage right, expected bankroll growth.)

One major problem with the proposed bet is that for most people, $10,000 represents a rather large chunk of one’s bankroll to be throwing away on a bet that’s nearly certain to lose. But while a $10,000 bet is probably too large a quantity to risk on this bet, there’s still a sufficiently small dollar amount that most people would be willing to risk to make this bet. Granted, for most people that dollar amount would be somewhere in the neighborhood of a tiny fraction of a penny, but it nevertheless would still be a positive dollar amount.

The fundamental issue with bets such as these is that, despite being positive EV, placing them is an excellent way to go broke. The apparent contradiction is easily reconciled: if you were to repeat this bet once in each of a gigantically huge number of parallel universes, in nearly all of them you'd lose your bet, but in a tiny, tiny, tiny, tiny, tiny fraction of those universes you'd win, and that single enormous payout would make up for all the losses plus an additional 10% of the total amount risked.

The fact is that most people just aren't willing to live through billions of trillions of bets just for a vanishingly minuscule chance of winning a huge-odds bet once. So while the bet may have positive expected value, the expected outcome is for your bankroll to shrink by $10,000 each time the bet's made. If your bankroll were $1,000,000 and you made the bet 100 times, you could expect to be broke after the 100th bet (even though your expected value would be 10% × $1,000,000 = +$100,000).

So let's look at some more practical numbers. Assume you're considering a bet that wins with 50% probability and pays out at odds of +200. Further assume your total bankroll is $100,000 and that you want to place 1% of your bankroll on this wager.

Question: Where do you expect your bankroll to be after 2 wagers?

Answer: There are 4 possible outcomes after placing two wagers:

Win both bets.
Win 1st bet, lose 2nd bet.
Lose 1st bet, win 2nd bet.
Lose both bets.

Now because winning and losing the bet are both equally likely, all 4 outcomes occur with equal probability, namely 25%. Recall that you’d be betting 1% of your bankroll on each bet and would be paid off at odds of +200. Therefore, your ending bankroll under each of the 4 outcomes would be:

Win both bets: $100,000 × 102% × 102% = $104,040
Win one, lose one: $100,000 × 102% × 99% = $100,980 (in either order)
Lose both bets: $100,000 × 99% × 99% = $98,010

(The derivation of these figures is simple. Every time you win, your bankroll grows to 102% of its previous value, and every time you lose, it shrinks to 99%.) The expected value from betting in this manner would be 25%×$104,040 + 25%×$100,980 + 25%×$100,980 + 25%×$98,010 = $101,002.50. To calculate expected growth, we first recognize that given our 50% win probability, our expected outcome is to win one bet and lose one (# of wins = 50% × 2 bets = 1; # of losses = 50% × 2 bets = 1). Our expected growth is therefore the growth associated with that outcome (with expected growth, the relative ordering of wins and losses is irrelevant), namely a bankroll of $100,980.

Therefore, the expected value from the two bets is $1,002.50 or 1.0025%, and the expected growth is $980 or 0.9800%. Notice that expected value is higher than expected growth -- and this is what you're always going to see. Expected value will always be higher than expected growth (except at win probabilities of 0% or 100%, where they'll be equal) because a few relatively large, relatively uncommon outcomes pull up the EV. Another way to think about this is to realize that the worst-case scenario is losing your entire bankroll once, while the best-case scenario is winning your bankroll many times over -- in other words, while your maximum possible profit is unlimited, your maximum possible loss is limited to your bankroll.
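These four outcomes are easy enough to enumerate by hand, but a short script makes the distinction concrete. (This is a sketch of my own in Python; the variable names are mine, not standard terminology.)

```python
from itertools import product

bankroll = 100_000.0
win_mult = 1.02    # +200 odds, 1% stake: a win grows the bankroll to 102%
lose_mult = 0.99   # a loss shrinks it to 99%

# Enumerate the four equally likely win/loss sequences of two bets.
outcomes = []
for seq in product([win_mult, lose_mult], repeat=2):
    b = bankroll
    for m in seq:
        b *= m
    outcomes.append(b)

# Expected value: the plain (arithmetic) average over all four outcomes.
expected_value = sum(outcomes) / len(outcomes)

# Expected growth: the outcome with the expected number of wins --
# one win and one loss (order irrelevant).
expected_growth_outcome = bankroll * win_mult * lose_mult

print(round(expected_value, 2))           # 101002.5
print(round(expected_growth_outcome, 2))  # 100980.0
```

The arithmetic average ($101,002.50) sits above the one-win-one-loss outcome ($100,980) precisely because the single "win both" outcome is disproportionately large.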

So in this instance our expected outcome would be a bankroll of:
B* = $100,000 × (1 + 2×1%)^(2×50%) × (1 − 1%)^(2×50%) = $100,980,
implying expected bankroll growth of
E(G) = $100,980/$100,000 − 1 = 0.9800%

It should be readily apparent that our expected outcome after n bets would be a bankroll of:
B* = $100,000 × (1 + 2×1%)^(n×50%) × (1 − 1%)^(n×50%) = $100,000 × (100.48881%)^n,
implying expected bankroll growth of
E(G) = (100.48881%)^n − 1.

By extension, our expected outcome after just 1 bet would be:
B* = $100,000 × (1 + 2×1%)^50% × (1 − 1%)^50% = $100,488.81

And our expected bankroll growth would be
E(G) = (1 + 2×1%)^50% × (1 − 1%)^50% − 1 = 0.48881%

(This last result bears a little discussion. We can talk about expected growth after only 1 bet in the same manner as we can talk about expected value after just one bet. In the same way as we’d never see a real result equal to our expected value, we’d never actually see growth after one bet equal to expected growth. This should cause absolutely no concern.)

So let’s generalize our results with expected outcomes and growth. Given a starting bankroll of B0, decimal odds of O, a win probability of p, and a bet size of X (as a percentage of starting bankroll, B0), the bankroll associated with the expected outcome from placing the bet would be:
B* = B0 × (1 + (O−1)×X)^p × (1 − X)^(1−p)

And expected growth would be:
E(G) = (1 + (O−1)×X)^p × (1 − X)^(1−p) − 1
Expected value, you’ll recall, would be:
EV = p×(O−1)×X − (1−p)×X = (pO − 1)×X
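A minimal sketch of these two formulas in Python (the function names are my own):

```python
def expected_growth(O, p, X):
    """E(G): expected bankroll growth for a fraction X bet at decimal odds O, win prob p."""
    return (1 + (O - 1) * X) ** p * (1 - X) ** (1 - p) - 1

def expected_value(O, p, X):
    """EV, as a fraction of bankroll, for the same bet."""
    return (p * O - 1) * X

# The running example: 1% of bankroll at +200 (decimal 3.0), p = 50%.
print(f"{expected_value(3.0, 0.50, 0.01):.4%}")   # 0.5000%
print(f"{expected_growth(3.0, 0.50, 0.01):.4%}")  # 0.4888%
```

Note that EV (0.5%) exceeds E(G) (0.48881%), just as the two-bet example showed.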

Q: So let's look at a concrete example: What are the expected value and bankroll growth associated with a bet equal to 1% of bankroll paying out at -110 and winning with probability 54%?

A: At decimal odds of O = 1.909091, the edge is 54% × 1.909091 − 1 ≈ 3.0909%. For a 1% stake, EV = 3.0909% × 1% ≈ 0.0309% and E(G) = (1 + 0.909091 × 1%)^54% × (1 − 1%)^46% − 1 ≈ 0.0264%. Were we instead to stake 25% of bankroll, EV would rise to 3.0909% × 25% ≈ 0.7727%, but E(G) = (1 + 0.909091 × 25%)^54% × (1 − 25%)^46% − 1 ≈ −2.1510%.

So think about these results for a moment. We have a positive expectation bet and hence, quite naturally, the more we bet on it the more we expect to make. However, if we were to wager too much on this bet then we'd expect our bankroll to shrink by 2.1510% per wager (were we to place this positive expectation bet 32 times, for example, we'd expect our bankroll to depreciate by roughly half).
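A quick numerical check of these figures using the E(G) formula from above (a sketch, written for this post):

```python
def expected_growth(O, p, X):
    """Expected bankroll growth for a fraction X bet at decimal odds O, win prob p."""
    return (1 + (O - 1) * X) ** p * (1 - X) ** (1 - p) - 1

O = 1 + 100 / 110   # -110 in decimal odds, approximately 1.909091
p = 0.54

print(f"{expected_growth(O, p, 0.01):.4%}")   # small but positive at a 1% stake
print(f"{expected_growth(O, p, 0.25):.4%}")   # -2.1510% at a 25% stake

# 32 consecutive bets at the oversized 25% stake roughly halve the bankroll.
print(round((1 + expected_growth(O, p, 0.25)) ** 32, 3))
```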

So this should help elucidate the huge odds bet above. No matter how positive EV a bet might be, if you bet too much on it then you expect your bankroll to shrink. This is the concept to which people are referring when they talk about "money management". Even if you could pick NFL spreads at 75% (which you can’t), were you to bet too much, you'd expect to head towards bankruptcy.

So as a limiting case let's look at one more example, the example of betting one's entire bankroll mentioned at the start of this article: win probability = p, bet size = 100% of bankroll.

EV = 100% × (pO − 1) = pO − 1 (EV > 0 for p > 1/O)
E(G) = (1 + (O − 1))^p × 0^(1−p) − 1 = −100% (for p < 1 and O < ∞)

So what does this tell us? Well for one thing it tells us that even if you were the “best handicapper ever”, were you to risk your entire bankroll on every bet, you would expect to go broke. More generally, it illustrates the concept that looking solely at expected value as a metric for the attractiveness of a given bet is not the proper way to maintain long term growth.

_________________ “We make a living by what we get. We make a life by what we give.”

Posted: Sat Feb 15, 2014 5:41 pm Post subject: Re: A quantitative introduction to the Kelly criterion

A quantitative introduction to the Kelly criterion

Part II -- Maximizing Expected Growth

In Part I of this series we introduced the concept of expected growth, where we discussed why a bettor might reasonably choose to gauge the relative attractiveness of a given bet by considering its expected growth. In Part II of the series we'll look at how a bettor might use the notion of expected growth to determine how large a bet to place on a given event. This is the very essence of the Kelly criterion.

There are two extremes when it comes to placing positive expectation bets. On the one hand you have people like my aunt, who’s so afraid of risk that I doubt she’d even bet the sun would rise tomorrow (“But what if it didn’t? I could lose a lot of money!”). On the other hand you have people like my old college buddy Will, whose gambling motto was “Get an advantage, and then push it.”

One Saturday night during the spring term of my sophomore year, Will decided he was going to run a craps game. He put the word out to a number of the bigger trust fund kids and associated hangers-on and let the dice fly. After maybe 4 or 5 hours, Will was up close to $8,000, which was far from an insignificant amount for us at the time. One player, an uppity gap-toothed British guy named Dudley, whose own losses accounted for most of that $8K, loudly proclaimed that he was sick of playing for small stakes and wanted some “real” action. He told Will he was looking to bet $15,000 on one series of rolls. Will paused for a moment and then quickly agreed. He just couldn’t back down from the challenge. It didn’t matter that this represented all of Will’s spending money for the entire semester -- the odds were in his favor and he knew it and as far as he was concerned the choice was clear.

So what happened? Well, to make a long story short, Dudley picked up the dice and without a word rolled himself an 11. Will paid him the next Monday and wound up having to work at the campus bookstore for the rest of the semester. I remember a few weeks later I ran into Will at work and we got to talking while he moved boxes around trying to look busy. I asked him if he and Dudley were still friends.

“Sure,” he said, “But the guy’s a moron. Didn’t he realize the odds were in my favor?”

So there you have it. Will was quick to label Dudley a moron because he made a negative EV bet. What Will failed to realize, however, was that Dudley certainly had the means to make $15,000 bets, and ultimately wouldn't have been all that affected had he lost. Will, on the other hand, had no business making a $15,000 bet that he stood to lose close to half the time. It didn't matter that if he made the same bet 10,000 times over he'd almost certainly have come out well ahead; it only took making the bet once to bankrupt him for the semester and render him incapable of staking any more craps games at all.

Dudley might very well have been foolish for having offered to make the negative EV bet, but Will on the other hand was foolish for having risked such a large chunk of his bankroll on the positive EV bet in the first place. Never mind that losing the bet forced Will to work in the bookstore, never mind that losing the bet forced Will to switch from his Heineken bottles to Milwaukee’s Best cans, losing the bet had probably the worst effect possible on an advantage bettor – decimating his bankroll.

Hopefully, this example helps illustrate a key concept that was touched on in the last article. Specifically, that expected value and expected growth are both key components of proper long-term wagering. Most bettors instinctively recognize the importance of expected value -- most everyone realizes that betting 2-1 odds on a fair coin flip is "smart", while betting 1-2 odds on a fair coin flip is not. But very few people consider as much as they should the expected growth of their bankroll due to the wagers they make. When a bettor places too much importance on expected value and not enough on expected growth, he puts himself in danger of winding up in the same predicament as Will -- pushing around boxes at the Brown Bookstore and trying to look busy, despite having made an indisputably "smart" bet when considering EV alone.

But let’s go back to Will’s initial decision to make the $15,000 bet. Certainly it’s pretty clear that making the bet was a mistake, but it should also be clear that because the bet had positive EV there was obviously a certain (lower) risk amount for which Will would have been making the right decision in accepting the wager. For a person with unlimited access to funds, the decision of how much to bet on a positive EV wager is easy – bet as much as possible. But for a person with a limited bankroll who wants to survive until the next day so he can continue staking craps games, the decision isn’t quite so obvious. That’s where Kelly comes in.

You’ll recall from Part I of this article the equation for expected growth:

E(G) = (1 + (O−1)×X)^p × (1 − X)^(1−p) − 1

Where X represents the percentage of bankroll wagered on the given bet and O the decimal odds.

For a player like Will, who has his basic necessities already paid for (food, shelter, clothing), his only real goal is to grow his bankroll as much and as quickly as possible. As such, Will’s objective would be to maximize the expected growth of his bankroll. The size of the bet (always given as a percentage of the player’s total bankroll) is known as the “Kelly Stake” and is a function of the bet’s payout odds and either win probability or edge¹.

Mathematically, the formula for the Kelly stake is derived using calculus². The actual mechanics are rather unimportant, but the result is that in order to maximize the growth of one’s bankroll when placing only one bet at a time, one should bet a percentage of bankroll equal to edge divided by decimal odds minus 1. (This is assuming the player has a positive edge. If he doesn’t, his optimal bet is zero.) In other words:
Kelly Stake as percentage of bankroll = Edge / (Odds – 1) for Edge ≥ 0

Put in terms of win probability, the equation becomes³:
Kelly Stake as percentage of bankroll = (Prob * Odds – 1) / (Odds – 1) for Probability * Odds ≥ 1

Let’s take a look at a few examples:

Given a bankroll of $10,000 and an edge of 5%, then on a bet at odds of +100 one should wager 5% / (2-1) = 5% of bankroll, or $500.
Given a bankroll of $10,000 and a win probability of 55%, then on a bet at odds of -110, one should wager $10,000 * (55% * 1.909091 - 1) / (1.909091-1) = 5.5% of bankroll, or $550.
Given a bankroll of $10,000 and a win probability of 25% then on a bet at odds of +350, one should wager $10,000 * (25% * 4.5 - 1) / (4.5-1) ≈ 3.57% of bankroll, or about $357.
Given a bankroll of $10,000 and a win probability of 70%, then on a bet at odds of -250 one should not wager anything at all, because Prob × Odds = 70% × 1.4 = 0.98 < 1, implying a negative edge.
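The four examples above can be checked with a few lines of Python (a sketch; the function name is mine):

```python
def kelly_stake(O, p):
    """Growth-maximizing fraction of bankroll at decimal odds O with win probability p."""
    edge = p * O - 1
    return max(edge / (O - 1), 0.0)   # no bet when the edge is negative

bankroll = 10_000
print(round(bankroll * kelly_stake(2.0, 0.525)))          # 500  (+100, 5% edge)
print(round(bankroll * kelly_stake(1 + 100/110, 0.55)))   # 550  (-110, 55% win prob)
print(round(bankroll * kelly_stake(4.5, 0.25)))           # 357  (+350, 25% win prob)
print(round(bankroll * kelly_stake(1.4, 0.70)))           # 0    (-250, 70% win prob)
```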

Let’s look at all this a little more closely. Consider a bet at even odds (decimal: 2.0000) -- in this case, the bankroll growth maximizing Kelly equation simplifies to:

K(even odds) = Edge/(2-1) = Edge for Edge ≥ 0
In other words, when betting at even odds, the expected bankroll growth maximizing bet is equal to the percent edge on that bet. So if you have an edge of 5% on a bet at +100, then you should be wagering 5% of your bankroll. If your edge were only 2.5% then you should be wagering 2.5% of your bankroll. Now let’s consider a bet at -200, or decimal odds of 1.5:
K(-200 odds) = Edge/(1.5-1) = 2*Edge for Edge ≥ 0

So this means that for a bet at -200, the expected bankroll growth maximizing bet size would be twice the edge on the bet. Similarly, for a bet at -300, one should bet three times the edge, and for a bet at -1,000 one should bet ten times the edge.

This fits rather well with the manner in which many players size their relative bets on favorites. For a given edge, if they were to bet $100 at +100, they’d bet $150 at -150, $200 at -200, $250 at -250, etc.

Now let’s consider bets on underdogs (that is, bets on money line underdogs -- bets paying greater than even odds). In the case of a bet at +200:
K(+200 odds) = Edge/(3-1) = ½*Edge for Edge ≥ 0
The optimal bet size is only half the edge. Similarly at a line of +300, the optimal bet size would be a third of edge, at +400 a quarter the edge, etc.

Now this is quite different from the manner in which many players choose to structure their underdog bets. If they were to bet $100 on a line of +100, they might also bet $100 on a bet with the same edge at +400. For a player wanting to maximize his bankroll growth this is inappropriate behavior, because it allocates relatively excessive amounts to underdog bets. Assuming constant EV, an expected growth maximizing player should only bet half his +100 bet size at +200, and only a quarter his +100 bet size at +400⁴.

So what we see in the case of any bet (be it on an underdog or a favorite) is that the player should bet an amount such that the percentage of his bankroll he stands to win is the same as his percent edge. In other words, a player betting at an edge of 2% should place a bet to win 2% of his bankroll. This means that at -200 he’d be risking 4% of his bankroll, while at +200 he’d only be risking 1% of his bankroll. The rationale behind this should be clear when you consider the following example:

For a player betting at an edge of 5% and odds of -200, the proper Kelly stake is 10%. Over 100 bets, he has an expected return of 64.7% with a 36.7% probability of not turning a profit and a 3.4% probability of losing two-thirds or more of his stake.

For a player betting at the same 5% edge but at odds of +400, were he to bet the 10% stake of the -200 player, while he’d have the identical 64.7% expectation, he’d have a 73.5% probability of no profit, while his probability of losing two-thirds or more of his stake would be 55.8%.

Generalizing: for two same-sized bets of equivalent (positive) EV repeatedly made over time, the longer-odds bet carries the higher probability of losing any given amount of money.
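The loss probabilities quoted above can be sanity-checked with an exact binomial sum. Here's a pure-Python sketch (my own code; it counts every 100-bet win/loss record that leaves the bankroll at or below its starting point):

```python
from math import comb, log

def prob_no_profit(O, edge, stake, n=100):
    """P(bankroll <= start) after n independent bets of `stake` at decimal odds O."""
    p = (1 + edge) / O                        # win probability implied by the edge
    win_m, lose_m = 1 + (O - 1) * stake, 1 - stake
    total = 0.0
    for w in range(n + 1):
        if w * log(win_m) + (n - w) * log(lose_m) <= 0:   # not in profit
            total += comb(n, w) * p ** w * (1 - p) ** (n - w)
    return total

# Same 5% edge and same 10% stake, favorite vs long shot:
print(round(prob_no_profit(1.5, 0.05, 0.10), 3))   # -200 bettor: roughly 0.37
print(round(prob_no_profit(5.0, 0.05, 0.10), 3))   # +400 bettor: roughly 0.73
```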

Once again, we keep returning to the same simple but often overlooked point – expected value isn’t everything. Because longer odds (for a given edge) imply a greater probability of loss, the Kelly bettor will bet less on longer odds and more on shorter odds. Any time an advantage player loses money he gives up opportunity cost, as that money can no longer be wagered on +EV propositions down the line. As such the Kelly player will (for a given edge) always seek to minimize his loss probability over time by selecting the shorter-odds bet, even though that necessitates risking more to win the same amount.

Taking the logic a step further, a Kelly player should even be willing to accept a lower edge in order to play at shorter odds. For example:

At odds of -200 (decimal:1.500) and an edge of 4%, the win probability would be p = (1+4%)/1.5 ≈ 69.33%, and Kelly stake would be K = 4%/(1.5-1) = 8%. This represents expected bankroll growth of:
(1 + (1.5−1)×8%)^69.33% × (1 − 8%)^(1−69.33%) − 1 ≈ 0.1624%
At odds of +400 (decimal: 5.0000) and an edge of 10%, the win probability would be p = (1+10%)/5 = 22%, and Kelly stake would be K = 10%/(5-1) = 2.5%. This represents expected bankroll growth of:
(1 + (5−1)×2.5%)^22% × (1 − 2.5%)^(1−22%) − 1 ≈ 0.1221%

So what this tells us is that a Kelly player would prefer (and by a decent margin) 4% edge at -200 to 10% edge at +400.
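Reproducing the comparison numerically (same E(G) formula as in Part I):

```python
def expected_growth(O, p, X):
    """Expected bankroll growth for a fraction X bet at decimal odds O, win prob p."""
    return (1 + (O - 1) * X) ** p * (1 - X) ** (1 - p) - 1

# -200 at a 4% edge vs +400 at a 10% edge, each at its own Kelly stake.
g_fav = expected_growth(1.5, 1.04 / 1.5, 0.08)    # roughly 0.1624% per bet
g_dog = expected_growth(5.0, 1.10 / 5.0, 0.025)   # roughly 0.1221% per bet
print(f"{g_fav:.4%} vs {g_dog:.4%}")
```

Despite carrying less than half the edge, the short-odds bet grows the bankroll faster.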

In this article we’ve introduced Kelly staking, a methodology for sizing bets so as to maximize the expected future growth rate of a bankroll⁵. The bet sizes determined by Kelly will necessarily not maximize expected value, because doing so would require betting one’s entire bankroll on every positive EV wager that presented itself -- which would eventually lead to bankruptcy and the inability to place further positive EV wagers.

We’ve seen that Kelly may also be utilized to gauge the relative attractiveness of several bets. What we see is that for a given edge, an expected growth maximizing bettor will prefer the bet with shorter odds (in other words, the bigger favorite). This result, derived entirely from first principles, may be surprising to some advantage players who’ve come to find wagers on underdogs generally more profitable than bets on favorites. While our conclusion in no way precludes the possibility that underdogs may in general provide superior return opportunities to favorites, the fact that for two bets of equal expected return the bet on the favorite yields greater expected bankroll growth is indisputable, and needs to be acknowledged by all those seeking to manage bankroll risk.

In Part III of this series we’ll discuss how one may generalize Kelly so it may be applied to a greater range of circumstances including multiple simultaneous bets, multi-way mutually exclusive outcomes, and hedging.
Footnotes:

1. Technically, because odds, edge, and win probability are linked by way of the equality Odds × Prob = 1 + Edge, any two of these variables could be used to determine the Kelly stake.
2. The calculus is rather simple. We need to maximize E(G) = (1 + (O−1)×X)^p × (1 − X)^(1−p) − 1 with respect to X, subject to X lying on the unit interval [0,1]. To simplify the analysis, however, we can take the log of the growth factor and maximize that instead. This is equivalent because the log function is monotonically increasing. So our problem becomes:
Maximize wrt X:
log(1 + E(G)) = p×log(1 + (O−1)×X) + (1−p)×log(1 − X)
s.t. 0 ≤ X ≤ 1

Setting the derivative with respect to X to zero and solving yields:
X = (Op − 1)/(O − 1)

with d²log(1 + E(G))/dX² ≤ 0
for all feasible 0 ≤ X < 1, confirming a maximum.
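For the calculus-averse, the same result can be sanity-checked numerically by brute force. A sketch (the parameters are the -110/54% example from Part I; any positive-edge bet works):

```python
def expected_growth(O, p, X):
    """Expected bankroll growth for a fraction X bet at decimal odds O, win prob p."""
    return (1 + (O - 1) * X) ** p * (1 - X) ** (1 - p) - 1

O, p = 1.909091, 0.54              # arbitrary positive-edge example
kelly = (O * p - 1) / (O - 1)      # closed-form maximizer, X = (Op-1)/(O-1)

# Evaluate E(G) on a fine grid over [0, 1); the max should land at the Kelly stake.
grid = [i / 100_000 for i in range(100_000)]
best = max(grid, key=lambda x: expected_growth(O, p, x))
print(round(best, 4), round(kelly, 4))   # both approximately 0.034
```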
3. This may also be extended to bets that include a third, push outcome, in which the at-risk amount is returned to the bettor in full (such as a bet on an integer spread or total). In order to generalize this article to bets with ternary outcomes, one need only consider the probability of winning conditioned on not pushing instead of the pure win probability.

In general, given a win probability of P_W, a loss probability of P_L, and a push probability of P_T (where P_W + P_L + P_T = 1), the probability of winning conditioned on not pushing would be:
P*_W = P_W / (1 − P_T)
and the probability of losing conditioned on not pushing would be:
P*_L = P_L / (1 − P_T)

So assuming decimal odds of O, Edge would be:
Edge = O × P_W / (1 − P_T) − 1
-or-
Edge = (O × P_W − (1 − P_T)) / (1 − P_T)
which in either case is just the same as:
Edge = O × P*_W − 1

And the Kelly stake would remain unchanged as:
Kelly Stake as percentage of bankroll = Edge / (Odds – 1) for Edge ≥ 0
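As a sketch of the push adjustment in code (the -110 win/push probabilities below are hypothetical numbers chosen for illustration, not from the article):

```python
def kelly_with_push(O, p_win, p_push):
    """Kelly stake when a push refunds the wager: condition the win prob on not pushing."""
    p_star = p_win / (1 - p_push)        # P(win | no push)
    edge = O * p_star - 1
    return max(edge / (O - 1), 0.0)      # no bet on a negative edge

# A -110 spread bet winning 52% and pushing 4% of the time (hypothetical):
O = 1 + 100 / 110
print(f"{kelly_with_push(O, 0.52, 0.04):.4%}")   # 3.7500% of bankroll

# With zero push probability it reduces to the plain Kelly stake:
print(f"{kelly_with_push(2.0, 0.55, 0.0):.2%}")  # 10.00%
```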
4. So why do so few players do this? It’s my opinion that the only explanation for this inconsistent behavior (risking the same amount on all underdogs while betting to win the same amount on favorites) is the manner in which US-style odds are quoted. Odds of -200 imply one would need to bet $200 to win $100, so it would seem to make sense to bet in increments of that $200. Odds of +200 imply one would need to bet $100 to win $200, and so it would seem to make sense to bet in increments of that $100. What if, however, US odds on underdogs were also quoted as negative numbers? What if a +200 underdog were written as a -50 underdog (meaning a player would need to risk $50 to win $100) and a +400 dog as a -25 dog? The two methods for expressing odds are obviously identical, but it’s my belief that if odds were quoted in this manner you’d have far fewer bettors undertaking the questionable practice of betting an equivalent dollar amount on all underdogs.
5. An equivalent way of looking at this is that Kelly maximizes both the bettor’s median and modal future bankroll over a large number of bets. In other words, applying expected bankroll growth to the current bankroll yields both the most likely bankroll outcome (the mode) and the outcome with an equal likelihood of being outperformed and underperformed (the median).


Posted: Sat Feb 15, 2014 5:43 pm Post subject: Re: A quantitative introduction to the Kelly criterion

The difference between expected value and growth is the same as that between an arithmetic and a geometric mean.

You can think of the expected value of a bet as the arithmetic mean of all outcomes were you to repeat the same dollar-value bet an infinite number of times.

Expected growth, on the other hand, corresponds to the geometric mean outcome you'd obtain were you to repeat the same percentage-of-bankroll bet an infinite number of times.

This is a subtle but extremely important difference.

The best way to see the difference is by considering a bet of 100% of one's bankroll. The win probability and payout odds are irrelevant to the discussion, just as long as they're less than 100% and infinity respectively. For the sake of this discussion we'll assume the win probability is 99% and the bet is made at +100 (decimal: 2.0000). This bet has an expected value of (2×99% − 1) × 100% = 98% of bankroll, and corresponds to expected growth of (1 + (2−1)×100%)^99% × (1 − 100%)^1% − 1 = −100% (this latter figure implies that as the number of sequential bets increases, the probability of going bankrupt approaches certainty). We'll assume the starting bankroll is $100.

After 1 bet, there's a 99% probability of winning and ending up with a bankroll of $200, and a 1% probability of ending up with a bankroll of zero (in which case betting would stop as the player would have no more money with which to play).

After 10 bets, there's a 99%^10 ≈ 90.4% probability of ending up with a bankroll of 2^10 × $100 = $102,400 and a roughly 9.6% probability of ending up bankrupt.

After 1,000 bets, there's a 99%^1,000 ≈ 0.00431712% probability of ending up with a bankroll of 2^1,000 × $100 ≈ $1.07151 × 10^303, and a 1 − 99%^1,000 ≈ 99.995683% probability of ending up bankrupt.
What we see is that the expected return per bet is always constant at 98%. However, the expected average growth rate per bet is -100%.
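These survival probabilities follow from a single line of arithmetic; a sketch:

```python
def survive_prob(p, n):
    """Probability of still being solvent after n all-in bets with win probability p."""
    return p ** n   # a single loss wipes out a 100%-of-bankroll bettor

print(f"{survive_prob(0.99, 10):.4%}")    # matches the ~90.4% 10-bet figure above
print(f"{survive_prob(0.99, 1000):.6%}")  # matches the 0.00431712% 1,000-bet figure
```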

Another way to think of expected growth after a large number of bets is that it represents the single most likely outcome. Expected value, on the other hand, represents the average outcome regardless of its relative likelihood.


Posted: Sat Feb 15, 2014 5:46 pm Post subject: Re: A quantitative introduction to the Kelly criterion

Great set of posts here, but a quick question: how do you take into account the probability of a push for calculating the optimal bet amount? As it stands now the formula simply assumes no win is a loss, but when dealing with point spreads without a half point this isn't always the case.
In the case of a single bet it's very straightforward. Every time win probability is used it's simply replaced with the probability of winning conditioned on not pushing.

In general, given a win probability of P_W, a loss probability of P_L, and a push probability of P_T (where P_W + P_L + P_T = 1), the probability of winning conditioned on not pushing would be:
P*_W = P_W / (1 − P_T)
and the probability of losing conditioned on not pushing would be:
P*_L = P_L / (1 − P_T)

So assuming decimal odds of O, Edge would be:
Edge = O × P_W / (1 − P_T) − 1
-or-
Edge = (O × P_W − (1 − P_T)) / (1 − P_T)
which in either case is just the same as:
Edge = O × P*_W − 1

And the Kelly stake would remain unchanged as:
Kelly Stake as percentage of bankroll = Edge / (Odds – 1) for Edge ≥ 0


W1 = odds-1 on bet 1, P1 = probability that bet 1 wins, F1 the stake on this bet in % of total BR.
W2 = odds-1 on bet 2, P2 = probability that bet 2 wins, F2 the stake on this bet in % of total BR.
W3 = odds-1 on bet 3, P3 = probability that bet 3 wins, F3 the stake on this bet in % of total BR.

Maximize E (geometric growth) subject to 0<=F1<=1, 0<=F2<=1, 0<=F3<=1, 0<=F1+F2+F3<=1. (W and P constant of course)

(easy to generalise to 1500 simultaneous bets)

This is unfortunately a rather complex optimization problem to solve, especially as the number of bets grows large.

Is there a simpler way of expressing the problem that I am missing?
I suspect this formulation with 1500 variables will be impossible to solve in practice, certainly with Excel but perhaps also with an optimization program such as Cplex.

Any approximation that errs on the conservative side would be useful too.
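For what it's worth, at small n the problem as formulated can at least be attacked directly: with independent bets, the objective is an expected log over 2^n outcomes and is concave, so even crude coordinate ascent converges. A pure-Python sketch for three hypothetical bets (the (W, P) values below are made up for illustration):

```python
from itertools import product
from math import log

# Three hypothetical independent simultaneous bets: (W = odds - 1, P = win prob).
bets = [(1.0, 0.55), (0.9090909, 0.56), (1.5, 0.42)]

def expected_log_growth(F):
    """E[log bankroll growth] over all 2^n win/loss combinations of independent bets."""
    total = 0.0
    for won_flags in product([True, False], repeat=len(bets)):
        prob = 1.0
        growth = 1.0 - sum(F)                  # all stakes committed up front
        for won, f, (W, P) in zip(won_flags, F, bets):
            prob *= P if won else 1 - P
            if won:
                growth += f * (1 + W)          # a winner returns stake plus W * stake
        total += prob * log(growth)
    return total

# Coordinate ascent on a 1%-granularity grid (concave objective, so this converges).
F, step = [0.0] * len(bets), 0.01
improved = True
while improved:
    improved = False
    for i in range(len(bets)):
        for d in (step, -step):
            trial = F.copy()
            trial[i] += d
            if trial[i] < -1e-9 or sum(trial) > 0.99:
                continue
            if expected_log_growth(trial) > expected_log_growth(F) + 1e-12:
                F, improved = trial, True
print([round(f, 2) for f in F])
```

At 1500 simultaneous bets the 2^n outcome enumeration is of course hopeless, so this is only a small-n sketch; it does illustrate that the simultaneous stakes come out somewhat below the single-bet Kelly stakes.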

