Introduction

I have intended to work out the probability properties of the game Monopoly® since I first studied Markov chains in 1985. It took 16+ years. I'm glad to say, however, that I did not squander an opportunity to be the first. Unbeknownst to me, Professor Irvin Hentzel had published an analysis in the Saturday Review of the Sciences in April 1973. Since then there have been several others, notably Ian Stewart's in Scientific American, April 1996. Here is a quick list of analyses of this problem on the Web (all links worked as of 12/30/2001):

Allan Evans: http://www.cms.dmu.ac.uk/~ake/monopoly.html
Durango Bill: http://www.oocities.org/durangobill/MnplyStats.html
Ian Stewart: http://www.math.yorku.ca/Who/Faculty/Steprans/Courses/2042/Monopoly/Stewart2.html
Irvin Hentzel: http://www.public.iastate.edu/~hentzel/monopoly/homepage.html
Jim & Mandy: http://home.att.net/~dambrosia/programming/games/monopoly/index.html
Truman Collins: http://www.tkcs-collins.com/truman/monopoly/monopoly.shtml

Truman Collins's analysis appears to have been crowned the champ, and my work here largely corroborates his. I was able to advance the ball a little by resolving an issue he had left open. There is still plenty to be done, and difficult issues to wrangle, before the entire Monopoly strategy problem can be said to have been solved. But I've reached my limit and hereby cheerfully hand off to someone whose grasp of probability and optimization modelling is wider and deeper than mine, and who has more time on his or her hands. Ph.D. candidates, this is your cue.

Basics

Markov chains (also spelled Markoff chains, but not to be confused with mark-off chains such as Sam's Club or Home Depot) give a method for thinking about the stochastic properties of a finite-state system. That is, if the probability of moving from state I to state J does not depend on any information about previously visited states, then the system can be modelled as a Markov chain. If there are N possible states, then the probabilistic properties are entirely captured by the full set of probabilities Pr(go to state J | currently in state I), 1 <= I, J <= N. This information can be conveniently arranged into an N x N matrix.

Clearly this applies to the problem of predicting positions in a board game.

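As a concrete illustration (my sketch, not part of the original analysis), here is how such a transition matrix might be set up for the dice-rolls-only version of the board, ignoring Chance, Community Chest, Jail, and the doubles rule:

```python
import numpy as np

N = 40  # squares on the board, indexed 0 (Go) through 39 (Boardwalk)

# Probability of each two-dice total: 2 through 12, with 7 the most likely.
dice = {total: (6 - abs(total - 7)) / 36.0 for total in range(2, 13)}

# M[i, j] = Pr(go to square j | currently on square i), dice moves only.
M = np.zeros((N, N))
for i in range(N):
    for total, p in dice.items():
        M[i, (i + total) % N] += p   # wrap around the board past Go

assert np.allclose(M.sum(axis=1), 1.0)   # every row is a probability distribution
```

The real model layers the card draws, Go To Jail, and the doubles rule on top of these dice probabilities.
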
To me, the amazing thing about Markov chains is that the convenient matrix tabulation turns out to have analytical teeth. If the transition probabilities have been stored in a matrix M = {Pr(go to J | begin at I)}, then the probabilities of transitioning from I to J in two moves are given simply by M*M under ordinary matrix multiplication; in three moves, by M*M*M; and so on. The second magical fact is that, for a well-behaved chain like this one, the process converges: raising M to a sufficiently large power gives N identical rows of transition probabilities, i.e. for any I, Pr(J | I) is the same. This is called the "ergodic" or "steady state" probability distribution associated with the Markov process.

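Both facts can be seen at work on the dice-only matrix from the sketch above (again my illustration, not the workbook itself):

```python
import numpy as np

N = 40
dice = {t: (6 - abs(t - 7)) / 36.0 for t in range(2, 13)}   # P(two-dice total = t)
M = np.zeros((N, N))
for i in range(N):
    for t, p in dice.items():
        M[i, (i + t) % N] += p

P = np.linalg.matrix_power(M, 1000)   # transition probabilities after 1000 moves
print(np.allclose(P, P[0]))           # True: all rows have converged to one distribution
print(P[0, :4])                       # the ergodic probabilities (uniform 1/40 for dice-only moves)
```

In the full model the rows still converge, but the resulting distribution is far from uniform because of Jail and the cards.
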
Recommendation: I found Cinlar, Introduction to Stochastic Processes, very valuable, especially for those who, like me, have a surer grasp of linear spaces than of probability.

Monopoly®

Markov chains help to define a research agenda for studying Monopoly®. Specify the transition probabilities, crank up the Markov matrix, and voilà! The steady-state probabilities give the long-run probability of landing on, say, St. James Place on the next roll (not conditional on where one is sitting). This would seem to be a relatively easy, if tedious, exercise. From that information, derive optimal strategies. Unfortunately, there are several complicating factors:

Chance and Community Chest

Ten of the 16 Chance cards, and two of the 16 Community Chest cards, will send you places. This is not a conceptual problem, but it does add a layer of tedium in calculating the transition probabilities. In fact, it is frequently possible to return to the origin square in one roll. For example, there is an "Advance to St. Charles Place" Chance card. Because you can roll 11 from St. Charles Place and hit Chance, there is a small probability (= (2/36)*(1/16)) that you will be returned to St. Charles Place on the same turn, albeit $200 richer.

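Just to make that arithmetic explicit (a trivial check of my own, not from the original workbook):

```python
# Chance of ending the roll back on St. Charles Place, having started there:
# roll an 11 to reach the Chance square, then draw the one relevant card.
p_roll_11 = 2 / 36        # (5,6) or (6,5)
p_advance_card = 1 / 16   # one "Advance to St. Charles Place" card among 16
print(p_roll_11 * p_advance_card)   # about 0.0035
```
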
Going to Jail

You can go to Jail by hitting the "Go To Jail" square, by drawing a "Go To Jail" card in Chance or Community Chest, or by rolling doubles three times in a row. Modelling the first two is straightforward; the third is trickier, and is taken up next.

Doubles

Doubles are particularly problematic. The "Jail on third doubles" rule appears to negate the attractive features of the Markov chain. From my home square, Marvin Gardens, whether you go to Jail or to Park Place can depend on whether your doubles roll is the third in a row, not merely on your position. An easy fixup beckons: it might seem that the probability of having rolled doubles twice before the current roll is simply 1/36 (= (1/6)*(1/6)). However, this is not correct in general, because certain sequences of doubles could not have taken you to the square you occupy. For example, you cannot roll a "2" and then a "4" to reach Pennsylvania Avenue from Water Works, because the "2" would take you straight to Jail.

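A small sketch of that bookkeeping (mine, assuming the usual 0 = Go square numbering and ignoring card redirects): enumerate the pairs of doubles that could have brought you to a square, discarding any sequence that passes through Go To Jail.

```python
GO_TO_JAIL = 30
WATER_WORKS, PENNSYLVANIA_AVE = 28, 34   # square indices, 0 = Go

def feasible_double_pairs(start, end):
    """Pairs of doubles (d1, d2) moving start -> end without landing on Go To Jail."""
    pairs = []
    for d1 in range(1, 7):                # first doubles roll moves 2*d1 squares
        mid = (start + 2 * d1) % 40
        if mid == GO_TO_JAIL:
            continue                      # that sequence ends in Jail, not on `end`
        for d2 in range(1, 7):            # second doubles roll moves 2*d2 squares
            if (mid + 2 * d2) % 40 == end:
                pairs.append((d1, d2))
    return pairs

# Only (2, 1) survives; the (1, 2) route, a "2" then a "4", hits Go To Jail.
print(feasible_double_pairs(WATER_WORKS, PENNSYLVANIA_AVE))
```
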
I confess that I missed this subtlety, and learned of it from Truman Collins's page. He gives an approximate solution using Monte Carlo methods (i.e. he wrote a program to run thousands of simulated games, yielding estimates of the probabilities). I give an analytical solution, based on the following reasoning:

Write V for the vector of steady-state probabilities Pr(go to Jail due to the doubles rule | start at I). In steady state, the probability of being on any square J after two doubles is the sum, over each upstream candidate square I, of the probability of being on I multiplied by the conditional probability of reaching J from I with two doubles, i.e. Z = {Pr(reach J after two doubles | start at I)}, 1 <= I, J <= N. Clearly Z defines a Markov chain in its own right, and (1/6)(ZV') gives the steady-state probability of going to Jail due to the doubles rule. But this means that (1/6)(ZV') = V'.

This formulation suggests an iterative method. I applied such a method, with an initial specification of V as a uniform 1/216 (= (1/6)^3). I used that vector to calculate the implied steady-state probabilities, from those calculated the implied probabilities of being at square I after two doubles, substituted that updated estimate for the initial probability vector, and so on. I did not attempt to prove conditions under which this method converges, but I can report that it did converge, and fairly quickly. The results are very close to those of Truman Collins (see the tab "long-run probs"), each method corroborating the other.

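In outline, the iteration looks something like the following. This is a structural sketch only, not the workbook's actual calculation: `update_doubles_vector` stands in for the model-specific step (rebuild the full transition matrix for the current guess of V, take its steady state, and recompute the implied probability of a third-consecutive-doubles Jail from each square), and is replaced here by a trivial placeholder so the loop is runnable.

```python
import numpy as np

N_SQUARES = 40

def update_doubles_vector(v):
    # PLACEHOLDER for the model-specific step described in the text.
    # The real version would: build the transition matrix implied by v,
    # compute its steady state, and return the implied vector of
    # Pr(sent to Jail by the doubles rule | square I).
    return 0.5 * v + 0.5 * np.full(N_SQUARES, 1.0 / 216)

v = np.full(N_SQUARES, 1.0 / 216)            # initial guess: uniform (1/6)^3
for step in range(1000):
    v_next = update_doubles_vector(v)
    if np.max(np.abs(v_next - v)) < 1e-12:   # stop when the estimate settles down
        break
    v = v_next
print(step, v[:3])
```
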
Staying in Jail

You can opt to get out of Jail immediately, by paying a $50 fine, or to stay for up to three turns by not paying the fine. If you roll doubles, though, you're out of Jail. In the early game, during the scramble to acquire property, getting out quickly is optimal; later, however, when you're avoiding your rivals' hotels, Jail is a welcome haven. I give the "short-jail" probabilities, but for further analysis assume that the "long-jail" probabilities are the most relevant.

Property Values

Actual property values are a can of worms, affected by all of the probabilistic considerations above, by parameters such as the number of players, and by various strategic decisions made by those players. Those strategic decisions are, in turn, driven by property values. I leave it to someone else to devise general strategies. Here, I merely apply the steady-state probabilities to the known value parameters (they are printed on the title deeds) to derive a simple valuation of houses and hotels on the various properties and color-groups, and the implied payback period for each. These values are calculated in terms of "rival-rolls"; i.e. the expected revenue of a hotel on Illinois Avenue per rival-roll is the amount, on average, that you will collect each time any one of your opponents rolls the dice.

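For instance, the per-rival-roll value is just the landing probability times the rent. The numbers below are illustrative placeholders, not the workbook's long-jail probabilities or the actual title-deed rents.

```python
# Illustrative only: both inputs are placeholder values.
p_land = 0.030   # assumed long-run chance that a rival's roll ends on the square
rent = 1100      # assumed hotel rent printed on the title deed

revenue_per_rival_roll = p_land * rent
print(revenue_per_rival_roll)   # ~$33 expected each time an opponent rolls
```
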
Caveats:

- The actual price of the property is not included in the value calculation for the color-groups, but is included in the valuation of Railroads and Utilities (because those values are driven solely by the number of similar properties owned).

- Rents due for Railroads and Utilities are higher when one has been sent there by certain Chance cards. This factor is ignored.

- It is assumed that you have acquired the complete color-group, so as to permit construction of houses and hotels.

- Chance and Community Chest each include a card imposing a tax on buildings. This factor is incorporated into the calculation.

- The probabilities used here are the "long-jail" probabilities, reflecting the assumption that when such things matter, the optimal strategy is to hang out in Jail as much as possible. The exact threshold for switching from "short-jail" to "long-jail" is well beyond my ken.

Payback Periods

Payback periods give the expected number of rival-rolls it takes to recoup the cost of the improvements (or, for Railroads and Utilities, of the property itself). They are calculated in the same units: a payback period of 30 means that after your opponents have rolled the dice 30 times, you can expect to have recouped the cost of erecting the three houses, or the hotel, or whatever.

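In other words, the payback period is simply the construction cost divided by the expected rent per rival-roll. The numbers below are placeholders chosen to illustrate the definition, not values from the spreadsheet.

```python
# Placeholder numbers, for illustration of the definition only.
building_cost = 750           # e.g. four houses plus the hotel at $150 apiece
p_land = 0.030                # assumed steady-state landing probability
rent = 1100                   # assumed hotel rent

payback_rival_rolls = building_cost / (p_land * rent)
print(round(payback_rival_rolls, 1))   # about 22.7 opposing rolls to break even
```
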
Conclusions:

1. Don't buy the Dark Purples.

2. Don't buy the Utilities.

3. The Railroads aren't so great, either.

4. Orange and Dark Blue (Boardwalk, anyway) appear to be the best in terms of payback period, but be aware that having only two Dark Blue properties works against their overall benefit.

5. The rest of the color-groups are all roughly similar in terms of payback period, considering all the unmodelled factors. The Greens aren't so great.

6. Building only one or two houses on a given site is a poor investment. You need at least three.

7. Three houses usually appear to be optimal. The payback periods for three houses, four houses, and a hotel are about the same. (Interestingly, on the cheaper sides of the board hotels are slightly better than three houses, while on the tonier sides it is vice versa. If you think of "hotel" as "tenement," this makes perfect sense.) An additional consideration, not explicitly modelled here, is that adding the houses, or especially upgrading to a hotel, carries significant risk: if you have to tear down the buildings to pay a debt, you get back only half the purchase price. Better to keep extra cash on hand.

Comment:

I haven't played much Monopoly as an adult, but when we were kids no one I knew ever played the "optimal" strategy of hiding out in Jail; my brothers and I would pay our $50 and get back in the game. I just don't remember Jail being such a common destination. I suppose this is because, under the "short-jail" strategy, Jail is not quite twice as likely as any other square, but there's still only a 1-in-25 chance of doing time. I do vividly remember that it was hard to prosper owning the Dark Purples and the Greens.