# The Grey Labyrinth is a collection of puzzles, riddles, mind games, paradoxes and other intellectually challenging diversions.

 Discuss Least Unique Game here
Author Message
wavebasher
Icarian Member

Posted: Fri Dec 15, 2006 10:00 pm    Post subject: 41

 Neo wrote: http://www.greylabyrinth.com/puzzles/puzzle.php?puzzle_id=206 Enjoy!

Pick from all positive integers, with the probability of picking n being p(n) = 2^(-n).

Proof sketch: if p(1) < 1/2 for both other players, then choosing 1 every time wins. Given p(1) = 1/2, if p(2) < 1/4, then choosing 2 every time wins. And so on. This sets minimum values for each p(n).

Conversely, if p(1) > 1/2, p(2) > 1/4, etc., then for some n the tie probabilities p(both play 1) + p(both play 2) + ... + p(both play n-1) add up to more than 1/3, in which case picking n every time wins.

Nice puzzle!
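For anyone who wants to check this, here is a small sketch of mine (not wavebasher's) that computes, with exact fractions, the expected payoff of any pure strategy k against two opponents playing p(n) = 2^(-n). The assumed rules: each player stakes $1, the lowest unique number takes the $3 pot, and everyone is refunded if all three tie. Every pure strategy comes out to exactly 1, which is consistent with the mixed strategy being an equilibrium.

```python
from collections import Counter
from fractions import Fraction

def payoff_to_first(vals):
    """Payoff to player 0: $3 if their number is the lowest unique one,
    $1 refund if all three tie, $0 otherwise."""
    counts = Counter(vals)
    unique = [v for v in vals if counts[v] == 1]
    if not unique:                      # with 3 players, only if all three match
        return Fraction(1)
    return Fraction(3) if vals.index(min(unique)) == 0 else Fraction(0)

def ev_pure_vs_geometric(k):
    """Exact expected payoff of always playing k against two opponents
    drawing n with probability 2^-n. All values above k are lumped into
    the single value k+1: they can never beat k, and whether the two
    opponents tie up there doesn't change player 0's outcome."""
    dist = [(n, Fraction(1, 2**n)) for n in range(1, k + 1)]
    dist.append((k + 1, Fraction(1, 2**k)))   # tail mass P(X > k)
    return sum(pb * pc * payoff_to_first([k, b, c])
               for b, pb in dist for c, pc in dist)

# Every pure strategy earns back exactly the $1 staked, in expectation.
print([ev_pure_vs_geometric(k) for k in range(1, 7)])
```

The tail-lumping is the only subtle step: an opponent's value above k can never be the least unique number when player 0 holds k, so all such values are interchangeable for this calculation.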
dnwq
Icarian Member

Posted: Sat Dec 16, 2006 2:47 am    Post subject: 42

 A-man wrote: You missed my point. As I stated, if the goal was to minimize losses (which is something very different from winning) and every player is just as clever as each other, then why fight it? Why bother to rotate winning - just take every turn as a draw.

I think we are supposed to assume (however ridiculously) that players dedicate themselves wholly to the game - meaning that effort taken to maximise return may be taken to cost nil.

Complicating models with Real Life misses the whole point of a model.
wavebasher wrote: Pick from all positive integers, with the probability of picking n being p(n) = 2^(-n). [...] Nice puzzle!

Cleanly defeated by tacit collusion between the remaining two players, who play (1, {crazy large number}) or ({crazy large number}, 1). The collusion is maintained under profit-sharing and the threat of defection. The game is infinite, so collusion will arise eventually and crush this strategy.

1. Any strategy that plays integers from a fixed set with fixed probabilities will not work. This includes one-element sets (e.g., the all-3 strategy).

Proof: Your opponents can alternate playing 1 and the largest integer in your set + 1 (to be absolutely sure of preventing you from winning). Failing that (as with wavebasher's strategy, whose support is infinite), they can play 1 and a large number (to be almost sure of preventing you from winning that round). Your payoff is negative in the long run.
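In fact, against a round where the colluders play (1, N), the fixed-distribution player never wins at all: either the 1 is unique (and lowest), or the solo player also picked 1, which makes N the only unique number. A tiny enumeration of my own, using 100 as a stand-in for the large number, confirms this:

```python
from collections import Counter

def winner(vals):
    """Index of the player holding the lowest unique number, or None
    if all three tied (the only way no unique number can exist)."""
    counts = Counter(vals)
    unique = [v for v in vals if counts[v] == 1]
    return None if not unique else vals.index(min(unique))

# Player 0 picks anything at all; players 1 and 2 collude on (1, 100).
# Either the 1 is unique and lowest, or player 0 also played 1 and the
# 100 becomes the only unique number. Player 0 never takes the pot.
assert all(winner([a, 1, 100]) != 0 for a in range(1, 500))
```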

However:

2. Against strategies that adapt to previous plays, no two-player collusion can survive. This, of course, means that no winning strategy exists and payoffs are zero in the long run; there is no hope of achieving a positive payoff.

Proof: The excluded player can mimic the strategy of either of the other two to remove any collusive benefit. Bhoogter noted this as well.

The best strategy must consider previous plays. I've no idea how it might function either, and perhaps no best strategy exists - which is entirely possible; ever played Rock, Paper, Scissors?
_________________
+---
Griffin
Daedalian Member

 Posted: Sat Dec 16, 2006 5:56 pm    Post subject: 43 I think wavebasher may have a solution to a slight variant of the given puzzle: what if it were just a one-shot game instead? Then this problem of your enemies conspiring against you couldn't come up, because they really couldn't share any information until the game is all over. The solution to such a puzzle would be a strategy that guarantees 1/3 of the pot (in expected value) no matter what strategies your enemies play. I think wavebasher's strategy might be it (though I'm not completely sure). For the given problem, I agree with Chuck.
wavebasher
Icarian Member

Posted: Sun Dec 17, 2006 9:29 pm    Post subject: 44

dnwq wrote: Cleanly defeated by a tacit collusion by the remaining two players to play (1,{crazy large number}), or ({crazy large number},1). [...] The best strategy must consider previous plays.

A collusion between all-1 and all-2 will always win against any solo strategy, adaptive or not. My solution assumed that the rules prohibit collusion.
Chuck
Daedalian Member

 Posted: Sun Dec 17, 2006 10:06 pm    Post subject: 45 I don't think it's collusion to take advantage of another player's playing pattern even if doing so also benefits another player.
fadeblue
Daedalian Member

 Posted: Sun Dec 17, 2006 11:16 pm    Post subject: 46 I don't think you can really break up a collusive strategy through mimicry. I mean, if B and C are alternating 1 and 2 to collude against A, what does A do? If A copies B, then it's actually just like A and C are colluding against B. Then B does the same thing and copies C to try to break up A's and C's collusion? This never leads to any semblance of equilibrium. The only feasible way to break up a collusion is by using a strategy that would offer one of the other players an alternate collusive strategy that would give that player a greater payoff than the 1.5 from the simple collusion (while you would be getting less than 1.5, but it's at least better than the 0 you were getting). But I can't find a strategy that can actually achieve that.
dnwq
Icarian Member

Posted: Mon Dec 18, 2006 4:00 am    Post subject: 47

wavebasher wrote: [...]

A collusion between all-1 and all-2 will always win against any solo strategy, adaptive or not. My solution assumed that the rules prohibit collusion.

Nope. You're confusing one-shot and repeated games, I think. All information is transmitted through previous plays, so anything that one player knows, the others do. For any given round, since I'm going to lose anyway, I could mimic one of the players with indifference to payoffs. But if I keep doing that, the player whom I'm mimicking also loses all the time. I have nothing to lose, so I'm not going to stop; since the collusion is maintained by profit-sharing, the collusion will break down.

So an adaptive strategy can defeat all-1 and all-2*. The collusion is a tacit collusion, of course, which is an entirely different animal from the "secret offline collusions" mentioned in the puzzle.

Edit*: the collusive strategy of alternating between playing 1 and 2, synchronised with the other player. I think we have a conflict of notation here!

 fadeblue wrote: I don't think you can really break up a collusive strategy through mimicry. I mean, if B and C are alternating 1 and 2 to collude against A, what does A do? If A copies B, then it's actually just like A and C are colluding against B. Then B does the same thing and copies C to try to break up A's and C's collusion? This never leads to any semblance of equilibrium.

There are several points to be made here:

- A losing player has no interest in maintaining an equilibrium in which he loses. Even utter confusion (with an expected nil payoff) is better than always losing. A's strategy is to force B to always lose, too, so that the equilibrium disappears. That's the whole point.

- It's too bad you stopped at that step, because if you extended your logic, you would see an equilibrium: A copies B, so C wins (with A and B losing). If B did copy C, then A wins (with B and C losing). Since this is identical to the previous situation, we conclude that C will then copy A, with B winning... etc. How is this not an equilibrium?

See also Chuck's post (#17). Duly note that B will not copy C (i.e., your logic does not hold), so Chuck's proposed equilibrium is not reached.

- Your logic doesn't lead to your conclusion, but suppose your conclusion is correct. Then there is no payoff to collusion, so collusion will never occur. Then no two-player collusion will exist, anyway.

- A copies B. A and B will then simultaneously break the equilibrium (both roll dice, perhaps? Or wavebasher's mixed strategy).

Proof: Here's the normal form for A and B, with the upper row and left column representing continuing to "collude" with C, and the lower row and right column representing breaking the collusion.

Code:
+-----+-----+
|   -1|   +2|
|-1   |-1   |
+-----+-----+
|   -1|    0|
|+2   | 0   |
+-----+-----+

... where it is clearly Nash to break the collusion, repeated or not. The payoff for breaking is assumed to be 0 (i.e., signal confusion) here, but as long as the expected payoff is better than -1, the result holds.

How C responds depends on how A and B play. Anybody care to work it out?
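A quick best-response check of that normal form (payoffs transcribed as I read them, with the assumed 0 for mutual breaking) confirms that (break, break) is the only Nash equilibrium; breaking is in fact strictly dominant for both players:

```python
# Row player = A, column player = B.
# Action 0 = keep "colluding" with C, action 1 = break the collusion.
A_pay = [[-1, -1],
         [ 2,  0]]
B_pay = [[-1,  2],
         [-1,  0]]

def is_nash(i, j):
    """True if neither player gains by unilaterally switching actions."""
    return (all(A_pay[i][j] >= A_pay[k][j] for k in range(2)) and
            all(B_pay[i][j] >= B_pay[i][k] for k in range(2)))

equilibria = [(i, j) for i in range(2) for j in range(2) if is_nash(i, j)]
print(equilibria)  # the lone equilibrium: both players break
```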
_________________
+---

Last edited by dnwq on Tue Dec 19, 2006 3:04 am; edited 1 time in total
Guest

 Posted: Tue Dec 19, 2006 1:13 am    Post subject: 48 I think much of the above post makes sense, but we should still stick with the assumption that players cannot communicate with each other, so there will be no deals between any players. Given this, the equilibrium of the game is for all players to pick either 1 or 2 randomly (i.e., 50% probability for each). The reason is that once we arrive at this equilibrium there is no incentive for any player to shift from it: each player's expected return stays at 1 regardless of what proportion of 1s and 2s they play, as long as the other two players are choosing randomly.

For example, suppose the others play 50/50 while I play 45% 1s and 55% 2s:

Me  P2  P3   Prob      Payoff  Expected
1   1   1    11.25%    1       0.1125
1   1   2    11.25%    0       0
1   2   1    11.25%    0       0
1   2   2    11.25%    3       0.3375
2   1   1    13.75%    3       0.4125
2   1   2    13.75%    0       0
2   2   1    13.75%    0       0
2   2   2    13.75%    1       0.1375
             100.00%           1

Now what holds us at equilibrium? Basically, if one player tries to deviate from the random rule they get no positive effect - see above, where I played 45% 1s and 55% 2s and got no bonus. But another player can use this information: after some time, when my strategy becomes clear, they can change theirs so that their expected return exceeds 1, at which point I have an incentive to return to the 50/50 equilibrium strategy to remove their gains and my losses. Thus the equilibrium is for at least two players to play 50/50, with the third player unable to gain by changing strategy and punished if they keep deviating, which holds the equilibrium. Thus your answer is that eventually you arrive at the point where your best strategy is to play 50/50, which means you all walk away with roughly the same amount of money.
Chuck
Daedalian Member

 Posted: Tue Dec 19, 2006 1:38 am    Post subject: 49 If the other two players are each playing 1 and 2 with 50% probability then I should play 3 all the time and win half the games while they each win ¼ of the games.
fadeblue
Daedalian Member

 Posted: Tue Dec 19, 2006 2:23 am    Post subject: 50 Well, what I was thinking was this: In that normal form representation, what are the payoffs for A and B if they choose to collude together against C? Wouldn't the payoffs be greater than 0? (since we're assuming B and C colluding against A gives them positive payoffs)
dnwq
Icarian Member

Posted: Tue Dec 19, 2006 3:03 am    Post subject: 51

 Jade* wrote: I think much of the above post makes sense, but we should still stick with the assumption that players cannot communicate with each other, so there will be no deals between any players.

Tacit collusion requires neither offline communication nor deals - hence "tacit". All "communication" is set up through play history, and all enforcement works via the implicit (and credible) threat of removing future profit.

The puzzle only bans secret offline communication. "Tacit collusion" is a game-theoretic term, and is different from the normal interpretation of "collusion", which seems to involve shadowy meetings in dark places.

 fadeblue wrote: In that normal form representation, what are the payoffs for A and B if they choose to collude together against C? Wouldn't the payoffs be greater than 0? (since we're assuming B and C colluding against A gives them positive payoffs)

If A can telepathically arrange a collusion with B, then the payoff could be greater than 0 in the long run.

Of course, there's no reason to believe that A and B could arrange a collusion, even if they wanted to, so... perhaps you could elaborate?
_________________
+---
Daedalian Member

Posted: Tue Dec 19, 2006 5:19 am    Post subject: 52

 dnwq wrote: If A can telepathically arrange a collusion with B, then the payoff could be greater than 0 in the long run. Of course, there's no reason to believe that A and B could arrange a collusion, even if they wanted to, so... perhaps you could elaborate?

Well, how did B and C form the collusion against A in the first place? Or can we just assume that B and C couldn't have arranged to collude, so we don't need to worry about breaking up collusions?

Additionally, if we believe that A and B can also both somehow agree to stop colluding with C, why can't they agree to collude?
Guest

 Posted: Tue Dec 19, 2006 11:07 am    Post subject: 53 Thanks to the person who corrected my earlier post - it was rather late and I have the flu, so I wasn't thinking - however, I have the answer now.

First I must counter all the collusion arguments, otherwise you will never agree. Consider if two players were somehow to agree to use teamwork against one player and, as suggested, play 1 when their 'teammate' plays 2 and the reverse, thus stopping the third player from winning. The third player can just punish one of the other players by copying their strategy, meaning that player never wins either, which gives the punished player an incentive to change his plan, given that he gets no benefit from his 'teammate' winning every game. You will see this is the case with any effort to work with another player: you can always punish one, and he will stop because he starts losing all the time. The second reason you should not consider players working together is that, as all the players are equal, who would get to team up against whom? They all have an equal chance, so including it in the model doesn't really change anything. An argument that player A should side with B is countered by: why B and not C?

Hoping you see the logic above, I will return to the equilibrium model I proposed earlier, but this time including 3, which I foolishly excluded last night. The argument is basically the same as before: once you reach the equilibrium, an individual player has no incentive to change and can be punished if he does. The equilibrium strategy is to play 1 50% of the time, 2 25% of the time and 3 25% of the time. If two players are playing this, the third can play any strategy they like and will still have an expected return of 1; however, they can be punished by the other players if they play something different and the other players eventually figure it out.
Thus once you get to this point you will stay there and all players will walk away with about the same as they entered the game with.
Chuck
Daedalian Member

 Posted: Tue Dec 19, 2006 1:27 pm    Post subject: 54 If two players are playing 1, 2, and 3 with 50%, 25%, and 25% probability then I can play 4 all the time and win 37½% of the time leaving each of them winning 31¼% of the time. If I play 1, 2, and 3 with 50%, 25%, and 25% probability then the other two can alternate 1,2 and 2,1 which gives them each a win half the time and leaves me with nothing. I can punish one of them and break it up by giving the other one all the wins but until I do, they get all of the money. They could do this any time they detect that I'm playing a probabilistic strategy which means I'll lose money in the long run if I keep trying it.
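Chuck's first claim can be verified exactly (again my own sketch): against two players on the 50/25/25 mix over {1, 2, 3}, always playing 4 wins precisely when the two opponents collide.

```python
from collections import Counter
from fractions import Fraction

def winner(vals):
    """Index of the player holding the lowest unique number."""
    counts = Counter(vals)
    unique = [v for v in vals if counts[v] == 1]
    return None if not unique else vals.index(min(unique))

mix = {1: Fraction(1, 2), 2: Fraction(1, 4), 3: Fraction(1, 4)}

wins = [Fraction(0)] * 3
for b, pb in mix.items():
    for c, pc in mix.items():
        # My 4 never ties, so every outcome has a winner.
        wins[winner([4, b, c])] += pb * pc

print(wins)   # [3/8, 5/16, 5/16]: 37.5% for me, 31.25% for each of them
```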
ChienFou

 Posted: Tue Dec 19, 2006 5:22 pm    Post subject: 55 Gave this to my son-in-law over dinner. He rapidly arrived at wavebasher's solution, stated "It's a Nash equilibrium, there might be others" and then opened a 2nd bottle of my Hermitage 95. Since he's buying a large house in Central London based on his estimate of whether the other players are playing optimally, since he is a management consultant manqué, and since I would never play a prop game with him for money, I imagine he's right.
Guest

 Posted: Tue Dec 19, 2006 6:03 pm    Post subject: 56 Don't know why, but I thought 4 wouldn't make sense. I will hide behind my ill health and include 4 & 5 in the model later today. As for the other two players colluding, this is improbable. It would have to begin with Player A playing a strategy other than the equilibrium strategy, trying to communicate that strategy over time to one of the other players in the hope that the first one to get it will help him. However, once someone does get it, they will have an incentive not to help but instead to play a strategy which punishes Player A and offers better rewards for themselves. Once I have reworked the equilibrium I will provide an example of this by showing what the optimal response would be if one player started to play differently; I suspect it's not a colluding strategy, and the best response for Player A will then be to move closer to, or back to, the equilibrium. As for the Nash equilibrium, that is exactly what I am saying: once equilibrium is reached, no one has an incentive to move from it, although I have not penned the exact strategy yet but will get it shortly.
Guest

 Posted: Tue Dec 19, 2006 6:54 pm    Post subject: 57 Much to my surprise, the Nash equilibrium includes playing numbers much higher than I thought. The following is the actual point at which there is no incentive to deviate from the equilibrium:

1 50.00%
2 25.00%
3 12.50%
4 6.25%
5 3.125%
6 3.125%

I still believe in the same logic as in my other two posts; I have now just included the option of any number, which I mistakenly left out earlier. To disprove this theory, someone would need to propose a different strategy for Player A which Player B (or C) would then act on to maximise his expected wealth, such that Player A is still making an expected return greater than 1 and Player C (or B) has no ability to counter it.
Chuck
Daedalian Member

 Posted: Tue Dec 19, 2006 8:50 pm    Post subject: 58 If two players play your 1 to 6 strategy they'll tie 33.3984375% of the time so I'll always play 7 and get an average of 1.001953125 per play.
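Exact arithmetic backs this up (my sketch): with both opponents on that six-number mix, always playing 7 wins precisely when they pick the same number, since their tie leaves 7 as the only unique number.

```python
from fractions import Fraction

# The proposed truncated mix over {1..6}.
mix = {1: Fraction(1, 2), 2: Fraction(1, 4), 3: Fraction(1, 8),
       4: Fraction(1, 16), 5: Fraction(1, 32), 6: Fraction(1, 32)}

# If the opponents differ, both their numbers are unique and the lower
# one beats my 7; if they tie, my 7 is the only unique number and wins.
tie = sum(p * p for p in mix.values())   # both opponents pick the same n
ev = 3 * tie                             # $3 pot against the $1 staked

print(float(tie))  # 0.333984375
print(float(ev))   # 1.001953125
```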
fadeblue
Daedalian Member

 Posted: Wed Dec 20, 2006 12:34 am    Post subject: 59 Perhaps all 3 players quickly arrive at the 1/2^n strategy, seeing as it is the strongest strategy for any individual player. This guarantees them all zero payoff in the long run, but at least it's not negative. They all realize that any two of them could collude to shut out the other player, but they all mutually understand that if any player attempts to form a collusion, the other two will collude against THAT player, thereby mutually enforcing cooperation. For example, player A might decide to initiate a possible collusion. While B and C are following the 1/2^n strategy, A switches to an "always 1" strategy. Interestingly, it still provides the same payoff for A, since he wins 1/4 of the time and ties 1/4 of the time. He's not losing anything, while inviting collusion from either B or C - for example, if B switches to an "always 2" strategy while C doesn't, then A and B will both win half the time. He hopes that if C tries to break it up by copying, then A and B will understand that they should switch to an alternating 1-2 strategy. But if neither B nor C is interested, they could either keep playing as they are, or collude against A as punishment for attempting collusion in the first place.
Guest

 Posted: Wed Dec 20, 2006 1:23 am    Post subject: 60 Yes, you're quite right. For some reason, despite the pattern becoming obvious, a mistake in my formula made me think it ended at 6. In actual fact you should play each increasing number with exactly half the probability of the one before, continuing in the limit, and 1s must be played 50% of the time:

1 50.00000000%
2 25.00000000%
3 12.50000000%
4 6.25000000%
5 3.12500000%
6 1.56250000%
7 0.78125000%
8 0.39062500%
9 0.19531250%
10 0.09765625%
11 0.04882813%
12 0.02441406%
13 0.01220703%
etc.

All of the arguments made in the posts above still hold; I just made the same silly mistake again with where to cut it off. Taking it to the limit is far more logical - not sure what I was thinking. Anyway, as above, please indicate strategies that can beat this given that the other two players are playing it. I'm pretty sure the above is a Nash equilibrium; however, that's not to say it's the only one - the puzzle is tricky because there are two equilibria. As some have pointed out, another strategy can also lead to an equilibrium, at least in a short-term sense. If all players tried to play the alternating 1s & 2s strategy, then there is a 33% chance that you are the odd one out, playing 1s when the others play 2s and the reverse; this has a reward of 3 and thus an expected value of 1. It is a Nash equilibrium of sorts, but not really: in a single round the other players cannot increase their expected returns by changing their strategy, but the player(s) being punished (the two playing the same strategy) know that keeping things as they are will not be in their interests in the long term. Thus one will change in the hope the other will do likewise - and he will, not because he will get a better reward in the next game, but because if he doesn't he knows he will be punished again and return to losing every game.
Thus even in this case, as you tend towards infinitely many games, you will always move to the Nash equilibrium I stated at the start of the post. I think this is therefore one of two answers, depending on your taste for risk at the start of the gaming session.
Guest

 Posted: Wed Dec 20, 2006 1:34 am    Post subject: 61 You are correct, fadeblue. I took so long writing my post that I didn't reply to yours - it wasn't there when I started, sorry. Basically, I think we are on the same wavelength now. It is just a question of how long it takes to get to that equilibrium. Once there, I am quite sure they will remain, because when one player leaves it they get no benefit and they get stabbed in the back by another player, which pushes them back to the equilibrium. I basically said this in my earlier posts. This was a pretty cool puzzle, I think. I am a big fan of game theory.
Chuck
Daedalian Member

 Posted: Wed Dec 20, 2006 7:46 am    Post subject: 62 If the players are trying to maintain equilibrium to avoid punishment then they might as well all play 1 every round and get their dollars back, rather than leave it up to chance by playing the 1/2^n strategy. If they're afraid to defect then there are lots of strategies that will work. But if they're not afraid to defect then the 1/2^n strategy isn't stable. One player can start alternating between 1 and 2, hoping that one other player will join him by playing 2 and 1 alternately. I might as well do this because I'm not hurt if neither of the others joins me. It doesn't pay for one of them to punish me because the punisher would lose money along with me. They might take turns punishing me, but then I might choose the first punisher to counterpunish if this happens. Soon they'd learn that the first punisher would be seriously hurt, and they'd stop trying because neither would want to start my punishment. So there would eventually be a temptation to join me and no threat of punishment. The other players might punish each other for joining me, but I might again counterpunish the punisher because I want to have one of them join me. They might be better off ignoring me. So eventually the solution is that one player alternates between 1 and 2 while the other two players play the 1/2^n strategy - or some insane effort by two players to gain an advantage, in which anything could happen depending on how stubborn they are with dealing out punishment and counterpunishment.
Guest

 Posted: Wed Dec 20, 2006 10:41 pm    Post subject: 63 How does one counterpunish when the act of punishing means you are already giving up everything? Say you play 2,1,2..., then the second player plays the alternating version of that strategy, and then the third player plays the same as you, so you are both losing every game. What would you do if he never changes?
Chuck
Daedalian Member

 Posted: Wed Dec 20, 2006 11:55 pm    Post subject: 64 You can counterpunish by always giving in at first and then always punishing your punisher soon after equilibrium has been regained. You'd be hoping that the other players notice that you always do this and will hesitate to punish you in the future. If they're afraid to punish you then maybe you can get more than your fair share occasionally without being punished for it as long as you don't get too greedy.
Chuck
Daedalian Member

 Posted: Thu Dec 21, 2006 2:37 am    Post subject: 65 After some experimentation, players A and B might arrive at the following pattern. Maybe one of them starts it and the other notices and joins in. Player A plays 1 2 1 2 3. Player B plays 2 1 2 1 3. Player C observes this and sees that he can win every fifth game and can't get anything in the other rounds. But he gets an average of 60¢ per round instead of his fair share of $1.00. He doesn't like this and wants to break up the arrangement by punishing A or B. He can do this by copying one of them. Let's say he copies player A. The one who's not being copied, player B, gets lots of money but would not like to see the punishment succeed because the original arrangement is better for him in the long run than eventually getting his fair share. He might want to reward player A by suitable play but I see no way to do that. The only means of communication is playing style, so no secret plans can be made. Probably the best thing he can do during the punishment period is to switch to playing 1 when it's his turn to play 3, in order to keep player C from winning every fifth game. A and C would get nothing. Player B is happy with players A and C playing the same numbers but would be willing to switch back to winning two of every five rounds if C gives up and ends the punishment, or if A gives up and is willing to let C replace him in getting two of every five rounds. Of course, player B or player C can switch to punishing player A but that would show a willingness to switch easily which would make player A reluctant to switch. So what should happen? A reasonable outcome would be for player A and player C to switch to the 1/2^n strategy so that everyone will get their fair share. But this can happen only if they do so at the same time. If only one abandons duplicating the other then it will be taken as a sign of surrender and acceptance of winning every fifth game.
That's a poor outcome but better than the nothing he's been getting. It might come down to who can outlast the other. If the game is expected to last a long time then it's better to outlast the other player and end up winning two out of five games. It might end up with player B winning every game while the other two try to outlast each other. Should two players even start trying to win two out of five rounds each? Even if it doesn't work, there's still a 50% chance that the third player will choose to punish the other guy, leaving you taking everything for a while. Even if you do get punished, you can give up and still get 60¢ per game, which isn't nothing, so it might be worth the risk. It might also end up being a chaotic mess, in which case you'd probably get your fair share in the long run anyway.
dnwq
Icarian Member

Posted: Thu Dec 21, 2006 7:44 am    Post subject: 66

 Chuck wrote: You can counterpunish by always giving in at first and then always punishing your punisher soon after equilibrium has been regained. You'd be hoping that the other players notice that you always do this and will hesitate to punish you in the future. If they're afraid to punish you then maybe you can get more than your fair share occasionally without being punished for it as long as you don't get too greedy.

Punishment that involves a loss for the punisher is normally not a credible threat, so could you spell this out more clearly?

 Chuck wrote: After some experimentation, players A and B might arrive at the following pattern. Maybe one of them starts it and the other notices and joins in. ...

It works, but it does not maximise profits, so it isn't a best strategy.

The puzzle is unclear, so you have a choice between allowing your players some creativity and spontaneity (as in RL), or forcing them to be identical under the banner of "identical information sets" (as in game-theoretic modelling).

Since you have allowed creativity (B can notice A's pattern before C does), then A and B can collude using a pattern that C is unable to catch on to. Then there is no need to give C anything. A and B can keep introducing new patterns if C ever catches on.

On the other hand, if you disallow creativity, then C and B will notice A's pattern at the same time. B and C will simply collude against A's deterministic strategy.

So there's no need for such a complicated strategy. Incidentally, you could make an identical case for "1 2 1 2 1 2 3"/"2 1 2 1 2 1 3" and so on, reaching a collusion against C "in the limit".
_________________
+---
Guest

 Posted: Thu Dec 21, 2006 6:11 pm    Post subject: 67 I think most rational people earning sub-average returns would randomly pick one person and punish them every time until they conformed. Given that you play an infinite number of rounds, anyone earning zero will eventually give in and return to the equilibrium strategy, no matter how stubborn you are. They might swap back occasionally to the 1-2-1 strategy to see what happens, but once you take into account the punishing, they would end up losing from this and eventually give up. However, if we return to the original question, the optimal strategy is to play 1-2-1 or 2-1-2 to start with, in the hope that you are the one in control while the others battle it out over the alternative strategy. I think this is the final answer to the question. It is a good idea you had about the occasional 3, but given the infinite games I think any player would logically fight as long as they could to get their fair share, so it makes no difference.
Chuck
Daedalian Member

 Posted: Thu Dec 21, 2006 11:25 pm    Post subject: 68

It makes sense to punish someone even if it gets you nothing, if you're already getting nothing. Does it make sense to punish someone if you're getting 60¢ per game but want more, even though you'd lose the 60¢ and get nothing for a while? How about if you're getting $1.00 and want the $1.20 you'd get if you won two out of five games? How long do you continue the punishment if the other player won't give in? The game isn't said to be infinitely long, just indefinitely long. We don't know when it will end.

I'd rather have $1.00 per round than 60¢, but I'd rather get 60¢ than nothing. My opponent is in the same position. It seems reasonable for each of us to get $1.00. But my opponent wants as much as possible and would rather have the $1.20 he'd get if I back down. Should I back down if he appears to be irrational? If so, then it benefits him to appear to be irrational. He should never back down. If I think that's what he's doing, then I should settle for the 60¢, but I can't be sure that he won't back down in the very next game. No matter what I decide, I might get less than if I had made a different decision. I see no clearly best decision.

$1.00 each seems fair since everyone gets the same amount, but should I insist on it no matter what? I might end up with nothing. If all of us are so equal minded that we all make the same decisions, then there's no puzzle at all. It doesn't matter what I play because the other two will play the same thing for the same reason.
A-man
Icarian Member

 Posted: Fri Dec 22, 2006 6:34 am    Post subject: 69 Like I said several days ago, everyone play 1 and enjoy the pizza! _________________Men are apt to mistake the strength of their feeling for the strength of their argument.
Guest

 Posted: Sun Dec 24, 2006 11:18 am    Post subject: 70 Try to prove your answer step by step, Chuck. What would you play first, and what would the odds/returns be if everyone else thought the same?
Chuck
Daedalian Member

 Posted: Sun Dec 24, 2006 2:26 pm    Post subject: 71 I don't have an answer. There might not be any best solution since anything I do might be defeated by cooperation between the other two players. They don't need collusion. One could start playing a strategy that, if the other player joined in, would benefit them both at my expense. If I knew for sure that the other players would always think the way I do then I'd just play 1 forever. There would be no reason for me to change since, if I did, they'd change too for the same reason. I might choose to let some chance device determine my number and get something different from them that might benefit me but it has just as much chance of hurting me so I wouldn't bother, and so neither would they for that same reason.
Guest

 Posted: Mon Dec 25, 2006 3:00 pm    Post subject: 72 Well, assume that all other players think exactly as you do, then come up with what you think is the best strategy, and then adjust it if you see a way to adapt it to better your first attempt. Colluding happens only after many games are played and the other players see your strategy. Also, all players have an equal chance to collude, so don't think of the game as you playing against a particular player, as this will only confuse matters.
Chuck
Daedalian Member

 Posted: Mon Dec 25, 2006 3:25 pm    Post subject: 73 If the other players think exactly as I do then there's nothing I can do to better my first attempt because they'll do it too. The puzzle says they're as rational and capable as I am. Does that mean identical in every way?
Fotiman
Icarian Member

Posted: Thu Dec 28, 2006 8:53 pm    Post subject: 74

The way I see it, there are 7 possible outcomes for each round:
 Code:
+-----------------+
| P1  | P2  | P3  |
+-----------------+
| Us+ | U-  | U-  |
| U-  | Us+ | U-  |
| U-  | U-  | Us+ |
| S=  | S=  | S=  |
| U+  | S-  | S-  |
| S-  | U+  | S-  |
| S-  | S-  | U+  |
+-----------------+

Legend: P1 = Player 1, P2 = Player 2, P3 = Player 3, U = Unique, Us = Smallest Unique, S = Same as another player, + = Win, - = Loss, = = Draw

If all players were choosing a totally random number, the results would tend towards one of the first 3 outcomes (all unique). But since there's no way to communicate this to the other players, there's no way to ensure that all players would be selecting totally randomly, and thus no way to maximize returns. So the logical alternative would be to try to convey a pattern to the other players that is beneficial to everyone, since working together to break even is the only way to maximize returns for yourself without great risk and luck. Being logical players, they'll want the same thing. Since there's no real way to "win" in this game, the goal of maximizing your return should be to continuously break even.

In the following equations, x = the buy-in amount for each round, and n represents the number of players:
 Code:
L = -1x + 0  = -1x
W = -1x + nx = nx - x
D = -1x + 1x = 0

Each of those equations accounts for the up-front expense for each hand (-1x), and then adds on the result of the hand.

Substituting \$1 for x, and 3 for n, we get:
 Code:
L = (-1 * $1) + 0        = -$1
W = (-1 * $1) + (3 * $1) = +$2
D = 0

In order to break even, a player must have no ultimate profit or loss. So:

0 = (n - 1)L + 1W

or

0 = 2L + 1W

For each win a player must also have n-1 losses, where n is the number of players. So for this example, for every win a player must have 3-1 (or 2) losses.

The number of draws is irrelevant, since they don't affect the profit or loss.

Going back to the original 7 possible outcomes:

4L + 2W + 1D = 0

But since we don't care about draws, we can try to establish a pattern using only 6 of the possible outcomes, and:

4L + 2W = 0

We can reduce that to:

2L + 1W = 0

We can now simplify the pattern so that only n rounds are required (where n is the number of players) for all players to break even. Suppose we changed the game to allow 4 players? In that case, 3L + 1W = 0. So an easy pattern to follow is just to cycle through bets of 1 to n, with each player having a unique value each round. For example:

Round 1:
Player 1: 1
Player 2: 2
Player 3: 3

Round 2:
Player 1: 2
Player 2: 3
Player 3: 1

Round 3:
Player 1: 3
Player 2: 1
Player 3: 2

Repeat.

Since all logical players as rational as I am will be working towards the same goal of breaking even, they should be able to follow this pattern easily. After only a few hands, it should be obvious what pattern I'm using (1 -> 2 -> 3 -> 1 -> 2 -> 3 -> ...), and they should be able to pick up on it and start using it themselves, giving each player 1 win for every 2 losses, and thus maximizing returns.
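The break-even claim for this rotation can be verified with a short simulation. This is a sketch of mine, not from the thread; `lowest_unique_winner` is a made-up helper, and the payoffs assumed are the thread's: a $1 buy-in per round, with the lowest unique number taking the $3 pot.

```python
def lowest_unique_winner(picks):
    # Index of the player holding the lowest unique number, or None on a draw.
    unique = [p for p in picks if picks.count(p) == 1]
    return picks.index(min(unique)) if unique else None

# The three-round rotation proposed above: each player cycles 1 -> 2 -> 3.
rotation = [(1, 2, 3), (2, 3, 1), (3, 1, 2)]

net = [0, 0, 0]
for picks in rotation:
    for i in range(3):
        net[i] -= 1           # $1 buy-in each round
    w = lowest_unique_winner(list(picks))
    if w is not None:
        net[w] += 3           # winner takes the $3 pot
# after one full cycle every player nets exactly $0
```

Each player wins once (+$2 net) and loses twice (-$1 each) per cycle, so the 2L + 1W = 0 balance derived above holds exactly.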
A-man
Icarian Member

 Posted: Sat Dec 30, 2006 6:09 am    Post subject: 75 Geez... that sure took a laborious route to state "cycle your picks from 1 through 3." Not quite sure how intuitive it would be for all three players to come to that conclusion. I still believe that the game would sooner rather than later come down to each player always choosing 1._________________Men are apt to mistake the strength of their feeling for the strength of their argument.
raekuul
Lives under a bridge & tells stories.

 Posted: Sat Dec 30, 2006 8:27 pm    Post subject: 76 Whoo... this almost simulates the behavior of a three-firm oligopoly that is under government pressure not to overtly collude... Tacit collusion will in all likelihood result, as soon as all three firms realise that each firm is going to emulate what happened in the last period in order to minimise losses. They all "see" that the best option, no matter what they do, is to pick a low integer. They also "see" that their opponents will do the same. Therefore, they "see" that, as there is no way to create real collusion, all they can ever do is pick 1, hoping that someone will slip up, break the cycle, and put the others in the lead. At least, that's how I understand it.
Timbo*
Guest

 Posted: Wed Jan 03, 2007 8:03 pm    Post subject: 77 Pick 3 two-thirds of the time, with the other third being 1 or 2. That seems to be my approach. But since I am effectively playing against copies of myself, I now have a random chance of winning each hand. And you wonder why the three gas stations at the intersection are all within 5 cents of each other, even when one has had a refinery shut down.
Arkive
Icarian Member

 Posted: Thu Jan 04, 2007 9:12 pm    Post subject: 78 I would like to add an interesting point that I don't think anyone has noticed. Nowhere in the puzzle does it explicitly say that you know who played what number. It says the numbers are revealed and the winner is awarded his money, but with no communication possible, the only information you may be given is the numbers that were chosen and whether you won, lost or tied. With this in mind, explicit collusion becomes much more complicated, and even so, the ability to counter any collusion strategy has been shown to be easily accomplished regardless. With this information, or perhaps the lack thereof, I'm not sure what strategy I would take. I still believe the even trade-off strategy I posted a while back makes the most sense, but if this puzzle still remains unsolved, I have to believe there to be a "winning" strategy, even if it is marginally beneficial with decreasing margins of profit proportional to the length of the game (i.e. you win a few hands, but break even for the remainder of the game).
Chuck
Daedalian Member

 Posted: Thu Jan 04, 2007 10:18 pm    Post subject: 79 If I start playing a strategy that one other player can take advantage of at the expense of the third player, I don't really care which of them cooperates with me. The player who cooperates with me doesn't need to know which of the other two players he's joining. I don't think it matters much that we know who chose which number in each round. I suspect that the intended solution is the 1/(2^n) strategy. The puzzle doesn't actually say that we remember past plays. If we don't remember, then the 1/(2^n) strategy is good.
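The memoryless 1/(2^n) strategy mentioned here is also easy to simulate. This is a sketch of mine (helper names are made up, and the payoff convention is the thread's: draws return everyone's stake). Sampling n with probability 2^-n amounts to flipping a fair coin until the first heads; with all three players using it, symmetry says each player's long-run net per round should hover near zero.

```python
import random

def sample_pick(rng):
    # Draw n with P(n) = 2**-n: count fair coin flips until the first heads.
    n = 1
    while rng.random() < 0.5:
        n += 1
    return n

def lowest_unique_winner(picks):
    # Index of the player holding the lowest unique number, or None on a draw.
    unique = [p for p in picks if picks.count(p) == 1]
    return picks.index(min(unique)) if unique else None

rng = random.Random(0)       # fixed seed for a repeatable run
rounds = 100_000
net = [0, 0, 0]
for _ in range(rounds):
    picks = [sample_pick(rng) for _ in range(3)]
    w = lowest_unique_winner(picks)
    if w is not None:        # on a draw everyone's stake is returned
        for i in range(3):
            net[i] += 2 if i == w else -1

averages = [x / rounds for x in net]
# by symmetry each average should be close to $0 per round
```

The totals sum to zero every round (the pot just changes hands), so no player can be exploited in expectation, which is the appeal of the strategy when past plays are forgotten.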
Arkive
Icarian Member

Posted: Fri Jan 05, 2007 1:20 am    Post subject: 80

 Chuck wrote: I suspect that the intended solution is the 1/(2^n) strategy. The puzzle doesn't actually say that we remember past plays. If we don't remember then the 1/(2^n) strategy is good.

If we (and our opponents) are meant to be intuitive, on what other information would we base a long-term strategy than the past plays made by ourselves and our opponents? Are you implying that when "the numbers are revealed", as the puzzle states, we do not retain that information for future rounds?