Poker Bluffing Frequency
One note on terminology before we start: a bluffing frequency of, say, 40 percent means that 40 percent of the time we are betting, our hand is a bluff. It does not mean that when we are dealt a bluffing hand, we decide to bet 40 percent of the time. That said, you shouldn’t treat the numbers below as a rigid prescription. You need to be bluffing often enough that your bluffs don’t get through 100% of the time – in other words, push the boundary as far as you can, so that you’re picking up as much free money as possible.
Bluffing – for the “layman”, it’s probably one of the first words that pop into their heads when they hear about poker. And in a certain way, they’re right to make that association.
See, bluffing is an essential tool to overrealise your equity in poker.
“Overrealising equity” is just a posh way of saying “winning the pot more often with your hand than it would if every hand went to showdown”.
Let’s say you have KQ off-suit in middle position and you open, and a good reg calls in the cutoff. If we give them a reasonable calling range, you have a little over 52% equity going to the flop.
If you manage to win the pot more often than that, you have successfully overrealised your equity.
It is incredibly important to be able to do that, since that’s what makes or breaks a good poker player.
If you aren’t able to do that, then you are at the mercy of the deck. At that point, given that other people are going to bluff you from time to time, are you really better off than just playing a slot machine or buying a scratch-off lottery ticket?
Two Ways of Achieving Equity Overrealisation
One is called “the blue line”, or “the showdown line”: this is when you make your opponent put more money into the pot with a worse hand by value betting intelligently. The naming of the lines comes from HUD reports.
The other is called “the red line” or the “no showdown line”: this is when you make your opponent fold a better hand by bluffing well.
This time, we’ll be examining bluffing and, more specifically, how often it should be done.
First off, when discussing bluffing, we need to be familiar with another poker term: fold equity.
Fold Equity
Fold equity is the percentage of time your opponent is going to fold their hand to a bet or raise. Unlike “regular” equity, it can’t be calculated using combinatorics; it’s something very specific to poker strategy.
You need to take your opponent’s whole range and see what hands are there that they would be willing to let go of.
Let’s take another example: you open KQ suited from the middle position, and a good reg calls in the cutoff.
The flop comes 76A, two hearts. You’re out of position, the action is on you – however, you are the pre-flop aggressor. Should you take a stab? Now, you have to think about how those cards connect with the reg’s range.
Part of the reg’s possible hands are Aces with middling kickers and suited Aces with low kickers – it’s unlikely they’re going to fold either of those. If they made a set of 6’s or 7’s, evidently there’s no chance they’re folding either.
Flush draws are also going to call any reasonably sized bet, and if they happen to have 98 suited, so is that open-ended straight draw.
On the other hand, if they just have two broadway cards (KJ, QJ, QT), a suited connector that missed (T9 suited that isn’t hearts only has a gutshot), or a pocket pair that didn’t make a set and is now afraid of the Ace on the board, your bet is likely going to get through.
Keep in mind though that you are blocking some of the broadway card combos since you are holding KQ yourself.
You need to evaluate what percentage of their range the above-mentioned folding range makes up in order to determine, through EV calculations, whether or not your bluff is profitable in the long run.
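If you want to sanity-check a spot like this away from the table, the expected value of a bluff is straightforward to compute once you have estimated your fold equity. Below is a minimal Python sketch of that calculation; the function name and the example numbers (a $100 pot, a $50 bet and a guessed 30–40% fold frequency) are my own illustrative assumptions, not solver output.

```python
def bluff_ev(pot, bet, fold_freq, equity_when_called=0.0):
    """EV of betting `bet` into `pot` given an estimated fold frequency.

    equity_when_called: chance we still win if called (0 for a pure
    river bluff, higher on earlier streets when we hold a draw).
    """
    ev_when_folded = fold_freq * pot
    ev_when_called = (1 - fold_freq) * (
        equity_when_called * (pot + bet) - (1 - equity_when_called) * bet
    )
    return ev_when_folded + ev_when_called

# Guessing the reg folds 40% of their range to a half-pot stab:
print(bluff_ev(pot=100, bet=50, fold_freq=0.40))   # 10.0 -> profitable
print(bluff_ev(pot=100, bet=50, fold_freq=0.30))   # -5.0 -> losing bluff
# Break-even fold frequency for a pure bluff: bet / (pot + bet) = 1/3 here.
```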
However, for this piece, we want to focus on general truths about bluffing frequencies rather than specific hands.
These few paragraphs were just to cover the basic theory behind bluffing before diving into how often you should be making these moves.
The right bluffing frequency varies from street to street.
As a rule of thumb, you can bluff more frequently on earlier streets, because your hand can still improve: you have extra equity on top of your fold equity. The exception is when you’re drawing dead, which is impossible pre-flop – the only street where that is the case. Even 7-2 off-suit has a 10–12% chance against Aces, depending on the suits.
This is noteworthy because the correct value-bet-to-bluff ratios are calculated from pot odds.
The assumption is that whenever you bluff, you’ll win the pot 0% of the time if you get called, and whenever you value bet, you’ll win the pot 100% of the time if you get called.
Evidently, it’s not as clear-cut in actual poker games. The value-betting half of that assumption, in particular, is often far from reality: every poker player value bets with a worse hand from time to time. But this is a GTO calculation, not a real game of poker.
On earlier streets, even when your bluff gets called, you’ll still improve to the best hand some percentage of the time; that is why, as mentioned earlier, you can profitably bluff more often early in the hand.
So, the numbers below assume 0% raw equity – this usually means the river when you have no made hand.
| Bet size | Bluffs | Value bets | Ratio |
| --- | --- | --- | --- |
| 1/4 pot | ~17% | ~83% | 5 value bets for each bluff |
| 1/3 pot | 20% | 80% | 4 value bets for each bluff |
| 1/2 pot | 25% | 75% | 3 value bets for each bluff |
| Full pot | 33.3% | 66.7% | 2 value bets for each bluff |
| 2× pot | 40% | 60% | 3 value bets for every 2 bluffs |
All of these are based on the pot odds you’re giving your opponent with your bet. The idea is that if you bluff this often in proportion to your value bets, they have no edge against you when deciding between a call and a fold.
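Those pot odds boil down to one formula: a bet of size b into a pot of size p should contain b / (p + 2b) bluffs to make the caller indifferent. The short Python sketch below simply reproduces the table above from that formula; the function name is my own.

```python
def unexploitable_bluff_share(bet, pot=1.0):
    """Fraction of a betting range that should be bluffs for a given bet size."""
    return bet / (pot + 2 * bet)

for frac in (0.25, 1 / 3, 0.5, 1.0, 2.0):
    share = unexploitable_bluff_share(frac)
    print(f"{frac:.2f} pot -> {share:.1%} bluffs, {1 - share:.1%} value bets")

# 0.25 pot -> 16.7% bluffs, 83.3% value bets
# 0.33 pot -> 20.0% bluffs, 80.0% value bets
# 0.50 pot -> 25.0% bluffs, 75.0% value bets
# 1.00 pot -> 33.3% bluffs, 66.7% value bets
# 2.00 pot -> 40.0% bluffs, 60.0% value bets
```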
Let’s say the pot is $100 and you bet $50 – a half-pot bet. If you bluff 50% of the time, your opponent can call every time and make a profit.
Here’s how it’s calculated: they’re paying $50 to win a pot that is going to be $150 (not including their own $50 call, which shouldn’t be counted as profit); and – once again assuming that every time you value bet and get called they lose – they’re going to win that pot 50% of the time. That means that in the long run they make (0.5 × $150) − (0.5 × $50) = $50 by calling you.
However, if you bluff with the correct frequency, 25% of the time, your opponent still has to put in $50 to win what will be a $200 pot, but they’ll only win 25% of the time. Using the same type of expected value calculation: (0.25 × $150) − (0.75 × $50) = $0. They’re not winning or losing; they’re exactly breaking even.
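Here is the same arithmetic as a quick sketch you can rerun with different numbers; the function name is mine and the $100 pot / $50 bet are the example figures from above.

```python
def caller_ev(pot, bet, bluff_freq):
    """EV of calling `bet` into `pot`: win pot + bet against a bluff,
    lose the call against a value bet."""
    return bluff_freq * (pot + bet) - (1 - bluff_freq) * bet

print(caller_ev(100, 50, 0.50))   # 50.0  -> you bluff too much, calling prints money
print(caller_ev(100, 50, 0.25))   # 0.0   -> the unexploitable frequency, break even
print(caller_ev(100, 50, 0.10))   # -30.0 -> you bluff too little, calling loses
```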
This means that your bluffing frequency gives your opponent no useful information when they’re deciding whether to call – and as a good, balanced player, that is what you should strive for.
Also, this kind of thinking assumes the other player is highly competent. You can’t expect casual online micro-stakes players who are facing you for the first time to really analyse your river half-pot-bet tendencies.
For lower stakes players, this is more of a guideline to have some bluffs in their range, but not too many.
Please note, though, that these calculations alone do not tell you how often you should be betting overall.
These are just ratios, and ratios stay the same whether you bet 100% of your range on the river or only 10% of it. The key is that if you value bet often, you need to keep your bluffs proportionate.
Evidently, if you’re not an aggressive player and value bet less, you need to bluff less often too in order to remain unexploitable.
This post was written by Marton Magyar, the Beating Betting poker strategy contributor. Marton has also written for sites like HighStakesDB and PokerTube.
Feature image source: Flickr
If you've never played poker, you probably think that it's a game for degenerate gamblers and cigar-chomping hustlers in cowboy hats. That's certainly what I used to think. It turns out that poker is actually a very complicated game indeed.
The early forerunners of poker originated in Europe in the middle ages, including brag in England and pochen (meaning to bluff) in Germany. The French game of poque spread to America in the late 18th century, where it developed into modern poker. The derivation of the word poker suggests that it's mainly a game about bluffing, which is perhaps an indicator that it's a game of psychology and cunning rather than a sophisticated and challenging pastime. However, as I will show you below, bluffing is not a low and tricky manoeuvre, but a mathematically essential part of the game.
I'm going to teach you how to play one of the simplest possible versions of poker. Instead of using a full deck, we're going to use just three cards — A♠, K♠ and Q♠ — and we're only going to allow two players to play — let's call them John and Tom. Each player puts £1 on the table, and is then randomly dealt one of the three cards. Tom can now either bet £1 or check. If he checks, each player shows his card, and the player with the best one wins the £2 on the table (A♠ > K♠ > Q♠). If Tom bets, John can either fold, in which case Tom wins the £2 on the table and neither player has to show their card, or he can call by matching Tom's bet of £1, after which the cards are shown and the best card wins the £4 on the table.
- If Tom has A♠, he will bet, hoping that John calls. If John has A♠ he will call a bet from Tom. A player with A♠ knows that he has the best hand.
- If John has Q♠, he will fold if Tom bets — he knows that he has the worst hand.
- If Tom has K♠, he will check. If he bets, John will fold with Q♠ and call with A♠ — he cannot expect John to call with a worse hand or fold a better hand.
- What should Tom do with Q♠? If he bets, he may be able to get John to fold K♠, and he can't win by checking — he knows that he has the worst hand. A bet from Tom with Q♠ is a bluff.
- What should John do with K♠ if Tom bets? He'd like to fold if Tom has A♠, but if Tom is bluffing with Q♠, he'd like to call.
(Anybody who has played poker seriously should be able to recognise a bluff, a value bet, a hand with showdown value and a bluff catcher in this discussion.)
In the absence of any frantic ear tugging or eyelid twitching from either player, John and Tom need to think mathematically about this game in order to work out sensible strategies when they play repeatedly.
If Tom always bluffs with Q♠, John may notice that he is facing lots of bets, and decide to call whenever he has K♠. However, Tom may then realise that John is calling his bluffs every time and stop bluffing, so that John will give Tom an extra £1 whenever Tom has A♠, but Tom will save his money when he has Q♠. A rational response is for John to stop calling with K♠, but then Tom can start bluffing again, and round and round the cycle goes.
Unexploitable bluffing
Whenever a cycle like this arises in a game, the mathematician John Nash showed that there exists what's known as an unexploitable mixed strategy. Tom should bluff some fraction of the time that he has Q♠, whilst John should call to catch a bluff some fraction of the time that he has K♠. The strategy is called unexploitable because neither player can improve their win rate by choosing a different strategy, so neither has an incentive to deviate from it. Below I will explain the mathematics that leads us to conclude that the unexploitable mixed strategy for Tom is to bluff 1/3 of the time that he is dealt Q♠ and for John to call 1/3 of the time that he is dealt K♠. If either player uses this strategy, Tom makes a profit in the long run of about 5.5p per hand. The game is skewed in Tom's favour since he has the option of checking and showing down K♠, an option that John does not have. The game can be made fair by forcing John and Tom to alternate roles from hand to hand.
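If you would rather check this result numerically than follow the algebra below, a quick Monte Carlo simulation of the game as described does the job. The sketch below is my own and assumes exactly the rules given above; the return value is Tom’s profit per hand in pounds.

```python
import random

def play_hand(bluff_freq=1/3, call_freq=1/3):
    """One hand of the AKQ game; returns Tom's profit in pounds."""
    john, tom = random.sample("AKQ", 2)
    rank = {"A": 3, "K": 2, "Q": 1}
    tom_wins_showdown = rank[tom] > rank[john]

    # Tom always bets A, always checks K, and bluffs Q at bluff_freq.
    tom_bets = tom == "A" or (tom == "Q" and random.random() < bluff_freq)
    if not tom_bets:
        return 1 if tom_wins_showdown else -1    # check, show down the £2 pot

    # Facing a bet, John always calls with A, always folds Q, calls K at call_freq.
    john_calls = john == "A" or (john == "K" and random.random() < call_freq)
    if not john_calls:
        return 1                                 # Tom takes the pot uncontested
    return 2 if tom_wins_showdown else -2        # called: £4 pot, £2 from each player

hands = 1_000_000
print(sum(play_hand() for _ in range(hands)) / hands)   # ~0.055, i.e. about 5.5p
```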
We can get some idea of the complexity of this game by looking at its decision tree, shown below. The tree shows all the possible paths the game can take. (If you don't want to see all the maths, skip ahead.)
The decision tree for the AKQ game.
The open circles in the decision tree represent points where different random events can happen. We'll call these random nodes. In the AKQ game, the random nodes represent John and Tom's cards: the first random node at the top represents John receiving one of the three cards and the second row of random nodes represents Tom receiving one of the remaining two cards. The probability of each card being dealt is given along the arrows. The solid nodes are decision nodes, where one of the players must choose an action. The only decisions are whether Tom bluffs with Q♠ (at frequency b) and whether John bluffcatches (calls) with K♠ (at frequency c).
Nobel laureate John Forbes Nash made fundamental contributions to game theory.
The additional amount that John wins at the end of the game on top of what he would have won had they shown their cards straight away (his ex-showdown winnings) is given at the bottom of each leaf of the decision tree. For example, if John has A and Tom has Q and decides to bluff (the path shown in red in the tree), then John will catch the bluff and win the £4 on the table, making a profit of £2. Had they shown their cards straight away, then John would have won the £2 on the table, making a profit of £1. Therefore John's ex-showdown winnings are £1.
By multiplying together the numbers along the arrows of each possible path through the decision tree, we can calculate the probability that the game takes this path. So the probability that John has A and Tom has Q and bluffs is $\frac{1}{3} \times \frac{1}{2} \times b = \frac{b}{6}$. Multiplying this probability by the number at the bottom of the corresponding leaf gives that path’s contribution to the value of the game to John; in our example this is $\frac{b}{6} \times £1$. From these values a computer (or a human for a small decision tree like this) can work out an optimal strategy. The dotted lines connect decision nodes where the decision is the same — these are called information sets. As you can see, the AKQ tree has nine nodes, ten leaves and two information sets.
In order to work out the amount that John can expect to win on average (if the game were played many, many times) in addition to what he would win at showdown (his ex-showdown winnings, which we will call $E_J$), we have to add up the contributions from the individual paths through the decision tree. This gives

$$E_J = \frac{1}{6}\Bigl(b - c + b\bigl(c - 2(1-c)\bigr)\Bigr) = \frac{1}{6}\bigl(3bc - b - c\bigr).$$
The amount that Tom can expect to win, $E_T$, is $-E_J$, since John’s loss is Tom’s gain, so Tom wants to make $E_J$ as small as possible.
If John calls with a fraction of his Kings greater than $c = \frac{1}{3}$, so that $3c - 1$ is positive, then

$$E_J = \frac{1}{6}\bigl(b(3c - 1) - c\bigr)$$

is an increasing function of $b$, and the minimum is achieved when $b = 0$, so Tom can maximally exploit him (minimise $E_J$) by never bluffing ($b = 0$). Similarly you can work out that if John calls less often than this, so that $3c - 1$ is negative, Tom can maximally exploit John by always bluffing ($b = 1$). The only way that John can defend himself against exploitation is to use $c = \frac{1}{3}$. This is his unexploitable calling (bluff catching) frequency. His bluff catches with K♠ and value calls with A♠ are in the appropriate ratio to make Tom indifferent, i.e. however often he bluffs, his win rate does not change.
We can also rearrange the expression for $E_J$ to get

$$E_J = \frac{1}{6}\bigl(c(3b - 1) - b\bigr).$$

A similar argument shows that Tom’s unexploitable bluffing frequency is $b = \frac{1}{3}$. John can maximally exploit any deviation from this strategy by either always calling with K♠ ($c = 1$), if Tom bluffs too often, or always folding with K♠ ($c = 0$), if Tom doesn’t bluff often enough.
If either player uses his unexploitable strategy we have

$$E_J = -\frac{1}{18},$$

so Tom wins on average £1/18 ≈ 5.5p per hand.
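You can also evaluate $E_J$ directly in a few lines of Python and confirm both the £1/18 figure and the exploitation arguments above; this little script is only an illustration of the formula just derived.

```python
def ev_john(b, c):
    """John's expected ex-showdown winnings: E_J = (3bc - b - c) / 6."""
    return (3 * b * c - b - c) / 6

# At b = c = 1/3 John loses 1/18 per hand (about 5.5p to Tom), and Tom
# gains nothing by bluffing more or less often:
print(ev_john(1/3, 1/3))   # -0.0555...
print(ev_john(0.0, 1/3))   # -0.0555...
print(ev_john(1.0, 1/3))   # -0.0555...

# If John starts calling too often (c = 1/2), Tom exploits him by never bluffing:
print(ev_john(0.0, 0.5))   # -0.0833..., John now loses more than 1/18
```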
Only the tip of the tree
A much more complex variant of poker is two player Fixed Limit Rhode Island HoldEm. In this game each player is dealt a card from a full deck, there are three rounds of betting, and a card is dealt on the table between each round. The players try to make the best three card hand. The decision tree for this game has about three billion nodes, and it's the largest game for which the unexploitable strategy has been calculated (see the further reading list below for more information).
The simplest poker variant that people actually play for money is two player Fixed Limit Texas HoldEm. Each player starts with two cards, there are four rounds of betting and a total of five cards are dealt on the table between each of these rounds. The decision tree for this game has about $10^{18}$ nodes (that's 1 with 18 zeros after it, or a billion billion). Although there are computer algorithms that can calculate a good approximation to the optimal strategy, the exact solution has yet to be found.
The most popular variant of poker nowadays is No Limit Texas HoldEm, in which the players can bet any amount, which adds another layer of complexity, but even this is simpler than Pot Limit Omaha, a game that is growing in popularity, in which each player is dealt four cards. In addition, poker is usually played by more than two players at a time, sometimes with as many as ten. Once you start to think about the strategic complexity of these larger games, it really becomes quite mind boggling ... and we've only talked about finding the unexploitable strategy. Real human players don't play an unexploitable strategy. They can be exploited for a greater profit by playing a non-optimal, exploitable counterstrategy. How can you find out what strategy a player is using and determine the appropriate counterstrategy? What if your opponent changes his strategy? Programming a computer to adjust its strategy to maximally exploit an opponent is much harder than using it to calculate the unexploitable strategy, and the best human players can outperform the best computer programs, something that is now impossible for even the very best chess players. Poker is the king of games, and I've only been able to scratch the surface of its mathematical structure in this short article.
Further reading
- The Education of a Modern Poker Player by John Billingham, Emanuel Cinca and Thomas Tiroch, D&B Publishing, October 2013
- There is more interesting material on the website that goes with the above book
- Cowboys Full: The Story of Poker by James McManus, Souvenir Press, 2010
- Optimal Rhode Island Poker by Andrew Gilpin & Tuomas Sandholm, 20th National Conference on Artificial Intelligence, Pittsburgh, PA, 2005
- Andrew Gilpin's website has more information on Rhode Island HoldEm.
About the author
John Billingham is a Professor of Theoretical Mechanics at the University of Nottingham. When he's not solving partial differential equations, he likes to play poker, both live and online. You can read about his feeble attempts to learn how to play better poker, along with a more in-depth discussion of the mathematics of poker in The Education of a Modern Poker Player, a book that he wrote with the help of Thomas Tiroch and Emanuel Cinca, both of whom really know their way around a decision tree.