
Some background:
I played poker pretty much full time from 2007-2009. I played about 3 million hands, mostly cash games, plus over 10,000 tournaments, and did a great deal of analysis on my play. At the end of 2009, even though I was doing better than ever in poker, I felt like the game had become a job.

After doing research and speaking with a bunch of botters I knew it was possible to be fairly profitable. Already, many of the highest performing low-stakes SNG players on Sharkscope were bots. My long term goal was to build a self-improving botting system that could take on any poker game of any type.

Having spent such an immense amount of time analyzing the game and my play, I felt like utilizing all the information to write an AI would be the best use of that knowledge. I spent a good chunk of 2010 both writing an AI and setting up a botting system that would use that AI to play real money games online. I basically had my bots play until Poker Black Friday. In that time hundreds of thousands of SNGs and over 10 million hands were played.

So how does a bot work?
You must completely automate everything a human would do. Open the client, log in, pick the games using the filters and click through the results. Approve or close popups. Join games. Play games. Leave games. Close the client.
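
As a rough sketch of that outer loop (the client object and its methods here are hypothetical stand-ins, not the actual scripts):

import time

def run_session(client, game_filter, stop_time):
    # End-to-end automation of what a human would otherwise do by hand.
    client.open()
    client.log_in()
    client.apply_filters(game_filter)      # pick games via the lobby filters
    while time.time() < stop_time:
        client.dismiss_popups()            # approve or close any popups
        if client.open_table_count() < client.max_tables:
            client.join_next_game()
        client.act_on_tables()             # play: make a decision wherever it's our turn
        client.leave_finished_games()
        time.sleep(1)
    client.close()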

Botting undetected:
Unfortunately, most poker sites' terms of service disallowed the use of bots, with confiscation of funds and account closure as penalties for violation. The biggest and best sites to play on had these terms, so the first order of business was figuring out how to play without getting caught.

There were a number of mechanisms sites employed to enforce these terms including:
Resizing tables
Captchas
Different images for the same cards (with differences imperceptible to humans)
Pop-up boxes with a human asking a simple question
Scanning running software
Tracking mouse clicks
Checks for non-human play schedules

Also, when PokerTableRatings started getting popular and allowed the purchase of hand histories, players themselves were finding and reporting bots by auditing play stats.

After looking at what tools were currently available, I ended up choosing a piece of open source software called OpenHoldem and forking it for my own purposes. OpenHoldem had a tool called a TableMap that allowed you to capture images and text from a poker table and translate them into variables that could be used for the logic. Since this software couldn't be made visible to the poker clients being used, I ran the poker clients in a headless virtual machine and output the display to a separate computer. Within the virtual machine I used software to place tables at specific resolutions and locations on the screen. A friend helped me write a program that turned the graphic windows from the virtual machine's output into their own separate windows that OpenHoldem could lock onto and use the appropriate table map for. Since only the graphic display was being output from the virtual machine, there wasn't even a remote desktop client such as VNC or TeamViewer running inside it.

So far this solves poker clients detecting running software and the resizing of tables. For the images being slightly different, I just had to make very robust table map sets that could account for a certain degree of error and utilize fuzzy fonts. For pop-ups and captchas, I had alerts fire when there was a certain degree of deviation, and I could manually take over for a minute to correct the issue. For the first few months I was always within range of a computer that would give an alert so I could respond to an anomaly. After an account had successfully solved a captcha, the risk was lowered and I could feel safe going out to dinner or taking a nap without having to actively monitor play.
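
A minimal sketch of the scraping side of that, in Python with Pillow: compare a captured table region against reference images, tolerate a small amount of pixel error, and alert a human when nothing matches well enough. The tolerance values and helper names are assumptions for illustration; the real system used OpenHoldem table maps rather than this code.

from PIL import ImageChops

MATCH_TOLERANCE = 0.05   # max fraction of differing pixels accepted (assumed value)

def alert_human(message):
    # Stand-in for the real alert that paged a nearby computer.
    print("ALERT:", message)

def region_matches(captured, template, tolerance=MATCH_TOLERANCE):
    # Compare a captured screen region against a reference image, allowing
    # a certain degree of pixel-level error (same-size images assumed).
    diff = ImageChops.difference(captured.convert("L"), template.convert("L"))
    pixels = list(diff.getdata())
    differing = sum(1 for p in pixels if p > 25)   # brightness threshold, assumed
    return differing / len(pixels) <= tolerance

def read_region(captured, templates):
    # templates: dict mapping a label (e.g. "As", "fold_button") to a reference image.
    for label, template in templates.items():
        if region_matches(captured, template):
            return label
    alert_human("unrecognized region -- possible popup or captcha")
    return None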

The same program that generated the graphic display windows for tables from the virtual machine would accept incoming clicks from OpenHoldem whenever an action needed to be taken, and then convey those clicks into the virtual machine. Since buttons occupied known regions, the click location within a button could be randomized according to certain distribution rules to emulate human clicking behavior.
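
The click placement can be as simple as sampling a point from a distribution centered on the button and clamped to its bounds. A sketch; the specific distribution parameters are assumptions, not the actual rules used:

import random

def human_like_click(x, y, width, height):
    # Sample a click point biased toward the center of the button, with Gaussian
    # jitter, clamped so it always lands a couple of pixels inside the region.
    cx = x + width / 2 + random.gauss(0, width / 6)
    cy = y + height / 2 + random.gauss(0, height / 6)
    cx = min(max(cx, x + 2), x + width - 2)
    cy = min(max(cy, y + 2), y + height - 2)
    return int(cx), int(cy)

print(human_like_click(400, 620, 120, 40))   # e.g. (463, 641)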

To keep the bots from being caught for measurably similar behavior, I gave them different personalities. Logic was slightly tweaked for more or less aggressive play in certain areas or overall. For example, I could slightly increase the aggression factor for a player and across every possible scenario they would take the slightly more aggressive approach. Or I could pick just a few stats, such as call preflop raise and 3bet post-flop, and tweak them by a percentage. In many cases this made the bot play less optimally, but different enough to be unique. If a bot unexpectedly performed better in certain areas, I could figure out why and use that for future builds.
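
Conceptually, a personality was just a small set of per-stat multipliers layered on top of the shared baseline logic. A simplified sketch with made-up profile values:

PERSONALITIES = {
    # Hypothetical profiles: each value nudges a baseline stat up or down slightly.
    "bot_a": {"aggression": 1.05, "call_pfr": 0.97, "threebet_postflop": 1.02},
    "bot_b": {"aggression": 0.96, "call_pfr": 1.03, "threebet_postflop": 0.99},
}

def adjusted_stat(bot_name, stat_name, baseline):
    # Apply a bot's personal tweak to the baseline value the shared logic produced.
    return baseline * PERSONALITIES[bot_name].get(stat_name, 1.0)

print(adjusted_stat("bot_a", "aggression", 2.4))   # -> 2.52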

Running multiple accounts:
The most obvious advantage of a bot is that it can play endlessly at its most optimal settings. You couldn't have a single account play forever, because that would be impossible for a human to do and would be the most obvious red flag of botting. So creating multiple accounts across multiple sites was the best way to get the most out of the bots and cover multiple stakes. I set up different VPNs in the same general area as each account owner's claimed location.

At first I never had any of the bots on the same site and stake play at the same time, since I figured it would be sub-optimal to ever have them potentially play against each other. Later I realized that itself could be a flag, and I started semi-randomizing when they would activate and play.

More Automation:
I set up a scheduling system where each bot had its own schedule (e.g. Mon-Thu 11:00AM-7PM with 15% variance). When a bot was set to play, its appropriate VPN was turned on, the virtual machine would boot, and the clients would be opened and logged into using a separate script that was able to graphically interact with the virtual machine. Once everything was open, a program like TableNinja could be used to register for games that matched pre-defined settings. For sites without TableNinja compatibility, I had to write a set of scripts that would check the filters and scroll through the lobby to register for the appropriate games.
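
Turning a schedule entry like "Mon-Thu 11:00AM-7PM with 15% variance" into concrete start and stop times just means jittering both ends of the nominal window. A sketch of one way to do it:

import random
from datetime import datetime, timedelta

def todays_window(start_hour=11, end_hour=19, variance=0.15):
    # Jitter both ends of the nominal window by up to +/- variance of its length,
    # so no two days (or bots) start and stop at exactly the same times.
    base = datetime.now().replace(minute=0, second=0, microsecond=0)
    start = base.replace(hour=start_hour)
    end = base.replace(hour=end_hour)
    length = (end - start).total_seconds()
    start += timedelta(seconds=random.uniform(-variance, variance) * length)
    end += timedelta(seconds=random.uniform(-variance, variance) * length)
    return start, end

session_start, session_end = todays_window()
print(session_start, "->", session_end)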

A forked version of the TableMap system within OpenHoldem was designed to assist with picking games. Going down the list of games in the lobby, or opening up an individual game lobby, the players could be scraped and checked against the database of players and hands to figure out how juicy the game was. I never had this actually stop a bot from joining a game, but I was collecting the data with the plan of better game selection in the future.
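
The juiciness check boiled down to looking up each scraped player name and scoring the table from their stats. The schema and scoring metric below are made up for illustration, but the shape of the lookup is the idea:

import psycopg2

def table_juiciness(player_names):
    # Score a table by the average loss rate of the scraped opponents.
    # Unknown players simply don't contribute. Schema and metric are hypothetical.
    conn = psycopg2.connect(dbname="poker", user="bot")
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT COALESCE(AVG(loss_rate), 0) FROM player_stats WHERE name = ANY(%s)",
            (list(player_names),),
        )
        score = float(cur.fetchone()[0])
    conn.close()
    return score   # higher = softer table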

Stop losses were a pretty important feature that I built early on. I had regular iterations of the logic system, and sometimes there were flaws that made the bot unprofitable. It wasn't always obvious in the play itself, but over hundreds of thousands of games the losses would prove otherwise. Essentially, an account balance check was done at the beginning of play and then every time a new game was registered. If the balance was below the stop-loss limit, it would stop registering for new games.
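
The stop-loss check itself is simple: record the balance at the start of play, then refuse to register new games once the drop exceeds the limit. A sketch:

class StopLoss:
    # Balance is checked at the start of play and again before each new registration.
    def __init__(self, starting_balance, max_loss):
        self.starting_balance = starting_balance
        self.max_loss = max_loss

    def allows_registration(self, current_balance):
        return (self.starting_balance - current_balance) < self.max_loss

# Example: stop registering once the account is down 60 units from its starting 500.
guard = StopLoss(starting_balance=500.0, max_loss=60.0)
print(guard.allows_registration(460.0))   # True -- keep registering
print(guard.allows_registration(430.0))   # False -- stop registering new games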

Actual AI:
So the high level logic was something along the lines of:
Evaluate the current action, the current stacks, and the potential effects of all available actions, then pick the best one.

All knowledge of all the players from previous games would be taken into consideration by utilizing a database of hand histories. For a particular decision, the main variables influencing it were: the current bets and which players made them, the stack sizes of all remaining players to act, the odds of those players acting in which ways with what kinds of hands, ICM values, and position.

For a simple example, if it was folded to the bot on the button, it would first evaluate the EV of raising. It first checked the call preflop raise rates of the blinds and the overcall rate of the BB. Those stats were determined from any knowledge of those players in those positions at the current blind level. They give a pretty good idea of what kinds of hands those players would continue with, and from that, combined with the current hand and ICM equity values, the EV could be calculated. Next, the EV of limping and of folding would be evaluated in a similar fashion, and the highest EV score determined the action.
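
In code, the decision reduces to estimating an EV per candidate action and taking the maximum. The numbers below are placeholders for the range-vs-hand and ICM calculations described above:

def choose_action(ev_by_action):
    # ev_by_action: candidate action -> estimated EV (here, ICM equity).
    return max(ev_by_action, key=ev_by_action.get)

# Folded to the bot on the button: estimate each action's EV from the blinds'
# call-preflop-raise / overcall rates, hand equity, and ICM values, then pick.
action = choose_action({
    "raise": 0.0412,   # placeholder EV figures, not real output
    "limp":  0.0391,
    "fold":  0.0385,   # folding still retains the ICM equity of the current stack
})
print(action)   # -> "raise"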

I built formulas to quickly evaluate all the major stats that would be used for measurement: PFR, CPR, 3Bet, 4Bet, CBet, etc. I would do some statistical analysis to build a dataset that I thought accurately reflected the stat using the variables I wanted. Then I would use symbolic regression with those variables to generate a reasonable formula. This was time consuming and required a lot of computing power. An example formula for cpfr looked like this:

f(nplayersdealt, stackbbs, temppos) = (temppos)/(log((temppos)*((temppos)*(((stackbbs) + ((11.601799964904785)*((nplayersdealt)/((stackbbs) + (sin((((-5.0985198020935059)*(temppos)) - (nplayersdealt))/(nplayersdealt))))))) + (((mod((nplayersdealt) + (((nplayersdealt) + ((5.1521501541137695)*(temppos)))/(nplayersdealt)), -1.5626200437545776)) - (mod(nplayersdealt, log(nplayersdealt)))) - (nplayersdealt))))))

It took the number of players in the hand, the current stack size relative to the big blind, and the current position relative to the dealer.
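
The post doesn't name the tool that produced formulas like that one; as a sketch of the workflow, a genetic-programming library such as gplearn can evolve a comparable expression from an aggregated dataset (the dataset here is synthetic, purely for illustration):

import numpy as np
from gplearn.genetic import SymbolicRegressor

# Synthetic stand-in for the aggregated data: (nplayersdealt, stackbbs, temppos) -> cpfr.
rng = np.random.default_rng(0)
X = rng.uniform([2, 5, 0], [9, 100, 8], size=(500, 3))
y = X[:, 2] / np.log1p(X[:, 0] * X[:, 1])   # fake target; the real one came from hand histories

model = SymbolicRegressor(
    population_size=1000,
    generations=20,
    function_set=("add", "sub", "mul", "div", "log", "sin"),
    parsimony_coefficient=0.001,
    random_state=0,
)
model.fit(X, y)
print(model._program)   # the evolved expression, analogous to the cpfr formula above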

The formulas described above were used to figure out the baseline adjustments of a particular stat based on the most important general-knowledge factors. During actual gameplay, another set of functions ran to weight the stats based on the players' known data. That data was pulled from a PostgreSQL database shared across the bots, one per poker room. There was also a level of confidence associated with each stat, based on how much data I had for it for that particular player.
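
The weighting step can be thought of as shrinking the player's observed frequency toward the baseline in proportion to how much data backs it. A sketch; the pseudo-count constant is an assumption:

def weighted_stat(baseline, observed, sample_size, pseudo_count=30):
    # With little data on a player the result stays near the baseline formula's value;
    # with lots of data it converges to that player's observed frequency.
    confidence = sample_size / (sample_size + pseudo_count)
    return confidence * observed + (1 - confidence) * baseline

print(weighted_stat(baseline=0.22, observed=0.40, sample_size=10))   # ~0.265
print(weighted_stat(baseline=0.22, observed=0.40, sample_size=500))  # ~0.39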

A major issue with this approach was dealing with heteroscedasticity in the initial analysis. I had way more data on how people play AA than anything else, simply because it shows down more often since players are more likely to play it. It becomes increasingly difficult to find out how players play a hand like 22 in rare scenarios, even with millions of hands to analyze.

After a while of accumulating data, I felt that playing against an unknown was possibly where the system was most optimized. Not in the sense that it could play better against an unknown than against a known player, but relative to anyone else approaching an unknown. Using the data I could figure out the distribution of common play tactics of an unknown. One tricky part was that you couldn't simply average the distributions by their likelihood of occurring. Playing sub-optimally against one type of strategy could have a far more negative impact than playing sub-optimally against another type. So in some instances it's better to assume the player is going to be aggressive, even if it's actually more common for the player to be passive, because that assumption is less harmful against passive players than the passive assumption is against aggressive ones.
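
One way to make that concrete: score each candidate counter-strategy by its expected value over the distribution of opponent types, rather than simply playing the best response to the most likely type. Illustrative numbers only:

# EV of each counter-strategy (rows) against each opponent type (columns); made-up numbers.
payoffs = {
    "counter_aggressive": {"aggressive": +0.30, "passive": -0.05},
    "counter_passive":    {"aggressive": -0.60, "passive": +0.20},
}
type_probs = {"aggressive": 0.35, "passive": 0.65}   # unknowns are more often passive

def best_response(payoffs, type_probs):
    # Weight each strategy's payoff by how likely each opponent type is.
    expected = {
        strategy: sum(type_probs[t] * ev for t, ev in row.items())
        for strategy, row in payoffs.items()
    }
    return max(expected, key=expected.get)

print(best_response(payoffs, type_probs))
# -> counter_aggressive: misplaying against the aggressive type costs more than
#    misplaying against the passive type, so the rarer type still dominates the choice.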

Improvements were surprisingly slow and undoubtedly where my system needed the most work. After a bot was consistently beating a certain game, moving up in stakes usually didn't go well. It would take quite some time of break-even or even negative play before adjustments pushed it into the positive zone. I'd say it took well over half a million hands at each stake to get it where I wanted it to be. Using a great deal of the incoming revenue as a research budget for higher stakes meant there wasn't very much profit, but it kept progress up.

Conclusions:
It was a fun project. My original long-term goal was to build a self-improving NL cash game AI, and I did spend quite some time building out logic systems for cash games. I figured SNG bots would be much easier to make profitable, and once they were running I'd switch over to cash games. Many of my friends who were profitable at all different stakes and games collectively sent me millions of hands to help me out. In exchange, I could analyze their respective games and show them very specific instances where they could improve. Having the hole cards of many different profitable players helped immensely with finding core patterns that link to profitability.

To model some of the decision making, I put each player's hand histories into a separate database so that only the knowledge that player had on their opponents was available. I could then take a model built on something as simple as preflop actions from a particular position and compare it against another player's hand histories, to see whether they would take the same approach using the knowledge they uniquely had on their opponents.
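
A sketch of that partitioning idea, with a hypothetical per-player schema layout: each contributor's hand histories live in their own schema, so a query against it can only see what that player could have seen.

import psycopg2

def hands_visible_to(player):
    # Each contributing player's histories were loaded separately; here that is
    # modeled as one schema per player (the schema naming is hypothetical).
    conn = psycopg2.connect(dbname="poker", user="analysis")
    with conn, conn.cursor() as cur:
        cur.execute('SELECT * FROM "hh_{}".hands'.format(player))  # per-player schema
        rows = cur.fetchall()
    conn.close()
    return rows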

This project helped me learn a lot about programming, statistics, and higher-level mathematics. Poker AI specifically isn't something I would ever get into again. For the amount of work required, there are similar kinds of projects that can yield much higher gains, for example an automated financial trading system. I also can't recommend that anyone else get into poker botting beyond a hobbyist activity. The number of specialized areas I needed to branch into in order to piece everything together was much greater than anticipated, and it often required me to reach out to people more proficient in those areas. Honestly, I think it's something that a strong team should be assembled to pursue, and even then it's likely not worth it financially compared to other ventures.