Over the past two decades, poker has gone from a niche hobby to a worldwide phenomenon that generates billions of dollars a year, and artificial intelligence (A.I.) has come to play a starring role in that story.
The numbers are staggering. By most industry estimates, online poker alone brings in billions of dollars in revenue every year, and the live game keeps growing too: the World Series of Poker now awards hundreds of millions of dollars in prize money annually.
But how did we get here? Why are professional players so willing to give up their time to play against computers? Let me explain.
Poker is not like other games. Think about a typical video game: every one leans on some mix of skill and luck. In Mario Kart, the best driver usually wins, but the item system injects enough randomness that even a struggling player can sneak a victory. In Mario Party, the dice rolls and minigames add so much luck that mastering the rules only takes you so far.
Poker is different again. Its rules are simple, but it is a game of imperfect information: you can't see your opponents' cards, and the deck adds chance to every hand. The best players in the world will tell you that no matter how much they practice, there is rarely a single provably "right" move in a given situation. Poker is a game of skill, but hidden information and luck mean that even a perfect decision can lose the hand, so no one can guarantee a win.
This is why it took us so long to understand that A.I. would revolutionize poker. Back in 1997, IBM's Deep Blue became the first computer to defeat a reigning world chess champion, Garry Kasparov, in a match. But while Deep Blue was amazing, its brute-force approach was tailored to chess, a game where both players can see everything on the board, and it offered little help with a game of hidden information like poker.
Then, in 2017, a Carnegie Mellon University team led by professor Tuomas Sandholm and his student Noam Brown unveiled a program called Libratus that beat four professional players at heads-up no-limit Texas Hold 'Em. It wasn't the first time a machine had made a serious run at poker. The University of Alberta's Computer Poker Research Group had been building strong poker programs since the 1990s, and by 2015 its program Cepheus had essentially solved the simpler game of heads-up limit Hold 'Em. But Libratus went further: running on a supercomputer, it computed strategies for the vastly larger no-limit game, evaluating enormous numbers of possible scenarios, steering toward the lines of play that scored best, and even patching holes in its own strategy overnight between sessions.
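To see what "analyzing possible scenarios and choosing the best one" means in miniature, here is a toy Python sketch of game-tree evaluation. The spot, the payoffs, and the probabilities below are invented for illustration; Libratus's real methods (game abstraction, equilibrium computation, real-time re-solving) are vastly more sophisticated.

```python
# Toy sketch of game-tree evaluation: score every branch of a small decision
# tree, then pick the action with the best expected payoff.

def expected_value(node):
    """A node is either a terminal payoff (a number) or a dict mapping
    outcome names to (probability, subtree) pairs."""
    if isinstance(node, (int, float)):
        return node
    return sum(p * expected_value(sub) for p, sub in node.values())

def best_action(actions):
    """Pick the action whose subtree has the highest expected payoff."""
    return max(actions, key=lambda a: expected_value(actions[a]))

# A made-up poker spot: folding loses 10 chips for sure; calling wins 100
# chips 40% of the time and loses 50 chips the other 60%.
actions = {
    "fold": -10,
    "call": {"win": (0.4, 100), "lose": (0.6, -50)},
}
print(best_action(actions))  # -> call  (EV of +10 beats folding's -10)
```

The hard part in real poker is not this recursion but the sheer size of the tree and the fact that you don't know which branch you are actually on.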
Soon after, the tech press got wind of these breakthroughs and began calling them the beginning of the end of humanity, predicting that within 20 years A.I. would take over everything. The reality was less dramatic. Each of these systems had been painstakingly engineered for a single game, and nobody quite knew how to build one that could teach itself to master a game as complex as poker on its own.
That question kept researchers busy for years, and a big part of the answer turned out to be reinforcement learning, a training method that A.I. pioneers such as Richard Sutton had been developing since the 1980s. Reinforcement learning is essentially trial and error at scale. Instead of telling a computer how to perform a task, you simply let it try things out and reward or punish it based on whether or not it succeeds.
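As a concrete miniature of that reward-and-punish loop, here is a toy Python sketch of an agent learning by trial and error which of two slot machines pays off more often. The machines and payout rates are made up for illustration; this shows the flavor of reinforcement learning, not any real system's code.

```python
# A minimal trial-and-error learner: the agent repeatedly picks one of two
# slot machines, receives a reward or nothing, and updates its estimate of
# each machine's value.

import random

random.seed(0)

true_payout = {"A": 0.3, "B": 0.7}   # hidden from the agent
estimate = {"A": 0.0, "B": 0.0}      # the agent's learned value estimates
pulls = {"A": 0, "B": 0}

for step in range(2000):
    # Explore a random machine 10% of the time; otherwise exploit the
    # machine that currently looks best.
    if random.random() < 0.1:
        arm = random.choice(["A", "B"])
    else:
        arm = max(estimate, key=estimate.get)
    reward = 1 if random.random() < true_payout[arm] else 0  # reward or "punish"
    pulls[arm] += 1
    estimate[arm] += (reward - estimate[arm]) / pulls[arm]   # running average

# After enough trials, the agent should identify B as the better machine,
# with its estimate close to the hidden payout rate.
print(max(estimate, key=estimate.get), round(estimate["B"], 2))
```

Scale that same loop up to games with astronomical numbers of situations, and swap the lookup table for a neural network, and you have the rough shape of the systems described below.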
The most famous demonstration of this idea came from DeepMind, the A.I. lab owned by Google. In 2016, its program AlphaGo, trained partly on records of human games, defeated Go world champion Lee Sedol. The next year, DeepMind went further with AlphaGo Zero, a version that started with no knowledge of the game beyond its rules and taught itself to play from scratch, purely by reinforcement learning against itself. Soon after, a generalized version called AlphaZero used the same technique to teach itself chess and shogi as well.
The result was a program that reached superhuman strength at chess after mere hours of self-play. Poker, with its hidden cards, required different techniques, but progress came quickly there too. In 2019, Noam Brown and Tuomas Sandholm's program Pluribus, built at Carnegie Mellon in collaboration with Facebook A.I., defeated a table of elite professionals at six-player no-limit Texas Hold 'Em, the first time an A.I. had beaten top humans in a multiplayer poker game. Poker bots have only gotten stronger since.
And while it seems like it should be easy for a machine to determine the best course of action in a poker hand, the reality is far more complicated. A bot like Carnegie Mellon's Pluribus can't simply enumerate every possible scenario; there are far too many. Instead, it builds a "blueprint" strategy by playing huge numbers of hands against copies of itself, using an algorithm called counterfactual regret minimization: after each hand, it measures how much better every alternative action would have done and shifts probability toward the actions it "regrets" not taking. During a live game, it sharpens that blueprint with real-time search over just the next few moves. Because it never has to evaluate every scenario one by one, it can weigh thousands of possibilities very quickly. This makes it incredibly powerful.
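To make the "learning from its own mistakes" idea concrete, here is a toy Python sketch of regret matching, the basic building block behind counterfactual regret minimization, applied to rock-paper-scissors. Everything here is simplified for illustration and is nothing like a production poker bot.

```python
# Toy regret matching in self-play on rock-paper-scissors. Each player tracks
# how much better every alternative action would have done than the action it
# actually took, then plays in proportion to that accumulated "regret". The
# average strategy over many rounds approaches the game's equilibrium (for
# rock-paper-scissors, the uniform mix).

import random

ACTIONS = ["rock", "paper", "scissors"]

def payoff(a, b):
    """+1 if a beats b, -1 if a loses to b, 0 on a tie."""
    wins = {("rock", "scissors"), ("paper", "rock"), ("scissors", "paper")}
    return 1 if (a, b) in wins else -1 if (b, a) in wins else 0

def strategy(regret):
    """Play in proportion to positive regret; uniform if there is none."""
    pos = [max(r, 0.0) for r in regret]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / 3] * 3

random.seed(1)
regret = [[0.0] * 3, [0.0] * 3]       # one regret table per player
strat_sum = [[0.0] * 3, [0.0] * 3]    # running sums -> average strategy

for _ in range(50000):
    strats = [strategy(regret[0]), strategy(regret[1])]
    moves = [random.choices(ACTIONS, weights=s)[0] for s in strats]
    for me, opp in ((0, 1), (1, 0)):
        got = payoff(moves[me], moves[opp])
        for i, alt in enumerate(ACTIONS):
            # Regret: what the alternative would have earned, minus what we got.
            regret[me][i] += payoff(alt, moves[opp]) - got
            strat_sum[me][i] += strats[me][i]

avg = [s / sum(strat_sum[0]) for s in strat_sum[0]]
print([round(p, 2) for p in avg])  # drifts toward the uniform 1/3 mix
```

Full counterfactual regret minimization applies this same update at every decision point in the game tree, weighted by the probability of reaching it, which is what lets it handle a game the size of no-limit Hold 'Em.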
So, the next time you sit down at the poker table, remember that machines are slowly taking over the world and it’s only a matter of time before they start making our lives miserable.