Continuing the theme I started last month on chess and computer programming, I would like to take a look at modern predictive analytics and how it has its roots in the analytic routines that were created to devise chess stratagems.
The classic predictive analytics question, “what if?” was originally posed by chess players. What if I move my pawn up? What if I take an opponent’s bishop? These questions had to be algorithmically mimicked in a chess program in order to be analyzed. They are lexically and semantically similar to “what if I sell my product in this geo?” or “what if I bundle two products together?” which are the sort of questions asked in predictive analytics for businesses.
If you like and follow tennis, you’ll be familiar with the sort of predictive analytics provided by IBM SlamTracker in the Grand Slam tennis tournaments. By analyzing millions of data points they can pose questions such as “what if Djokovic played Nadal?” They analyze many factors such as the players’ histories, time of day, court surface, wind conditions, roof open or closed, and temperature in order to provide the percentage likelihood of certain outcomes. However, the usefulness of a statement such as “If Nadal wins 46% of the return serves, he has an 84% chance of winning the set” is debatable.
Unlike tennis, chess analytics is confined to a finite set of rules and behaviors that are relatively easy to encapsulate in programmatic algorithms. In a given position there is a finite set of possible moves (on average about 30). Each of those moves could draw any of another 30 or so responses from the opponent, and so on. Based on this, looking ahead n moves results in roughly 30^n possible outcomes (positions). The deeper the program thinks ahead, the more end positions (nodes) need to be evaluated. Simply looking four moves ahead results in 30^4 = 810,000 possible outcomes that need to be evaluated. As you can imagine, early programs were really challenged to evaluate board positions in a timely manner, given the computing power the task required. Thus many of the early programs attempted to reduce the number of end nodes analyzed by pruning the tree (if you will) of moves both horizontally and vertically.
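To make the growth concrete, here is a tiny sketch (my own illustration, not code from any chess engine) that tallies how many end positions a program would face at each search depth, assuming the average branching factor of 30 mentioned above:

```python
# Hypothetical illustration: game-tree growth with an average
# branching factor of 30 legal moves per position.
BRANCHING_FACTOR = 30

def positions_at_depth(moves_ahead: int) -> int:
    """End positions to evaluate when looking `moves_ahead` moves ahead."""
    return BRANCHING_FACTOR ** moves_ahead

for depth in range(1, 5):
    print(depth, positions_at_depth(depth))
# Depth 4 yields 30**4 = 810,000 positions, the figure quoted above.
```

The exponent, not the base, is what hurts: each extra move of lookahead multiplies the work by another factor of 30, which is exactly why pruning the tree matters so much.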
The following graphic is a tree-like rendering of the move sets that need to be analyzed. Obviously, only a part of the tree is shown for reasons of space. Each circle represents a move in the game, and each child circle represents a response to that particular move. It can easily be seen that analyzing these move sets lends itself to the kind of parallel processing that modern computers offer and that is used extensively in business analytics.
The circles tagged with X, Y and Z are outcomes, the result of the choices in the tree made above them. Picking a desirable outcome, and thus knowing the move that led to that outcome, is the purpose of the chess analytic engine (I will discuss what makes a “desirable outcome” in a future post). The challenge here is that the best possible outcome for the computer arises only when its adversary plays the absolute worst it could play. And since you can’t assume the adversary will play poorly, you have to parse the tree picking the best moves for both protagonists, arriving at the most favorable result the computer can actually force, which is not an easy task at all when you think about it.
There is a lot more that I would like to say about this tree-searching analysis, including how some faults in early chess program logic made the computer look like it was “pouting!” But I will save that for the next post. The important thing to realize is that chess programs analyze a very large number of positions in order to “predict” the impact of a move (choice), and this is very similar to what the predictive business analytics machines are doing today for large corporations.
Latest posts by Paul Pendle
- Understanding Russian dominance in competitive chess - April 21, 2015
- When computer chess programs sulked… - April 3, 2015
- Chess and predictive analytics: what would happen if…? - February 17, 2015