Thursday, May 16, 2019

reading review - deep thinking

Deep Thinking by Garry Kasparov (July 2018)

Russian chess champion Garry Kasparov explores the history, progress, and implications of machine intelligence in his 2017 book. Deep Thinking covers a wide range of topics and I’ll cover some of Kasparov’s varied insights in a series of upcoming posts. For today, however, I want to cover the section of the book he devoted to the chess matches he played during the 1990s against a number of chess computers.

The most famous of these matches, of course, was against IBM’s Deep Blue. The machine progressed quickly during the decade from being a technically impressive but unthreatening imitation of an elite chess player to a true challenger to the world’s best player. The machine first broke through in 1996 by beating Kasparov in the opening game of their six-game match, though Kasparov recovered to win the match itself. A year later, it made history by defeating Kasparov in the 1997 rematch.

The progress made by the program illustrates many of the important principles that govern machine progress. One example is Moravec’s paradox: machines are good at what humans do poorly (calculation) while they are bad at what humans do well (intuition and pattern recognition). This suggests that a machine in competition with a human will win as long as it can fully exploit its superior processing power. In the chess context, a machine can make up for its shortcomings in strategic planning and pattern recognition by analyzing positions at a depth far beyond the ability of a human player. Early programmers of chess machines struggled with this trade-off because they focused too much on teaching the machine to ‘think’ like a human. Over time, as processing power dramatically increased, the focus of chess machines turned to using brute force to analyze as many positions as possible instead of trying to ‘think’ through a position.

To put the point another way, machines have historically failed to ‘think’ like humans because the way humans think is not understood well enough to turn into a computer program. The solution for designers has always been to prioritize results over method. In the context of chess, the breakthrough came when programs started to focus on calculation rather than thinking, as the sketch below illustrates.
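The book stays away from code, but the brute-force calculation it describes is, at its core, minimax search with alpha-beta pruning: examine every line of play to a fixed depth, score the resulting positions, and abandon any line that is already worse than an alternative found elsewhere. Here is a minimal sketch; the Position interface (legal_moves, play, evaluate) is a hypothetical stand-in for a real engine’s move generator and evaluation function, and Deep Blue’s actual search was vastly more elaborate.

```python
# A minimal sketch of brute-force game search: minimax with alpha-beta
# pruning. Position, legal_moves, play, and evaluate are hypothetical
# stand-ins for a real chess engine's internals.

def alphabeta(position, depth, alpha=float("-inf"), beta=float("inf"),
              maximizing=True):
    """Return the best score reachable from `position` within `depth` plies."""
    moves = position.legal_moves()
    if depth == 0 or not moves:
        # At the search horizon (or a terminal position), fall back on a
        # static evaluation -- no 'thinking', just a number.
        return position.evaluate()

    if maximizing:
        best = float("-inf")
        for move in moves:
            best = max(best, alphabeta(position.play(move), depth - 1,
                                       alpha, beta, maximizing=False))
            alpha = max(alpha, best)
            if alpha >= beta:  # the opponent already has a better option: prune
                break
        return best
    else:
        best = float("inf")
        for move in moves:
            best = min(best, alphabeta(position.play(move), depth - 1,
                                       alpha, beta, maximizing=True))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best
```

The pruning is what buys depth: the machine never ‘thinks’ about why a line is bad, it simply stops counting the moment the numbers say the opponent would avoid it.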

It helped computers that chess is simply not complex enough to require a machine to ‘think’ – brute force methods were enough to determine the best play. There is no better example of the power of extensive tactical search than positions with low margins for error. A human in these situations will almost always make an error at some point because he or she cannot rely on intuition, principles, or experience to guide the thought process. A human might also feel pressure, nerves, or emotions that lead to bad decisions. A computer, on the other hand, is unaffected by feelings and navigates such situations with the same process it uses for ordinary positions.

A human, however, is often better equipped to navigate novel positions. In these moments, understanding the basic principles of the game and playing the board on intuition works better than brute-force calculation. This reality is reflected not just on the chessboard but in any domain where machines are prevalent. In short, automated equipment simply isn’t very flexible, and humans in competition with machines can win if they introduce uncertain elements onto the playing field whenever possible.

One up: I liked the observation that airplanes don’t flap their wings to fly. It makes the point that machine success isn’t dependent on following the blueprints set by living things and I suspect this lesson is likely to hold true even as computers continue to expand and build on the early foundations of artificial intelligence.

One down: One common form of machine learning involves feeding a computer endless examples of a desired behavior. Over time, the computer learns what a correct result looks like and tries to mimic those results in its own decisions. In chess, this can lead to weird outcomes – a computer might conclude that queen sacrifices are a good idea in general, for example, when in reality a player only sacrifices a queen when he or she has an exceedingly good reason to do so.

This logic must be applied carefully, because transferring the lesson naively to other situations can lead to very poor ‘automated’ decision making. A computer learning to drive in this manner, for example, might observe driver behavior and conclude that a green light means go, a red light means stop, and a yellow light means… accelerate through the intersection!
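To make the failure mode concrete, here is a deliberately naive sketch (the observation log is invented for the illustration). A learner that simply copies the most frequent action it sees for each signal will faithfully learn to accelerate on yellow, because that is what the drivers it watched actually did:

```python
# A toy illustration of imitation gone wrong: 'learn' a driving policy by
# copying the most frequent action observed for each signal. The
# observation log below is invented for the example.
from collections import Counter, defaultdict

# (signal, action) pairs a learner might record. Drivers often speed up
# at yellow lights to beat the red, so that habit dominates the data.
observations = [
    ("green", "go"), ("green", "go"), ("green", "go"),
    ("red", "stop"), ("red", "stop"),
    ("yellow", "accelerate"), ("yellow", "accelerate"), ("yellow", "stop"),
]

counts = defaultdict(Counter)
for signal, action in observations:
    counts[signal][action] += 1

# The 'policy' the machine learned: the most common action per signal.
policy = {signal: actions.most_common(1)[0][0]
          for signal, actions in counts.items()}

print(policy)  # {'green': 'go', 'red': 'stop', 'yellow': 'accelerate'}
```

The machine copied what drivers do, not what the law (or safety) intends – exactly the queen-sacrifice problem wearing a different hat.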

Just saying: I thought no point summarized the chess portion of this book better than the observation that computer programs use knights better than their human opponents do. The reason is twofold. First, humans struggle to visualize the crooked movement of a knight with the same ease as the linear movements of the other pieces.

I liked the second reason a little better – computers do not struggle with this visualization because computers don’t visualize anything. I think this point is easily lost whenever people try to think about how computers make calculations. It isn’t really a question of how the computer ‘visualizes’ the problem, because visualization is a substitute for rigorous calculation – and a computer doesn’t need a substitute when it is almost always capable of completing the full series of calculations.
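For what it’s worth, the knight’s ‘crooked’ move really is trivial for a machine: it is just eight fixed coordinate offsets, filtered to stay on the board. A small sketch (the function name and board representation are mine, not the book’s):

```python
# The knight's move, as a computer 'sees' it: eight fixed coordinate
# offsets, kept only when they land on the 8x8 board. No visualization.

KNIGHT_OFFSETS = [(1, 2), (2, 1), (2, -1), (1, -2),
                  (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_moves(file, rank):
    """Return every square a knight on (file, rank) can reach."""
    return [(file + df, rank + dr)
            for df, dr in KNIGHT_OFFSETS
            if 0 <= file + df < 8 and 0 <= rank + dr < 8]

print(knight_moves(0, 0))  # a knight in the corner has only two moves
```

No mental picture, no struggle – just arithmetic.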