A Brief Review of the 2002 Deep Blue Paper

I recently summarised the Deep Blue paper by Campbell, Hoane, and Hsu (Artificial Intelligence, 2002) for a course that I'm doing. This review considers the project’s techniques and its results (largely as conveyed by the paper, with some additional commentary).

Background

Deep Blue was a computer chess system developed at IBM Research in the mid-1990s in an effort to realise a "world-class chess machine". Deep Blue defeated the then world chess champion Garry Kasparov in a 1997 exhibition match, and has since been retired to the Smithsonian Museum.

The team that developed Deep Blue had worked on successive generations of chess systems since the 1980s, with Deep Blue being the fifth generation. The broad approach evolved iteratively with each new generation, generally adding new methods and refining existing ones, as opposed to shifting to radically new approaches.

Deep Blue’s Implementation

Deep Blue’s success came from a number of factors, including:

  • The use of minimax search with alpha-beta pruning, iterative deepening, and limited quiescence search (see the first sketch after this list).

  • Selective non-uniform (uneven) tree development, using heuristics to determine which nodes are worth expanding.

  • A multi-factor evaluation function that combined approximately 8,000 game features (see the second sketch after this list).

  • Preset books and databases: a hand-crafted opening book of approximately 4,000 positions, an extended opening book with scored moves drawn from a database of 700,000 grandmaster games, and endgame databases of pre-evaluated positions with five or fewer pieces remaining on the board (plus selected six-piece endgames).

  • Computational speed and scale, with a 30-node IBM RS/6000 SP computer hosting 480 special-purpose single-chip chess processors.

  • Parallel search processing orchestrated by a hierarchical architecture of a master node, worker nodes, and chess processors, with 16 chess processors attached to each node.
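To make the first bullet concrete, here is a minimal Python sketch of minimax search (in negamax form) with alpha-beta pruning, iterative deepening, and a limited quiescence search. The State interface (legal_moves, apply, is_capture, evaluate) is a hypothetical placeholder for illustration; Deep Blue's actual search ran largely in special-purpose hardware.

```python
# Minimal sketch: negamax minimax with alpha-beta pruning, iterative
# deepening, and limited quiescence. The State interface is hypothetical.

INF = float("inf")

def quiescence(state, alpha, beta):
    # Past the nominal depth, search only 'noisy' moves (captures) so the
    # evaluation is taken at a quiet position rather than mid-exchange.
    stand_pat = state.evaluate()  # score from the side to move's perspective
    if stand_pat >= beta:
        return beta
    alpha = max(alpha, stand_pat)
    for move in state.legal_moves():
        if not state.is_capture(move):
            continue
        score = -quiescence(state.apply(move), -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

def alpha_beta(state, depth, alpha, beta):
    if depth == 0:
        return quiescence(state, alpha, beta)
    moves = state.legal_moves()
    if not moves:
        return state.evaluate()  # mate/stalemate; a real engine special-cases this
    for move in moves:
        score = -alpha_beta(state.apply(move), depth - 1, -beta, -alpha)
        if score >= beta:
            return beta  # cutoff: the opponent will never allow this line
        alpha = max(alpha, score)
    return alpha

def iterative_deepening(state, max_depth):
    # Re-search at increasing depths; in a real engine, earlier iterations
    # seed move ordering and allow a clean stop when time runs out.
    best_move = None
    for depth in range(1, max_depth + 1):
        best_move = max(
            state.legal_moves(),
            key=lambda m: -alpha_beta(state.apply(m), depth - 1, -INF, INF),
        )
    return best_move
```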
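Similarly, the evaluation function in the third bullet can be pictured as a weighted sum of game features. The feature names, accessor methods, and weights below are invented for illustration; Deep Blue's real function combined roughly 8,000 features, many computed directly in hardware, with tunable weights.

```python
# Illustrative sketch of a multi-factor evaluation as a weighted feature sum.
# Feature names, accessors, and weights are inventions for illustration only.

FEATURE_WEIGHTS = {
    "material_balance": 1.0,
    "king_safety": 0.4,
    "pawn_structure": 0.3,
    "piece_mobility": 0.2,
}

def evaluate(state):
    # Each accessor (hypothetical) returns a signed score for the side to move.
    features = {
        "material_balance": state.material_balance(),
        "king_safety": state.king_safety(),
        "pawn_structure": state.pawn_structure(),
        "piece_mobility": state.piece_mobility(),
    }
    return sum(FEATURE_WEIGHTS[name] * value for name, value in features.items())
```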

While none of the individual techniques was introduced by Deep Blue, the aggregate approach was innovative at the time. From an AI perspective, the most noteworthy of the characteristics above were:

  • Enhancing alpha-beta pruning through use of a null-move heuristic (where the player to move passes), allowing a beta cutoff to be established with reasonable confidence from a shallow rather than a full search (see the sketch after this list). Pruning efficiency was further improved by careful ordering of generated moves.

  • Uneven tree expansion, where expansion past a minimum ‘error insurance’ depth occurs only once there is clear evidence that it is worthwhile (using "credit" accounting to identify successive forced/forcing moves, and heuristics to score their quality). This selective expansion allowed Deep Blue to explore promising lines, such as forcing sequences, to depths of up to forty plies.
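As a rough illustration of the null-move idea, the sketch below lets the side to move pass, searches the resulting position at reduced depth, and prunes if even that shallow search refutes the position. It reuses the hypothetical alpha_beta and State interface from the earlier sketch; pass_move, in_check, and the reduction R are likewise assumptions, and this generic form is not a description of Deep Blue's exact scheme.

```python
R = 2  # conventional null-move depth reduction (an assumption, not Deep Blue's value)

def search_with_null_move(state, depth, alpha, beta):
    # Let the side to move "pass" (illegal in chess, but useful as a probe):
    # if the opponent cannot bring the score below beta even after being
    # handed a free move in a shallower search, a full-depth search is very
    # unlikely to either, so the node can be cut off without full expansion.
    if depth > R and not state.in_check():  # passing in check would hide mate threats
        null_score = -alpha_beta(state.pass_move(), depth - 1 - R, -beta, -beta + 1)
        if null_score >= beta:
            return beta  # shallow refutation stands in for the full search
    return alpha_beta(state, depth, alpha, beta)
```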

Results & Impact

Deep Blue successfully beat Kasparov, as well as other grandmasters, achieving its goal of being a "world-class chess machine". The victory over Kasparov achieved global renown, capturing the public imagination. While Deep Blue was largely domain-specific, it lent greater credence to techniques such as uneven tree development and parallel search processing.

Deep Blue was epochal at the time, but has since been eclipsed, more by developments in algorithms than in hardware. Modern approaches achieve comparable, and often better, results using more efficient search algorithms and machine learning techniques, making them less dependent on pre-configured weightings, books, and databases. They also require significantly less computational power and can use off-the-shelf hardware (e.g. running on a regular desktop PC).