Playing at the Edge of AI

Posted November 2006

Reading Blondie24, from which this post borrows its title, got me excited about neural nets and genetic algorithms again. I have always found them fascinating, but I never had any practical use for them myself—until now.

My goal for the Cocoa games project, and GGTL before it, has always been to make it easy to build decent computer game AIs (for two-player, zero-sum, perfect-information games at least, which is actually a rather large class). My focus has been on the scaffolding—encapsulating the Alpha-Beta algorithm and letting developers focus on implementing its key components: detecting legal moves, applying a move to a state, and the evaluation function.
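To make that division of labour concrete, here is a rough Python sketch (not the project's actual Cocoa code; all names here are invented for illustration) of a generic Alpha-Beta search that only ever talks to the game through those three pieces:

```python
import math


class GameDelegate:
    """The game-specific parts a developer has to supply."""

    def legal_moves(self, state):
        """Return the list of legal moves in `state`."""
        raise NotImplementedError

    def apply(self, state, move):
        """Return the new state that results from playing `move`."""
        raise NotImplementedError

    def evaluate(self, state):
        """Return the fitness of `state` for the maximizing player."""
        raise NotImplementedError


def alphabeta(game, state, depth, alpha=-math.inf, beta=math.inf, maximizing=True):
    """Generic Alpha-Beta search; all game knowledge lives in `game`."""
    moves = game.legal_moves(state)
    if depth == 0 or not moves:
        return game.evaluate(state)

    if maximizing:
        value = -math.inf
        for move in moves:
            value = max(value, alphabeta(game, game.apply(state, move),
                                         depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:          # beta cut-off: the opponent avoids this line
                break
        return value
    else:
        value = math.inf
        for move in moves:
            value = min(value, alphabeta(game, game.apply(state, move),
                                         depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:          # alpha cut-off
                break
        return value
```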

While the first two are fairly uninteresting (they are just game-specific plumbing), the evaluation function is what makes or breaks the resulting AI. Given an arbitrary state, this function must determine its fitness: how good or bad that state is. An experienced player might instinctively know whether a given state is good, but translating that intuition into computer code is not necessarily simple. While computer game AIs have traditionally relied on exploiting knowledge provided by human experts, David Fogel and Kumar Chellapilla set out to see if they could create a program that learned to play checkers on its own. To achieve this they used a genetic algorithm to evolve a neural net, which they then used as their program's evaluation function.
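As a simplified illustration of the idea (Blondie's real network was considerably larger and fed on spatial subsections of the checkers board; the sizes and names below are made up), such an evaluation function can be nothing more than a small feed-forward net that maps a board vector to a score:

```python
import numpy as np


class NetEvaluator:
    """Evaluation function backed by a tiny feed-forward neural net."""

    def __init__(self, n_inputs, n_hidden, rng=None):
        rng = rng or np.random.default_rng()
        # These weights are exactly what the genetic algorithm gets to evolve.
        self.w1 = rng.normal(scale=0.1, size=(n_hidden, n_inputs))
        self.w2 = rng.normal(scale=0.1, size=n_hidden)

    def evaluate(self, state):
        """Map a board vector (e.g. +1 for own pieces, -1 for the opponent's,
        0 for empty squares) to a fitness score in (-1, 1)."""
        x = np.asarray(state, dtype=float)
        hidden = np.tanh(self.w1 @ x)
        return float(np.tanh(self.w2 @ hidden))
```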

David and Kumar were remarkably successful: Blondie learned to play checkers to a human Master level. If I could reproduce their results in a way that would work for a broader range of games, my goal of making board-game AI creation very simple would be largely achieved.

Fast forward a few months. I have implemented a neural net evaluation function for my Connect 4 game [1]. The net is somewhat simpler in construction than Blondie's, though I copied the techniques used for evolving it straight from the book. After 25 generations I stopped the evolution and played a game against the best neural net so far; it beat me hands down. Unfortunately, after 100 generations my best neural net still hasn't been able to beat the hand-rolled evaluation function I originally wrote for the game (I must have had a stroke of genius back then).
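For the curious, that evolutionary scheme works roughly like the sketch below: a population of networks plays a tournament against itself, the better half survives, and each survivor spawns a mutated copy. This is only a Python outline using the NetEvaluator sketch from earlier, not the actual code from my Connect 4 project; play_game and every parameter are stand-ins, and the book's version also self-adapts the mutation step sizes, which I have left out here.

```python
import copy
import numpy as np


def evolve(population, generations, play_game, mutation_scale=0.05, rng=None):
    """Coevolve a population of NetEvaluators through self-play.

    `play_game(a, b)` is assumed to return 1, 0 or -1 from a's point of view.
    """
    rng = rng or np.random.default_rng()
    for _ in range(generations):
        # Round-robin tournament: every net plays every other net.
        scores = [0] * len(population)
        for i, a in enumerate(population):
            for j, b in enumerate(population):
                if i != j:
                    result = play_game(a, b)
                    scores[i] += result
                    scores[j] -= result

        # The better half survives...
        ranked = sorted(zip(scores, population), key=lambda p: p[0], reverse=True)
        survivors = [net for _, net in ranked[: len(ranked) // 2]]

        # ...and each survivor spawns one offspring with perturbed weights.
        offspring = []
        for parent in survivors:
            child = copy.deepcopy(parent)
            child.w1 += rng.normal(scale=mutation_scale, size=child.w1.shape)
            child.w2 += rng.normal(scale=mutation_scale, size=child.w2.shape)
            offspring.append(child)

        population = survivors + offspring
    return population
```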

That is where things currently stand. I don't want to release a game with an AI inferior to the one in the previous version, so I am experimenting with a few ways to improve the training of the neural net. Rest assured, any progress I make will be the topic of future posts.


[1] Now no longer available.