EvoStrat Generation 100 Network Defeats On-Line Opponent!

The same web-based Connect-Four system (Four In A Line) was used to test the EvoStrat player that uses the neural network from generation 100, and the EvoStrat-evolved player defeated the on-line system playing in Medium difficulty mode!  This shows that the evolution succeeded in producing a neural network that evaluates board positions better than a randomly-weighted one, and it demonstrates that the EvoStrat project can use evolutionary computation to develop a game-playing software agent that improves by playing tournament after tournament against other machine agents.  For this test game, EvoStrat once again played as the Red player (the player that goes second).

To view screenshots of EvoStrat's winning game against Four In A Line, click on these thumbnails.  EvoStrat is the Red player in both screenshots; Four In A Line is Yellow in one and Blue in the other.
     

Dec6

EvoStrat Generation 0 Network Loses to On-Line Opponent

A web-based Connect-Four game called "Four In A Line" from the website http://www.mathsisfun.com/games/connect4.html was used as an opponent for the EvoStrat system.  For the initial test, a randomly-weighted neural network from generation 0 controlled the EvoStrat player, which played as Red (the player that goes second) and lost to the on-line system playing in Beginner mode (the easiest level the on-line system provides).

To view screenshots of the game as shown in EvoStrat's interface and in Four In A Line's interface (EvoStrat is the Red player in both screenshots; Four In A Line is Yellow in one and Blue in the other), click on these thumbnails:
     

Next, the on-line system will be used to test EvoStrat's evolved best player after 100 generations of evolution.

Dec5

EvoStrat-evolved Player Defeats Young Human Opponents

The first official tests of the Connect-Four players evolved by EvoStrat were conducted over the Thanksgiving holiday weekend.  My oldest nephew (age 6) was an enthusiastic test subject for the system, and my children (ages 14, 11 and 8) reluctantly participated as well.  All four test subjects were able to defeat the Connect-Four machine player that used a randomly-weighted neural network (a network that makes essentially random evaluations of board positions and therefore makes random moves), but the system using the network evolved to generation 100 consistently beat the 6- and 8-year-old humans and split several games against the older test subjects.

Dec1

EvoStrat program produces results

The EvoStrat system has produced a series of neural-network-based computing agents that have evolved generation by generation using the tournament selection method described previously in this project blog.  With a population size of 20 and an evolution limit of 100 generations, the system conducted 100 tournaments, creating each succeeding generation by mutating the winner of the current generation's tournament.
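
For readers following the code side of the project, here is a minimal sketch of that outer evolution loop.  It is illustrative only: the mutation rate, the mutation strength, and the choice to carry the champion forward unchanged are assumptions rather than EvoStrat's actual settings, and the network factory and the tournament itself are left as interfaces to be supplied.

```java
// Sketch of the evolution loop: 20 networks, 100 generations, each new generation
// built from mutated copies of the current tournament winner. Illustrative only.
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

import org.encog.neural.networks.BasicNetwork;

public class EvolutionLoopSketch {

    static final int POPULATION_SIZE = 20;
    static final int GENERATION_LIMIT = 100;

    /** Supplies a freshly randomized network (for example, the 42-50-10-1 board evaluator). */
    interface NetworkFactory { BasicNetwork create(); }

    /** Plays a full tournament and returns the winning network. */
    interface Tournament { BasicNetwork findWinner(List<BasicNetwork> population); }

    // Copy the parent's weights into a fresh network and perturb some of them.
    static BasicNetwork mutate(BasicNetwork parent, NetworkFactory factory, Random rnd) {
        BasicNetwork child = factory.create();               // same topology, new instance
        double[] parentWeights = parent.getFlat().getWeights();
        double[] childWeights = child.getFlat().getWeights();
        System.arraycopy(parentWeights, 0, childWeights, 0, parentWeights.length);
        for (int i = 0; i < childWeights.length; i++) {
            if (rnd.nextDouble() < 0.10) {                    // assumed mutation rate
                childWeights[i] += rnd.nextGaussian() * 0.25; // assumed mutation strength
            }
        }
        return child;
    }

    public static BasicNetwork evolve(NetworkFactory factory, Tournament tournament, Random rnd) {
        List<BasicNetwork> population = new ArrayList<BasicNetwork>();
        for (int i = 0; i < POPULATION_SIZE; i++) {
            population.add(factory.create());
        }
        BasicNetwork winner = null;
        for (int generation = 0; generation < GENERATION_LIMIT; generation++) {
            winner = tournament.findWinner(population);
            List<BasicNetwork> next = new ArrayList<BasicNetwork>();
            next.add(winner);                                 // keep the champion (one possible choice)
            while (next.size() < POPULATION_SIZE) {
                next.add(mutate(winner, factory, rnd));
            }
            population = next;
        }
        return winner;
    }
}
```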

To view a screenshot showing the tournament scores of all 20 Connect-Four machine players in the 100th generation, click on this thumbnail:

The next step is to modify the Connect-Four game program so that the machine-based players can load the neural network weights generated by EvoStrat.  The machine player will then be tested against human players and against on-line Connect-Four playing programs.
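
One straightforward way to move the evolved weights from EvoStrat into the game is Encog's EG-file persistence.  The sketch below shows the idea; the file name is just an example, not the project's actual file.

```java
// Minimal sketch: save the best evolved network from EvoStrat, then load it
// inside the Connect-Four game when a machine player is created.
// The file name "evostrat-gen100.eg" is an illustrative placeholder.
import java.io.File;

import org.encog.neural.networks.BasicNetwork;
import org.encog.persist.EncogDirectoryPersistence;

public class NetworkPersistenceSketch {

    // Called from EvoStrat after evolution finishes.
    public static void save(BasicNetwork best) {
        EncogDirectoryPersistence.saveObject(new File("evostrat-gen100.eg"), best);
    }

    // Called by the Connect-Four game to give a machine player its evolved brain.
    public static BasicNetwork load() {
        return (BasicNetwork) EncogDirectoryPersistence.loadObject(new File("evostrat-gen100.eg"));
    }
}
```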

Nov20

Network Configuration Chosen for First Connect-Four Testing

To effectively use a neural network to evaluate Connect-Four game boards, the network must consider the contents of every board space.  Since a standard game board has 6 rows and 7 columns, the network needs 42 inputs so that no space is ignored.  Experience in the neural network community suggests that stacking many hidden layers between the input and output layers does not yield better results for a task like this, and only slows down processing compared to a network with one or two hidden layers; as a consequence, the chosen configuration has two hidden layers, the first with 50 neurons and the second with 10 neurons.  The network will produce a single output, a double-precision floating-point value between -1.0 and 1.0, interpreted as an evaluation of the board: a value near 1.0 indicates a strong position for the Yellow player, a value near -1.0 strongly favors the Red player, and a value near zero is neutral.
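
Assuming Encog 3 is used as planned, a network with this topology could be built roughly as follows.  This is only a sketch: tanh activations are one reasonable way to keep the single output in the -1.0 to 1.0 range, not necessarily the settings EvoStrat will end up using.

```java
// Sketch of a 42-input board evaluator with 50- and 10-neuron hidden layers
// and a single tanh output in the -1.0 .. 1.0 range (Encog 3).
import org.encog.engine.network.activation.ActivationTANH;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.layers.BasicLayer;

public class BoardEvaluatorSketch {

    public static BasicNetwork build() {
        BasicNetwork network = new BasicNetwork();
        network.addLayer(new BasicLayer(null, true, 42));                 // 6 x 7 board spaces
        network.addLayer(new BasicLayer(new ActivationTANH(), true, 50)); // first hidden layer
        network.addLayer(new BasicLayer(new ActivationTANH(), true, 10)); // second hidden layer
        network.addLayer(new BasicLayer(new ActivationTANH(), false, 1)); // board evaluation
        network.getStructure().finalizeStructure();
        network.reset(); // start with random weights (a generation-0 network)
        return network;
    }
}
```

Each board space would be presented to the 42 inputs as a number; one natural encoding (an assumption here, not a documented project decision) is 1.0 for a Yellow piece, -1.0 for a Red piece, and 0.0 for an empty space.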

Initially, the networks in the first generation will produce random evaluations, and by the process of tournament selection, networks in later generations will produce evaluations that approach the interpretation described above.  To view a screenshot of the network's structure, click on this thumbnail:

Oct30

Tournament Algorithm Implemented and Tested — It Picks the Winner!

The latest test application conducts a tournament among all members of the current population of neural networks, with each network playing 5 games as the Yellow player (who goes first) against randomly selected opponents playing Red.  This test of the tournament algorithm doesn't actually play Connect-Four; each game is decided by a random coin flip.  The purpose of the test is to verify that total game points can be tracked for each population member and that the winner of the tournament can be correctly identified.

For this test, the winner of a match earns 3 points and the loser receives -4 points.  Since each network may play a different number of games as the Red player, each network's total points are divided by the number of games it played, and the winner of the tournament is the network with the highest average points per game.  The test program correctly determines the winner, so this algorithm will be employed when the Connect-Four players are implemented in the next step of this project.  To view a screenshot of the output of the test program, click on this thumbnail:
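
For reference, here is a compact sketch of this coin-flip tournament and its scoring scheme.  It is illustrative code, not the actual test program; whether a network can be drawn as its own opponent isn't stated above, so the sketch simply disallows it.

```java
// Sketch of the coin-flip tournament: every network plays 5 games as Yellow against
// randomly chosen Red opponents; winners earn +3, losers -4; the tournament winner
// is the network with the highest average points per game played.
import java.util.Random;

public class CoinFlipTournamentSketch {

    public static void main(String[] args) {
        final int populationSize = 20;
        final int gamesAsYellow = 5;
        Random rnd = new Random();

        double[] points = new double[populationSize];
        int[] gamesPlayed = new int[populationSize];

        for (int yellow = 0; yellow < populationSize; yellow++) {
            for (int g = 0; g < gamesAsYellow; g++) {
                int red;
                do {
                    red = rnd.nextInt(populationSize);
                } while (red == yellow);                    // assume a network never plays itself
                boolean yellowWins = rnd.nextBoolean();     // coin flip stands in for a real game
                points[yellow] += yellowWins ? 3 : -4;
                points[red]    += yellowWins ? -4 : 3;
                gamesPlayed[yellow]++;
                gamesPlayed[red]++;
            }
        }

        // The winner is the network with the highest average points per game played.
        int winner = 0;
        double bestAverage = Double.NEGATIVE_INFINITY;
        for (int i = 0; i < populationSize; i++) {
            double average = points[i] / gamesPlayed[i];
            System.out.printf("Network %2d: %6.1f points over %2d games (average %.2f)%n",
                    i, points[i], gamesPlayed[i], average);
            if (average > bestAverage) {
                bestAverage = average;
                winner = i;
            }
        }
        System.out.println("Tournament winner: network " + winner);
    }
}
```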

Oct28

Next Step:  Conducting a Tournament to Pick a Winning Network from Each Generation

The success of the test implemented last night is just a milestone on the path toward a Machine Connect-Four player that can evaluate game board configurations and choose moves.  A crucial difference between simple neural network pattern recognition tasks and the type of game board evaluation needed for EvoStrat is that there is no known "best" or "ideal" answer for a particular encoding of a game board state.  To train the networks to be used by EvoStrat, a tournament will be held, pitting members of each generation against one another, and the winner of the tournament will be judged the best member of its generation.

The tournament will be conducted as follows:  each member of the population will play 5 games as the Yellow player (who makes the first move in every game), with a randomly-selected opponent playing Red in each game.  Since a population of N members supplies 5N games in total and the Red opponent of each game is drawn uniformly from those N members, the expected number of games any member plays as Red is 5N / N = 5; so on average, each network will play 5 games as the Yellow player and 5 as the Red player.  The next stage of this project will be to implement a tournament-playing algorithm to pit networks against each other and choose the "best" member of the generation by the results of the tournament.  Stay tuned....
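
As a quick sanity check on that expected value, a few throwaway lines of Java can simulate the random pairings; the population size of 20 matches the proof-of-concept test described below, and the no-self-play rule is the same assumption used in the tournament sketch above.

```java
// Simulate many tournaments' worth of random Red-opponent draws and report how many
// games each network plays as Red per tournament (the average should be close to 5).
import java.util.Random;

public class ExpectedRedGamesSketch {

    public static void main(String[] args) {
        final int populationSize = 20;
        final int gamesAsYellow = 5;
        final int tournaments = 100000;
        Random rnd = new Random();

        long[] redGames = new long[populationSize];
        for (int t = 0; t < tournaments; t++) {
            for (int yellow = 0; yellow < populationSize; yellow++) {
                for (int g = 0; g < gamesAsYellow; g++) {
                    int red;
                    do {
                        red = rnd.nextInt(populationSize);
                    } while (red == yellow);   // assume a network never plays itself
                    redGames[red]++;
                }
            }
        }
        for (int i = 0; i < populationSize; i++) {
            System.out.printf("Network %2d averaged %.3f games as Red per tournament%n",
                    i, (double) redGames[i] / tournaments);
        }
    }
}
```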

Oct8

Simple Genetic Algorithm for Neural Network Learning Works!

After a fair amount of reading manuals, javadocs and on-line documentation, Encog 3 was used to implement a test program that creates a population of 20 initially-random neural networks and uses competitive techniques to evolve generations of networks that get better and better at recognizing a simple two-input pattern.  This is a proof-of-concept test showing that Encog 3 can represent the neural networks that EvoStrat will use to evaluate game board configurations.  To see the Java code that implements the genetic algorithm to evolve the population through numerous generations, or to view a screenshot of the output of the test program, click on one of these thumbnails:
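
To give a flavor of the approach, here is a compact sketch in the same spirit; XOR stands in for the actual two-input pattern, and the population update is a simple mutate-the-best scheme with arbitrary constants, so the real test program certainly differs in its details.

```java
// Sketch of an evolutionary proof-of-concept with Encog 3: 20 random networks are
// scored on a two-input task (XOR used here as a stand-in), and each generation is
// rebuilt from the best network plus mutated copies of it. Illustrative only.
import java.util.Random;

import org.encog.engine.network.activation.ActivationSigmoid;
import org.encog.ml.data.basic.BasicMLData;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.layers.BasicLayer;

public class TwoInputPatternSketch {

    static final double[][] INPUTS  = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
    static final double[]   TARGETS = {    0,      1,      1,      0   }; // XOR stand-in

    static BasicNetwork randomNetwork() {
        BasicNetwork net = new BasicNetwork();
        net.addLayer(new BasicLayer(null, true, 2));
        net.addLayer(new BasicLayer(new ActivationSigmoid(), true, 4));
        net.addLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
        net.getStructure().finalizeStructure();
        net.reset();
        return net;
    }

    // Lower is better: sum of squared errors over the four patterns.
    static double error(BasicNetwork net) {
        double sum = 0;
        for (int i = 0; i < INPUTS.length; i++) {
            double out = net.compute(new BasicMLData(INPUTS[i])).getData(0);
            sum += (out - TARGETS[i]) * (out - TARGETS[i]);
        }
        return sum;
    }

    public static void main(String[] args) {
        Random rnd = new Random();
        BasicNetwork[] population = new BasicNetwork[20];
        for (int i = 0; i < population.length; i++) {
            population[i] = randomNetwork();
        }
        for (int generation = 0; generation < 200; generation++) {
            BasicNetwork best = population[0];
            for (BasicNetwork net : population) {
                if (error(net) < error(best)) {
                    best = net;
                }
            }
            System.out.printf("Generation %3d: best error = %.4f%n", generation, error(best));

            // Next generation: keep the best network and fill the rest with mutated copies.
            BasicNetwork[] next = new BasicNetwork[population.length];
            next[0] = best;
            for (int i = 1; i < next.length; i++) {
                BasicNetwork child = randomNetwork();
                double[] bestWeights = best.getFlat().getWeights();
                double[] childWeights = child.getFlat().getWeights();
                System.arraycopy(bestWeights, 0, childWeights, 0, bestWeights.length);
                for (int w = 0; w < childWeights.length; w++) {
                    if (rnd.nextDouble() < 0.2) {
                        childWeights[w] += rnd.nextGaussian() * 0.3;
                    }
                }
                next[i] = child;
            }
            population = next;
        }
    }
}
```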
     

Oct7

Encog 3 Chosen for Neural Network Implementation
(from Heaton Research)

The Machine players for this project need to evaluate Connect-Four game boards and choose the best move from among all possible moves.  To do this, the Machine players will use a neural network that evaluates game boards.  After researching the available open-source neural network software, I've decided to use the Encog 3 machine learning framework for Java, from Heaton Research (http://www.heatonresearch.com).  Using an off-the-shelf software library will save the time it would take to code the details of a neural network implementation myself, and will let me focus on using the network to determine the best move to make at any point in a game.
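
To make the plan concrete, here is a rough sketch of how a Machine player might use such an evaluation network to pick a move: try each legal column, score the resulting board with the network, and keep the best-scoring column.  The board encoding, the class and method names, and the Yellow-positive/Red-negative sign convention follow assumptions used elsewhere on this page rather than the project's actual code.

```java
// Sketch: choose a move by evaluating the board that results from each legal column.
// Board encoding and the sign convention (Yellow positive, Red negative) are assumptions.
import org.encog.ml.data.basic.BasicMLData;
import org.encog.neural.networks.BasicNetwork;

public class MoveChooserSketch {

    static final int ROWS = 6, COLS = 7;
    static final int EMPTY = 0, YELLOW = 1, RED = -1;

    // Returns the column whose resulting board the network rates best for 'player'.
    public static int chooseColumn(BasicNetwork evaluator, int[][] board, int player) {
        int bestColumn = -1;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int col = 0; col < COLS; col++) {
            int row = lowestEmptyRow(board, col);
            if (row < 0) {
                continue;                          // column is full
            }
            board[row][col] = player;              // try the move
            double evaluation = evaluate(evaluator, board);
            board[row][col] = EMPTY;               // undo it
            double score = (player == YELLOW) ? evaluation : -evaluation;
            if (score > bestScore) {
                bestScore = score;
                bestColumn = col;
            }
        }
        return bestColumn;
    }

    static int lowestEmptyRow(int[][] board, int col) {
        for (int row = ROWS - 1; row >= 0; row--) {
            if (board[row][col] == EMPTY) {
                return row;
            }
        }
        return -1;
    }

    // Flatten the 6 x 7 board into the network's 42 inputs and read the single output.
    static double evaluate(BasicNetwork net, int[][] board) {
        double[] inputs = new double[ROWS * COLS];
        for (int r = 0; r < ROWS; r++) {
            for (int c = 0; c < COLS; c++) {
                inputs[r * COLS + c] = board[r][c];
            }
        }
        return net.compute(new BasicMLData(inputs)).getData(0);
    }
}
```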

Sept27

Working Version 2 Ready for Testing

A game trace area has been added to the interface so that the moves of the game can be recreated after the game has concluded.  Moves are now correctly counted, and illegal moves made by a Human player are properly disallowed.  At this stage, the Machine players still make random moves, and are very easy to defeat.  To play this version, click on THIS LINK.

Sept25

System Supports Machine vs. Machine Play

If Machine vs. Machine play is selected on the start-up screen of the program, a game is played by the two artificial agents against one another, and the final results are displayed on screen when the game has ended.  At this point, the test agents are both random players, so the moves made are generally awful if one examines the Connect-Four board, but the point of this testing is that Machine vs. Machine play is now supported by the game engine.

Sept20

Human vs. Machine Play Now a Reality

The main interface has now been developed to the point that a Connect-Four game can be played in one of several modes:  Human vs. Human or Human vs. Machine (where the first move can be made by either the Human player or the Machine player).  For testing purposes, the Machine player simply chooses a column in which to drop a piece at random, so it's really easy for a Human to beat.  But it is a good demonstration that Human vs. Machine play works with the current system.  Now it's time to get some sleep!

Sept18

First Step:  A Working Connect-Four Game

Before starting work on the evolutionary aspects of the artificial agents for this project, a working Connect-Four game is needed for testing purposes.  Accordingly, the first coding that has been completed is for a Java application that allows two human players to compete in a game.  This platform will be expanded with the ability to have a human play against a machine opponent, as well as to have two machine agents play against each other.  For a full-size screenshot of the first version of the game interface, click on this thumbnail:

Sept10

Partial Class Diagram:  The Player Class and Descendants

I feel it is important to model good programming practices while working on this project, and as a consequence, I intend to use object-oriented design techniques to structure the code for the software components of EvoStrat.  As a first design step, I plan to use inheritance to structure the game players for the system:  each of the two players will derive from the Player class, and each specific player will be either a HumanPlayer or a MachinePlayer.  The game engine will expect to work with two Player objects, and the specifics of making moves will differ depending on whether the player is a human or a software agent.  Click on this thumbnail to see a partial UML class diagram for this inheritance situation:
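
In rough outline, the hierarchy might look like the sketch below.  The chooseColumn method, its parameters, and the placeholder bodies are illustrative guesses for this blog post, not the actual design.

```java
// Bare-bones sketch of the planned Player inheritance hierarchy.
// Method names and signatures are placeholders, not the project's actual code.
public abstract class Player {

    private final String name;

    protected Player(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    // Each kind of player decides on its move differently.
    public abstract int chooseColumn(int[][] board);
}

class HumanPlayer extends Player {

    HumanPlayer(String name) {
        super(name);
    }

    @Override
    public int chooseColumn(int[][] board) {
        // In the real game this would come from the user interface.
        return 0;
    }
}

class MachinePlayer extends Player {

    MachinePlayer(String name) {
        super(name);
    }

    @Override
    public int chooseColumn(int[][] board) {
        // Early versions pick a random column; later versions will consult a neural network.
        return new java.util.Random().nextInt(board[0].length);
    }
}
```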

Sept4

Mid-Summer Progress Update

Well, today is July 1, and I've done a lot of virtual work on EvoStrat (in other words, thinking about it), but I haven't really started in on the design and programming work yet.  I'm planning to evaluate several artificial neural network programming libraries over the next two months, and be ready to hit the ground running right after Labor Day.  I plan to post updates to this site weekly during the fall of 2013, so keep checking back for the latest news on this project.

Jul1

First Press Release Developed

The EvoStrat Project won't get fully underway until the summer of 2013, but as an example for the Spring 2013 CSC 492 course at the University of Mount Union (titled The Practice of Software Engineering), a sample press release announcing the project has been developed and may be accessed from this link.  More sample documents may be developed during the Spring semester, as time permits.

Feb6

Initial Project Timeline Released

The first draft of a project schedule for EvoStrat has been posted to the project web site.  Use the "Timeline" navigation item at the top of the screen for the latest details.

Jan22

Welcome to EvoStrat!

You've arrived at the web home of EvoStrat, software that uses bio-inspired computing techniques to evolve strategies for games with no initial knowledge of the games themselves.  Use the navigation links at the top and/or right side of this page for more information on this project.  This site will be updated regularly as the project develops.

Jan18