Source code for the soccer game (requires MASON). Install the soccer package alongside the other demos.

Executable .jar file.


CS 461. Machine Learning




Exploring emergence:

Homework: Use MASON to implement the rule sets (Seeds, Brian's Brain) discussed in Exploring Emergence. Part of the purpose of this exercise is to begin to learn your way around MASON, which I expect to be using throughout the term. A Game-of-Life simulation is defined in Tutorials 1 and 2 (MASON\src\sim\app\tutorial1and2\tutorial<1 or 2>.html). The rules are defined in the class CA (for cellular automaton).
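As a starting point, the Game-of-Life rule in the tutorial's CA class can be swapped for another rule set. Below is a minimal, MASON-independent sketch of the Brian's Brain update rule; the class and method names are my own, not MASON's.

```java
// Brian's Brain on a toroidal int grid, independent of MASON; in the
// tutorial's CA class this logic would replace the Game-of-Life rule.
// Seeds is the same shape with only ON/OFF states: ON -> OFF, and
// OFF -> ON iff exactly two neighbors are ON.
public class BriansBrain {
    public static final int OFF = 0, ON = 1, DYING = 2;

    // Count ON cells among the 8 neighbors, wrapping at the edges.
    static int onNeighbors(int[][] g, int x, int y) {
        int n = 0, w = g.length, h = g[0].length;
        for (int dx = -1; dx <= 1; dx++)
            for (int dy = -1; dy <= 1; dy++) {
                if (dx == 0 && dy == 0) continue;
                if (g[(x + dx + w) % w][(y + dy + h) % h] == ON) n++;
            }
        return n;
    }

    // One synchronous step: ON -> DYING, DYING -> OFF, and
    // OFF -> ON iff exactly two neighbors are ON.
    public static int[][] step(int[][] g) {
        int w = g.length, h = g[0].length;
        int[][] next = new int[w][h];
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++) {
                if (g[x][y] == ON) next[x][y] = DYING;
                else if (g[x][y] == DYING) next[x][y] = OFF;
                else next[x][y] = (onNeighbors(g, x, y) == 2) ? ON : OFF;
            }
        return next;
    }
}
```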

Optional: Learn about the Inspector and modify it for the Life application so that clicking on a grid cell toggles its value: from 1 to 0 if it was 1, and from anything else to 1.

Rauch, J., Seeing Around Corners, The Atlantic, April 2002 (also available with highlights)

Homework: Implement Schelling's segregation model in MASON. Use a separate agent for each element. Experiment with various rules, such as the number of agents of the same category that make an agent happy, the radius that constitutes a neighborhood, etc. A rule may combine numbers of agents at various neighborhood radii, e.g., the number of similar agents among its nearest neighbors and the number of similar agents in a wider neighborhood.
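A happiness rule of the kind described might be sketched as follows. This is plain Java rather than MASON code, and the grid encoding and thresholds are illustrative assumptions, not part of the assignment.

```java
// Minimal sketch of a Schelling happiness test on an int grid
// (0 = empty, 1 or 2 = agent category). The radius and threshold
// parameters are the knobs the assignment asks you to vary.
public class Schelling {
    // Fraction of non-empty neighbors within `radius` that share this
    // agent's category (Moore neighborhood, edges not wrapped).
    public static double similarFraction(int[][] g, int x, int y, int radius) {
        int same = 0, occupied = 0;
        for (int dx = -radius; dx <= radius; dx++)
            for (int dy = -radius; dy <= radius; dy++) {
                int nx = x + dx, ny = y + dy;
                if ((dx == 0 && dy == 0) || nx < 0 || ny < 0
                        || nx >= g.length || ny >= g[0].length) continue;
                if (g[nx][ny] != 0) {
                    occupied++;
                    if (g[nx][ny] == g[x][y]) same++;
                }
            }
        return occupied == 0 ? 1.0 : (double) same / occupied;
    }

    // A combined rule: happy only if similar enough both among nearest
    // neighbors AND in a wider neighborhood (thresholds are illustrative).
    public static boolean happy(int[][] g, int x, int y) {
        return similarFraction(g, x, y, 1) >= 0.34
            && similarFraction(g, x, y, 3) >= 0.25;
    }
}
```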

Optional: Implement some other social model such as Sugarscape.

Multi-agent models

Heatbugs: SWARM and University of Michigan

Ant foraging: Swarm Intelligence: From Natural to Artificial Systems (Scientific American article), Ant foraging, Kube's Research: Collective Robotics
Ant colony optimization of the TSP

Boids: Craig Reynolds; implemented as Woims in MASON; Duncan Crombie

Reynolds on Steering behaviors. Also from Reynolds's page, pointers to introductory tutorials on vectors and forces.


  1. Use the MASON Keepaway framework to implement the robot box pushing algorithm from the Scientific American Article.
  2. Extend this to a game of soccer.  Have two teams, each of which is trying to push the box (the ball) into the other team's goal. Each is a separate subtype of Bot. Let the two teams be identified by their two colors.
  3. Assume (as Keepaway does) that the ball and the players bounce off the sides of the field. You don't have to worry about that.
  4. Put the goals on the left and right sides of the field. The field is a 100 x 100 grid; let the goals be 10 units long, centered at 50. A goal is scored if the ball touches the goal, in which case the ball does not bounce off that side of the field.
  5. Besides bouncing off the sides of the field, we have to agree on the basic rules of physics. They are given by the method getForces() in the Bot class. When a player (a Bot) collides with another Entity, either another player or the Ball, it affects the motion of that Entity.
  6. You may modify the rule that determines how fast the ball moves when kicked by a player, e.g., by making it move more slowly when kicked by one of your own players.
  7. Delete the last clause of getForces() which tells the Bots to move toward the Ball. That should be a decision your Bots make independently of the laws of physics. Create another method, e.g., getStrategicForce(), which tells the Bot to move according to its strategy rules. For example,
    1. Let them kick the ball either toward the other team's goal or toward one of their own players (or toward a point between one of their own players and the other team's goal). (In Keepaway, the ball moves away from the player who kicks it. So to make the ball move in a particular direction, approach it moving in that direction.)
    2. Have your players spread out from each other according to one of the anti-swarm rules.
    3. Let your players use some of Reynolds's steering behaviors if they seem useful.
    4. Create a goalie, who follows different rules. 
    5. Define player subtypes, e.g., offense and defense, who follow different rules. Have the defense stick close to players on the other team  and the offense stay away from players on the other team.
    6. Have the offense position itself so that when it kicks the ball, the ball moves toward the opposite goal. Have the defense position itself between the ball and its own goal.
    7. Perhaps your players can even take bank shots off the side of the field.
  8. In step(), after Vector2D force = getForces(keepaway) compute your strategy-based force and add that to force. Then compute acceleration, etc.
  9. Each team has the same maximum total speed (the sum of the players' cap variables must be no more than 5), but you have your choice of how to divide that sum among your players. A goalie needs less speed than an offensive player since it doesn't cover much ground.
  10. Only two of your players (your goalie(s) if you have them) are allowed within 2 units of your goal.
  11. Initially, let's have teams of up to 11 players. If you have fewer players, you may divide the same total team speed among the smaller number of players!
  12. Assume the players all start on their own goals and that the ball starts in the middle of the field.  
  13. (Optional) After a score, have the players of the team who lost the previous point start anywhere up to the 30 unit line.
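The arithmetic in item 8 (physics force plus strategy force, capped by the player's speed limit) can be sketched like this. Vector2D and getStrategicForce here are stand-ins I made up, not the actual Keepaway classes, so treat this as an illustration of the combination, not a drop-in Bot implementation.

```java
// Sketch of combining the physics force with a strategy-based force.
// Vector2D stands in for MASON's vector type; the strategic rule shown
// (head toward a target point) is only a placeholder.
public class ForceSketch {
    public static class Vector2D {
        public double x, y;
        public Vector2D(double x, double y) { this.x = x; this.y = y; }
        public Vector2D add(Vector2D o) { return new Vector2D(x + o.x, y + o.y); }
        // Cap the length at `cap`, mirroring the per-player speed limit.
        public Vector2D capped(double cap) {
            double len = Math.sqrt(x * x + y * y);
            if (len <= cap || len == 0) return this;
            return new Vector2D(x * cap / len, y * cap / len);
        }
    }

    // Placeholder strategic rule: move toward a target point (e.g. a
    // spot in line with the ball and the opposing goal).
    public static Vector2D getStrategicForce(Vector2D me, Vector2D target) {
        return new Vector2D(target.x - me.x, target.y - me.y);
    }

    // In step(): force = getForces(keepaway), then add the strategic
    // force and cap before computing acceleration.
    public static Vector2D combinedForce(Vector2D physics, Vector2D strategic,
                                         double cap) {
        return physics.add(strategic).capped(cap);
    }
}
```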

Let's have a tournament.  (Let your team be identified by your own subtype of Bot.)

Prisoners' Dilemma: Introduction to the Prisoner's Dilemma (University of Toronto); Introduction to the Prisoner's Dilemma (Kendall, Darwen, Yao); Spatialized Iterated Prisoner's Dilemma


Implement spatialized prisoner's dilemma.

Write a single class that has a number of playing strategies built into it. Include at least: always-cooperate, always-defect, tit-for-tat, tit-for-two-tats, Pavlov, grudge, and random.

Have the agent decide which strategy to use by consulting an instance variable, which has a getter and a setter.

Create an instance variable, with a getter and a setter, called mutable, which if true lets the agent switch strategies and if false prevents it from switching strategies.

With these instance variables and accessors, you should be able to set strategies of individual agents from the inspector.

Experiment with establishing some islands of fixed strategies and see what happens. E.g., put an island of TFTs in one corner of the board, an island of pure defectors in another, perhaps an island of suckers (always-cooperate) in another part of the board, etc.
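One way to organize the single strategy class is an enum-valued instance variable with accessors, plus the mutable flag. The names below are my own, and the history parameters would come from your MASON agent's own bookkeeping.

```java
// Sketch of the single-class strategy design: one enum value per rule,
// a `strategy` instance variable with getter/setter, and a `mutable`
// flag that guards switching. Wire this into your MASON agent.
import java.util.Random;

public class PDPlayer {
    public enum Strategy { ALL_C, ALL_D, TIT_FOR_TAT, TIT_FOR_TWO_TATS,
                           PAVLOV, GRUDGE, RANDOM }

    private Strategy strategy = Strategy.TIT_FOR_TAT;
    private boolean mutable = true;    // may this agent switch rules?
    private boolean grudging = false;  // set once GRUDGE sees a defection
    private final Random rng = new Random();

    public Strategy getStrategy() { return strategy; }
    public void setStrategy(Strategy s) { if (mutable) strategy = s; }
    public boolean isMutable() { return mutable; }
    public void setMutable(boolean m) { mutable = m; }

    // true = cooperate. Arguments are my previous move and the
    // opponent's previous two moves; the first move defaults to C.
    public boolean nextMove(boolean myPrev, boolean oppPrev, boolean oppPrev2) {
        switch (strategy) {
            case ALL_C:            return true;
            case ALL_D:            return false;
            case TIT_FOR_TAT:      return oppPrev;
            case TIT_FOR_TWO_TATS: return oppPrev || oppPrev2; // defect after two Ds
            case PAVLOV:           return oppPrev ? myPrev : !myPrev;
            case GRUDGE:           if (!oppPrev) grudging = true;
                                   return !grudging;
            default:               return rng.nextBoolean();   // RANDOM
        }
    }
}
```

With the getter and setter on `strategy` and `mutable`, both show up in the MASON inspector, which is what lets you set individual agents' strategies by hand.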


Genetic Algorithms: An introduction to genetic algorithms with Java applets by Marek Obitko; A Genetic Algorithm Tutorial by Darrell Whitley; Hiroaki Sengoku's Traveling Salesman page, Demo, and Paper.


  1. Implement TSP. You may use the TSP framework. (Here is an executable version of my code: tsp.jar.) Try it both with crossover and with nothing but mutation.

  2. Modify your prisoner's dilemma system so that the players may evolve their rules. Consider the following. Nearly all the PD rules we have implemented depend on the previous move. (Tit-for-two-tats requires a history of two moves.) There are four possible one-move histories: CC, CD, DC, DD, meaning that on the previous play I played C and the opponent played C or I played C and the opponent played D, etc. Consider the following table.

    CC CD DC DD
    C D C D

    It indicates that I will play C whenever the opponent plays C and that I will play D whenever the opponent plays D. In other words, this is the Tit-for-tat strategy.

    The Pavlov strategy (if the opponent plays C, play my previous move; if the opponent plays D, switch from my previous move) may be expressed as follows.

    CC CD DC DD
    C D D C

    To complete a strategy, one must specify the first move. So let's add an additional entry. Tit-for-Tat (since it's "nice") is the following.

    First CC CD DC DD
    C C D C D

    Using this framework, a strategy can be specified in 5 entries. 

    If we want to allow probabilistic strategies, replace the C/D entries with values in the range from 0 to 1.  Treat these as probabilities of playing C.  Thus a value of 1 means play C. A value of 0.25 means play C with a probability of 0.25, and play D with a probability of 0.75.  Thus a probabilistic TFT might look like this.

    First CC CD DC DD
    0.95 0.95 0 0.95 0

    Play D in response to the opponent's D. Play C first and in response to the opponent's C 95% of the time; the other 5%, play D.

    An alternative representation (perhaps more interesting although less flexible) is to stick with C and D for responses but to add an additional Flip entry. The Flip entry is the probability that the player will play the move specified in its strategy table. The following is TFT with a Flip probability of 0.1. It means that the player will play a TFT strategy but will invert its move with a probability of 10%. Note that with this approach, one cannot specify the probabilistic TFT in the table above.

    Flip First CC CD DC DD
    0.1 C C D C D

    Select one or another of these representations and modify your PD system so that a player selects its next strategy by combining strategies from its neighbors. 

    Retain the possibility of having some of the players not be mutable.

    Use some algorithm that translates a strategy table into a color so that we can see what is happening.
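The probabilistic table representation above can be sketched as follows. The combine method shows one simple way a player might derive its next strategy from a neighbor's (entry-wise averaging); that choice is my own assumption, since the assignment leaves the combination rule open.

```java
// Sketch of the 5-entry strategy table (First, CC, CD, DC, DD) with
// probabilistic entries. Each entry is the probability of playing C.
// All names are illustrative, not from MASON.
import java.util.Random;

public class StrategyTable {
    // probs[0] = first move; probs[1..4] = response to CC, CD, DC, DD.
    public final double[] probs;
    private final Random rng;

    public StrategyTable(double[] probs, long seed) {
        this.probs = probs.clone();
        this.rng = new Random(seed);
    }

    // Map the one-move history to a table slot: CC=1, CD=2, DC=3, DD=4.
    static int slot(boolean myPrev, boolean oppPrev) {
        return 1 + (myPrev ? 0 : 2) + (oppPrev ? 0 : 1);
    }

    public boolean firstMove() { return rng.nextDouble() < probs[0]; }

    public boolean nextMove(boolean myPrev, boolean oppPrev) {
        return rng.nextDouble() < probs[slot(myPrev, oppPrev)];
    }

    // One way to "combine strategies from its neighbors": average the
    // tables entry by entry (a blend rather than GA-style crossover).
    public static double[] combine(double[] a, double[] b) {
        double[] c = new double[5];
        for (int i = 0; i < 5; i++) c[i] = (a[i] + b[i]) / 2.0;
        return c;
    }

    // Deterministic TFT in this encoding: First=C, CC=C, CD=D, DC=C, DD=D.
    public static final double[] TFT = {1, 1, 0, 1, 0};
}
```

The same 5-entry array is also a natural input to the color-mapping step: for instance, treat three of the entries as RGB components.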

Genetic Programming: Genetic programming operations; The Royal Tree problem (summary); Punch et al. (1996); Vanneschi (GECCO 2003); GeneticProgrammingReconsidered.pdf; PotentialBasedComputing.pdf


Download and install ECJ (by Sean Luke again). When you install ECJ, you may delete the package that relies on libraries which are not included.

To run ECJ, include the argument -file <pathToParametersFile> in the run command. For example, to run the 6-multiplexer problem, use 

-file ec/app/multiplexerslow/6.params

as a run parameter. For details on parameter files see: ec/docs/parameters.html.

The output will be at the top level in the file out.stat, which is overwritten on each run--although the output file can also be specified in the parameters file as, for example, stat.file = $out.stat1. The '$' indicates that the file is created in the directory where the Java run is started. Without it, the file would be created in the directory where the parameters file is found.

A good example is tutorial4 (see ec/docs/tutorials/tutorial4/index.html). To run it, use:

-file ec/app/tutorial4/tutorial4.params

The line that defines the "right" answer is in MultiValuedRegression.evaluate() (which implements SimpleProblemForm.evaluate()):

expectedResult = currentX*currentX*currentY + currentX*currentY + currentY;

In other words, x^2 * y + x*y + y.  The answer I got when I ran it was (+ y (* (+ y (* x y)) x)), which is equivalent!

To get another answer, change the random number seed.  When I did, I got: (+ y (* x (+ (* y x) y))), which is also equivalent.  
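When the algebraic equivalence of an evolved tree isn't obvious, a quick numeric comparison at sample points is a handy sanity check. This helper is my own, not part of ECJ.

```java
// Numeric check that the evolved tree (+ y (* (+ y (* x y)) x))
// matches the target x^2*y + x*y + y by comparing the two functions
// over a grid of sample points.
public class EquivCheck {
    static double target(double x, double y)  { return x * x * y + x * y + y; }
    static double evolved(double x, double y) { return y + (y + x * y) * x; }

    // true iff the two expressions agree (to rounding) at every sample.
    public static boolean agree() {
        for (double x = -2; x <= 2; x += 0.5)
            for (double y = -2; y <= 2; y += 0.5)
                if (Math.abs(target(x, y) - evolved(x, y)) > 1e-9) return false;
        return true;
    }
}
```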

To change the seed (or to use time as a seed) put the line 

seed.0 = time 

into the .params file.

Do one of the following two exercises.

  1. Implement the Royal Tree problem in ECJ. Experiment to determine why it is so difficult at higher levels. See if you can come up with a fitness function that makes it easier to generate the Royal Tree. If not, see if you can explain why not. Note that the job of the SimpleProblemForm.evaluate() function is not really to evaluate the generated tree; it is to compute a fitness for the generated tree. The Royal Tree does not have a value as such, but it does have a fitness, which is all that really matters.

    An initial version is in

  2. Generate a soccer goal kicker in ECJ. Assume that a single player and the ball are placed at random positions on the field and that the simulation is started. Generate force equations that are most successful at kicking goals--and not kicking the ball into the player's own goal. Here the task is to run the MASON soccer game for some fixed number of steps from within the SimpleProblemForm.evaluate() function and see how many goals are scored (vs. how many wrong goals are scored). The inputs should be at least the positions of the ball and the player. It might make the problem easier if you also included the positions of the goals as inputs--even though these would be constants.

Self-organized criticality and power law examples: Bak-Tang-Wiesenfeld sandpile (self-organized criticality); Brouwer's Sandpile applet; Per Bak page by Chen Kan (obituary, etc.); Per Bak and Paula Gordon show (audio); extracts from How Nature Works; Liebovitch and Scheurle


My home page