code for the soccer game.
Exploring emergence: http://llk.media.mit.edu/projects/emergence/
Homework: Use MASON to implement the rule sets (Seeds, Brian's Brain) discussed in Exploring Emergence. Part of the purpose of this exercise is to begin to learn your way around MASON, which I expect to be using throughout the term. A Game-of-Life simulation is defined in Tutorials 1 and 2 (MASON\src\sim\app\tutorial1and2\tutorial<1 or 2>.html). The rules are defined in the class CA (for cellular automaton).
Optional. Learn about the Inspector and modify it for the Life application so that when one clicks on a grid cell, the value changes from 1 to 0 (if it was 1) and from anything else to 1 (if it wasn't 1).
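As a warm-up before wiring the rules into MASON, the Seeds rule can be sketched in plain Java. This is a standalone sketch, not MASON's API; the class and method names are my own:

```java
// Standalone sketch of the Seeds update rule (not MASON-specific):
// every live cell dies; a dead cell is born iff it has exactly two live neighbors.
public class SeedsRule {
    // Compute one synchronous update of a toroidal grid (1 = live, 0 = dead).
    public static int[][] step(int[][] grid) {
        int h = grid.length, w = grid[0].length;
        int[][] next = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int live = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++) {
                        if (dx == 0 && dy == 0) continue;
                        live += grid[(y + dy + h) % h][(x + dx + w) % w];
                    }
                // Seeds: birth on exactly 2 neighbors; every live cell dies.
                next[y][x] = (grid[y][x] == 0 && live == 2) ? 1 : 0;
            }
        }
        return next;
    }

    public static void main(String[] args) {
        int[][] g = new int[5][5];
        g[2][1] = 1; g[2][2] = 1;           // a domino of two live cells
        int[][] n = step(g);
        // The domino dies and spawns births in the rows above and below it.
        System.out.println(n[1][1] + "" + n[1][2] + " " + n[3][1] + "" + n[3][2]);
    }
}
```

Brian's Brain is the same loop with a third "dying" state inserted between on and off; in MASON the grid and scheduling would come from the toolkit instead.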
Rauch, J., Seeing Around Corners, The Atlantic, April 2002 (also available with highlights)
Homework: Implement Schelling's segregation model in MASON. Use a separate agent for each element. Experiment with various rules such as the number of agents of the same category that make an agent happy, the radius that constitutes a neighborhood, etc. A rule may combine numbers of agents in various neighborhood radii, e.g., the number of similar agents among its nearest neighbors and the number of similar agents in a wider neighborhood.
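The happiness test at the heart of the model is small; here is a minimal sketch, with the radius and threshold as the parameters the homework asks you to vary (names and grid encoding are my own, not MASON's):

```java
// Minimal sketch of Schelling's happiness rule on a toroidal grid.
// 0 = empty; 1 and 2 are the two categories of agents.
public class SchellingRule {
    // An occupied cell's agent is happy if at least `threshold` cells
    // within `radius` hold an agent of the same category.
    public static boolean isHappy(int[][] grid, int y, int x, int radius, int threshold) {
        int h = grid.length, w = grid[0].length, same = 0;
        for (int dy = -radius; dy <= radius; dy++)
            for (int dx = -radius; dx <= radius; dx++) {
                if (dx == 0 && dy == 0) continue;
                if (grid[(y + dy + h) % h][(x + dx + w) % w] == grid[y][x]) same++;
            }
        return same >= threshold;
    }

    public static void main(String[] args) {
        int[][] g = { {1, 1, 0}, {2, 1, 2}, {0, 2, 2} };
        // The center agent (category 1) has two like neighbors within radius 1.
        System.out.println(isHappy(g, 1, 1, 1, 2)); // true
        System.out.println(isHappy(g, 1, 1, 1, 3)); // false
    }
}
```

A combined rule per the homework would call this twice, once with a small radius and once with a larger one, and merge the results.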
Optional: Implement some other social model such as Sugarscape.
Heatbugs: SWARM and University of Michigan
Ant foraging: Swarm Intelligence: From Natural to Artificial Systems (Scientific American Research: Collective Robotics)
Ant colony optimization of TSP: http://uk.geocities.com/markcsinclair/aco.html
Boids: Craig Reynolds (implemented as Woims in MASON); Duncan Crombie
Reynolds on Steering behaviors. Also from Reynolds's page, pointers to introductory tutorials on vectors and forces.
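The basic "seek" behavior from Reynolds's steering-behaviors material reduces to one vector subtraction; a 2-D sketch (my own names, not Reynolds's or MASON's code):

```java
// Sketch of Reynolds's "seek" steering behavior in 2-D:
// steering = desired_velocity - current_velocity, where the desired
// velocity points straight at the target at maximum speed.
public class Seek {
    public static double[] seek(double[] pos, double[] vel, double[] target, double maxSpeed) {
        double dx = target[0] - pos[0], dy = target[1] - pos[1];
        double len = Math.sqrt(dx * dx + dy * dy);
        if (len == 0) return new double[] {0, 0};   // already at the target
        double[] desired = { dx / len * maxSpeed, dy / len * maxSpeed };
        return new double[] { desired[0] - vel[0], desired[1] - vel[1] };
    }

    public static void main(String[] args) {
        // Agent at the origin, at rest, target due east: steering points east.
        double[] s = seek(new double[]{0, 0}, new double[]{0, 0}, new double[]{10, 0}, 2.0);
        System.out.println(s[0] + " " + s[1]); // 2.0 0.0
    }
}
```

Flee is the same force negated, and the flocking rules (separation, alignment, cohesion) are weighted sums of forces built the same way.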
Let's have a tournament. (Let your team be identified by your own subtype of Bot.)
Prisoner's Dilemma: Introduction to the Prisoner's Dilemma (University of Toronto), Introduction to the Prisoner's Dilemma (Kendall, Darwen, Yao), Spatialized Iterated Prisoner's Dilemma
Implement spatialized prisoner's dilemma.
Write a single PrisonersDilemma.java class that has a number of playing strategies built into it. Include at least: always-cooperate, always-defect, tit-for-tat, tit-for-two-tats, Pavlov, grudge, and random.
Have the agent decide which strategy to use by consulting an instance variable, which has a getter and a setter.
Create an instance variable, with a getter and a setter, called mutable, which if true lets the agent switch strategies and if false prevents it from switching strategies.
With these instance variables and accessors, you should be able to set strategies of individual agents from the inspector.
Experiment with establishing some islands of fixed strategies and see what happens. E.g., put an island of TFTs in one corner of the board, an island of pure defectors in another, perhaps an island of suckers (always cooperate) in another part of the board, etc.
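A skeleton of the single class the exercise describes might look like the following. This is one possible shape, not a required one: the strategy and mutable fields have accessors so the inspector can set them, and setStrategy is a no-op when the agent is frozen.

```java
import java.util.Random;

// Skeleton of the PrisonersDilemma class: strategy is an instance variable
// with a getter and setter, and `mutable` gates whether it can be changed.
public class PrisonersDilemma {
    public enum Move { C, D }
    public enum Strategy { ALWAYS_COOPERATE, ALWAYS_DEFECT, TIT_FOR_TAT,
                           TIT_FOR_TWO_TATS, PAVLOV, GRUDGE, RANDOM }

    private Strategy strategy = Strategy.TIT_FOR_TAT;
    private boolean mutable = true;
    private boolean grudging = false;          // set once a GRUDGE player sees a D
    private final Random random = new Random();

    public Strategy getStrategy() { return strategy; }
    public void setStrategy(Strategy s) { if (mutable) strategy = s; }
    public boolean isMutable() { return mutable; }
    public void setMutable(boolean m) { mutable = m; }

    // myLast/theirLast/theirPrior may be null on the first move(s).
    public Move nextMove(Move myLast, Move theirLast, Move theirPrior) {
        if (theirLast == Move.D) grudging = true;
        switch (strategy) {
            case ALWAYS_COOPERATE: return Move.C;
            case ALWAYS_DEFECT:    return Move.D;
            case TIT_FOR_TAT:      return theirLast == null ? Move.C : theirLast;
            case TIT_FOR_TWO_TATS: // defect only after two consecutive Ds
                return (theirLast == Move.D && theirPrior == Move.D) ? Move.D : Move.C;
            case PAVLOV:           // repeat my move if they cooperated, else switch
                if (myLast == null) return Move.C;
                if (theirLast == Move.C) return myLast;
                return myLast == Move.C ? Move.D : Move.C;
            case GRUDGE:           return grudging ? Move.D : Move.C;
            default:               return random.nextBoolean() ? Move.C : Move.D;
        }
    }

    public static void main(String[] args) {
        PrisonersDilemma p = new PrisonersDilemma();
        p.setMutable(false);
        p.setStrategy(Strategy.ALWAYS_DEFECT);   // ignored: the agent is frozen
        System.out.println(p.getStrategy());     // TIT_FOR_TAT
        System.out.println(p.nextMove(Move.C, Move.D, Move.C)); // D
    }
}
```

Setting up the corner islands then amounts to placing agents with setMutable(false) and the desired fixed strategies before stepping the schedule.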
Genetic Algorithms: An introduction to genetic algorithms with Java applets by Marek Obitko; A Genetic Algorithm Tutorial by Darrell Whitley; Hiroaki Sengoku's Traveling Salesman page, Demo, and Paper.
Modify your prisoner's dilemma system so that the players may evolve their rules. Consider the following. Nearly all the PD rules we have implemented depend on the previous move. (Tit-for-two-tats requires a history of two moves.) There are four possible one-move histories: CC, CD, DC, DD, meaning that on the previous play I played C and the opponent played C or I played C and the opponent played D, etc. Consider the following table.

   History (me, opponent)   My next move
   CC                       C
   CD                       D
   DC                       C
   DD                       D
It indicates that I will play C whenever the opponent plays C and that I will play D whenever the opponent plays D. In other words, this is the Tit-for-tat strategy.
The Pavlov strategy (if the opponent plays C, play my previous move; if the opponent plays D, switch from my previous move) may be expressed as follows.

   History (me, opponent)   My next move
   CC                       C
   CD                       D
   DC                       D
   DD                       C
To complete a strategy, one must specify the first move. So let's add an additional entry. Tit-for-Tat (since it's "nice") is the following.

   First                    C
   CC                       C
   CD                       D
   DC                       C
   DD                       D
Using this framework, a strategy can be specified in 5 entries.
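The 5-entry encoding is easy to mechanize; here is a small sketch (my own representation choices: a char array indexed First, CC, CD, DC, DD):

```java
// Sketch of the 5-entry encoding: index 0 is the first move, indices 1-4
// are the responses to histories CC, CD, DC, DD (my last move, then theirs).
public class StrategyTable {
    static final int FIRST = 0, CC = 1, CD = 2, DC = 3, DD = 4;

    static final char[] TIT_FOR_TAT = { 'C', 'C', 'D', 'C', 'D' };
    static final char[] PAVLOV      = { 'C', 'C', 'D', 'D', 'C' };

    // Look up the move for a one-move history ('\0' means no history yet).
    static char move(char[] table, char mine, char theirs) {
        if (mine == '\0') return table[FIRST];
        if (mine == 'C') return theirs == 'C' ? table[CC] : table[CD];
        return theirs == 'C' ? table[DC] : table[DD];
    }

    public static void main(String[] args) {
        System.out.println(move(TIT_FOR_TAT, '\0', '\0')); // C (nice first move)
        System.out.println(move(TIT_FOR_TAT, 'C', 'D'));   // D (retaliate)
        System.out.println(move(PAVLOV, 'D', 'D'));        // C (switch on their D)
    }
}
```

The 5-character array is exactly the genome a genetic operator would later recombine.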
If we want to allow probabilistic strategies, replace the C/D entries with values in the range from 0 to 1. Treat these as probabilities of playing C. Thus a value of 1 means play C. A value of 0.25 means play C with a probability of 0.25, and play D with a probability of 0.75. Thus a probabilistic TFT might look like this.

   First                    0.95
   CC                       0.95
   CD                       0
   DC                       0.95
   DD                       0
Play D in response to the opponent's D. Play C first and in response to the opponent's C 95% of the time; the other 5%, play D.
An alternative representation (perhaps more interesting although less flexible) is to stick with C and D for responses but to add an additional Flip entry. The Flip entry is the probability that the player will invert the move specified in its strategy table. The following is TFT with a Flip probability of 0.1. It means that the player will play a TFT strategy but will invert its move with a probability of 10%. Note that with this approach, one cannot specify the probabilistic TFT in the table above.

   First                    C
   CC                       C
   CD                       D
   DC                       C
   DD                       D
   Flip                     0.1
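Both representations come down to one coin flip per move; a sketch of each (names are my own):

```java
import java.util.Random;

// Sketch of the two probabilistic encodings: a table entry that is the
// probability of playing C, and a plain C/D entry plus a Flip probability.
public class NoisyStrategy {
    // Probabilistic table: the entry is P(play C).
    static char probabilisticMove(double pOfC, Random r) {
        return r.nextDouble() < pOfC ? 'C' : 'D';
    }

    // Flip representation: play the table's move, but invert it with prob. flip.
    static char flippedMove(char tableMove, double flip, Random r) {
        if (r.nextDouble() >= flip) return tableMove;
        return tableMove == 'C' ? 'D' : 'C';
    }

    public static void main(String[] args) {
        Random r = new Random(42);
        int cs = 0;
        for (int i = 0; i < 10000; i++)
            if (probabilisticMove(0.95, r) == 'C') cs++;
        System.out.println(cs);   // roughly 9500 of 10000 moves are C
    }
}
```

The flip version cannot express a 0.95/0 table because one flip probability applies uniformly to every entry, which is exactly the limitation noted above.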
Select one or another of these representations and modify your PD system so that a player selects its next strategy by combining strategies from its neighbors.
Retain the possibility of having some of the players not be mutable.
Use some algorithm that translates a strategy table into a color so that we can see what is happening.
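One simple choice of such an algorithm, offered only as a sketch: treat the five C/D entries as bits and spread them across the RGB channels, so nearby strategies get nearby colors.

```java
// Map a 5-entry C/D strategy table to a packed 0xRRGGBB color.
public class StrategyColor {
    // Table order: First, CC, CD, DC, DD; 'C' = 1 bit, 'D' = 0 bit.
    static int toRGB(char[] table) {
        int bits = 0;
        for (char c : table) bits = (bits << 1) | (c == 'C' ? 1 : 0);
        int r = (bits & 0b11) * 85;          // DC and DD entries
        int g = ((bits >> 2) & 0b11) * 85;   // CC and CD entries
        int b = ((bits >> 4) & 0b1) * 255;   // first-move entry
        return (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        char[] allC = { 'C', 'C', 'C', 'C', 'C' };   // always-cooperate
        char[] allD = { 'D', 'D', 'D', 'D', 'D' };   // always-defect
        System.out.printf("%06x %06x%n", toRGB(allC), toRGB(allD));
        // always-cooperate -> ffffff (white); always-defect -> 000000 (black)
    }
}
```

With this mapping, islands of cooperators and defectors show up as white and black regions, and hybrids as intermediate colors.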
Genetic Programming: http://www.genetic-programming.org/; Genetic programming operations; The Royal Tree problem (summary), Punch et al. (1996), Vanneschi (GECCO 2003), GeneticProgrammingReconsidered.pdf, PotentialBasedComputing.pdf.
Download and install ECJ (by Sean Luke again). When you install ECJ, you may delete ec.app.teambots. It relies on libraries which are not included.
To run ECJ, include the argument -file <pathToParametersFile> in the run command. For example, to run the 6-multiplexer problem, use
as a run parameter. For details on parameter files see: ec/docs/parameters.html.
The output will be at the top level in the file out.stat, which is overwritten on each run--although the output file can also be specified in the parameters file as, for example, stat.file = $out.stat1. The '$' indicates that the file is created in the same directory where the Java run is started. Without it, the file would be created in the directory where the parameters file is found.
A good example is tutorial4 (see ec/docs/tutorials/tutorial4/index.html). To run it, use:
The line that defines the "right" answer is in MultiValuedRegression.evaluate() (which implements SimpleProblemForm.evaluate()):
expectedResult = currentX*currentX*currentY + currentX*currentY + currentY;
In other words, x^2 * y + x*y + y. The answer I got when I ran it was (+ y (* (+ y (* x y)) x)), which is equivalent!
To get another answer, change the random number seed. When I did, I got: (+ y (* x (+ (* y x) y))), which is also equivalent.
To change the seed (or to use time as a seed) put the line
seed.0 = time
into the .params file.
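The two evolved S-expressions above can be checked against the target by expanding them: y + x*(y + x*y) = y + x*y + x^2*y in both cases. A quick numeric check (expression translations are mine):

```java
// Verify that both evolved S-expressions equal x^2*y + x*y + y
// by comparing them at a grid of sample points.
public class RegressionCheck {
    static double target(double x, double y) { return x * x * y + x * y + y; }
    // (+ y (* (+ y (* x y)) x))
    static double evolved1(double x, double y) { return y + (y + x * y) * x; }
    // (+ y (* x (+ (* y x) y)))
    static double evolved2(double x, double y) { return y + x * (y * x + y); }

    public static void main(String[] args) {
        boolean same = true;
        for (double x = -2; x <= 2; x += 0.5)
            for (double y = -2; y <= 2; y += 0.5)
                same &= target(x, y) == evolved1(x, y) && target(x, y) == evolved2(x, y);
        System.out.println(same); // true
    }
}
```

(Sampling at dyadic points keeps the floating-point arithmetic exact, so equality here really does reflect algebraic equivalence of these polynomials.)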
Do one of the following two exercises.
Self-organized criticality and Power law examples, Bak-Tang-Wiesenfeld sandpile (self-organized criticality), Brouwer's Sandpile applet, Per Bak page by Chen Kan (obit, etc.), Per Bak and Paula Gordon show (audio), Extracts from How Nature Works, Liebovitch and Scheurle.