The Conquest

Introduction

This document reports on the development of the multiplayer game “The Conquest”, built using the concepts and insights taught in the Autonomous Systems course. It starts by illustrating the idea behind the game and its rules, and then describes how the game can be seen as a multi-agent system in which players can be either humans or computer programs. Special attention is paid to the problem of designing and implementing an autonomous virtual player (called IntelliAgent) and the game environment, using Java as the programming language.

History and Inspiration

The game The Conquest originates from a well-known toy problem in artificial intelligence: missionaries and cannibals. The missionaries and cannibals problem is a classic river-crossing puzzle (http://en.wikipedia.org/wiki/River_crossing_puzzle): three missionaries and three cannibals must cross a river using a boat that can carry at most two people at a time, under the constraint that, on both banks, missionaries may never be outnumbered by cannibals, otherwise they would be eaten.

The Game

There are two opposing teams (TeamA and TeamB) inside an environment. The environment is divided into cells, and each cell is characterised by certain properties and rules which are initially completely unknown to the players of both teams. The goal of the game is to conquer the environment by eliminating the majority of the opposing team's players. At every stage of the game the players are randomly teleported into the cells, and a single player can perceive what is inside its cell: the cell's colour, the number of its teammates and the number of enemies. The only actions a player can take are to attack or to do nothing; the final action of a team is the action proposed by the majority of its members. According to the rule of that specific cell, the result of the combined actions of the two teams gives positive points to the winners and negative points to the losers. When a player's total reaches zero points it is dead and is eliminated from the game. Initially a player knows nothing about the properties and rules of the cells, nor can it communicate with its teammates, so it can only decide randomly; as the game proceeds, however, it can accumulate knowledge of previous outcomes and the actions that produced them. The idea is to exploit this accumulated knowledge to gain the upper hand over the opposing team and defeat it. At the end of the game, the individuals of the winning team should have acquired such a good knowledge of the rules of the territory that a newly introduced, inexperienced population would be defeated with a high probability of success.
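As a minimal sketch of the voting mechanism described above, the following Java fragment counts the individual proposals of a team and returns the majority action. The class TeamVote, its method majorityAction and the tie-breaking choice are assumptions introduced for illustration only; the enum values mirror the project's ATTACK/DONOTHING actions.

import java.util.List;

public class TeamVote {

    public enum Action { ATTACK, DONOTHING }

    // The team action is the one proposed by the majority of the members;
    // a tie is resolved here as DONOTHING (an assumption, not a documented rule).
    public static Action majorityAction(List<Action> proposals) {
        int attacks = 0;
        for (Action a : proposals) {
            if (a == Action.ATTACK) {
                attacks++;
            }
        }
        return attacks > proposals.size() - attacks ? Action.ATTACK : Action.DONOTHING;
    }
}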

Rules of the Game

A player can be in only one cell at a time at every stage of the game; it can perceive the colour of the cell, the number of its teammates and the number of enemies. According to its own knowledge and the data acquired, it can then only decide to attack or to do nothing. The final action of the team is the one voted for by the majority of its members. In certain situations no team can win, for instance two odd-sized teams inside a cell with the EVENODD rule (see the example below); in this case zero points are assigned to each team. In all other cases there are always a potential winner team and a loser team, and according to their final decisions there are four different point-assignment situations.
WINNER Action | LOSER Action | WINNER Points | LOSER Points
ATTACK        | DO NOTHING   | +2            | -1
ATTACK        | ATTACK       | +2            | -2
DO NOTHING    | DO NOTHING   | 0             | 0
DO NOTHING    | ATTACK       | 0             | -1
The table above shows how points are assigned to the teams after the fight. Every player initially has a certain amount of points; when its total reaches zero it is excluded from the game (dead state). The game ends when, after a prearranged number of stages, one team has at least roughly double the number of surviving players of the opposing team. A sketch of how the cell rules could be applied in code is given after the rule table below. Other rules of the cell:
Rule    | Description
EVENODD | The team with an even number of players beats the team with an odd number.
ODDEVEN | The team with an odd number of players beats the team with an even number.
MINMAX  | The team with fewer players beats the team with more players.
MAXMIN  | The team with more players beats the team with fewer players.
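The following Java sketch shows one possible way of deciding which team potentially wins a cell under these rules. The class RuleSketch and the CellRule and Winner enums are illustrative assumptions and do not claim to match the actual implementation.

public class RuleSketch {

    public enum CellRule { EVENODD, ODDEVEN, MINMAX, MAXMIN }
    public enum Winner { TEAM_A, TEAM_B, NONE }

    // Decides the potential winner of a cell, given its rule and the two team sizes.
    // Returns NONE when neither team satisfies the rule (e.g. two odd-sized teams under EVENODD).
    public static Winner winningTeam(CellRule rule, int countA, int countB) {
        switch (rule) {
            case EVENODD:
                if (countA % 2 == 0 && countB % 2 == 1) return Winner.TEAM_A;
                if (countB % 2 == 0 && countA % 2 == 1) return Winner.TEAM_B;
                return Winner.NONE;
            case ODDEVEN:
                if (countA % 2 == 1 && countB % 2 == 0) return Winner.TEAM_A;
                if (countB % 2 == 1 && countA % 2 == 0) return Winner.TEAM_B;
                return Winner.NONE;
            case MINMAX:
                if (countA < countB) return Winner.TEAM_A;
                if (countB < countA) return Winner.TEAM_B;
                return Winner.NONE;
            case MAXMIN:
                if (countA > countB) return Winner.TEAM_A;
                if (countB > countA) return Winner.TEAM_B;
                return Winner.NONE;
            default:
                return Winner.NONE;
        }
    }
}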

An Example

[Figure: cell.jpg]
The figure above represents a cell of the environment containing two players of Team A and three players of Team B. The colour of the cell is BLUE and the rule of the cell is EVENODD. Every player perceives the situation described above and decides according to its own knowledge:
  • A decides to attack
  • A decides to attack
  • B decides to attack
  • B decides not to attack
  • B decides not to attack
The final actions are: Team A attacks and Team B does nothing. Team A wins because the cell has the EVENODD rule, which means that the team with an even number of members defeats the one with an odd number. Two points are assigned to each member of Team A and one point is subtracted from each member of Team B. If Team B had decided to attack instead of doing nothing, it would have lost two points per member instead of one.
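Using the illustrative winningTeam sketch from the rules section (again, an assumption rather than the real code), this example would resolve as follows:

// Two Team A players vs three Team B players under the EVENODD rule:
RuleSketch.Winner winner = RuleSketch.winningTeam(RuleSketch.CellRule.EVENODD, 2, 3);
// winner == TEAM_A: with "A attacks, B does nothing" each A player gains 2 points
// and each B player loses 1 point, as in the point table above.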

Glossary of Entities

Entity        | Description
ENVIRONMENT   | The environment is the place in which the other entities are situated; it is divided into several cells.
CELL          | A cell is a part of the environment characterised by two properties: a colour and a rule. The colour is simply a way for a player to distinguish one cell from another.
CELL RULE     | The rule by which points are assigned according to the teams' decisions.
PLAYER        | A player is an active entity and can be either a human or a software agent. It can perceive the environment before deciding what to do, as well as the final result.
TEAM          | A team is a set of players that pursue the same goal.
PLAYER ACTION | The decision of a single player, based on the data perceived from the environment and its personal strategy, to attack or not.
TEAM ACTION   | The final action, either attacking or doing nothing, determined by the majority of the players of the team.
STAGE         | The time inside the game is divided into a discrete number of stages. At every stage the players are randomly teleported into the various cells.

Relevance in Autonomous Systems

The Conquest is a game in which the players of a team pursue the same objective, and the team itself is essential to reaching the goal: a single player fighting the enemies alone would die after a few stages. A single player is autonomous, since the other entities cannot influence its choices, and it should also be intelligent, because a purely random behaviour would certainly fail (see the Experiments section below). This situation is an example of a typical multi-agent system (MAS), that is, a system composed of multiple interacting intelligent agents within an environment. Multi-agent systems research typically refers to software agents; however, the agents in a multi-agent system could equally well be robots, humans or human teams. A multi-agent system may contain combined human-agent teams, as in this case.

Agents

As said before, a player can be a human or a piece of computer software, which in the context of autonomous systems is called an agent; furthermore, it should be an intelligent agent. An intelligent agent is an autonomous entity that observes through sensors, acts upon an environment and directs its activity towards achieving a goal. Intelligent agents may also learn or use knowledge to achieve their goals. At the first stage of the game the agents know nothing about the rules of the cells, hence the need to learn them through experience. Learning has the advantage of allowing the agents to operate in initially unknown environments and to become more competent than their initial knowledge alone might allow. The learning element uses feedback from the "critic" on how the agent is doing and determines how the performance element should be modified to do better in the future. In this game, that feedback consists of the results of the teams' final actions. For the project three different types of agents have been implemented: DumbAgent, GodAgent and IntelliAgent.
  • DumbAgent
    • DumbAgent is the simplest and most trivial implementation of an agent player. It is used only as a baseline against which the behaviour of the other, more sophisticated agents can be checked. It simply decides at random whether to attack or not, irrespective of the situation perceived, and it neither collects nor stores information.
  • GodAgent
    • Like the DumbAgent, the GodAgent has been implemented only for testing purposes. The peculiarity of the GodAgent is that it knows everything about the environment from the beginning, and so it always makes the best decision. Unlike the DumbAgent, it does perceive the environment.
  • IntelliAgent
    • This agent represents the heart of the entire project and reflects the description of an autonomous intelligent agent given above. At first it knows nothing about the environment, but as the game proceeds it can store and process information about the situations it encounters, and then use it to decide the best action to take. The strategy is the following: when the IntelliAgent knows nothing about a certain situation the only option is to decide randomly, but as soon as it acquires knowledge its choices are based on it.
At every stage the agent memorises the colour of the cell, the number of its teammates and the number of opponents, along with the final action taken by its team and the result achieved. Using a similarity procedure based on its previous experience, it then calculates the best action for a new situation.
[Figure: agent.jpg]

Java Implementation

Due to the relative simplicity required, Java was chosen as the programming language for realising the prototype of the virtual environment and the agents described above.

Requirements

The goal is to realise a prototype of the core mechanism of the game “The Conquest” as described in this report. The environment must implement the logic of the entire game. A player can be either a human or a software agent. No graphical interface is required.

Overview

This is an overview of the main classes that compose the program.
[Figure: overview.jpg]

Environment

The environment acts as a glue for the other components. It keeps track of the various cells and the two teams, and at every stage it randomly distributes the players among the cells (a sketch of this per-stage distribution is given after the parameter table below). It is implemented in the class Environment.java.
import java.util.ArrayList;

public class Environment {
    // Simulation parameters
    private final int AGES = 400;
    private final int NPLAYERS = 20;
    private final int INITIALPOINTS = 20;
    private final boolean debug = true;

    // Cells of the environment and the players of the two teams
    private ArrayList<Cell> cells;
    private ArrayList<Player> playersA;
    private ArrayList<Player> playersB;
Meaning of the parameters:

Parameter     | Description
AGES          | Number of stages in a game play.
NPLAYERS      | Number of players in each of the two teams.
INITIALPOINTS | Points that every player has at the beginning.
debug         | If set to true, the program prints out information about the cells and the agents.
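As a minimal sketch only, the fragment below shows how the per-stage random teleportation of players into cells could look. The class StageSketch, the method distributePlayers and Cell.addPlayer are hypothetical names introduced for this example and are not the actual API of the project.

import java.util.ArrayList;
import java.util.Random;

// Illustrative sketch: at every stage each player is teleported into a random cell.
// In the real logic, dead players (see Player.getState()) would be skipped.
public class StageSketch {
    private final Random rnd = new Random();

    void distributePlayers(ArrayList<Cell> cells, ArrayList<Player> players) {
        for (Player p : players) {
            Cell target = cells.get(rnd.nextInt(cells.size()));
            target.addPlayer(p);   // hypothetical Cell method
        }
    }
}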

Cell

A cell is the component to which players are assigned at every stage. It lets the players perceive their situation, asks them for their decisions, and assigns points to the players at the end of every “fight”. It is implemented in the class Cell.java.
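One possible way of turning the point table from the rules section into code is sketched below. The class PointSketch and the method assignPoints are assumptions made for illustration and are not taken from Cell.java; the Action enum merely mirrors the project's ATTACK/DONOTHING actions.

public class PointSketch {

    public enum Action { ATTACK, DONOTHING }

    // Returns {winnerPoints, loserPoints} following the WINNER/LOSER table in the rules section.
    public static int[] assignPoints(Action winnerAction, Action loserAction) {
        if (winnerAction == Action.ATTACK && loserAction == Action.DONOTHING) {
            return new int[]{+2, -1};
        }
        if (winnerAction == Action.ATTACK && loserAction == Action.ATTACK) {
            return new int[]{+2, -2};
        }
        if (winnerAction == Action.DONOTHING && loserAction == Action.ATTACK) {
            return new int[]{0, -1};
        }
        return new int[]{0, 0};   // both teams do nothing
    }
}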

Player

The Java interface Player.java must be implemented by any entity that wants to act as a player agent, or by the control interface for a human player.
public interface Player {
    public int getId();                                                // unique player identifier
    public TeamName getTeam();                                         // team the player belongs to
    public void perceive(Color cellColor, int friends, int enemies);   // called by the cell before the decision
    public Action getAction();                                         // the action the player proposes
    public void setPoints(Action teamAction, int points);              // feedback: team action taken and points gained/lost
    public State getState();                                           // whether the player is still in the game
}
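To make the interface concrete, here is a minimal sketch of how a random agent in the spirit of the DumbAgent might implement it. The class name RandomPlayer, the constructor, the handling of points in setPoints and the ALIVE/DEAD state values are assumptions and do not claim to match the real DumbAgent.java.

import java.util.Random;

// Minimal random player: it ignores what it perceives and never stores information.
public class RandomPlayer implements Player {
    private final int id;
    private final TeamName team;
    private int points;
    private final Random rnd = new Random();

    public RandomPlayer(int id, TeamName team, int initialPoints) {
        this.id = id;
        this.team = team;
        this.points = initialPoints;
    }

    public int getId() { return id; }
    public TeamName getTeam() { return team; }

    public void perceive(Color cellColor, int friends, int enemies) {
        // A random agent ignores the perceived situation.
    }

    public Action getAction() {
        // 50/50 random choice between attacking and doing nothing.
        return rnd.nextInt(2) == 0 ? Action.ATTACK : Action.DONOTHING;
    }

    public void setPoints(Action teamAction, int points) {
        this.points += points;   // assumption: the cell passes the points gained or lost
    }

    public State getState() {
        return points > 0 ? State.ALIVE : State.DEAD;   // ALIVE/DEAD values are assumed
    }
}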

IntelliAgent

The class IntelliAgent.java implements the IntelliAgent described earlier. An IntelliAgent stores its experiences in the form of Java objects of the class Information.java inside a data structure called Mind.java. The class Mind.java also contains the methods that realise the reasoning logic.
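Information.java is not listed in this report, but from the way Mind uses it below, an experience record could look roughly like the following sketch; the field names and the meaning of reinforce are inferred and therefore assumptions.

// Sketch of an experience record as used by Mind.decide below: the perceived situation,
// the team action taken, the result obtained and a reinforcement counter.
public class Information {
    private final Color color;     // colour of the cell
    private final int friends;     // teammates perceived in the cell
    private final int enemies;     // enemies perceived in the cell
    private final Action action;   // final team action that was taken
    private final int result;      // points obtained (positive or negative)
    private int reinforce = 1;     // how often this experience has been confirmed (assumed meaning)

    public Information(Color color, int friends, int enemies, Action action, int result) {
        this.color = color;
        this.friends = friends;
        this.enemies = enemies;
        this.action = action;
        this.result = result;
    }

    public Color getColor()   { return color; }
    public int getFriends()   { return friends; }
    public int getEnemies()   { return enemies; }
    public Action getAction() { return action; }
    public int getResult()    { return result; }
    public int getReinforce() { return reinforce; }
    public void addReinforce() { reinforce++; }   // hypothetical update method
}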
import java.util.ArrayList;
import java.util.Random;

public class Mind {
    private ArrayList<Information> knowledge;
    private Random rnd = new Random();

    [...]

    // Decides the next action by comparing the current situation with the stored experiences.
    public Action decide(Color cellColor, int friends, int enemies) {
        int attackPot = 0;      // accumulated evidence in favour of attacking
        int donothingPot = 0;   // accumulated evidence in favour of doing nothing
        int similaritiesA = 0;
        int similaritiesD = 0;

        // Only experiences collected in cells of the same colour are considered.
        for (Information information : knowledge) {
            if (information.getColor() == cellColor) {
                if (information.getAction() == Action.ATTACK) {
                    similaritiesA = similarity(information.getFriends(), information.getEnemies(), friends, enemies);
                    attackPot += information.getResult() * information.getReinforce() * similaritiesA;
                }
                if (information.getAction() == Action.DONOTHING) {
                    similaritiesD = similarity(information.getFriends(), information.getEnemies(), friends, enemies);
                    donothingPot += information.getResult() * information.getReinforce() * similaritiesD;
                }
            }
        }

        if (attackPot > donothingPot) {
            return Action.ATTACK;
        } else if (attackPot < donothingPot) {
            return Action.DONOTHING;
        } else {
            // No useful knowledge yet (or a tie): choose randomly.
            if (rnd.nextInt(2) == 0) {
                return Action.ATTACK;
            }
            return Action.DONOTHING;
        }
    }

    // Compares a remembered situation (a friends, b enemies) with the current one (c friends, d enemies).
    private int similarity(int a, int b, int c, int d) {
        int m = 1;
        if (a % 2 == 0 && c % 2 == 0 && b % 2 == 1 && d % 2 == 1) m++;   // friends even, enemies odd in both
        if (a % 2 == 1 && c % 2 == 1 && b % 2 == 0 && d % 2 == 0) m++;   // friends odd, enemies even in both
        if (a < b && c < d) m++;                                         // outnumbered in both
        if (a > b && c > d) m++;                                         // outnumbering in both
        return m;
    }
}
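To show where decide fits in the learning loop described earlier, the fragment below sketches how an IntelliAgent could use its Mind: remember the perceived situation, propose an action, and after the fight store a new Information record with the result. Apart from Mind.decide, the names here (including the hypothetical Mind.store method) are assumptions, since IntelliAgent.java is not listed in this report.

// Hypothetical fragment of an IntelliAgent built around Mind (names are illustrative).
public class IntelliAgentSketch {
    private final Mind mind = new Mind();
    private Color lastColor;
    private int lastFriends, lastEnemies;

    public void perceive(Color cellColor, int friends, int enemies) {
        // Remember the situation so it can be stored together with the outcome later.
        lastColor = cellColor;
        lastFriends = friends;
        lastEnemies = enemies;
    }

    public Action getAction() {
        return mind.decide(lastColor, lastFriends, lastEnemies);
    }

    public void setPoints(Action teamAction, int points) {
        // Store the experience: situation + team action actually taken + result obtained.
        // Mind.store is hypothetical; the real class may merge duplicates via the reinforce counter.
        mind.store(new Information(lastColor, lastFriends, lastEnemies, teamAction, points));
    }
}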

Experiments and Results

To test the system, a first trial was done with two teams of DumbAgents only; after several games the victories are equally distributed between the two teams. This is expected, since each DumbAgent randomly chooses to attack or do nothing with 50% probability. A second trial was done with a team of DumbAgents against a team of GodAgents; as expected, the GodAgent team wins all the matches after a relatively small number of stages, and normally after 2000 stages no DumbAgent survives. The third trial was done with a team of IntelliAgents against a team of DumbAgents; as in the second trial, the IntelliAgent team always beats the DumbAgent team, but a greater number of DumbAgents survives compared to the second trial. This shows that the IntelliAgents have evolved during the game.

How to Run the Application

The source code can be downloaded from the Attachments section. To run the application from the command line, extract the zip content, change into the resulting directory and type: "java Game". Java 1.7 is required.