


  • Roberto Casadei


This is the report of a project developed in the context of the Autonomous Systems course.

The goal of the project is to experiment with and put into practice the concepts and insights presented in the course, by studying and outlining the core mechanisms of a game.

The game considered in this work is ConspiraBoom. It is a socio-technical game that draws strong inspiration from Two Rooms and a Boom.

See more about Two Rooms and a Boom: https://www.kickstarter.com/projects/gerdling/two-rooms-and-a-boom .

In a nutshell, this project is a case study and, given the complexity of the game considered, is intended to work as a prelude to a conceptual, practical, and methodological stress test of our knowledge apparatus.

What: the game

The short story

ConspiraBoom is a competitive, team-based, social deduction party game set in two rooms between which hostages are exchanged at each round. The players in the rooms (initially strangers to one another) interact so as to ultimately allow the Bomber to assassinate the President, or the latter to stay alive.

The long story

ConspiraBoom is a competitive, social deduction party game. There are two teams: the Red Team and the Blue Team. Players are secretly given a card that certifies their team and role: this is the sole information a player knows at the beginning of the game (in addition to the rules of the game, of course). There are two special roles: the President (in the Blue Team) and the Bomber (in the Red Team). Players are equally distributed between two rooms (i.e., separate playing areas). The game then consists of a number of rounds. In each round, players interact with each other by exchanging information. At the end of each round, some players (i.e., the hostages) are swapped into the opposing rooms, as decided by the room leaders elected for that round. If the Red Team's Bomber is in the same room as the President at the end of the game, then the Red Team wins; otherwise, the Blue Team wins.

Why is this game fun?

ConspiraBoom entertains its players by making them interact in a socially dynamic context where competition, cooperation, and unpredictability play a key role. Players are kept guessing: ``is he a team-mate or an enemy?``, ``what's her strategy?``, ``what's the deep reason for his actions?``, ``what's happening in the other room?``, ``what happens if I do this?``

Moreover, the fear of being discovered, the cleverness of the opponents, the unexpected, and the risk stimulate emotions such as excitement, fear, pleasure, and reward.

Other characteristics that might contribute to the entertainment follow:

  • The game is short and doesn't require complex setups
  • Each execution is potentially different from the others
  • The game is very extensible/customizable: it can support a variety of new situations or be adapted to generate more enjoyable scenarios

Why is this game relevant within the Autonomous Systems course?

Basically, like many other games, Two Rooms and a Boom builds on the autonomy of its actors (players) to be interesting and enjoyable. In fact, a good player should be capable of self-government and of self-protection against the influence of other players (who might try to manipulate them). Note that a player's autonomy does not just need to be preserved against opponents, but also against team-mates, who might suggest ineffective strategies or communicate biased information.

Good players should be intelligent and, in particular, capable of complex epistemic reasoning, based on all the knowledge they hold about the game. Note that the game would be quite boring if a player made deterministic decisions exclusively based on certain knowledge. So, the game is interesting because the players can take unpredictable decisions and because they can elaborate effective strategies based on probabilistic, multi-level reasoning.

Also, it is important for a player to be able to predict (with a certain confidence) the behavior of the other players. In doing so, one could reason by asking: ``what are their intentions? what do they believe about me?`` (cf. the intentional stance).

Another reason for which players should have strong executive autonomy and intelligence lies in the fact that each player is responsible towards all the team-mates: any ``stupid`` action might damage the entire team.

The competition is ruled so as to foster two conflicting (sub-)goals. As information potentially represents a competitive advantage, on one side, the less information is given to opponents, the lower their ability to take profitable decisions; on the other side, the more information is spread across the team, the better the chance to perform well.

When a player discovers one or more of their team-mates, things get interesting, as they can exchange information and cooperatively build strategies. In particular, a team-level strategy could be defined (e.g., ``if you find yourself in this situation, do X and Y, and I'll do W and Z``).

Imagination is also important to predict future situations (e.g., ``what is happening in the other room?`` or ``what will happen in the next round?``). Humans in the real game may also have intuitions (a ``sixth sense``) or pick up implicit signs (e.g., based on personal acquaintance with the other players).

In a nutshell, this game exhibits a complexity and characteristics that make it amenable to analysis/modelling/engineering within the conceptual framework developed in the context of the Autonomous Systems course.


The project must satisfy the following requirements:

Functional requirements

  • Basic functional requirements: the game must work and the software players must be able to play it
  • Human-playability: as we are interested in the socio-technical dimension, the game must allow human players to enter the game in place of software players
  • Autonomy: players must exhibit autonomous behavior
  • Intelligence: players must exhibit a sufficient degree of intelligence (from the point of view of human players)
Technical requirements

  • Unpredictability: players must be capable of taking courses of action non-deterministically (e.g., software players will exhibit a degree of stochastic behavior that does not override a thoughtful, coherent strategy)
  • Practical reasoning: players must be capable, by reasoning, of choosing actions that are appropriate to the current game context
  • Epistemic and ontological reasoning: players must be capable of inferring implicit knowledge from explicitly represented facts
  • Probabilistic reasoning: players must be capable of dealing with uncertainty
  • Profiles: the software systems must allow for multiple profiles (possibly characters, inclinations, attitudes) to be defined for software players
  • Cooperation: players must be capable of cooperation (e.g., by cooperatively building strategies)



Term | Informal definition
Player | Autonomous entity that plays the game
Team | Collection of players
Role | Set of rules and properties assigned to a player
Team role | Particular role in a team
Room role | Particular role in a room
Action | Move from a player
Strategy | Coordinated set of mappings from contexts to actions
Turn | Stage of the game where only a single player can perform a game action

Elements of a game execution

  • 2 Teams: blues and reds
  • N Players per team
  • 2 Rooms
  • Team roles
    • President (only 1 in blue team)
    • Bomber (only 1 in red team)
    • Normal player (N-1 per team)
  • Roles within a room
    • Normal actor (N-1 per room)
    • Room leader (1 per room)
  • Actions (more later)
  • Information units
    • Team/role of a player
    • Strategy of a player
    • Observed actions in the past
  • R rounds
  • Phases
    • Setup: assignment of a pair (team, role) for each member; equally-sized random distribution of players into the two rooms
    • Round
      • Room leader selection
      • Interaction
      • Exchange of hostages (as decided by the room leaders)
    • End of game or back to Round if num_rounds < R
  • Rules
    • Room leader selection by voting (in case of a tie, a leader is randomly chosen)
    • The players cannot interact during leader selection
    • Reds win if the Bomber is in the same room as the President; otherwise, blues win.
    • A round terminates once a _round termination condition_ takes place (e.g., time-based or per num-of-interactions)
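The setup, round structure, and win condition listed above can be sketched as a plain game loop. The sketch below is illustrative only (class, method, and player names are assumptions, not taken from the prototype); leader selection and interaction are elided, and exactly one random hostage per room is swapped each round:

```java
import java.util.*;

public class GameSketch {

    /**
     * Illustrative game loop: setup, R rounds of hostage exchange, win check.
     * Returns the winning team ("red" or "blue").
     */
    public static String play(int playersPerTeam, int rounds, Random rnd) {
        // Setup: create the two teams; "blue0" is the President, "red0" the Bomber.
        List<String> players = new ArrayList<>();
        for (int i = 0; i < playersPerTeam; i++) {
            players.add("blue" + i);
            players.add("red" + i);
        }
        // Equally-sized random distribution of players into the two rooms.
        Collections.shuffle(players, rnd);
        Set<String> room0 = new HashSet<>(players.subList(0, playersPerTeam));
        Set<String> room1 = new HashSet<>(players.subList(playersPerTeam, 2 * playersPerTeam));
        for (int round = 1; round <= rounds; round++) {
            // Leader selection and interaction elided; only the exchange of
            // hostages (here: one random hostage per room) is modelled.
            String h0 = pickAny(room0, rnd);
            String h1 = pickAny(room1, rnd);
            room0.remove(h0); room1.remove(h1);
            room0.add(h1);    room1.add(h0);
        }
        // Reds win iff the Bomber ends up in the same room as the President.
        boolean sameRoom = room0.contains("red0") == room0.contains("blue0");
        return sameRoom ? "red" : "blue";
    }

    private static String pickAny(Set<String> room, Random rnd) {
        List<String> l = new ArrayList<>(room);
        return l.get(rnd.nextInt(l.size()));
    }
}
```

The sketch also makes explicit that the rooms always stay equally sized: each exchange removes and adds exactly one player per room.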


An action is any interaction, performed by a player, which potentially has an effect on the world, that is, on the other players or on the game.

There are two kinds of actions:

  1. Communication actions
  2. Game-related actions
While communication actions can be performed in any game phase, other actions are specific to a given phase: performing an action at the wrong time simply makes it fail.

Phase or context-specific actions include:

  • Room leader selection
    • Vote
  • Interaction
    • Co-reveal
    • Colour-reveal
  • Hostages exchange
    • (Only for room leaders) Selection of hostages
Communication actions.

The game places very few restrictions on the interactions that can take place.

However, typical communications that can be provided for include:

  • General
    • Ask X for some information
    • Tell some information to X
  • Interaction
    • Ask X to co-/colour-reveal
    • Accept/reject to co-/colour-reveal


Typically, complex practical and epistemic reasoning is carried out by players.

The reasoning process has the following characteristics:

  • It may have ``genetically'' specific features -- i.e., each player may reason in a different manner or have different attitudes
  • It has to deal with partial information
  • It might be guided by, or in turn drive, the probabilities of facts
The following are examples of reasoning that might be implemented.

Epistemic reasoning

Given: (i) the number of players; (ii) the distribution of roles in a team; (iii) the roles of a subset of players.
Inferred knowledge: the probability of player P having role R.

Given: (i) player X asks me to co-reveal; (ii) I don't know his/her role.
Inferred knowledge: (X = president/bomber) is quite probable.

Given: (i) a team-mate tells me some information.
Inferred knowledge: that information is almost certainly true (for him).
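The first kind of inference (the probability of a player having a given role) can be made concrete under a simple uniform model, which the prototype does not implement; the sketch below (class and method names are illustrative) assumes roles are uniformly distributed among the players whose role is still unknown, so the probability reduces to counting:

```java
public class RoleProbability {

    /**
     * Probability that a given unknown player holds role R, assuming roles are
     * uniformly distributed among the players whose role we do not know.
     *
     * @param copiesOfR      how many cards of role R exist in the game
     * @param knownWithR     how many players are already known to hold R
     * @param unknownPlayers how many players have an unknown role
     */
    public static double ofRole(int copiesOfR, int knownWithR, int unknownPlayers) {
        if (unknownPlayers == 0) return 0.0;
        return (double) (copiesOfR - knownWithR) / unknownPlayers;
    }
}
```

For instance, with a single Bomber card, no revealed roles, and five players whose role I do not know (I know my own), the probability that any given one of them is the Bomber is 1/5.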

Practical reasoning

Given: (i) my team-mates and I have decided that X has to be voted for.
Inferred action: vote X.

Given: (i) during leader election, I only know the roles of a few players, and they are of the opposing team.
Inferred action: decide whether it is better to vote for myself or to randomly vote for one of the unknown players.

Given: (i) I am the room leader and a red player; (ii) it is the last round; (iii) the Bomber is in this room; (iv) the President is in the other room.
Inferred action: send the Bomber to the other room, based on the probability of the President being sent here.
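The last example can be read as a simple comparison of probabilities. The sketch below is illustrative (not part of the prototype): sending the Bomber wins if the President stays in the other room, while keeping the Bomber wins if the opposing leader sends the President here.

```java
public class LeaderDecision {

    /**
     * Last-round decision for a red room leader with the Bomber in this room
     * and the President in the other room.
     *
     * @param pPresidentSentHere estimated probability that the other leader
     *                           sends the President to this room
     * @return true if the Bomber should be sent to the other room
     */
    public static boolean sendBomber(double pPresidentSentHere) {
        // Sending wins with probability 1 - p; keeping wins with probability p.
        return (1.0 - pPresidentSentHere) > pPresidentSentHere;
    }
}
```

Under this toy model, the Bomber should be sent whenever the President is estimated to stay in the other room with probability greater than one half.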

Feasibility of Two Rooms and a Boom

A question that must be addressed is: can we build, with reasonable effort, a computer program that is able to play this game effectively?

Of course, the answer depends on the meaning of "reasonable" and "effectively".

At a first sight, Two Rooms and a Boom is a game that just seems unplayable by computers.

In the original game, human players do their best to conceal information, mislead supposed enemies, perturb other players, deduce facts from any manifestation or sign (be it verbal or not), or elaborate strategies that cannot be predicted by other players. Very complex reasoning takes place, clearly multi-level (it's common to reason about the way other players reason) and based on a lot of context information.

If we put aside for a moment undecidable propositions, it is good to reason about the following quote from Von Neumann, "You insist that there is something that a machine can't do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that," which essentially suggests to meditate on the depth of our comprehension of the problem at hand and on our ability to effectively express it in a machine-readable way.

In general, multiple levels of feasibility can be considered:

  1. Theoretical feasibility of reasoning: relating to decidability issues in ontological/probabilistic reasoning
  2. Practical feasibility of reasoning: relating to complexity and time constraints
  3. "Turing-test" feasibility: can we build agents that are able to play this game just like humans do?
  4. Project feasibility: can we build a system of acceptable quality within reasonable time/effort constraints?
The "plain vanilla" Two Rooms and a Boom seems too unconstrained (almost free of rules). As a consequence, in that case the game designers would have the burden of codifying anything the players can deal with (be it natural-language utterances, telepathy, 10-level GEB "strange loops", or simply game contexts where a player has enough information to make an optimal decision).

So, I have thought about the possibility of shaping/restricting the game so as to find a trade-off between complexity, playability, and (most importantly) the goals of this very project. The result is ConspiraBoom.


Domain model

Structural dimension

The object-oriented paradigm effectively supports the modelling of passive game elements and game-related information.

The following UML diagram concisely shows these entities from a structural point of view.



The Game

Behavioral dimension

On the behavioral dimension, the game can be modelled as a FSM:


System architecture

Structural dimension


Interaction dimension (an excerpt)

The following UML Sequence Diagram (with some abuse of notation) provides an excerpt of the interactions in the system during the Init game phase:



Implementation in Jason

Jason is a Java-based, multi-agent system development platform.

It has been chosen as the platform for the development of the prototype of the game because it provides many first-class abstractions that directly map to the elements of our conceptual game model.

The system (environment and agents) is declaratively described as follows:

MAS prototype_TwoRoomsAndABoom {
	infrastructure: Centralised
	environment: proto1.env.Env(gui,6)
	agents:
		roby 	proto1_human_player;
		marco 	proto1_dumb_player;
		ste 	proto1_player agentClass proto1.agent.PlayerAgent agentArchClass proto1.agent.PlayerAgentArch;
		fede 	proto1_player agentClass proto1.agent.PlayerAgent agentArchClass proto1.agent.PlayerAgentArch;
		fanny 	proto1_player agentClass proto1.agent.PlayerAgent agentArchClass proto1.agent.PlayerAgentArch;
		vale 	proto1_player agentClass proto1.agent.PlayerAgent agentArchClass proto1.agent.PlayerAgentArch;
		james 	proto1_player agentClass proto1.agent.PlayerAgent agentArchClass proto1.agent.PlayerAgentArch;
		chiara 	proto1_player agentClass proto1.agent.PlayerAgent agentArchClass proto1.agent.PlayerAgentArch;
}

More about the implementation in the Discussion section.



Discussion: on the work

Simplification: Two Rooms and a Boom vs. ConspiraBoom

Source of complexity: too few rules

In Two Rooms and a Boom, interaction has essentially no rules; the only regulations that apply are those of your national legal system: players can lie, break agreements, remain silent, mix truth and falsehood, steal one another's cards, and so on. Such a variety and complexity of interaction is simply overwhelming, so it has been removed.

Source of complexity: leader campaign

In the original game, the players are allowed to interact right before the voting of a leader. In ConspiraBoom, this "leader campaign" phase has been removed because it would have implied modelling a political context (the ability to talk about one's suitability for such a role, to promise, trust, influence, ...).

This choice also puts emphasis on strategies. For example, a team could establish internal, team-level rules to decide who should be voted in which context.

Source of complexity: concurrency and time in the real-world

Human and software agents work and reason at different speeds. Moreover, for the game flow to be easier to follow in software, it is better to hide concurrency. So, the game has been simplified by introducing a concept of turn. This does not significantly distort the original game, as human players typically focus on one interaction at a time.

Moreover, in Two Rooms and a Boom, the rounds are timed. This feature raises issues related to bounded rationality for human players (there is just too little time to think and act); it has been ignored in the software-based game.

Mapping game elements to autonomy-related concepts and implementation choices


First of all, players are autonomous entities. Secondly, they play the game, they do something, they act. Thus, players can be naturally mapped to agents, which are computational entities where autonomy and agency are the key features. As a consequence of being autonomous, their moving force is internal; thus, they are proactive.

/* Initial goals */
!start.

/* Plans */
+!start <- wanna_play.

In Jason, our players are agents with mental states. As they are intentional systems, their behavior can be understood and predicted using the intentional stance.

The agency is supported via abstractions such as intentions/goals (cf. practical reasoning in Bratman), plans, and internal/external actions.


The actions are situated in the sense of Suchman. In other words, the context in which they are carried out is prominent. A player could (as it is autonomous, ``free'') advance a vote for its candidate for room leadership during the phase of hostages swapping, but it would not have any effect as the action would be performed at the wrong time.


The concept of a strategy has been defined as a mapping from context to actions. Jason provides a first-class abstraction for strategies: plans. Note that plans can be inspected, removed, and added. The support for these reflective and meta-programming-style features might turn out to be useful for having players talk about strategies or make "strategic agreements."

Two rooms

The players are situated in a room and can be moved from one room to the other. The rooms constitute the game environment and effectively define a notion of locality, thus ensuring that only players in the same room can interact.

Game, game rules, game phases

The game phases set the context for the behavior of the agents. The players coordinate and act on the basis of the game rules. The environment, by representing the game in itself, can support the coordination of players in a stigmergic fashion. Each agent keeps a representation of the state of the game and possibly its history as well.

In practice, in the Jason-based prototype, the players are triggered by percept additions. When a player perceives a change in the context (e.g., the game turns to the hostages-exchange phase and the player believes itself to be the room leader), it knows that it should act according to the game rules.

+phase(leader_selection)
    <- /* vote for a room leader */ true.
+phase(interaction)
    <- true. // do nothing: wait for my turn
+turn(Who) : .my_name(Me) & Who == Me
    <- /* it's my turn: figure out what to do and do it */ true.
+phase(hostages_exchange) : .my_name(Me) & room_leader(Me)
    <- /* select the hostages to be exchanged */ true.

Two Rooms and a Boom as a social game

The players in the game can be collectively mapped to the notion of society. As in human societies, the agents have different roles within a given social context (here, the game). As in human societies, different players may have conflicting goals. As in human societies, players with the same roles can collaborate to reach goals that would be hard to achieve with isolated effort.

Attitudes and approaches to the game

The approach of a player to the game depends on the player itself, its mentality, its intelligence, its insightfulness, and so on.

The proposed solution, in Jason, consists in the definition of multiple agents with different knowledge and plans. For example, the following kinds of agents have been implemented: dumb_player (not very intelligent), human_player (which delegates decisions to the user), and player (the basic player).

Another solution, which could be integrated with the former, consists in the parametrization of the agents. The parameters might influence the stochastic behavior of an agent by adjusting the probabilities associated with the different options. For example, a shy player may tend not to vote for himself during the election of the room leader.
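As a sketch of such a parametrization (the `shyness` parameter and the class below are illustrative assumptions, not part of the prototype):

```java
import java.util.Random;

public class PlayerProfile {

    private final double shyness; // in [0,1]: 1.0 means never voting for oneself
    private final Random rnd;

    public PlayerProfile(double shyness, Random rnd) {
        this.shyness = shyness;
        this.rnd = rnd;
    }

    /** During leader election, a shy player tends not to vote for itself. */
    public String vote(String self, String someoneElse) {
        // With probability `shyness`, prefer another candidate over oneself.
        return rnd.nextDouble() < shyness ? someoneElse : self;
    }
}
```

A whole character could be described by a handful of such parameters (shyness, boldness, talkativeness, ...), each biasing one decision point of the agent.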


What kinds of autonomy do the software players exhibit?

  • It is certain that they do not exhibit moral autonomy, nor motivational autonomy with respect to the higher-level goals, that is, the exogenous goals given to them when they are constructed, namely playing the game and following the game rules. At this level, the players have just executive autonomy.
  • The agents are partially autonomous with respect to their designer. The way in which they are built reflects the designer's goals. However, the unpredictability of the game and the stochasticity of their cognitive processes definitely make them autonomous to a certain degree (but within the boundary set by the designer).
  • With respect to the environment, the players are partially autonomous. They sense and react to the environmental changes they are interested in, possibly changing their goals.
  • Also, there is partial autonomy with respect to other agents. According to the rules of the game, a player must respond to a request issued by another player; thus, some higher-level goals (e.g., the goal to play the game, as provided by the designer) "force" the agent to interact with other agents even though it might not do so in a different context. However, when it comes to playing the game, the goals of an agent are endogenous and cannot be changed from the outside.
  • With respect to team-mates, the players are not very autonomous from a cognitive point of view. In fact, anything a team-mate tells them is assumed to be true. Such an assumption is not absolute but depends on the way in which the players are defined:
// I believe what my team mates tell me they believe
+A[source(TeamMate)] : team_mate(TeamMate) <- +A.

  • From the point of view of mobility, the players are not autonomous. When they are in a room, they cannot go to the other room (this is enforced by the environment). In practice, when they are hostages, they are actually moved to the opposite room without any chance of self-determination.
    • Another solution is one where the players perceive that they are hostages and should therefore move to the other room. In this case, the game cannot continue until the hostages have gone to the opposite rooms by means of autonomous deliberation.
  • The agents support a notion of adjustable autonomy via the delegate_to_human external action. For example, an agent which finds itself in a "difficult" situation may transfer the responsibility for a decision to a human. Note that a human player can be easily injected into the system using an agent which delegates, at every decision point, any choice to a human.
/* proto1_human_player.asl */
+phase(leader_selection)
    <- delegate_to_human.
+turn(Who) : .my_name(Me) & Who == Me
    <- delegate_to_human.
// ...

The action has been implemented as follows:

/* Env: the external action triggers an input dialog */
String str = UserCommandFrame.AskCommandToUser();
Structure act = Structure.parseLiteral(str);


The players perform practical reasoning when they are involved in the process of deciding what action should be executed.

+phase(leader_selection) <-
	!decide_who_to_vote(Who); // practical reasoning
	vote(Who).                // is directed towards action

Whereas the following is a trivial example of epistemic reasoning:

+role(Player, Team, Role)[source(percept)] <- // co-revelation
    ?my_role(MyTeam, _);
    if (Team == MyTeam) {
        +team_mate(Player)  // take a mental note
    }.


  • Epistemic reasoning, ontological reasoning, and probabilistic reasoning have not been tackled in depth, but we are aware of their significance in this context.
  • To this purpose, custom belief-propagation and belief-revision functions could be implemented.
  • The deliberation phase of practical reasoning (encoded in the agents themselves by the designer and executed by their internal reasoner, but not performed by the agents) would benefit from these mechanisms as well. The main desired state of affairs is known in advance: depending on whether the player is red or blue, the President and the Bomber must, or must not, be in the same room after the last round. Good intermediate states of affairs might be considered as well, possibly evaluated on the basis of utility metrics (e.g., number of team-mates in a room, number of players whose role is known, etc.). Then, a sort of imagination/simulation process might take place: an agent would evaluate the probability of reaching a given desired state in the case of one decision or another.

Stochastic behavior

A small extension to Jason has been developed to support the selection among multiple plans of action (options) based on probability values assigned to them using the mechanism of annotations.

It has been implemented by customizing the selectOption() method of class jason.asSemantics.Agent.

public Option selectOption(List<Option> options) {
    List<Pair<Option,Double>> lst = new ArrayList<Pair<Option,Double>>();
    for (Option opt : options) {
        Literal prob = opt.getPlan().getLabel().getAnnot("prob");
        if (prob == null) {
            return options.get(0); // default: no "prob" annotation
        }
        NumberTerm probVal = (NumberTerm) prob.getTerm(0);
        double probability = Double.parseDouble(probVal.toString());
        lst.add(new Pair<Option,Double>(opt, probability));
    }
    // Sample an Option based on the probabilities
    return Utils.RandomSample(lst);
}
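The Utils.RandomSample helper used above performs a weighted (roulette-wheel) selection. A possible self-contained implementation is sketched below; this is an assumption about its behavior, not necessarily the code in the repository, and a minimal Pair type is defined inline to keep the sketch self-contained:

```java
import java.util.List;
import java.util.Random;

public class Utils {

    private static final Random RND = new Random();

    /** Minimal pair type, defined inline to keep the sketch self-contained. */
    public static class Pair<A, B> {
        public final A first;
        public final B second;
        public Pair(A first, B second) { this.first = first; this.second = second; }
    }

    /**
     * Roulette-wheel selection: returns an element with probability
     * proportional to its associated weight.
     */
    public static <T> T RandomSample(List<Pair<T, Double>> weighted) {
        double total = 0.0;
        for (Pair<T, Double> p : weighted) total += p.second;
        double r = RND.nextDouble() * total;
        for (Pair<T, Double> p : weighted) {
            r -= p.second;
            if (r <= 0.0) return p.first;
        }
        return weighted.get(weighted.size() - 1).first; // numerical safety net
    }
}
```

The weights need not sum to one: they are normalized implicitly by sampling in [0, total).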

About the first version of the prototype

The first prototype implemented the logic of the game in an agent rather than in the environment.

The Jason agent defined in proto0_organizer.asl works as an "organizer" for the game. In other words, it is the agent which arbitrates the evolution of the game. It keeps the game state in its beliefs and makes it evolve based on the current state and the events that arise from the interactions with the players.

The organizer resembles the referee in sports: it is essentially a coordinator. Such an approach works and may be fine for a quick prototype but, as the "coordination medium" (the organizer) and the "coordinables" (the players) are components of the same nature (i.e., agents), it just seems improper. It is definitely more appropriate to encapsulate such system-level features into the environment (which may even be implemented with multiple agents, according to the behavior to be expressed).

Additional considerations

On the tools used

  • Jason has been chosen for the development of the prototype of ConspiraBoom. It is an agent-oriented programming language based on the BDI architecture. The concepts and the programming infrastructure provided by Jason contribute to reducing the abstraction gap between the problem and its actualization.
  • Jade is a FIPA-compliant, Java-based agent-development framework; it is actually a middleware. It provides concepts such as agents and tasks, but I have preferred Jason as the latter fits the objectives of this project better.
  • The integration of Jason with CArtAgO might be considered for the part of cooperation of players at the team-level.
Jason has been good for experimenting with BDI agents, but it provides very limited support for the development of non-trivial applications.


  • Better definition and refactoring of the rules of the game
  • Robustness of the system and error handling
  • Completion of the intelligent players with the basic strategies
  • Ontological, probabilistic reasoning
  • GUI and playability


This work represents a brief case study on the development of a prototype for a complex socio-technical game, ConspiraBoom. It has been an opportunity to correlate practical activities with a set of concepts related to the many autonomies and features of complex systems. It points out that we need conceptual, methodological, and programming tools in order to adequately tackle today's challenges.

The source code is available at the following link: https://github.com/robyonrails/as1415_conspiraboom/