
README

Technologies: Python, MPI

Status: Complete, No longer maintained.

This program consists of two components: a multi-process implementation of a multi-agent simulator, and a demo implementation of simple reactive agents travelling through a maze.

Communication between processes is done using MPI, and the agents are simulated using a combined multi-process and multi-threaded model in which each agent gets a time slice.

The name is a play on 'Super Ethical Reality Climax', a mini-game in the game Saints Row that pitted players against each other in a death match. In the same spirit, the idea is to create a simulator that allows pitting different agent implementations against each other.

You can find a high level presentation here.

What works

The program simulates a large grid with multiple agents running simultaneously. The following functionality was implemented:

  • Physical partitioning of the world space across multiple processes
  • Fair scheduler with multi-threaded execution
  • Messaging between agents and the world using a postbox model
  • Messaging between partitions using MPI
  • Extensible API for implementing different reactive agents
  • Per-agent state machine
  • Per-agent perception, including across multiple partitions
  • Transparent migration of agents between processes
  • Graphical representation using pyGame

The objective

The objective is to create a multi-agent simulation using MPI that can scale across multiple processors and support a large number of agents (200+) simultaneously; each agent can potentially use a different AI implementation.

A secondary objective is to make it easy to create different AI implementations, by providing an API based on the reactive agent architecture.

More Information

This program was turned in as the final project for the Parallel Programming class of my master's degree. It was developed by a two-person team.

The program is implemented by physically partitioning the world into multiple processes. Each process represents a sub-grid that can contain walls or floors and where multiple agents can 'live'. Each agent is a Python class with its own state machine and AI implementation, and it can interact with the world or other agents via its actuators. Each partition tracks the agents it contains, its neighbours, and the state of each agent.

The agent simulation is done at each partition using a scheduler, which assigns a time slice to each agent at a regular, configurable interval. When an agent is activated, it runs a function that represents a single step in the reactive agent stack and can return an action, such as moving. Each agent also perceives the world in front of it on a regular basis.
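
The time-slice scheduling described above can be sketched roughly as a round-robin loop. This is a simplified illustration, not the project's actual scheduler; the `Scheduler` and `WandererAgent` names and the `step()` signature are assumptions for the example:

```python
import collections

class Scheduler:
    """Round-robin scheduler: each registered agent gets one
    activation (time slice) per scheduling round."""

    def __init__(self):
        self._queue = collections.deque()

    def register(self, agent):
        self._queue.append(agent)

    def run_round(self):
        """Activate every agent once; collect any actions they return."""
        actions = []
        for _ in range(len(self._queue)):
            agent = self._queue.popleft()
            action = agent.step()      # one step of the reactive stack
            if action is not None:
                actions.append((agent, action))
            self._queue.append(agent)  # back of the queue: fairness
        return actions

class WandererAgent:
    """Toy agent that returns a MOVE action every second activation."""
    def __init__(self):
        self.ticks = 0

    def step(self):
        self.ticks += 1
        return "MOVE" if self.ticks % 2 == 0 else None
```

Cycling agents through a deque keeps activation order stable, so no agent is starved regardless of what the others return.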

The scheduler is heavily inspired by the reactor used by the Twisted networking framework and attempts to be as fair as possible. Agent activation is done in parallel by a multi-threaded dispatcher; to minimize the overhead of creating and destroying threads, a thread pool pattern is used.
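
The thread pool pattern mentioned above can be sketched with Python's standard library; the `ParallelDispatcher` class below is a hypothetical stand-in for the project's dispatcher, not its actual code:

```python
from concurrent.futures import ThreadPoolExecutor

class ParallelDispatcher:
    """Activates a batch of agents in parallel using a persistent
    thread pool, so no thread is created or destroyed per activation."""

    def __init__(self, workers=4):
        self._pool = ThreadPoolExecutor(max_workers=workers)

    def dispatch(self, agents):
        # Run every agent's step() concurrently; results come back
        # in the same order as the input list.
        return list(self._pool.map(lambda a: a.step(), agents))

    def shutdown(self):
        self._pool.shutdown()

class EchoAgent:
    """Toy agent used to demonstrate the dispatcher."""
    def __init__(self, name):
        self.name = name

    def step(self):
        return f"{self.name}:stepped"
```

Because the pool outlives each scheduling round, the per-activation cost is only the hand-off of a task to an idle worker thread.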

Inter-object communication is implemented using a postbox and messages: when both objects are local, the postbox is a shared memory space that holds the data. When the objects live in different partitions (processes), an MPI adaptor library transports the message from one partition to the destination's postbox. In the current implementation there is no direct agent-to-agent communication.
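
The local (shared-memory) side of the postbox model could look like the sketch below; the class and method names are illustrative assumptions, and a remote transport such as an MPI adaptor would deliver into the same interface:

```python
import queue

class PostBox:
    """Shared-memory postbox: one mailbox queue per addressee.
    post() deposits a message; collect() drains the mailbox."""

    def __init__(self):
        self._boxes = {}

    def _box(self, addressee):
        return self._boxes.setdefault(addressee, queue.Queue())

    def post(self, addressee, message):
        self._box(addressee).put(message)

    def collect(self, addressee):
        box = self._box(addressee)
        messages = []
        while not box.empty():
            messages.append(box.get())
        return messages
```

The key property is that senders and receivers never interact directly: both sides only see the postbox, so swapping the local queue for a cross-process transport is invisible to the agents.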

The agent AI is implemented using 'handlers', each corresponding to one step of a reactive agent. An implementation registers multiple handlers, and the scheduler runs them one by one within a time slice. If a handler returns an action such as MOVE, the action is performed in the world and the agent is restarted. The API is simple enough that variations can be implemented without knowing the low-level communication or scheduling details.
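
A handler-based reactive agent might be sketched as an ordered chain, where the first handler to return an action wins. The handler names and the `percept` dictionary shape below are hypothetical:

```python
class ReactiveAgent:
    """Reactive agent as an ordered chain of handlers. Each handler
    receives the latest percept and returns either None (fall through
    to the next handler) or an action string."""

    def __init__(self):
        self._handlers = []

    def add_handler(self, handler):
        self._handlers.append(handler)

    def step(self, percept):
        for handler in self._handlers:
            action = handler(percept)
            if action is not None:
                return action
        return None

# Hypothetical handlers for a maze walker.
def avoid_walls(percept):
    return "TURN_LEFT" if percept.get("front") == "wall" else None

def walk(percept):
    return "MOVE"

agent = ReactiveAgent()
agent.add_handler(avoid_walls)  # highest priority
agent.add_handler(walk)         # default behaviour
```

An AI variation then amounts to registering a different set of handlers; nothing about scheduling or messaging leaks into the handler functions.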

Agents can move between processes transparently: the partition takes care of moving the agent to the destination partition, as well as migrating its latest state and memory. Everything is done using the MPI-based messaging API.
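
Migrating an agent's state and memory essentially means serializing the agent on one side and rebuilding it on the other. A minimal sketch using Python's standard `pickle` module (the real system sends the bytes through its MPI messaging API; the `Agent` fields here are assumptions):

```python
import pickle

class Agent:
    """Minimal agent whose state and memory can be serialized for
    migration to another partition (process)."""
    def __init__(self, agent_id, position):
        self.agent_id = agent_id
        self.position = position
        self.memory = {}
        self.state = "IDLE"

def migrate(agent):
    """Serialize the agent; in the real system these bytes would be
    sent over MPI to the destination partition."""
    return pickle.dumps(agent)

def receive(payload):
    """Rebuild the agent on the destination partition."""
    return pickle.loads(payload)
```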

Agents can also perceive what's in front of them by sending a message to the world. The world calculates the visibility of the requested area and queries neighbouring partitions for visibility information. From the agent's point of view there is no difference between perceiving within the local partition and perceiving across multiple partitions.
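
Cross-partition visibility can be sketched as a query that falls through to a neighbouring partition for cells outside the local bounds. This is a simplified single-process stand-in (the real system queries neighbours over MPI), and the `Partition` layout below is an assumption:

```python
class Partition:
    """Grid partition that answers visibility queries, delegating to a
    neighbouring partition for cells outside its own column range."""

    def __init__(self, x_range, cells, neighbour=None):
        self.x_range = x_range    # (start, end) columns owned
        self.cells = cells        # {(x, y): "wall"}; unset cells are floor
        self.neighbour = neighbour

    def owns(self, x):
        return self.x_range[0] <= x < self.x_range[1]

    def visibility(self, coords):
        """Return cell contents for each coordinate, querying the
        neighbour for out-of-partition cells."""
        result = {}
        for (x, y) in coords:
            if self.owns(x):
                result[(x, y)] = self.cells.get((x, y), "floor")
            elif self.neighbour is not None:
                result.update(self.neighbour.visibility([(x, y)]))
        return result
```

The agent only ever sees the merged result, which is what makes local and cross-partition perception indistinguishable from its point of view.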

Each agent has a state machine that controls the available actions; for example, an agent cannot do anything else while it is moving. The state machine was implemented with attack and dead states, but currently there is no way to send an attack to another agent.
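
A per-agent state machine of this kind can be sketched as a transition table; the state and event names below (beyond the attack and dead states mentioned above) are assumptions for illustration:

```python
class AgentStateMachine:
    """Per-agent state machine restricting which actions are available
    in each state, e.g. a moving agent cannot start another action."""

    TRANSITIONS = {
        ("IDLE", "MOVE"): "MOVING",
        ("MOVING", "ARRIVE"): "IDLE",
        ("IDLE", "ATTACK"): "ATTACKING",
        ("ATTACKING", "DONE"): "IDLE",
        ("IDLE", "DIE"): "DEAD",
    }

    def __init__(self):
        self.state = "IDLE"

    def can(self, event):
        return (self.state, event) in self.TRANSITIONS

    def fire(self, event):
        if not self.can(event):
            raise ValueError(f"{event} not allowed in state {self.state}")
        self.state = self.TRANSITIONS[(self.state, event)]
        return self.state
```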

Finally, a special MPI process receives status updates from each partition and draws a graphical representation using pyGame. Each partition sends its updates periodically, using a shorthand format to save message space.
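
The source does not specify the shorthand format, but the idea of a compact, delimiter-based status update could look something like this hypothetical encoding:

```python
def encode_update(partition_id, agents):
    """Hypothetical shorthand update: 'pid|x,y,state;x,y,state;...'
    instead of a verbose per-agent structure."""
    body = ";".join(f"{x},{y},{s}" for (x, y, s) in agents)
    return f"{partition_id}|{body}"

def decode_update(message):
    """Rebuild (partition_id, agent list) from the shorthand string."""
    pid, body = message.split("|")
    agents = []
    if body:
        for chunk in body.split(";"):
            x, y, s = chunk.split(",")
            agents.append((int(x), int(y), s))
    return pid, agents
```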

My participation

I did the architectural design and implemented the scheduler, the agent API, and the messaging API. The other team member implemented the maze generation and the graphics.

What is broken

Due to limitations of PyMPI, this program only runs on Linux or OS X. No inter-agent communication was implemented, which means there is no way to attack another agent.