Background

Model Checking for (D/G)LTL System Properties

In this paper we address the problem of specifying and verifying systems of communicating agents in a Dynamic Linear Time Temporal Logic (DLTL). This logic provides a simple formalization of the communicative actions in terms of their effects and preconditions. Furthermore, it allows one to specify interaction protocols by means of temporal constraints representing permissions and commitments. Agent programs, when known, can be formulated in DLTL as complex actions (regular programs). The paper addresses several kinds of verification problems, including the compliance of agents with the protocol, and describes how they can be solved by model checking in DLTL using automata.
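
As a schematic illustration (our notation, not necessarily the paper's exact syntax), a DLTL specification mixes ordinary temporal constraints with modalities indexed by regular programs over communicative actions:

    % A commitment: every request is eventually answered.
    \Box\bigl(\mathit{request} \rightarrow \Diamond(\mathit{accept} \vee \mathit{refuse})\bigr)
    % A program modality: some execution of the regular program (request; answer)*
    % reaches a state satisfying done.
    \langle(\mathit{request}\,;\,\mathit{answer})^{*}\rangle\,\mathit{done}

Compliance checking then amounts to verifying that every run generated by the agents' programs satisfies such constraints.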

Systems of autonomous agents providing automated services over the Web are fast becoming a reality. Often these agent systems are constructed using procedural architectures that provide a framework for connecting agent components that perform specific tasks. The agent designer codes the tasks necessary to perform a service and uses the framework to connect the tasks into an integrated agent structure. This bottom-up approach does not provide an easy mechanism for confirming global properties of constructed agent systems. In this paper we propose a declarative methodology based on logic programming for modeling such procedurally constructed agents and specifying their global properties as temporal logic formulas. This methodology allows us to bring to bear a body of work on logic-programming-based model checking to verify certain global properties of procedurally constructed Multi-Agent Systems.

MABLE is a language for the design and automatic verification of multi-agent systems. MABLE is essentially a conventional imperative programming language, enriched by constructs from the agent-oriented programming paradigm. A MABLE system contains a number of agents, programmed using the MABLE imperative programming language. Agents in MABLE have a mental state consisting of beliefs, desires and intentions. Agents communicate using request and inform performatives, in the style of the FIPA agent communication language. MABLE systems may be augmented by the addition of formal claims about the system, expressed using a quantified, linear temporal belief-desire-intention logic. MABLE has been fully implemented, and makes use of the Spin model checker to automatically verify the truth or falsity of claims.

The problem of checking that agents correctly implement the semantics of an agent communication language has become increasingly important as agent technology makes its transition from the research laboratory to field-tested applications. In this paper, we show how model checking techniques can be applied to this problem. Model checking is a technique developed within the formal methods community for automatically verifying that finite-state concurrent systems implement temporal logic specifications. We first describe a variation of the MABLE multi-agent BDI programming language, which permits the semantics (pre- and post-conditions) of ACL performatives to be defined separately from a system where these semantics are used. We then show how assertions defining compliance with the semantics of an ACL can be captured as claims about MABLE agents, expressed using MABLE’s associated assertion language. In this way, compliance with ACL semantics reduces to a conventional model checking problem. We illustrate our approach with a number of short case studies.
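
The pre-/post-condition reading of performative semantics can be made concrete with a small sketch. The following Python fragment is purely illustrative (the names and interfaces are ours, not MABLE's): it tabulates an inform performative as a feasibility precondition on the sender and a rational effect on the receiver, and flags a non-compliant send.

    # Illustrative sketch only: ACL performative semantics as pre-/post-conditions.
    # All names (MentalState, Performative, ...) are hypothetical, not MABLE's API.
    from dataclasses import dataclass, field
    from typing import Callable, Set

    @dataclass
    class MentalState:
        beliefs: Set[str] = field(default_factory=set)

    @dataclass
    class Performative:
        name: str
        pre: Callable[[MentalState], bool]    # feasibility precondition on the sender
        post: Callable[[MentalState], None]   # rational effect on the receiver

    def inform(p: str) -> Performative:
        # 'inform(p)': the sender should believe p; the receiver comes to believe p.
        return Performative(
            name=f"inform({p})",
            pre=lambda sender: p in sender.beliefs,
            post=lambda receiver: receiver.beliefs.add(p),
        )

    def compliant_send(sender, receiver, perf: Performative) -> bool:
        """Report compliance; apply the rational effect only for compliant sends."""
        if not perf.pre(sender):
            return False              # violation: informing of something not believed
        perf.post(receiver)
        return True

    alice = MentalState(beliefs={"open(auction)"})
    bob = MentalState()
    assert compliant_send(alice, bob, inform("open(auction)"))        # compliant
    assert not compliant_send(alice, bob, inform("closed(auction)"))  # violation

A model checker generalises this runtime check by exploring every reachable mental state rather than a single execution.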


Model Checking AgentSpeak(F) MAS Using SLPN

  • Model Checking AgentSpeak (paper in proceedings, 2003), Rafael H. Bordini, Michael Fisher, Carmen Pardavila, and Michael J. Wooldridge

This paper introduces AgentSpeak(F), a variation of the BDI logic programming language AgentSpeak(L) intended to permit the model-theoretic verification of multi-agent systems. After briefly introducing AgentSpeak(F) and discussing its relationship to AgentSpeak(L), we show how AgentSpeak(F) programs can be transformed into Promela, the model specification language for the Spin model-checking system. We also describe how specifications written in a simplified form of BDI logic can be transformed into Spin-format linear temporal logic formulæ. With our approach, it is thus possible to automatically verify whether or not multi-agent systems implemented in AgentSpeak(F) satisfy specifications expressed as BDI logic formulæ. We illustrate our approach with a short case study, in which we show how BDI properties of a simulated auction system implemented in AgentSpeak(F) were verified.
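
The property transformation has roughly the following shape (schematic, not the paper's exact mapping). A BDI specification such as

    \Diamond\,\mathrm{Bel}_{\mathit{auctioneer}}\,\mathit{winner}(\mathit{ag}_1)

is compiled so that the belief modality becomes an atomic proposition evaluated on the belief-base data structures of the translated Promela model, leaving an ordinary LTL formula that Spin can check.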

AgentSpeak is a reactive planning language for programming autonomous agents. It has recently been shown that model checking techniques can be applied to the verification of AgentSpeak systems, through a translation to Promela, the model specification language for the SPIN LTL model-checking system. In this paper, we introduce an alternative verification approach for AgentSpeak, by translating AgentSpeak to Java and then applying JPF2, a general-purpose Java model checker. The primary advantage of this approach is that Java is the language of choice for most agent implementations, and the approach is thus much closer to the current practice of agent development than the Promela-based approach. Also, models of AgentSpeak agents represented in Java are both clearer and more natural than those given in Promela. We examine both alternatives by means of a practical application, provide a qualitative comparison between them, and identify some key issues for future research.

This paper gives an overview of our recent work on an approach to verifying multi-agent programs. We automatically translate multi-agent systems programmed in the logic-based agent-oriented programming language AgentSpeak into either Promela or Java, and then use the associated Spin and JPF model checkers to verify the resulting systems. We also describe the simplified BDI logical language that is used to write the properties we want the systems to satisfy. The approach is illustrated by means of a simple case study.

We introduce a class of Petri nets, simple logic Petri nets (SLPN), that are based on logical expressions. We show how this type of net can be efficiently mapped into logic programs with negation: the corresponding answer sets describe interleaved executions of the underlying nets (Theorem 1). The absence of an answer set indicates a deadlock situation. We also show how to correctly model and specify AgentSpeak agents and multi-agent systems with SLPNs (Theorem 2). Both theorems allow us to solve the task of model checking AgentSpeak multi-agent systems by computing answer sets of the obtained logic program with any ASP system.
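
The correspondence between executions and answer sets can be illustrated operationally. The Python sketch below (ours, not the paper's SLPN-to-ASP encoding) enumerates the interleaved executions of a tiny place/transition net up to a bound and flags markings in which no transition is enabled, mirroring the way a missing answer set signals a deadlock:

    # Toy place/transition net: a transition consumes and produces sets of places.
    TRANSITIONS = {
        "t1": ({"p1"}, {"p2"}),
        "t2": ({"p2"}, {"p3"}),
        "t3": ({"p2"}, {"p1"}),
    }

    def enabled(marking, t):
        consumed, _ = TRANSITIONS[t]
        return consumed <= marking

    def fire(marking, t):
        consumed, produced = TRANSITIONS[t]
        return frozenset((marking - consumed) | produced)

    def executions(marking, steps, trace=()):
        """Enumerate all interleavings up to `steps` firings, flagging deadlocks."""
        fireable = [t for t in TRANSITIONS if enabled(marking, t)]
        if not fireable:
            yield trace, True         # deadlock (cf. absence of an answer set)
            return
        if steps == 0:
            yield trace, False        # bound reached without deadlock
            return
        for t in fireable:
            yield from executions(fire(marking, t), steps - 1, trace + (t,))

    for trace, deadlocked in executions(frozenset({"p1"}), steps=3):
        print(" -> ".join(trace) or "(empty)", "DEADLOCK" if deadlocked else "ok")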


Model Checking Swarm Systems

Robot swarms provide a way for a number of simple robots to work together to carry out a task. While swarms have been found to be adaptable, fault-tolerant and widely applicable, designing individual robot algorithms so as to ensure effective and correct swarm behaviour is very difficult. In order to assess swarm effectiveness, either experiments with real robots or computational simulations of the swarm are usually carried out. However, neither of these involves a deep analysis of all possible behaviours. In this paper we utilise automated formal verification techniques, involving an exhaustive mathematical analysis, in order to assess whether our swarms will indeed behave as required.

A robot swarm is a collection of simple robots designed to work together to carry out some task. Such swarms rely on: the simplicity of the individual robots; the fault tolerance inherent in having a large population of often identical robots; and the self-organised behaviour of the swarm as a whole. Although robot swarms are being deployed in increasingly sophisticated areas, designing individual control algorithms that can guarantee the required global behaviour is difficult. In this paper we apply and assess the use of formal verification techniques, in particular that of model checking, for analysing the emergent behaviours of robotic swarms. These techniques, based on the automated analysis of systems using temporal logics, allow us to analyse all possible behaviours and so identify potential problems with the robot swarm conforming to some required global behaviour. To show this approach we target a particular swarm control algorithm, and show how automated temporal analysis can help to refine and analyse such an algorithm.
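
A deliberately minimal Python sketch of the underlying idea (a toy counter-abstracted swarm, not one of the paper's case studies): exhaustive breadth-first search over the global state space either establishes an invariant for all behaviours or returns a reachable counterexample state, something no finite set of simulation runs can guarantee.

    # Toy model: N robots, each either searching or resting; the global state is
    # the pair (number searching, number resting). Purely illustrative.
    from collections import deque

    N = 3

    def successors(state):
        searching, resting = state
        if searching > 0:                  # a searcher may give up and rest
            yield (searching - 1, resting + 1)
        if resting > 0:                    # a resting robot may resume searching
            yield (searching + 1, resting - 1)

    def check_invariant(initial, invariant):
        """BFS over the full state space; return a violating state, or None."""
        seen, frontier = {initial}, deque([initial])
        while frontier:
            state = frontier.popleft()
            if not invariant(state):
                return state
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return None

    # Desired global property: at least one robot is always searching.
    print(check_invariant((N, 0), lambda s: s[0] > 0))   # prints (0, 3): violated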

Behaviour Prediction Via Past Behaviour Pattern Analysis

The area of agent modeling deals with the task of observing other agents and modeling their behavior, in order to predict their future behavior, coordinate with them, assist them, or counter their actions. Typically, agent modeling techniques assume the availability of a plan or behavior library, which encodes the full repertoire of expected observed behavior. However, agent modeling systems are increasingly used in open and/or adversarial settings, where the behavioral repertoire of the observed agents is unknown at design time. This paper focuses on the challenge of unsupervised autonomous learning of the sequential behaviors of agents, from observations of their behavior. The techniques we present translate observations of the dynamic, complex, continuous multi-variate world state into a time-series of recognized atomic behaviors. This time-series is then analyzed to find repeating subsequences characterizing each team. We compare two alternative approaches to extracting such characteristic sequences, based on frequency counts and statistical dependencies. Our results indicate that both techniques are able to extract meaningful sequences and do significantly better than random predictions. However, the statistical dependency approach is able to correctly reject sequences that are frequent but due to random co-occurrence of behaviors, rather than to a true sequential dependency between them.
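
The contrast between the two filters can be sketched in a few lines of Python (illustrative only; the paper's statistical test is more elaborate). Raw pair frequency is compared with lift, the ratio of the conditional to the marginal frequency; pairs that are frequent merely because both behaviors are individually common score near 1:

    from collections import Counter

    events = list("ABABCABDCBAB")        # toy time-series of recognised behaviours
    pairs = list(zip(events, events[1:]))

    pair_counts = Counter(pairs)
    first_counts = Counter(events[:-1])

    for (a, b), n in pair_counts.most_common():
        p_b_given_a = n / first_counts[a]          # frequency of b right after a
        p_b = events.count(b) / len(events)        # marginal frequency of b
        print(f"{a}->{b}: count={n}, lift={p_b_given_a / p_b:.2f}")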

Agents in dynamic environments have to deal with world representations that change over time. In order to allow agents to act autonomously and to make their decisions on a solid basis, an interpretation of the current scene is necessary. If intentions of other agents, or events that are likely to happen in the future, can be recognized, the agent’s performance can be improved, as it can adapt its behavior to the situation. In this work we present an approach which applies unsupervised symbolic learning off-line to a qualitative abstraction in order to mine frequent patterns in dynamic scenes. These patterns can later be applied at runtime in order to predict future situations and behaviors. The pattern mining approach was applied to two games of the 2D RoboCup simulation league.

Today’s technical systems are becoming increasingly complex. Future systems will consist of a multitude of complex software and hardware components, which interact with each other to satisfy global system functional requirements. This trend bears the risk of more and more breakdowns and other unexpected behaviour. Organic Computing (OC) has the vision of addressing the challenges of complex distributed systems by making them more life-like (organic), i.e., endowing them with abilities such as self-organisation, self-configuration, self-repair, or adaptation. This can only be achieved by giving the system elements adequate degrees of freedom. This may result in emergent behaviour, which can be positive as well as negative. Therefore, we need an observer/controller architecture which allows for self-organisation but at the same time enables adequate reactions to control the (sometimes completely unexpected) emergent global behaviour. In this paper, we give an introduction to a generic observer/controller architecture, adapt this framework to a scenario of a self-organising robot swarm, and show how to control and prevent global, collective, unwanted behaviour based on observations of the local behaviour of the distributed agents.
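
Schematically, the architecture pairs an observer, which aggregates local agent states into global indicators, with a controller that intervenes when those indicators signal unwanted emergence. A deliberately minimal Python sketch (hypothetical interfaces; the paper's generic architecture is considerably richer):

    def observe(agents):
        """Aggregate local states into a global indicator, e.g. spatial spread."""
        xs = [a["x"] for a in agents]
        return {"spread": max(xs) - min(xs)}

    def control(indicators, agents):
        """React to an unwanted emergent pattern by adjusting local behaviour."""
        if indicators["spread"] < 2:       # robots clustering too tightly
            for a in agents:
                a["repulsion"] = True      # nudge the local control laws apart
        return agents

    agents = [{"x": 0}, {"x": 1}, {"x": 1}]
    agents = control(observe(agents), agents)   # one observer/controller cycle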

Generic Model Checking And Verifying Approaches

We present an approach to the problem of verification of epistemic properties in multi-agent systems by means of symbolic model checking. In particular, it is shown how to extend the technique of unbounded model checking from a purely temporal setting to a temporal-epistemic one. In order to achieve this, we base our discussion on interpreted systems semantics, a popular semantics used in the multi-agent systems literature. We give details of the technique and show how it can be applied to the well-known train-gate-controller problem.
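
In interpreted systems, an agent's knowledge is defined by indistinguishability of its local states; schematically, writing l_i(g) for agent i's local state in global state g:

    (\mathit{IS}, g) \models K_i\,\varphi
    \quad\text{iff}\quad
    \forall g' \in G:\ l_i(g') = l_i(g) \Rightarrow (\mathit{IS}, g') \models \varphi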

We present a bounded model checking (BMC) approach to the verification of temporal epistemic properties of multi-agent systems. We extend the temporal logic CTL* by incorporating epistemic modalities and obtain a temporal epistemic logic that we call CTL*K. CTL*K is interpreted under the semantics of synchronous interpreted systems. Though CTL*K has great expressive power in both the temporal and epistemic dimensions, we show that the BMC method is still applicable for the universal fragment of CTL*K. We present a BMC algorithm in some detail and prove its correctness. In our approach, agents’ knowledge interpreted in synchronous semantics can be obtained via the state position function, which avoids extending the encoding of the states and transition relations of the plain temporal epistemic model with a time domain.
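
Schematically, BMC unrolls the transition relation k steps and reduces bounded verification to propositional satisfiability; for a universal property, one searches for a bounded witness of its existential negation:

    [M]_k := I(s_0) \wedge \bigwedge_{i=0}^{k-1} T(s_i, s_{i+1}),
    \qquad
    [M]_k \wedge [\neg\varphi]_k \text{ satisfiable} \;\Rightarrow\; M \not\models \varphi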

This paper shows how multi-agent systems can be modeled by a combination of UML statecharts and hybrid automata. This allows formal system specification at different levels of abstraction on the one hand, and the expression of real-time system behavior with continuous variables on the other. It is shown how multi-robot systems can be modeled by hybrid and hierarchical state machines and how model checking techniques for hybrid automata can be applied. An enhanced synchronization concept is introduced that allows synchronization to take time and avoids state explosion to a certain extent. This research is supported by grants Fu 263/8 and Sto 421/2 from the German research foundation DFG within the special priority program 1125 on Cooperating Teams of Mobile Robots in Dynamic Environments.
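
For reference, a hybrid automaton has the standard shape (schematic definition)

    H = (\mathit{Loc}, \mathit{Var}, \mathit{Inv}, \mathit{Flow}, \mathit{Trans}, \mathit{Init})

where each location constrains the continuous variables by an invariant and a differential law (e.g. \dot{x} \in [1, 2]), and discrete transitions may synchronise on shared labels; it is this synchronisation mechanism that the paper extends so that synchronisation may take time.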

Probabilistic temporal logics of knowledge have been used to specify multi-agent systems. In this paper, we introduce a probabilistic temporal logic of knowledge called PTLK for expressing time, knowledge, and probability in multi-agent systems. Then, to overcome state explosion in model checking PTLK, we propose an abstraction procedure. The rough idea of the abstraction approach is to partition the state space into several equivalence classes, which constitute the set of abstract states. The probability distribution between abstract states is defined as an interval, yielding an approximation of the concrete system. Finally, the model checking algorithm for PTLK is developed.
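
The interval construction can be stated schematically: for abstract states A and B (blocks of the partition), the abstract transition probability is bounded by the extremal concrete probabilities,

    P^{-}(A, B) = \min_{s \in A} \sum_{s' \in B} P(s, s'),
    \qquad
    P^{+}(A, B) = \max_{s \in A} \sum_{s' \in B} P(s, s'),

so every concrete one-step distribution lies within the interval [P^{-}(A,B), P^{+}(A,B)].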

Environment