Annotated Literature

Towards a Methodology for Coordination Mechanism Selection in Open Systems (paper in proceedings, 2003) Simon Miles, Mike Joy, Michael Luck
We have developed a methodology in which we model cooperation to achieve an application’s goals as the result of interactions between agents, without specifying which particular agents the interactions take place between. By raising interactions to the status of first-class design objects, we allow many agents, including those not existing at design time, to take on the role of participants in the interactions. The aim is that the most suitable agents within the open system can achieve the goals to the highest quality at any point in time.
Our methodology, as a whole, is called agent interaction analysis. In agent interaction analysis, the designer performs the following steps to decide which coordination mechanism is most appropriate for each application goal:
(1) The designer is supplied with a set of coordination mechanism definitions in a standard pattern language. This follows the approach of design patterns, in which abstract parts of a design are made available to designers to encourage re-use of well-founded designs. There are many agent coordination mechanisms suggested in the literature, but only two are used as examples in the paper (for brevity).
(2) For each application goal, the designer chooses a coordination mechanism most suitable for matching the goal’s preferences. In order to aid comparison, the designer performs more detailed analysis of the mechanisms. This analysis is called assurance analysis.
(3) From the choices of coordination mechanisms for each goal, the designer decides on the coordination mechanisms for the agents that will be added to the open system. When these agents are added, the application will be instantiated. This step is called collation.
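Steps (2) and (3) can be sketched as follows; the data structures, mechanism names and the weighted preference-matching rule are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class Mechanism:
    name: str
    assurances: dict  # assurance-analysis scores per preference dimension

@dataclass
class Goal:
    name: str
    preferences: dict  # relative weight of each preference dimension

def select_mechanism(goal, mechanisms):
    """Step (2): pick the mechanism whose assurances best match the goal's preferences."""
    def score(m):
        return sum(w * m.assurances.get(dim, 0.0)
                   for dim, w in goal.preferences.items())
    return max(mechanisms, key=score)

def collate(goals, mechanisms):
    """Step (3): collect the chosen mechanism for every application goal."""
    return {g.name: select_mechanism(g, mechanisms).name for g in goals}

contract_net = Mechanism("contract-net", {"robustness": 0.9, "speed": 0.4})
blackboard = Mechanism("blackboard", {"robustness": 0.5, "speed": 0.8})
goals = [Goal("allocate-task", {"robustness": 1.0}),
         Goal("share-results", {"speed": 1.0})]
print(collate(goals, [contract_net, blackboard]))
# {'allocate-task': 'contract-net', 'share-results': 'blackboard'}
```

The point of the sketch is that the binding of goals to mechanisms, not to particular agents, is what lets agents unknown at design time take part later.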
Designing Agent-Oriented Systems by Analysing Agent Interactions (paper in proceedings, 2001) Simon Miles, Mike Joy, Michael Luck
The primary building blocks used by agent interaction design are interactions between agents. The approach of agent interaction analysis is as follows:
– We assume the system contains an arbitrary number of flexible agents with the properties described in the next section (and no other details known as yet).
– We interpret the system requirements as goals and preferences for their achievement.
– We decompose the system goals into independent hierarchies of goals, comparable to hierarchical plans, and actions which achieve the lowest level goals.
– We treat the successful engagement of one or more agents to pursue a goal as an interaction.
– From the particular requirements of each interaction and the system preferences we derive the forms of architecture and particular coordination mechanism to make agents taking part in the interaction behave in a way that fits the preferences well.
Architecture as a Coordination Tool in Multi-site Software Development (article in journal, 2003) Päivi Ovaska, Matti Rossi, Pentti Marttiin
This paper describes the role of software architecture in the coordination of multi-site software development through a case study. Architecture was intended to be the tool for coordination in the project. The architecture description was supposed to contain the rules of how the components of the system exchange information with each other. The observed coordination problems (lack of overall responsibility, communication and orientation) in architecture design resulted in a poor or missing architecture description and in different interpretations of the architecture by the project members. These problems caused actual system coordination problems in later phases.
There were problems with managing the interfaces between system components, their assembly order and interdependencies between them.
Our study suggests that in the multi-site environment, it is not enough to coordinate activities, but in order to achieve a common goal, it is important to coordinate interdependencies between the activities. This kind of coordination needs a common understanding of the software architecture between software development participants.
According to our understanding, participants coordinate their development work through the interfaces of their components. Each component can be developed separately, and thus it is not necessary to take into account the development of other components, or the distance and the cultural and language differences between sites. The important issues in this case are an appropriate architecture description and well-defined interfaces between components.
The TROPOS Analysis Process as Graph Transformation System (paper in proceedings, 2002) Paolo Bresciani, Paolo Giorgini
In the present paper we focus on the redefinition of the transformation system for early requirements analysis,
already proposed in another paper, in terms of a Graph Transformation System. This provides the necessary machinery to perform
precise inspections of the process of early requirements analysis, and allows us to distinguish among different
strategies for the execution of the process.
In addition, the definition of a formal and precise Graph Transformation System for describing the diagram transformation process in Tropos opens the possibility of implementing a Tropos diagram-editing tool based on a Graph Transformation programming language.
Coordination Specification in Multi-Agent Systems: From Requirements to Architecture with the Tropos Methodology (paper in proceedings, 2002) Anna Perini, Angelo Susi, Fausto Giunchiglia
We think that, in order to build effective MAS that operate within human communities, interacting with both software and human agents, we first need to model the coordination processes taking place in the social and organizational setting where the MAS is to be introduced. Then, we have to analyze how these coordination processes will be affected by introducing a MAS (analogously to what is done during the macro-level analysis for heterogeneous systems). Only in the following steps do we design coordination processes among software agents and detail the interaction and communication mechanisms which support the required coordination processes. This multi-step process allows us to keep a trace of the why (i.e. the needs) of the coordination processes modeled at the micro level.
Engineering Self-Coordinating Software Intensive Systems (paper in proceedings, 2010) Wilhelm Schäfer, Mauro Birattari, Johannes Blömer, Marco Dorigo, Gregor Engels, Rehan O'Grady, Marco Platzner, Franz Rammig, Wolfgang Reif, Ansgar Trächtler

The challenges faced by the engineering of self-coordinating systems are characterized by a number of research questions:

  • How can optimal strategies be determined in the presence of partial or even unreliable information?
  • How can heterogeneous components decide which information is relevant and which need not be considered?
  • What algorithms might help us in reaching stable, robust, and desirable behavior in a distributed network?
  • How can components find out about their coordination possibilities with cooperating or even competing components?
  • What design and operation principles do we need for the underlying technical infrastructure and how can we maintain these principles when resources are restricted and parts can fail?
  • How can we manage the secure exchange of information without a central public-key infrastructure and only limited resources?
  • When do we need cross-border migrations between hardware and software?
  • What modeling formalisms have to be supplied in order to enable the construction of adaptable systems?
  • What analysis techniques are required to make software reflect on both its own and its environment’s behaviour and consequently change itself?
  • What verification techniques can ensure the correctness of self-coordinating systems despite their inherent potential volatility?
  • How can a tight integration between classical feedback controllers and the state-based discrete control be achieved?
Today’s software engineering research by and large is not really focussing on answering these questions but is rather centered around much smaller systems than the ones mentioned above which, in addition, often consist of software "only" (e.g. information systems). It is also usually done by small, non-interdisciplinary research teams. Of course, that does not mean that we no longer need research on testing and analysis, on program understanding, or on formal modelling and verification approaches, to list only a few examples. However, this research must be combined with e.g. the development and layout of complex networks and their underlying infrastructure as well as research on corresponding data structures and algorithms and control theory.
A SOA Based Software Engineering Design Approach in Service Engineering (paper in proceedings, 2009) Weider D. Yu, Chia H. Ong
This paper focuses on the investigation and study of SOA-based software engineering methods in the service engineering environment. Among the various SOA-related software design and development methods available, the service-oriented modeling and architecture (SOMA) service architectural environment is chosen as the target platform in the study. A web-based service-oriented ubiquitous Healthcare (u-Healthcare) software system was designed and implemented using the set of software engineering methods developed in the study, to gain empirical knowledge and experience in applying the approach to construct SOA service software.
As a software development life-cycle method for developing SOA-based solutions, SOMA defines key techniques and describes the roles on a SOA project and a work breakdown structure (WBS). The WBS includes tasks, the input and output work products for tasks, and the prescriptive guidance needed for detailed analysis, design, implementation, and deployment of services, components, and flows needed to build a robust and reusable SOA environment. SOMA has a fractal software development life cycle because SOMA applies method components (capability patterns) in a self-similar manner recursively in small release or iteration cycles, focusing on addressing technical risk and delivering software valued by the business.
The notion of similarity indicates that the application of principles is similar but not identical, meaning that as we approach larger scopes, even though the same tasks apply, the additional work needed has to evolve in order to take into account factors that arise from the larger scope. In SOMA method, we identify the main activities in a broader scope during the initial stage, and further elaborate and refine the scopes in the later stage.
A Survey of Service Oriented Development Methodologies (paper in proceedings, 2007) Ervin Ramollari, Dimitris Dranidis, Anthony J. H. Simons
In this paper we presented a state of the art survey of the current service oriented engineering approaches and methodologies. One interesting point is that current SOA methodologies build upon existing, proven techniques, such as OOAD, EA, and BPM. Also, agile processes like XP and RUP are being employed successfully in SOA projects. However, the service paradigm introduces unique requirements that should be addressed by innovative techniques. Another interesting point is that most of the surveyed SOA methodologies propose the meet-in-the-middle strategy, where both business requirements and existing legacy applications are taken into account to derive services. Although top-down analysis of the business domain produces services of high quality and long-term value, reality constraints require existing investment in IT infrastructure to be incorporated as well.
Empirical comparison of methods for information systems development according to SOA (paper in proceedings, 2009) Philipp Offermann, Udo Bub
The authors developed the SOA method (SOAM) based on existing methods and their shortcomings. It is vendor-independent and explicitly states the architecture goals, which is not the case for any other method. The six phases of SOAM contain all necessary activities, with various activities comprising several steps. Every activity is specified with executing roles, input and output artefacts and supporting tools. All necessary modelling notations are supported by a tool. The SOAM tool can generate XML schemata (XSD), WSDL files and WS-BPEL process descriptions directly from the graphical models. The method uses the top-down approach and the bottom-up approach in parallel. The company requirements are analysed following the top-down approach, and required service operations are discovered based on this. Following the bottom-up approach, legacy systems are identified and analysed regarding data and/or functionality that can be wrapped. Top-down requirements and bottom-up findings are then consolidated. Finally, services are designed, service properties are ensured, and processes are prepared for execution.
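The consolidation of top-down requirements with bottom-up findings can be sketched as a simple matching step (the operation names and the set-based matching rule are illustrative assumptions, not SOAM's actual notation):

```python
def consolidate(required_ops, legacy_capabilities):
    """Consolidation: match top-down required service operations
    against bottom-up findings from legacy-system analysis."""
    wrap = sorted(op for op in required_ops if op in legacy_capabilities)
    build = sorted(op for op in required_ops if op not in legacy_capabilities)
    return {"wrap_legacy": wrap, "build_new": build}

# hypothetical company scenario
required = {"getCustomer", "createOrder", "checkCredit"}  # top-down
legacy = {"getCustomer", "checkCredit", "printReport"}    # bottom-up
print(consolidate(required, legacy))
# {'wrap_legacy': ['checkCredit', 'getCustomer'], 'build_new': ['createOrder']}
```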
A laboratory experiment was carried out to compare the newly developed SOAM with SOMAM, MSA, SDS, WSIM and SODDM. The comparison was done by using the methods on different company scenarios, rating the experiences of the application using a questionnaire and carrying out a subsequent statistical analysis of the answers. A company scenario consists of models of a company that are relevant for the application of the methods. Evaluating the results of the experiment, it can be said that the ranking of all methods is relatively close to neutral. It is important to investigate whether there is a general problem in the way methods are being described. The laboratory experiment nonetheless shows significant differences in quality between the methods. SOAM and SOMAM received the best results. MSA has high usability but comparatively low usefulness. The goal to align IT functions, especially services, to business processes, can be better reached using SOAM rather than SOMAM.
Service-Oriented Design and Development Methodology (article in journal, 2006) Michael P. Papazoglou, Willem-Jan Van Den Heuvel
This paper describes an experimental methodology for service-oriented design and development. The presented methodology reflects an attempt in defining a foundation of design and development principles that applies equally well to Web services and business processes. The methodology takes into account a set of development models (e.g., top-down, bottom-up and hybrid), stresses reliance on reference models, and considers several service realization scenarios (including green field development, outsourcing and legacy wrapping). During service and process design, not only the functional requirements of services and processes are considered but also their non-functional characteristics, e.g., security, transactional properties and policies, are taken into account.
A service-oriented design and development methodology is based on an iterative and incremental process that comprises one preparatory and eight distinct main phases that concentrate on business processes. These are planning, analysis and design (A&D), construction and testing, provisioning, deployment, execution and monitoring. These phases may be traversed iteratively (see Figure 2). This approach is one of continuous invention, discovery, and implementation with each iteration forcing the development team to drive the software development project’s artefacts closer to completion in a predictable and repeatable manner. The approach considers multiple realization scenarios for business processes and Web services that take into account both technical and business concerns.
A service-oriented design and development methodology focuses on business processes, which it considers as reusable building blocks that are independent of applications and the computing platforms on which they run. This promotes the idea of viewing enterprise solutions as federations of services connected via well-specified contracts. Two key principles serve as the foundation for service- and business process design: service coupling and cohesion.
Low coupling can be achieved by reducing the number of connections between services in a business process, eliminating unnecessary relationships between them, and reducing the number of necessary relationships where possible.
Like low coupling, high cohesion is a service-oriented design and development principle to keep in mind during all stages in the methodology. High cohesion increases the clarity and ease of comprehension of the design; simplifies maintenance and future enhancements; achieves service granularity at a fairly reasonable level; and often supports low coupling.
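The article treats low coupling qualitatively; one illustrative way to quantify it is to count the distinct relationships between services in a process (the metric and the service names below are assumptions made for the sketch, not from the article):

```python
def coupling(connections):
    """Count distinct service-to-service relationships in a process;
    lower is better per the low-coupling principle."""
    return len({frozenset(c) for c in connections})

# connections between services in a hypothetical order-handling process
conns = [("Order", "Billing"), ("Order", "Shipping"),
         ("Billing", "Order"),  # duplicate of the first relationship
         ("Billing", "Shipping")]
print(coupling(conns))  # 3 distinct relationships
```

Eliminating the ("Billing", "Shipping") relationship, if it is unnecessary, would lower the count to 2, which is the kind of reduction the principle asks for.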
In contrast to traditional software development approaches, the methodology introduced in this article emphasizes activities revolving around service provisioning, deployment, execution and monitoring. Authors believe that these activities will become increasingly important in the world of services as they contribute to the concept of adaptive service capabilities where services and processes can continually morph themselves to respond to environmental demands and changes without compromising on operational and financial efficiencies. In this way, business processes could be analysed in detail instantaneously, discovering and selecting suitable external services, detecting problems in the service interactions, searching for possible alternative solutions, monitoring execution step by step, upgrading and versioning themselves, and so on.  Service adaptivity is particularly useful for integrated supply chains as it implies that an integrated supply chain solution can leverage collaborative, monitoring and control abilities to manage product variability and successfully exploit the benefits of available-to-promise (ATP) capabilities. 
Web Services Implementation Methodology for SOA Application (paper in proceedings, 2006) Siew Poh Lee, Lai Peng Chan, Eng Wah Lee
Service-oriented architecture applications face a number of challenges. The nature of an SOA application is centred on software components, and Web Services technology facilitates its realization. Existing agile software methodologies for component-based development are analysed to identify the gaps for Web Services development, and a comparison between Web Services development and the agile methodology is conducted to identify the additional steps required. In addition, Web Services characteristics and best practices are analysed.
A practical approach is proposed that extends the existing agile methodology with Web Services best practices in every phase of the agile software lifecycle. Without reinventing a new software methodology, a systematic and practical methodology for SOA application development is obtained.
The outcome of this research study is the key contribution to the OASIS FWSI TC Implementation Methodology Sub-Committee (IMSC) for Web Services Implementation Methodology (WSIM) guidelines.
The purpose of the FWSI TC is to facilitate the implementation of robust Web Services by defining a practical and extensible methodology, consisting of implementation processes and common functional elements that practitioners can adopt to create high-quality Web Services systems without re-inventing them for each implementation, defining only the Web Services-specific activities that span the software development lifecycle.
The methodology itself is iterative and incremental: the Web Service goes through all the phases in each iteration, being developed and refined throughout the project lifecycle. Compared with normal structured methodologies and agile software methodologies, WSIM has an extra Deployment phase after the Testing phase. This phase is specific to Web Services, as the developed services need to be deployed and hosted in a targeted application server that provides the reference implementation for the Web Services Architecture defined by the W3C.
Using SOA Governance Design Methodologies to Augment Enterprise Service Descriptions (paper in proceedings, 2011) Marcus Roy, Basem Suleiman, Dennis Schmidt, Ingo Weber, Boualem Benatallah
This paper presents an approach for the automatic annotation of Enterprise Services, based on a SOA Governance design methodology. The paper describes a concrete methodology used at SAP, but presents a generic and formal model for capturing the structure of SOA Governance design methodologies. The model consists of terminological concepts and factual concepts, and of automata for capturing naming conventions built from these concepts. Naming rules are specified using a (typically very small) set of terminological concepts; from those the authors construct a consolidated automaton and populate it with the respective factual concepts. Using the resulting automaton, the authors can automatically annotate service names that (at least partially) adhere to the naming conventions.
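A minimal sketch of the idea treats one naming rule as an ordered sequence of terminological concepts, each populated with factual concepts; the concept and service names here are hypothetical, and the greedy matcher only approximates the consolidated automaton described in the paper:

```python
# One naming rule as an ordered sequence of terminological concepts,
# each populated with (hypothetical) factual concepts.
rule = [
    ("BusinessObject", {"SalesOrder", "PurchaseOrder"}),
    ("Action",         {"Create", "Read", "Update", "Cancel"}),
    ("Suffix",         {"Request", "Confirmation"}),
]

def annotate(service_name, rule):
    """Consume the name along the rule's concept sequence, annotating
    each matched segment; stops at the first mismatch, which yields
    partial annotations for partially adherent names."""
    annotations, rest = [], service_name
    for concept, facts in rule:
        match = next((f for f in facts if rest.startswith(f)), None)
        if match is None:
            break
        annotations.append((concept, match))
        rest = rest[len(match):]
    return annotations

print(annotate("SalesOrderCreateRequest", rule))
# [('BusinessObject', 'SalesOrder'), ('Action', 'Create'), ('Suffix', 'Request')]
```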
The MHS Methodology: Analysis and Design for Context-Aware Systems (paper in proceedings, 2006) Evi Syukur, Seng Wai Loke
The MHS methodology (Syukur2006) provides step-by-step, high-level guidance for building a context-aware pervasive system. It guides developers through the entire software development lifecycle, from problem description and system requirements, through constructing system elements, to graphical and functional design. The MHS methodology is independent of the particular context-aware pervasive system architecture, programming language and platform used for the implementation. Ideally, a pervasive system modelled and designed in MHS could be implemented in many different ways (techniques), as our guideline provides a high-level abstraction and conceptual overview. This allows the development of pervasive systems that target many different environments (e.g., a pervasive campus, pervasive shopping mall, pervasive recreation park, pervasive home environments, etc.). MHS also provides the ability to track changes (forward or backward) throughout the different phases of the methodology and its corresponding constructs.
MHS provides modularity and a clear-cut design by grouping similar system components into one module. This allows a developer to modify the existing elements or concepts of a particular module with little impact on other modules, and new modules can still be added to a system in the future, even if not currently specified in the MHS methodology.
In addition, the MHS methodology gives developers a systematic view, moving from problem descriptions and requirements to a design detailed enough to be implemented directly. In applying the MHS guidelines, developers move from a conceptual overview (abstract concepts) to more concrete and detailed concepts.
The analysis and design phases can be thought of as a process of developing detailed models that give developers a concrete idea of the system implementation. In the MHS methodology, we identify three basic principles/models that a pervasive system needs to have: (a) software system elements, consisting of contextual, service and interaction/message-passing elements, (b) an environment element, and (c) an entity element. While most existing pervasive systems focus only on designing and implementing context-aware software components such as context-sensing, context-collector, context-interpreter, service discovery, etc., we recognise it is useful to clearly model the target environment and entity in order to deliver better pervasive systems.
A Design Theory for Pervasive Information Systems (paper in proceedings, 2006) Panos E. Kourouthanassis, George M. Giaglis

This paper outlines the components of a design theory for the development of PIS. The theory consists of a set of meta-requirements, a set of meta-design considerations, and a set of design method considerations.
PIS introduce new elements in multiple dimensions spanning different IS domains, such as Human-Computer Interaction (HCI) and Software Engineering, which prompt us to examine them as a new class of Information Systems. In essence, PIS revisit the way we interact with computers by introducing new input modalities and system capabilities. So far, the interaction paradigm for Information Systems has been the desktop, and the design and implementation of Information Systems was based on this paradigm. PIS extend this paradigm by introducing a set of novel characteristics, summarised in the following:

  • PIS deal with non-traditional computing devices that merge seamlessly into the physical environment. 
  • PIS simulate the way that humans interact with the physical world: PIS may  incorporate elements of ambient interactions with devices or objects from the physical space.
  • PIS support a multitude of heterogeneous device types that differ in terms of size, shape (more diverse, ergonomic, and stylistic), and functionality (simple mobile phones, portable laptops, pagers, PDAs, sensors, and so on), providing continuous interaction which moves computing from a localised tool to a constant presence. 
  • PIS support nomadic devices which may be carried around by users and present location-based information. 
  • PIS need to support spontaneous networking, implying ad-hoc detection and linking of the participating devices into a temporary pervasive network creating dynamic dependencies among the linked devices.
  • The participating elements of a pervasive system are highly embedded in the physical environment. 
  • PIS entail a revised viewpoint in the way we perceive system design. 
  • In PIS it is highly unlikely for the system designer to know in advance the kinds of users who will be interacting with the system. Users may range from being vaguely familiar with IT to expert users. In addition, PIS users may be opportunistic in the sense that they may use the system only sporadically, implying that they may not be subject to training prior to system use.
  • PIS introduce the property of context awareness as a result of the pervasive artefacts' capability to collect, process, and manage environmental or user-related information on a real-time basis.
More Principled Design of Pervasive Computing Systems (paper in proceedings, 2005) Simon Dobson, Paddy Nixon
A truly pervasive system requires the ability to reason about behaviours beyond their construction, both individually and in composition with other behaviours. This is rendered almost impossible when a system's reaction to context is articulated only as code, is scattered across the entire application, and presents largely arbitrary functional changes.
From a user perspective, the design of pervasive computing systems is almost completely about interaction design. It is vitally important that users can (in the forward direction) predict when and how pervasive systems will adapt, and (in the reverse direction) perceive why a particular adaptation has occurred. The hypothesis for the current work is that predictability in pervasive computing arises from having a close, structured and easily-grasped relationship between the context and the behavioural change that context engenders. In current systems this relationship is not explicitly articulated but instead exists implicitly in the system's reaction to events. The aim of this work is to capture the relationship in a way that can be used both to analyse pervasive computing systems and to aid their design.
Architectural Decision Models as Micro-Methodology for Service-Oriented Analysis and Design (paper in proceedings, 2007) Olaf Zimmermann, Jana Koehler, Frank Leymann
In this paper, the authors propose an engineering approach to service modeling. They treat service realization decisions as first-class entities that guide the service modeler through the design process, and capture these decisions in machine-readable models.
This SOA knowledge is organized in a reusable multi-level SOA decision tree, including a conceptual, a technology, and an asset level. The tree organization follows Model-Driven Architecture (MDA) principles, separating rapidly changing platform-specific concerns from longer-lasting platform-independent decisions. Architecture alternatives in the conceptual level are expressed as SOA patterns. An underlying meta model facilitates automation of service realization decision identification, making, and enforcement: Meta model instances (models) can be created from requirements models and reference architectures, and shared across project boundaries. The meta model also enables decision dependency modeling and tree pruning – making one decision has an impact on many other decisions.
Explicit dependency modeling has another key advantage: the decision tree can serve as a micro-methodology during service design, operating on a more detailed level of abstraction than general purpose methods such as the Rational Unified Process (RUP) and the service modeling approaches described in the literature.
The approach is complementary to these assets; e.g., the decisions can be organized along the RUP phases such as inception, elaboration and construction.
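Decision dependency modeling and tree pruning can be sketched as follows; the decisions, alternatives and the pruning rule are illustrative assumptions, not taken from the authors' meta model:

```python
# Decision-tree fragment: each decision has a set of open alternatives;
# dependency rules prune alternatives elsewhere once a choice is made.
decisions = {
    "MessageExchange": {"synchronous", "asynchronous"},
    "Transport":       {"HTTP", "JMS"},
}
# (decision, chosen alternative) -> alternatives pruned in other decisions
dependencies = {
    ("MessageExchange", "asynchronous"): [("Transport", "HTTP")],
}

def decide(decisions, dependencies, decision, choice):
    """Make one decision and propagate pruning to dependent decisions."""
    assert choice in decisions[decision]
    decisions[decision] = {choice}
    for dep_decision, pruned in dependencies.get((decision, choice), []):
        decisions[dep_decision].discard(pruned)
    return decisions

decide(decisions, dependencies, "MessageExchange", "asynchronous")
print(decisions["Transport"])  # {'JMS'}
```

This captures the paper's point that making one decision has an impact on many others: here, the hypothetical choice of asynchronous messaging removes a transport alternative from consideration.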
A Trust Analysis Methodology for Pervasive Computing Systems (paper in proceedings, 2005) Stephane Presti, Michael Butler, Michael Leuschel, Chris Booth

The goal of the Trust Analysis Methodology (Presti2005) is to help in the design of pervasive systems by highlighting the trust issues inherent in the system. It is a guide rather than a model, as it does not rigorously define exact terms but rather provides a means to discover trust issues.
The methodology involves iteration over four steps followed by a final fifth step.
In particular:

  • Scenario (step 1): a short, fictional narrative, set in the near future, that describes people's daily lives, concentrating on their use of the pervasive computing technology under examination. 
  • Trust Analysis (step 2): involves a Trust Analysis Grid, where the rows of the grid correspond to vignettes in the scenario, while the columns correspond to categories of trust issues that will be checked against the vignettes. A vignette corresponds to one or several pieces of one or several sentences of the scenario and constitutes a cohesive group with regard to the trust issues.
  • Peer Review (step 3): supports the extraction of trust issues from the perspective of another potential user, who may have a different view on trust issues. This peer review may be the occasion to discover some missing trust issues and complement the reviewer’s point of view.
  • Scenario Refinement (step 4): in this step the scenario is refined by adding new text and vignettes, or removing existing ones.
  • Guiding the Design of the Pervasive System (step 5): consists of using the Trust Analysis Grid to draw guidelines that help in the design of the pervasive system under consideration.
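Steps 2 and 3 can be sketched with the grid as a simple table keyed by vignette and trust category; the vignettes, categories and issues below are invented for illustration:

```python
vignettes = [
    "Alice's PDA discovers the meeting-room display",
    "The display fetches Alice's slides from her home server",
]
trust_categories = ["Privacy", "Identity", "Data integrity"]

# Trust Analysis Grid (step 2): one row per vignette, one cell per
# trust category; cells hold the trust issues found so far, which the
# peer reviewer (step 3) can complement with missing ones.
grid = {v: {c: [] for c in trust_categories} for v in vignettes}

grid[vignettes[0]]["Identity"].append("Can the PDA authenticate the display?")
grid[vignettes[1]]["Privacy"].append("Who may read the slides in transit?")

for vignette, row in grid.items():
    issues = sum(len(cell) for cell in row.values())
    print(f"{issues} issue(s): {vignette}")
```

Scenario refinement (step 4) then amounts to adding or removing rows as vignettes change, which is why the grid rather than the narrative is the working artefact.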
MetaSelf: An Architecture and a Development Method for Dependable Self-* systems (paper in proceedings, 2010) Giovanna Di Marzo Serugendo, John Fitzgerald, Alexander Romanovsky

MetaSelf is a development process for engineering dependable and controllable self-organising (SO) systems, consisting of four phases. The Requirements and Analysis phase identifies the functionality of the system along with self-* requirements specifying where and when self-organisation is needed or desired.
The Design phase consists of two sub-phases:

  • D1: the designer chooses architectural patterns and self-* mechanisms; 
  • D2: the individual autonomous components are designed. The necessary metadata and policies are selected and described, and  the self-* mechanisms are simulated and possibly adapted or improved.
The Implementation phase produces the metadata and executable policies. In the Verification phase, the designer makes sure that agents, the environment, artefacts and mechanisms work as desired. Potential faults and their consequences are identified, similar to the way failure modes and effects analysis works. Corrective measures (redesign or dependability policies) to tolerate or remove the identified faults are taken accordingly.
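The interplay of metadata and executable policies at run time can be sketched as follows; the component metadata, the threshold and the policy itself are illustrative assumptions, not taken from MetaSelf:

```python
# Components expose metadata; executable policies (produced in the
# Implementation phase) read the metadata and trigger self-* mechanisms.
components = [
    {"name": "worker-1", "load": 0.95},
    {"name": "worker-2", "load": 0.30},
]

def overload_policy(component):
    """Dependability policy: request load shedding when a component's
    metadata reports overload (hypothetical 0.8 threshold)."""
    if component["load"] > 0.8:
        return f"redistribute tasks away from {component['name']}"
    return None

actions = [a for c in components if (a := overload_policy(c))]
print(actions)  # ['redistribute tasks away from worker-1']
```

Keeping the policy separate from the components, and driven only by published metadata, is what makes the self-* behaviour controllable and verifiable in the sense the phase descriptions above require.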
A Method Fragments Approach to Methodologies for Engineering Self-Organising Systems (article in journal, 2012) Mariachiara Puviani, Giovanna Di Marzo Serugendo, Regina Frei, Giacomo Cabri

This work summarises five relevant methods for developing self-organising multi-agent systems and presents several method fragments coming from these methods. In particular, the paper presents Adelfe, Customised Unified Process, MetaSelf, General Methodology and the Simulation Driven Approach.

A Goal-Oriented Approach for Modelling Self-organising MAS (paper in proceedings, 2009) Mirko Morandini, Frédéric Migeon, Marie-Pierre Gleizes, Christine Maurel, Loris Penserini, Anna Perini

(Morandini2009) proposes extending a goal-oriented engineering methodology to deal with the modelling of organisations that are able to self-organise in order to reach their goals in a changing environment. To this aim, the authors combine Tropos4AS (Morandini2008), an extension of TROPOS (Tropos2005) for adaptive systems, with concepts, guidelines and modelling steps from the ADELFE methodology, which provides a bottom-up approach for engineering collaborative multi-agent societies with an emergent behaviour.
The authors first extended the Tropos4AS meta-model with concepts such as skills, aptitudes and actions coming from the Adelfe meta-model. Second, they introduced the Agents&Artifacts meta-model into the combined meta-model in order to model the environment. After that, they adapted the Tropos4AS modelling process to the modelling of the newly introduced concepts.
The resulting MAS has self-adaptation properties, with agents that are able to change their behaviour according to changes in the environment, and organisations that adapt themselves to changing needs.
Tropos4AS extends TROPOS goal models to enable the description of systems that shall be able to adapt to environmental changes. Tropos4AS introduces modelling of an actor’s perceived environment and of possible failures that can be identified and prevented by recovery activities. Moreover, goals can be annotated to define the runtime goal achievement behaviour, with various goal types and conditions related to the
environment, for goal creation, achievement and failure. The resulting design model is mapped into implementation language constructs to derive a code skeleton.

Software Engineering for Self-Organizing Systems (paper in proceedings, 2011) H. Van Dyke Parunak, Sven A. Brueckner
This paper proposes an interesting survey on the engineering of self-organising systems. In particular, the authors highlight several challenges for future research, such as system composition arising from method-fragment integration, system characterisation and control, and formal analysis along the dimensions of emergence, organisation and time.
In addition, the authors present points of continuity and contrast between traditional software engineering and self-organisation software engineering, with particular regard to the concepts of architecture, design patterns, method fragmentation, and design processes.