Models and Paradigms of Interaction

Peter Wegner

The development of a conceptual framework and formal theoretical foundation for object-based programming has proved so elusive because the observable behavior of objects cannot be modeled by algorithms. The irreducibility of object behavior to that of algorithms has radical consequences for both the theory and the practice of computing.
Interaction machines, defined by extending Turing machines with input actions (read statements), are shown to be more expressive than computable functions, providing a counterexample to the hypothesis of Church and Turing that the intuitive notion of computation corresponds to formal computability by Turing machines. The negative result that interaction cannot be modeled by algorithms leads to positive principles of interactive modeling by interface constraints that support partial descriptions of interactive systems whose complete behavior is inherently unspecifiable. The unspecifiability of complete behavior for interactive systems is a computational analog of Gödel incompleteness for the integers.
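As an informal illustration of this distinction (a sketch only, not part of the paper's formal definitions), the fragment below contrasts an algorithm, which maps a fixed input to an output and halts, with an interaction-machine-style loop, which interleaves input actions with computation so that each output may depend on an open-ended history of interaction. The names algorithm, interaction_machine, and the environment model are illustrative assumptions.

    # Sketch only: contrasts an algorithm with an interaction-machine-style loop.
    # Names and the environment model are illustrative, not Wegner's formalism.

    def algorithm(x):
        # An algorithm: all input is available up front; the result is a
        # function of x alone, and the computation halts.
        return x * x

    def interaction_machine(environment):
        # An interaction machine: input actions (reads) are interleaved with
        # computation, so each output may depend on the entire prior history
        # of interaction, which is not fixed in advance.
        history = []
        for stimulus in environment:      # unbounded stream of inputs
            history.append(stimulus)
            response = sum(history)       # behavior depends on the history
            yield response                # output observable by the environment

Because the environment may choose each successive stimulus after observing earlier responses, the machine's behavior is a dialogue rather than a function from a single input string to an output.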
Incompleteness is a key to expressing richer behavior shared by empirical models of physics and the natural sciences. Interaction machines have the behavioral power of empirical systems, providing a precise characterization of empirical computer science. They also provide a precise framework for object-based software engineering and agent-oriented AI models that is more expressive than algorithmic models.
Fortunately the complete behavior of interaction machines is not needed to harness their behavior for practical purposes. Interface descriptions are the primary mechanism used by software designers and application programmers for partially describing systems for the purpose of designing, controlling, predicting, and understanding them. Interface descriptions are an example of “harness constraints” that constrain interactive systems so their behavior can be harnessed for useful purposes. We examine both system constraints like transaction correctness and interface constraints for software design and applications.
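As a hedged illustration of an interface description serving as a partial specification (the TransactionalResource name and its operations are hypothetical, not drawn from the paper), the sketch below constrains which operations a conforming system must expose without specifying its complete interactive behavior; clients can be designed against this partial description even though the full behavior of implementations remains unspecified.

    # Sketch of an interface constraint: a partial description of behavior.
    # The TransactionalResource name and its operations are illustrative assumptions.
    from typing import Protocol

    class TransactionalResource(Protocol):
        # The interface constrains which operations are observable,
        # not the complete interactive behavior of any implementation.
        def begin(self) -> None: ...
        def write(self, key: str, value: bytes) -> None: ...
        def commit(self) -> None: ...
        def rollback(self) -> None: ...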
Sections 1, 2, and 3 provide a conceptual framework and theoretical foundation for interactive models, while sections 4, 5, and 6 apply the framework to distributed systems, software engineering, and artificial intelligence. Section 1 explores the design space of interaction machines, section 2 considers the relation among imperative, declarative, and interactive paradigms of computing, and section 3 examines the limitations of logic and formal semantics. In section 4 we examine process models, transaction correctness, time, programming languages, and operating systems from the point of view of interaction. In section 5, we examine life-cycle models, object-based design, use-case models, design patterns, interoperability, and coordination. In section 6, we examine knowledge representation, intelligent agents, planning for dynamical systems, nonmonotonic logic, and “can machines think?”. The interaction paradigm provides new ways of approaching each of these application areas, demonstrating the value of interactive models as a basis for the analysis of practical problems in computing. Readers interested in the applications of interaction machines can skip directly from section 1 to any of the application sections.