Runtime Verification (RV) [1] is a formal verification technique that focuses on checking the behaviour of software and hardware systems. Compared with other formal verification techniques, RV is more dynamic and lightweight because it checks how the system behaves while it is actually running. Software Testing (ST) is not a formal method, although formal methods are often used for test-case generation with provable qualities, such as coverage metrics. ST amounts to observing the execution of a software system to validate whether it behaves as intended and to identify potential malfunctions.
There are many connections between RV and ST; in particular, neither technique is exhaustive, and both operate while the real system is running.
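For concreteness, the minimal Java sketch below contrasts the two views on a toy example: a JUnit 5 test checks a property at the end of one scripted scenario, whereas a monitor checks the same property on every event of the running system. The Thermostat and HeaterOffMonitor classes are hypothetical, introduced only for illustration.

import static org.junit.jupiter.api.Assertions.assertFalse;

import org.junit.jupiter.api.Test;

// Hypothetical system under test/monitoring: a thermostat that must switch
// the heater off whenever the temperature reading exceeds its threshold.
class Thermostat {
    private final double threshold;
    private boolean heating = true;

    Thermostat(double threshold) { this.threshold = threshold; }

    void onReading(double reading) { heating = reading <= threshold; }

    boolean isHeating() { return heating; }
}

// ST view: a JUnit test drives one scripted scenario and checks the
// outcome at a single point, after the chosen inputs have been applied.
class ThermostatTest {
    @Test
    void heaterIsOffAboveThreshold() {
        Thermostat t = new Thermostat(25.0);
        t.onReading(30.0);
        assertFalse(t.isHeating(), "heater must be off above the threshold");
    }
}

// RV view: a monitor observes every event emitted by the running system,
// however it is driven, and signals a violation as soon as one occurs.
class HeaterOffMonitor {
    private final double threshold;

    HeaterOffMonitor(double threshold) { this.threshold = threshold; }

    // Called by the instrumented system after each sensor reading.
    void onEvent(double reading, boolean heating) {
        if (reading > threshold && heating) {
            throw new AssertionError("property violated at runtime: heating while reading > " + threshold);
        }
    }
}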
This thesis moves a step beyond the preliminary work carried out in [2] and aims at:
1) extending an existing ST framework (it might be JUnit in Java, or any other framework to be identified as part of the initial design of the Master Thesis) in order to seamlessly support and integrate RV of a complex, distributed system (a possible JUnit-based integration is sketched after this list);
2) using the extended ST&RV tool to run a systematic comparison of RV and ST techniques on one or more distributed applications, such as concurrent systems, IoT systems, or multi-agent systems; again, identifying the target application(s) is part of the Master Thesis design.
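As an illustration of what objective 1 could look like if JUnit 5 were the chosen framework, the following sketch hooks a runtime monitor into the test life cycle via a JUnit extension. The RuntimeMonitor interface and all names are assumptions made for this sketch, not an existing API; the actual design is part of the thesis.

import java.util.List;

import org.junit.jupiter.api.extension.AfterEachCallback;
import org.junit.jupiter.api.extension.BeforeEachCallback;
import org.junit.jupiter.api.extension.ExtensionContext;

// Hypothetical monitor interface: a component that is fed the event trace of
// the running (possibly distributed) system and records property violations.
interface RuntimeMonitor {
    void start();            // begin observing the system's trace
    List<String> stop();     // stop observing and return the violations seen
}

// JUnit 5 extension wrapping each test with a runtime monitor: the test drives
// the system as usual, while the monitor checks the trace the system produces.
class RuntimeVerificationExtension implements BeforeEachCallback, AfterEachCallback {
    private final RuntimeMonitor monitor;

    RuntimeVerificationExtension(RuntimeMonitor monitor) { this.monitor = monitor; }

    @Override
    public void beforeEach(ExtensionContext context) {
        monitor.start();
    }

    @Override
    public void afterEach(ExtensionContext context) {
        List<String> violations = monitor.stop();
        if (!violations.isEmpty()) {
            // A passing test still fails overall if the monitored property was
            // violated at any point during the execution it triggered.
            throw new AssertionError("RV violations during test '"
                    + context.getDisplayName() + "': " + violations);
        }
    }
}

A test class would then register the extension on an instance, e.g. with JUnit 5's @RegisterExtension on a field holding new RuntimeVerificationExtension(someMonitor), so that existing test cases gain trace-level checks without being rewritten.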
Good command of software engineering and programming languages is required.
[1] Ezio Bartocci, Yliès Falcone, Adrian Francalanza, and Giles Reger. Introduction to Runtime Verification. In Lectures on Runtime Verification - Introductory and Advanced Topics, Ezio Bartocci and Yliès Falcone (Eds.), Lecture Notes in Computer Science, Vol. 10457. Springer, 2018, 1-33.
[2] Maurizio Leotta, Diego Clerissi, Luca Franceschini, Dario Olianas, Davide Ancona, Filippo Ricca, and Marina Ribaudo. Comparing Testing and Runtime Verification of IoT Systems: A Preliminary Evaluation based on a Case Study. In Proceedings of ENASE 2019, 434-441.