Keynote Speakers

Uwe Nestmann, Technische Universität Berlin, DE

Title: Distributed Process Calculi with Local States

Abstract: Process calculi are popular for several reasons: (1) they precisely capture models of concurrent computation via the syntax and semantics of minimalistic languages; (2) they are equipped with rich algebraic theories built upon behavioural equivalences, often with precise logical counterparts; (3) they support powerful action-based proof techniques. While these strengths serve many concurrent applications well, reasoning about distributed algorithms often requires analyses in a state-based style, e.g., using (global) invariants. We therefore study extensions of process calculi with explicit support for distribution, in which each process has a private memory component representing its own explicit local state. In the talk, I address the motivation behind distributed process calculi with local states, as well as the engineering principles that guide the design and theory of such calculi.
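
As a minimal illustrative sketch (the notation below is generic and not taken from the talk), one may picture a located process n[P ; σ] running behaviour P next to a private store σ, with communication updating only the receiver's local state:

    \[
      n[\,\overline{c}\langle v \rangle.P \,;\, \sigma\,] \;\|\; m[\,c(x).Q \,;\, \rho\,]
      \;\longrightarrow\;
      n[\,P \,;\, \sigma\,] \;\|\; m[\,Q \,;\, \rho[x \mapsto v]\,]
    \]

State-based reasoning, e.g. a global invariant stated over the stores σ and ρ, can then complement the usual action-based proof techniques.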

Biography: Prof. Dr. Uwe Nestmann is a professor of models and theory of distributed systems at Technische Universität Berlin. His scientific interests span from purely basic research on models of concurrent and distributed computation to the rigorous analysis of applications using domain-specific model variants and extensions. He received a Test-of-Time Award of the CONCUR conference series in 2021 for his contributions to the understanding of the expressive power of concurrent languages. He holds a doctoral degree from the Universität Erlangen-Nürnberg and spent almost ten years in France (INRIA), Denmark (BRICS), and Switzerland (EPFL).

Mariëlle Stoelinga, Radboud University Nijmegen and University of Twente, NL

Title: Maintenance meets model checking: predictive maintenance via fault trees and formal methods

Abstract: Proper maintenance is crucial to keep our trains, power plants and robots up and running. Since maintenance is also expensive, effective maintenance is a typical optimization problem, in which one balances costs against system performance (in terms of availability, reliability, and remaining useful lifetime).

Predictive maintenance is a promising technique that aims at predicting failures more accurately, so that maintenance can be performed just in time: exactly when and where it is needed. Thus, predictive maintenance promises higher availability and fewer failures at lower costs. In this talk, I will advocate a combination of model-driven (especially fault trees) and data-analytical techniques to gain more insight into the costs versus performance of maintenance strategies. I will show the results of several case studies from railway engineering, namely rail track maintenance (with Arcadis) and HVAC (heating, ventilation, air conditioning) systems (with the Dutch railways).
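
As a toy illustration of the model-driven side (a hypothetical fault tree with made-up probabilities, not one of the case-study models from the talk), a static fault tree with AND/OR gates over independent basic events can be evaluated to compare the top-event failure probability under two maintenance policies:

    # Illustrative only: a toy static fault tree with independent basic events.

    def or_gate(*ps):
        # The gate fails if at least one child fails.
        q = 1.0
        for p in ps:
            q *= (1.0 - p)
        return 1.0 - q

    def and_gate(*ps):
        # The gate fails only if all children fail.
        q = 1.0
        for p in ps:
            q *= p
        return q

    def top_event(p_track, p_switch, p_hvac):
        # Hypothetical structure: the system fails if the HVAC fails AND
        # either the track or a switch has degraded.
        return and_gate(or_gate(p_track, p_switch), p_hvac)

    # Made-up yearly failure probabilities under corrective vs. predictive maintenance.
    corrective = top_event(0.10, 0.20, 0.05)
    predictive = top_event(0.04, 0.08, 0.02)
    print(f"P(top) corrective: {corrective:.4f}  predictive: {predictive:.4f}")

The data-analytical side would then replace such made-up probabilities with failure rates estimated from monitoring data, which is roughly where the two views meet.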

Biography: Prof. Dr. Mariëlle Stoelinga is a professor of risk management, both at Radboud University Nijmegen and at the University of Twente, in the Netherlands. Stoelinga is the project coordinator of PrimaVera, a large collaborative project on Predictive Maintenance in the Dutch National Science Agenda (NWA). She also received a prestigious ERC Consolidator Grant. Stoelinga is the scientific programme leader of the Risk Management Master, a part-time MSc programme for professionals. She holds an MSc and a PhD degree from Radboud University Nijmegen, and has spent several years as a post-doc at the University of California at Santa Cruz, USA.

Alessio Lomuscio, Imperial College London, UK

Title: Towards verifying neural-symbolic multi-agent systems

Abstract: A challenge in the deployment of multi-agent systems (MAS) remains the inherent difficulty of predicting their run-time behaviour with confidence. Over the past twenty years, increasingly scalable verification methods, including model checking and parameterised verification, have enabled the validation of several classes of MAS against AI-based specifications, as well as several MAS applications in services, robotics, security, and beyond.

Yet, a new class of agents is emerging in applications. Unlike traditional MAS, which are typically directly programmed (and, less often, purely neural), these agents combine connectionist and symbolic aspects; we will refer to them as neural-symbolic MAS. They include a neural layer, often implementing a perception function, and symbolic or control-based layers, typically realising decision making and planning. Implementations of neural-symbolic agents permeate many present and forthcoming AI applications, including autonomous vehicles and robotics. Due to the neural layer, as well as their heterogeneity, verifying the behaviours of neural-symbolic MAS is particularly challenging. Yet, I will argue that, given the safety-critical applications in which they are used, methods and tools for their formal verification should be developed.

In this talk I will share some of the contributions on this topic developed at the Verification of Autonomous Systems Lab at Imperial College London. I will begin by describing traditional approaches to the verification of symbolic MAS, and parameterised verification for addressing arbitrary collections of agents such as swarms. I will then summarise our present efforts on the verification of neural perception systems, including MILP-based approaches, linear relaxations, and symbolic interval propagation, introduce our resulting toolkits, Venus and VeriNet, and exemplify their use.
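
As a small, purely illustrative sketch of the interval-style reasoning mentioned above (a toy layer with made-up weights, not the algorithms implemented in Venus or VeriNet), sound output bounds for an affine-plus-ReLU layer can be propagated from an input box as follows:

    import numpy as np

    def affine_bounds(lo, hi, W, b):
        # For y = W x + b with x in the box [lo, hi] (elementwise), splitting W
        # into its positive and negative parts yields sound output bounds.
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        y_lo = W_pos @ lo + W_neg @ hi + b
        y_hi = W_pos @ hi + W_neg @ lo + b
        return y_lo, y_hi

    def relu_bounds(lo, hi):
        # ReLU is monotone, so it can be applied directly to the bounds.
        return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

    # Hypothetical 2-input, 2-output layer and an input box (a perturbed observation).
    W = np.array([[1.0, -2.0], [0.5, 1.5]])
    b = np.array([0.1, -0.2])
    x_lo, x_hi = np.array([0.0, 0.4]), np.array([0.2, 0.6])

    lo, hi = relu_bounds(*affine_bounds(x_lo, x_hi, W, b))
    print("output bounds:", lo, hi)  # every reachable output lies inside this box

MILP encodings and linear relaxations refine this kind of analysis by tightening the over-approximation of the ReLU activations.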

This will lead us to existing methods for closed-loop, neural-symbolic MAS. In this context, I will share existing results that enable us to perform reachability analysis, and verify systems against bounded temporal specifications and ATL.

I will conclude by highlighting some of the many challenges that lie ahead.

Biography: Alessio Lomuscio (http://www.doc.ic.ac.uk/~alessio) is Professor of Safe Artificial Intelligence in the Department of Computing at Imperial College London (UK), where he leads the Verification of Autonomous Systems Lab (http://vas.doc.ic.ac.uk/). He serves as Deputy Director of the UKRI Doctoral Training Centre in Safe and Trusted Artificial Intelligence. He is an ACM Distinguished Member, a Fellow of the European Association for Artificial Intelligence, and currently holds a Royal Academy of Engineering Chair in Emerging Technologies.

His research interests concern the development of formal verification methods for artificial intelligence. Since 2000 he has worked on the development of formal methods for the verification of autonomous systems and multi-agent systems. To this end, he has put forward several methods based on model checking and various forms of abstraction to verify AI systems, including robotic swarms, against AI-based specifications. He is presently focusing on the development of methods to ascertain the correctness of AI systems based on deep neural networks.

He has published approximately 200 papers in AI conferences (including IJCAI, KR, AAAI, AAMAS and ECAI), verification and formal methods conferences (CAV, SEFM, ATVA), and international journals (AIJ, JAIR, ACM ToCL, JAAMAS, Information and Computation). He serves on the editorial boards of AIJ, JAIR, and JAAMAS, and recently served as general co-chair of AAMAS 2021.