The Safe and Trustworthy AI Workshop
Imperial College London, UK, 09/07/2023–10/07/2023
The second STAI workshop is focused on the broad area of safe and trustworthy AI.
Topics of interest
In the last few years, there have been considerable advances in the capabilities of AI systems. However, ensuring that these systems are safe and trustworthy remains a challenge. An AI system is considered safe when we can provide some assurance about its behaviour, and trustworthy if the average user can place well-founded confidence in the system and its decision-making.
This workshop takes a broad view of safety and trustworthiness, covering areas such as the following:
- Formal verification of system behaviour
- Explainable and interpretable AI
- Knowledge representation and reasoning
- Neurosymbolic AI
- Safe multi-agent systems
- Coordination and cooperative AI
- Fairness, bias, and algorithmic discrimination
- AI ethics and value alignment
- Robustness and failures of generalisation
- AI policy and regulation, including the use of agent-based modelling to better understand the consequences of such policy
- The use of norms for ensuring the alignment of multi-agent systems with specified values