

The presentations of the 2023 edition appear below. Special thanks to the keynote speakers and all the presenters.

Keynote 1 - Paul Lukowicz (DFKI Kaiserslautern, Germany) - Safety risks of AI: Intelligence, Complexity, and Stupidity

Abstract: TBD

Session 1 - Robustness of AI via OoD and Unknown-Unknowns Detection – Chair: David Bossens (University of Southampton, UK)

> Debate Panel - Session Discussants: Presenters and Session Chair.

Session 2 - AI Robustness, Adversarial Attacks and Reinforcement Learning - Chair: Anqi Liu (Johns Hopkins University, USA)

> Debate Panel - Session Discussants: Presenters and Session Chair.

Session 3 - AI Governance and Policy/Value Alignment – Chair: François Terrier (CEA-LIST, France)

> Debate Panel - Session Discussants: Presenters and Session Chair.

Keynote 2 - François Terrier (Program Director of CEA List, France) - No Trust without regulation! European challenge on regulation, liability and standards for trusted AI

The explosion in the performance of machine learning (ML) and the potential of its applications strongly encourage its use in industrial systems, including for critical functions such as decision-making in autonomous systems. While the AI community is well aware of the need to ensure the trustworthiness of AI-based applications, it still too often sets aside the issue of safety and its corollary, regulation and standards, without which no level of safety can be certified, whether a system is only slightly critical or highly critical.
The process of developing and qualifying safety-critical software and systems in regulated industries such as aerospace, nuclear power, railways, or the automotive industry has long been well rationalized and mastered. These industries rely on well-defined standards, regulatory frameworks, and processes, as well as formal techniques, to assess and demonstrate the quality and safety of the systems and software they develop. However, the low level of formalization of specifications and the uncertainties and opacity of machine-learning-based components make it difficult to validate and verify them using most traditional critical-systems engineering methods. This raises the question of qualification standards, and therefore of regulations adapted to AI. With the AI Act, the European Commission has laid the foundations for moving forward and building solid approaches to the integration of AI-based applications that are safe, trustworthy, and respectful of European ethical values. The question then becomes: “How can we rise to the challenge of certification and propose methods and tools for trusted artificial intelligence?”

Session 5 - AI Trustworthiness, Explainability and Testing - Chair: Prajit T. Rajendran (CEA-LIST, France)

> Debate Panel - Session Discussants: Presenters and Session Chair.
