
Dr. François Terrier
Director of Programs at the CEA's Institute for Smart Digital Systems (CEA List)

François Terrier is an AI Senior Fellow at CEA. He holds a PhD in artificial intelligence and spent 10 years working on expert systems based on three-valued, temporal and fuzzy logics. Since 1994, he has directed research on system and software engineering. As head of CEA's system and software engineering department, he led efforts to build open tool chains for trustworthy software and systems, covering the whole development cycle from requirements specification to equipment integration. His research focuses on combining domain-oriented modeling with formal methods for high-quality, safe and secure critical systems. In 2019, François was put in charge of building CEA's Trustworthy AI program, and in 2022 he became Director of Programs at CEA List (Institute for Smart Digital Systems).


No Trust without regulation! European challenge on regulation, liability and standards for trusted AI

The explosion in the performance of Machine Learning (ML) and the potential of its applications are strongly encouraging us to consider its use in industrial systems, including for critical functions such as decision-making in autonomous systems. While the AI community is well aware of the need to ensure the trustworthiness of AI-based applications, it still sets aside too readily the issue of safety and its corollary, regulation and standards, without which no level of safety can be certified, whether a system is only mildly critical or highly critical.

The process of developing and qualifying safety-critical software and systems in regulated industries such as aerospace, nuclear power, railways or automotive has long been well rationalized and mastered. These industries rely on well-defined standards, regulatory frameworks and processes, as well as formal techniques, to assess and demonstrate the quality and safety of the systems and software they develop. However, the low level of formalization of specifications and the uncertainty and opacity of machine learning-based components make them difficult to validate and verify with most traditional critical-systems engineering methods. This raises the question of qualification standards, and therefore of regulations, adapted to AI. With the AI Act, the European Commission has laid the foundations for moving forward and building solid approaches to the integration of AI-based applications that are safe, trustworthy and respectful of European ethical values. The question then becomes: "How can we rise to the challenge of certification and propose methods and tools for trusted artificial intelligence?"
