
Dr. Zakaria Chihani
CEA - List, France

Dr. Zakaria Chihani is a researcher in the Software Safety and Security Laboratory at the Commissariat à l'énergie atomique et aux énergies alternatives (CEA) in Saclay, France, where he currently heads the effort on AI trustworthiness, with a particular focus on Formal Methods. Drawing on tools and methods developed for the explainability, testing and verification of AI, as well as a modular and extensible platform for the characterization of AI Safety, his team assists several industrial partners, such as Technip Energies and Renault, in evaluating the trustworthiness of their AI. Zakaria strives to foster academic activity around the safety of AI through numerous talks, courses and the organization of events such as WAISE and ForMaL, as well as the GT-VRAI working group.


INVITED TALK - A selected view of AI trustworthiness methods: How far can we go?

While still debating some questions (such as the likelihood of achieving Artificial General Intelligence), the AI community in particular, and most related stakeholders in general, seem more or less convinced that "AI winters" are a thing of the past and that the current "summer" will never end. Indeed, this rapidly evolving field, especially through recent advances in Deep Learning, has become too good at particular useful tasks to simply disappear. There is little doubt, for example, that neural networks are poised to permeate a growing number of everyday applications, including sensitive software where trust is paramount.

But as these artifacts move from fad status to stably ubiquitous components, their deepening interweaving with many aspects of society demands special attention to the development of methods and tools for an adequate characterization of AI trustworthiness. This colossal quest is made difficult by the intrinsic opacity of neural networks and their ever-increasing size, making any method that can bring us closer to trustworthiness a precious commodity. In this talk, we take a retrospective look at some of these methods and discuss their current added value to safety, as well as the promise they hold for the future of AI trustworthiness.
