
PROGRAMME

This year the workshop will be held over one and a half days, according to the following schedule.

Day 1 - Sunday, July 24th, 2022 - 13:30 to 17:40 CEST


Day 2 - Monday, July 25th, 2022 - 09:10 to 17:30 CEST


INVITED SPEAKERS

TAILOR

The purpose of TAILOR is to build a strong academic-public-industrial research network (https://tailor-network.eu/), with more than 50 partners from across Europe, capable of providing the scientific basis for Trustworthy AI by leveraging and combining learning, optimization, and reasoning to realize AI systems that incorporate the safeguards that make them reliable, safe, transparent, and respectful of human agency and expectations.

The special session will present an overview of the TAILOR project (Foundations of Trustworthy AI – Integrating Reasoning, Learning and Optimization) and the activities that have been carried out within the project. Then, a panel of experts from both within and outside the project consortium will discuss the current challenges and the solutions that have been developed to support Trustworthy AI.

TAILOR - PROGRAMME

The TAILOR session will be held on the second day of the workshop, according to the following schedule.

Day 2 - July 25th, 2022 - 09:10 to 10:40 CEST

TAILOR - CHAIRS AND SPEAKERS


BEST PAPER AWARD

The Program Committee (PC) will designate three to five papers as candidates for the AISafety Best Paper Award.

The selected candidates for the AISafety 2022 Best Paper Award are:

The AISafety 2022 Best Paper Award is granted to:

Mattias Brännström, Andreas Theodorou and Virginia Dignum, for "Let it RAIN for Social Good".


The best paper will be selected based on the votes of the workshop Chairs during the workshop.

 

The authors will be presented with the Best Paper Award certificate, showing the name of the award, the title of the paper, and the names of the authors, at the workshop's closing.

ORGANIZING COMMITTEE

  • Gabriel Pedroza, CEA LIST, France

  • Xin Cynthia Chen, University of Hong Kong, China

  • José Hernández-Orallo, Universitat Politècnica de València, Spain

  • Xiaowei Huang, University of Liverpool, UK

  • Huascar Espinoza, KDT JU, Belgium

  • Richard Mallah, Future of Life Institute, USA

  • John McDermid, University of York, UK
  • Mauricio Castillo-Effen, Lockheed Martin, USA

PROGRAMME COMMITTEE

  • Simos Gerasimou, University of York, UK

  • Jonas Nilson, NVIDIA, USA

  • Morayo Adedjouma, CEA LIST, France

  • Brent Harrison, University of Kentucky, USA

  • Alessio R. Lomuscio, Imperial College London, UK

  • Brian Tse, Affiliate at University of Oxford, China

  • Michael Paulitsch, Intel, Germany

  • Ganesh Pai, NASA Ames Research Center, USA

  • Rob Alexander, University of York, UK

  • Vahid Behzadan, University of New Haven, USA

  • Chokri Mraidha, CEA LIST, France

  • Ke Pei, Huawei, China

  • Orlando Avila-García, Arquimea Research Center, Spain

  • I-Jeng Wang, Johns Hopkins University, USA

  • Chris Allsopp, Frazer-Nash Consultancy, UK

  • Andrea Orlandini, ISTC-CNR, Italy

  • Agnes Delaborde, LNE, France

  • Rasmus Adler, Fraunhofer IESE, Germany

  • Roel Dobbe, TU Delft, The Netherlands

  • Vahid Hashemi, Audi, Germany

  • Juliette Mattioli, Thales, France

  • Bonnie W. Johnson, Naval Postgraduate School, USA

  • Roman V. Yampolskiy, University of Louisville, USA

  • Jan Reich, Fraunhofer IESE, Germany

  • Fateh Kaakai, Thales, France

  • Francesca Rossi, IBM and University of Padova, USA

  • Javier Ibañez-Guzman, Renault, France

  • Jérémie Guiochet, LAAS-CNRS, France
  • Raja Chatila, Sorbonne University, France

  • François Terrier, CEA LIST, France

  • Mehrdad Saadatmand, RISE Research Institutes of Sweden, Sweden

  • Alec Banks, Defence Science and Technology Laboratory, UK

  • Roman Nagy, Argo AI, Germany

  • Nathalie Baracaldo, IBM Research, USA

  • Toshihiro Nakae, DENSO Corporation, Japan

  • Gereon Weiss, Fraunhofer IKS, Germany

  • Philippa Ryan Conmy, Adelard, UK

  • Stefan Kugele, Technische Hochschule Ingolstadt, Germany

  • Colin Paterson, University of York, UK

  • Davide Bacciu, Università di Pisa, Italy

  • Timo Sämann, Valeo, Germany

  • Sylvie Putot, Ecole Polytechnique, France

  • John Burden, University of Cambridge, UK

  • Sandeep Neema, DARPA, USA

  • Fredrik Heintz, Linköping University, Sweden

  • Simon Fürst, BMW Group, Germany

  • Mario Gleirscher, University of Bremen, Germany

  • Mandar Pitale, NVIDIA, USA

  • Leon Kester, TNO, The Netherlands


TECHNICAL SESSIONS

The presentation files are available via the links in the talk titles below.

Invited Talk 1: Elizabeth Adams (Stanford University Institute for Human Centered AI, USA), Leadership of Responsible AI: Representation Matters

People of color are adversely affected by artificial intelligence (AI) bias. The effects of AI bias have been noted in facial recognition technology, mortgage lending, and algorithms used to determine healthcare treatments. People impacted by AI bias are rarely represented in the development of AI technology (Atker et al., 2021). To prevent AI bias, including diverse perspectives in the creation of Responsible AI (RAI) artifacts that shape policies, procedures, and governance models could address potential problems in the development of AI.
RAI is an emerging business discipline that examines legal, ethical, and moral standpoints of technology development to help reduce AI bias (Barredo et al., 2020; Taylor et al., 2018). By tying impacted people to innovation and incorporating their ideas as stakeholders, innovations gain substantive and symbolic support from those who are most affected by them (Boon et al., 2021). My motivation is to explore broader employee stakeholder participation in IS, AI, and Organizational Learning. Therefore, I seek to answer the following research question: "How does the participation of African American employee stakeholders in the creation of Responsible AI 'shaping artifacts' reduce bias in AI?"

Session 1: AI Ethics: Fairness, Bias, and Accountability - Chair: Gabriel Pedroza (CEA-List, France)

* Let it RAIN for Social Good, Mattias Brännström, Andreas Theodorou, Virginia Dignum
* Accountability and Responsibility of Artificial Intelligence Decision-making Models in Indian Policy Landscape, Palak Malhotra, Amita Misra

* Assessing Demographic Bias Transfer from Dataset to Model: A Case Study in Facial Expression Recognition, Iris Dominguez-Catena, Daniel Paternain, Mikel Galar


> Debate Panel - Session Discussants: Elizabeth Adams, Gabriel Pedroza

Invited Talk 2: Luis Aranda (Organization for Economic Cooperation and Development), Enabling AI governance: OECD’s work on moving from Principles to practice

Three years later, how far have we gotten in putting the OECD AI principles into practice?
Governments and other stakeholders have been working to implement the OECD AI Principles to make artificial intelligence trustworthy for people and planet. This talk is a timely occasion to highlight work to date and discuss future priorities.
The talk will showcase recent initiatives developed by the OECD Working Party on Artificial Intelligence Governance (AIGO) and the OECD.AI Network of Experts, including the OECD.AI Policy Observatory, a catalogue of tools for trustworthy AI, a user-friendly framework for classifying different types of AI systems and a global AI incidents tracker. The discussion will seek to highlight good AI policy practices related to AI governance.

Session 2 - Short Presentations - Safety Assessment of AI-enabled systems - Chair: Douglas Lange (Naval Information Warfare Center Pacific, USA)

* A Hierarchical HAZOP-Like Safety Analysis for Learning-Enabled Systems, Yi Qi, Philippa Ryan Conmy, Wei Huang, Xingyu Zhao, Xiaowei Huang
* Increasingly Autonomous CPS: Taming Emerging Behaviors from an Architectural Perspective, Jerome Hugues, Daniela Cancila

* CAISAR: A platform for Characterizing Artificial Intelligence Safety and Robustness, Julien Girard-Satabin, Michele Alberti, François Bobot, Zakaria Chihani, Augustin Lemesle


> Debate Panel - Session Discussants:  Luis Aranda, Douglas Lange, Mattias Brännström (Umeå University, Sweden)

Keynote 1: Gary Marcus (Scientist and Author of "Rebooting AI", Canada), Towards a Proper Foundation for Robust Artificial Intelligence

Gary Marcus is a leading voice in artificial intelligence. He is a scientist, best-selling author, and serial entrepreneur (Founder of Robust.AI and Geometric.AI, acquired by Uber). He is well-known for his challenges to contemporary AI, anticipating many of the current limitations decades in advance, and for his research in human language development and cognitive neuroscience.
An Emeritus Professor of Psychology and Neural Science at NYU, he is the author of five books, including The Algebraic Mind, Kluge, The Birth of the Mind, and the New York Times bestseller Guitar Zero. He has often contributed to The New Yorker, Wired, and The New York Times. His most recent book, Rebooting AI, co-authored with Ernest Davis, is one of Forbes’s 7 Must Read Books in AI.

Session 3: Machine learning for safety-critical AI - Chair: John Burden (University of Cambridge, UK)

* Revisiting the Evaluation of Deep Neural Networks for Pedestrian Detection, Patrick Feifel, Benedikt Franke, Arne Raulf, Friedhelm Schwenker, Frank Bonarens, Frank Köster
* Improvement of Rejection for AI Safety through Loss-Based Monitoring, Daniel Scholz, Florian Hauer, Klaus Knobloch, Christian Mayr


> Debate Panel - Session Discussants: Gary Marcus, John Burden, Gabriel Pedroza

Special Session: TAILOR - Towards Trustworthy AI - Chairs: Francesca Pratesi (CNR, Italy), Umberto Straccia (CNR, Italy), Annelot Bosman (Leiden University, Netherlands)

The purpose of TAILOR is to build a strong academic-public-industrial research network (https://tailor-network.eu/), with more than 50 partners from across Europe, capable of providing the scientific basis for Trustworthy AI by leveraging and combining learning, optimization, and reasoning to realize AI systems that incorporate the safeguards that make them reliable, safe, transparent, and respectful of human agency and expectations.

The special session will present an overview of the TAILOR project (Foundations of Trustworthy AI – Integrating Reasoning, Learning and Optimization) and the activities that have been carried out within the project. Then, a panel of experts from both within and outside the project consortium will discuss the current challenges and the solutions that have been developed to support Trustworthy AI.

Keynote 2: Thomas A. Henzinger (ISTA, Austria), Formal Methods meet Neural Networks: A Selection

We review several ways in which formal methods can enhance the quality of neural networks:
first, to learn neural networks with guaranteed properties;
second, to verify properties of neural networks;
and third, to enforce properties of neural networks at runtime.
For the first topic, we discuss reinforcement learning with temporal objectives in stochastic environments; for the second, decision procedures for reasoning about quantized neural networks; for the third, monitoring learned classifiers for novelty detection and fairness, and shielding learned controllers for safety and progress.
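As a rough illustration of the third point (enforcing properties at runtime), the minimal sketch below wraps a learned controller in a safety shield that overrides any proposed action rejected by a one-step safety check. The function names and the check itself are hypothetical placeholders for illustration, not material from the talk.

    # Illustrative sketch only: a runtime "shield" wrapping a learned controller.
    from typing import Callable, TypeVar

    State = TypeVar("State")
    Action = TypeVar("Action")

    def shielded_policy(
        learned_policy: Callable[[State], Action],   # e.g. a trained neural controller (hypothetical)
        is_safe: Callable[[State, Action], bool],    # one-step safety check (hypothetical)
        fallback_action: Callable[[State], Action],  # known-safe fallback action (hypothetical)
    ) -> Callable[[State], Action]:
        # Defer to the learned controller only when its proposal passes the check;
        # otherwise enforce the safety property at runtime by substituting the fallback.
        def policy(state: State) -> Action:
            action = learned_policy(state)
            return action if is_safe(state, action) else fallback_action(state)
        return policy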

Session 4 - Short Presentations - ML Robustness, Criticality and Uncertainty - Chair: Fernando Martinez Plumed (Universitat Politècnica de València, Spain)

* Utilizing Class Separation Distance for the Evaluation of Corruption Robustness of Machine Learning Classifiers, Georg Siedel, Silvia Vock, Andrey Morozov, Stefan Voß
* Safety-aware Active Learning with Perceptual Ambiguity and Criticality Assessment, Prajit T Rajendran, Guillaume Ollier, Huascar Espinoza, Morayo Adedjouma, Agnes Delaborde, Chokri Mraidha

* Understanding Adversarial Examples Through Deep Neural Network's Classification Boundary and Uncertainty Regions, Juan Shu, Bowei Xi, Charles Kamhoua


> Debate Panel - Session Discussants: Thomas A. Henzinger, Fernando Martinez Plumed

Invited Talk 3: Simos Gerasimou (University of York, UK), SESAME: Secure and Safe AI-Enabled Robotics Systems

Deep Learning (DL) has become a fundamental building block of learning-enabled autonomous systems. Notwithstanding its great potential, employing DL in safety- and security-critical applications, including robots providing service in healthcare facilities or drones used for inspection and maintenance, raises significant trustworthiness challenges. Within the European project SESAME, we develop a model-based approach supporting the systematic engineering of dependable learning-enabled robotic systems. In this talk, we will overview recent advances made by the project team to provide assurances for the trustworthy, robust and explainable operation of DL, focusing particularly on techniques for deep learning testing and uncertainty analysis.

Session 5: AI Robustness, Generative models and Adversarial learning - Chair: Gabriel Pedroza (CEA-List, France)

* Leveraging generative models to characterize the failure conditions of image classifiers, Adrien Le Coz, Stéphane Herbin, Faouzi Adjed
* Feasibility of Inconspicuous GAN-generated Adversarial Patches against Object Detection, Svetlana Pavlitskaya, Bianca-Marina Codău, J. Marius Zöllner

* Privacy Safe Representation Learning via Frequency Filtering Encoder, Jonghu Jeong, Minyong Cho, Philipp Benz, Jinwoo Hwang, Jeewook Kim, Seungkwan Lee, Tae-hoon Kim

* Benchmarking and deeper analysis of adversarial patch attack on object detectors, Pol Labarbarie, Adrien Chan Hon Tong, Stéphane Herbin, Milad Leyli-Abadi

> Debate Panel - Session Discussants: Simos Gerasimou, Bowei Xi (Purdue University), Gabriel Pedroza

Invited Talk 4: Zakaria Chihani (CEA List, France), A selected view of AI trustworthiness methods: How far can we go?

While still debating some questions (such as the likelihood of achieving Artificial General Intelligence), the AI community in particular, and most of the related stakeholders in general, seem to be more or less convinced that "AI winters" are a thing of the past and that the current "summer" will never end. Indeed, this rapidly evolving field, especially through the recent Deep Learning advances, is too good at particular useful tasks to simply disappear. There is little doubt, for example, that neural networks are poised to permeate a growing number of everyday applications, including sensitive software where trust is paramount.
But as these artifacts move from fad status to stably ubiquitous components, their deepening interweaving with different aspects of society demands special attention to the development of methods and tools for an adequate characterization of AI trustworthiness. This colossal quest is made difficult by the intrinsic opacity of neural networks and their increasing size, making any method that can bring us closer to trustworthiness a precious commodity. In this talk, we take a retrospective look at some of these methods and discuss their current added value to safety, as well as the promises they hold for the future of AI trustworthiness.

Session 6: AI Accuracy, Diversity, Causality and Optimization - Chair:  Jose Hernandez-Orallo (Universitat Politècnica de València, Spain)

* The impact of averaging logits over probabilities on ensembles of neural networks, Cedrique Rovile Njieutcheu Tassi, Jakob Gawlikowski, Auliya Unnisa Fitri, Rudolph Triebel
* Exploring Diversity in Neural Architectures for Safety, Michał Filipiuk, Vasu Singh

* Constrained Policy Optimization for Controlled Contextual Bandit Exploration, Mohammad Kachuee, Sungjin Lee

* A causal perspective on AI deception in games, Francis Rhys Ward, Francesco Belardinelli, Francesca Toni


> Debate Panel - Session Discussants: Zakaria Chihani, Jose Hernandez-Orallo
