
PROGRAMME

The IJCAI organizing committee has decided that all sessions will be held as a virtual event. AISafety has been planned as a workshop spanning two half-days to best fit the speakers' time zones (12:00 - 16:00 UTC).

Day 1 - August 19th, 2021 - 12:00 to 16:00 UTC (Montreal: -4 hrs, CET: +2 hrs)

[Image: Day 1 programme schedule]

Day 2 - August 20th, 2021 - 12:00 to 16:00 UTC (Montreal: -4 hrs, CET: +2 hrs)

[Image: Day 2 programme schedule]

INVITED SPEAKERS

The keynotes and invited talks, together with their abstracts, are listed in the Recorded Sessions section below.


BEST PAPER AWARD

Partnership on AI (PAI) is sponsoring a US$ 500 Best Paper Award for the best submission to AISafety 2021.

 

The Program Committee (PC) will designate three to five papers as candidates for the AISafety Best Paper Award. The best paper will be selected based on the votes of the workshop's participants: during the workshop, all participants will be able to vote for the best paper. The authors of the winning paper will receive the US$ 500 prize and a certificate bearing the name of the award, the title of the paper and the names of its authors at the workshop's closing.

The selected candidates were:

[Image: Best Paper Award candidates]

The AISafety 2021 Best Paper Award was granted to:

Xingyu Zhao, Wei Huang, Alec Banks, Victoria Cox, David Flynn, Sven Schewe and Xiaowei Huang; for: Assessing the Reliability of Deep Learning Classifiers Through Robustness Evaluation and Operational Profiles.



ORGANIZING COMMITTEE

  • Huascar Espinoza, ECSEL JU, Belgium

  • José Hernández-Orallo, Universitat Politècnica de València, Spain

  • Xiaowei Huang, University of Liverpool, UK

  • Cynthia Chen, University of Hong Kong, China

  • Gabriel Pedroza, CEA LIST, France

  • Mauricio Castillo-Effen, Lockheed Martin, USA

  • Seán Ó hÉigeartaigh, University of Cambridge, UK

  • John McDermid, University of York, UK

  • Richard Mallah, Future of Life Institute, USA

PROGRAMME COMMITTEE

  • Stuart Russell, UC Berkeley, USA

  • Emmanuel Arbaretier, Apsys-Airbus, France

  • Ann Nowé, Vrije Universiteit Brussel, Belgium

  • Simos Gerasimou, University of York, UK

  • Jonas Nilson, NVIDIA, USA

  • Morayo Adedjouma, CEA LIST, France

  • Brent Harrison, University of Kentucky, USA

  • Alessio R. Lomuscio, Imperial College London, UK

  • Brian Tse, Affiliate at University of Oxford, China

  • Michael Paulitsch, Intel, Germany

  • Ganesh Pai, NASA Ames Research Center, USA

  • Hélène Waeselynck, CNRS LAAS, France

  • Rob Alexander, University of York, UK

  • Vahid Behzadan, Kansas State University, USA

  • Chokri Mraidha, CEA LIST, France

  • Ke Pei, Huawei, China

  • Orlando Avila-García, Arquimea Research Center, Spain

  • Rob Ashmore, Defence Science and Technology Laboratory, UK

  • I-Jeng Wang, Johns Hopkins University, USA

  • Chris Allsopp, Frazer-Nash Consultancy, UK

  • Andrea Orlandini, ISTC-CNR, Italy

  • Rasmus Adler, Fraunhofer IESE, Germany

  • Roel Dobbe, TU Delft, The Netherlands

  • Vahid Hashemi, Audi, Germany

  • Feng Liu, Huawei Munich Research Center, Germany

  • Yogananda Jeppu, Honeywell Technology Solutions, India

  • Francesca Rossi, IBM and University of Padova, USA

  • Ramana Kumar, Google DeepMind, UK

  • Javier Ibañez-Guzman, Renault, France

  • Jérémie Guiochet, LAAS-CNRS, France

  • Raja Chatila, Sorbonne University, France

  • François Terrier, CEA LIST, France

  • Mehrdad Saadatmand, RISE Research Institutes of Sweden, Sweden

  • Alec Banks, Defence Science and Technology Laboratory, UK

  • Gopal Sarma, Broad Institute of MIT and Harvard, USA

  • Roman Nagy, Argo AI, Germany

  • Nathalie Baracaldo, IBM Research, USA

  • Toshihiro Nakae, DENSO Corporation, Japan

  • Richard Cheng, California Institute of Technology, USA

  • Ramya Ramakrishnan, Massachusetts Institute of Technology, USA

  • Gereon Weiss, Fraunhofer ESK, Germany

  • Douglas Lange, Space and Naval Warfare Systems Center Pacific, USA

  • Philippa Ryan Conmy, Adelard, UK

  • Stefan Kugele, Technische Hochschule Ingolstadt, Germany

  • Colin Paterson, University of York, UK

  • Javier Garcia, Universidad Carlos III de Madrid, Spain

  • Davide Bacciu, Università di Pisa, Italy

  • Timo Sämann, Valeo, Germany

  • Vincent Aravantinos, Argo AI, Germany

  • Mohamed Ibn Khedher, IRT SystemX, France

  • Umut Durak, German Aerospace Center (DLR), Germany


RECORDED SESSIONS

The presentation files are available via the links in the talk titles below.

Keynote: Emily Dinan (Facebook AI Research, USA), Safety for E2E Conversational AI

Over the last several years, end-to-end neural conversational agents have vastly improved in their ability to carry a chit-chat conversation with humans. However, these models are often trained on large datasets from the internet, and as a result, may learn undesirable behaviors from this data, such as toxic or otherwise harmful language.  In this talk, I will discuss the problem landscape for safety for E2E convAI, including recent and related work. I will highlight tensions between values, potential positive impact, and potential harms, and describe a possible path for moving forward.

Session 1: Trustworthiness of Knowledge-Based AI - Chair: José Hernández-Orallo

* Applying Strategic Reasoning for Accountability Ascription in Multiagent Teams, Vahid Yazdanpanah, Sebastian Stein, Enrico Gerding and Nicholas R. Jennings.
* Impossibility of Unambiguous Communication as a Source of Failure in AI Systems, William Howe and Roman Yampolskiy.
> Debate Panel - Session Discussants: Seán Ó hÉigeartaigh, Gabriel Pedroza

Poster Pitches 1

* Uncontrollability of Artificial Intelligence, Roman Yampolskiy.
* Domain Shifts in Reinforcement Learning: Identifying Disturbances in Environments, Tom Haider, Felippe Schmoeller Roza, Dirk Eilers, Karsten Roscher and Stephan Günnemann.
* Chess as a Testing Grounds for the Oracle Approach to AI Safety, James Miller, Roman Yampolskiy, Olle Häggström and Stuart Armstrong.
* Socio-technical co-Design for accountable autonomous software, Ayan Banerjee, Imane Lamrani, Katina Michael, Diana Bowman and Sandeep Gupta.

The field of Artificial Intelligence (AI) is going through a period of great expectations, introducing a certain level of anxiety in research, business and also policy. This anxiety is further energised by an AI race narrative that makes people believe they might be missing out. Whether real or not, a belief in this narrative may be detrimental, as some stakeholders will feel obliged to cut corners on safety precautions, or to ignore societal consequences, just to "win". In our recent works, we examine this problem theoretically, resorting to a novel innovation dilemma in which technologists can choose a safe (SAFE) or a risk-taking (UNSAFE) course of development. Companies are assumed to race towards the deployment of some AI-based product in a domain X. They can either carefully consider all data and AI pitfalls along the way (the SAFE ones) or take undue risks by skipping recommended testing so as to speed up the process (the UNSAFE ones). Overall, SAFE strategies are costlier and take more time to implement than UNSAFE ones, permitting UNSAFE strategists to claim significant further benefits from reaching technological supremacy first. We show that the range of risk probabilities in which the social dilemma arises depends on many factors, the most important of which are the time-scale to reach supremacy in a given domain (i.e. short-term vs long-term AI) and the speed gain obtained by ignoring safety measures. Moreover, given the more complex nature of this scenario, we show that incentives such as reward and punishment (for example, for the purpose of technology regulation) are much more challenging to supply correctly than in the case of cooperation dilemmas such as the Prisoner's Dilemma and the Public Goods Game. These results are directly relevant for the design of governance and regulatory policies that aim to ensure an ethical and responsible AI technology development process.
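To make the shape of this dilemma concrete, the following minimal Python sketch encodes one possible one-shot version of the SAFE/UNSAFE race described above. The prize B, the precaution cost c, the failure probability p and the payoff rules are assumptions introduced here purely for illustration (they are not the speakers' actual model); the sketch only reproduces the qualitative point that there is a range of risk probabilities in which cutting corners is individually tempting even though mutual safety would leave everyone better off.

# Hypothetical, simplified payoff model of the SAFE/UNSAFE development race
# sketched in the abstract above; parameter names and payoff rules are
# illustrative assumptions, not the speakers' actual model.

def expected_payoff(my_move, other_move, B=10.0, c=1.0, p=0.3):
    """Expected payoff of my_move ('SAFE' or 'UNSAFE') against other_move.

    Assumed rules: UNSAFE developers skip precautions, so they beat SAFE
    opponents to deployment, but their product fails with probability p and
    forfeits the prize; SAFE developers always pay the precaution cost c.
    """
    if my_move == "SAFE" and other_move == "SAFE":
        return B / 2 - c           # prize shared, both pay for safety
    if my_move == "SAFE" and other_move == "UNSAFE":
        return -c                  # race lost, safety cost still paid
    if my_move == "UNSAFE" and other_move == "SAFE":
        return (1 - p) * B         # race won, unless the product fails
    return (1 - p) * B / 2         # both UNSAFE: prize shared, both risk failure


def dilemma_range(B=10.0, c=1.0, steps=100):
    """Risk probabilities p where switching to UNSAFE is individually tempting
    even though mutual SAFE gives everyone a higher expected payoff."""
    region = []
    for i in range(steps + 1):
        p = i / steps
        unsafe_tempting = (expected_payoff("UNSAFE", "SAFE", B, c, p)
                           > expected_payoff("SAFE", "SAFE", B, c, p))
        safe_better_for_all = (expected_payoff("SAFE", "SAFE", B, c, p)
                               > expected_payoff("UNSAFE", "UNSAFE", B, c, p))
        if unsafe_tempting and safe_better_for_all:
            region.append(p)
    return (min(region), max(region)) if region else None


if __name__ == "__main__":
    # For B=10 and c=1 the dilemma appears for p roughly between 0.2 and 0.6.
    print(dilemma_range())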

Session 2: Robustness of Machine Learning Approaches - Chair: Xiaowei Huang

* Assessing the Reliability of Deep Learning Classifiers Through Robustness Evaluation and Operational Profiles, Xingyu Zhao, Wei Huang, Alec Banks, Victoria Cox, David Flynn, Sven Schewe and Xiaowei Huang [Best Paper Award].
* Towards robust perception using topological invariants, Romie Banerjee, Feng Liu and Pei Ke.
* Measuring Ensemble Diversity and Its Effects on Model Robustness, Lena Heidemann, Adrian Schwaiger and Karsten Roscher.
> Debate Panel - Session Discussants: Fabio Arnez, José Hernández-Orallo

Keynote: Prof. Dr. Simon Burton (Fraunhofer IKS, Germany), Safety, Complexity, AI and Automated Driving - Holistic Perspectives on Safety Assurance

Assuring the safety of autonomous driving is a complex endeavour. It is not only a technically difficult and resource-intensive task: autonomous vehicles and their wider sociotechnical context also demonstrate characteristics of complex systems in the stricter sense of the term. That is, they exhibit emergent behaviour, coupled feedback, non-linearity and semi-permeable system boundaries. These drivers of complexity are further exacerbated by the introduction of AI and machine learning techniques. All these factors severely limit our ability to apply traditional safety measures at both design time and operation time.
 
In this presentation, I present how considering AI-based autonomous vehicles as complex systems could lead us towards better arguments for their overall safety. In doing so, I address the issue from two different perspectives. Firstly, I consider the topic of safety within the wider system context, including technical, management and regulatory considerations. I then discuss how these viewpoints lead to specific requirements on AI components within the system. Residual inadequacies of machine learning techniques are an inevitable side effect of the technology. I explain how an understanding of the root causes of such insufficiencies, as well as of the effectiveness of measures during design and operation, is key to the construction of a convincing safety assurance argument for the system. I will finish the talk with a summary of our current standardisation initiatives in this area as well as directions for future research.

Session 3: Perception and Adversarial Attacks - Chair: Xin Cynthia Chen

* Deep neural network loses attention to adversarial images, Shashank Kotyan and Danilo Vasconcellos Vargas.
* An Adversarial Attacker for Neural Networks in Regression Problems, Kavya Gupta, Jean-Christophe Pesquet, Beatrice Pesquet-Popescu, Fateh Kaakai and Fragkiskos Malliaros.
* Coyote: A Dataset of Challenging Scenarios in Visual Perception for Autonomous Vehicles, Suruchi Gupta, Ihsan Ullah and Michael Madden.
> Debate Panel - Session Discussants: Mauricio Castillo-Effen, José Hernández-Orallo

Invited Talk: Prof. Dr. Umut Durak (German Aerospace Center - DLR, Germany), Simulation Qualification for Safety Critical AI-Based Systems

There is a huge effort towards establishing the methodologies for engineering AI-based systems for safety-critical applications. The automated driving community is highlighting the importance of simulation for virtual development. The correct operation of the systems relies on the correct operation of the tools that are used to create them. Accordingly, the safety of AI-based systems relies on the simulations that are used in their development. We need methods to qualify simulations to be used in the development of safety-critical AI-based systems. Tool qualification requirements have already been established for various safety-critical domains. However, the methods and guidelines for applying these requirements in the simulation engineering life cycle are still missing. This talk proposes a simulation qualification approach, particularly for aviation applications, based on the IEEE Recommended Practice for Distributed Simulation Engineering and Execution Process (DSEEP) and DO-330/ED-215 Software Tool Qualification Considerations.

Session 4: Qualification/Certification of AI-Based Systems - Chair: Mauricio Castillo-Effen

* Building a safety argument for hardware-fault tolerance in convolutional neural networks using activation range supervision, Florian Geissler, Syed Qutub, Sayanta Roychowdhury, Ali Asgari, Yang Peng, Akash Dhamasia, Ralf Graefe, Karthik Pattabiraman and Michael Paulitsch.
* Artificial Intelligence for Future Skies: On-going standardization activities to build the next certification/approval framework for airborne and ground aeronautic products, Christophe Gabreau, Béatrice Pesquet-Popescu, Fateh Kaakai and Baptiste Lefevre.
* [No recording as per Speaker request] Using Complementary Risk Acceptance Criteria to Structure Assurance Cases for Safety-Critical AI Components, Michael Klaes, Rasmus Adler, Lisa Jöckel, Janek Groß and Jan Reich.
> Debate Panel - Session Discussants: John McDermid, Huascar Espinoza
