PRESENTATIONS

The presentation files are available via the talk title links below.

Session 1: Adversarial Machine Learning - Chair: Huascar Espinoza

* Understanding the One Pixel Attack: Propagation Maps and Locality Analysis. Danilo Vasconcellos Vargas and Jiawei Su. 
* Robustness as Inherent Property of Datapoints. Catalin-Andrei Ilie, Alin Stefanescu and Marius Popescu [Poster Paper].
* [Cancelled] Error-Silenced Quantization: Bridging Robustness and Compactness. Zhicong Tang, Yinpeng Dong and Hang Su.
* An Efficient Adversarial Attack on Graph Structured Data. Zhengyi Wang and Hang Su [Poster Paper].
* Evolving Robust Neural Architectures to Defend from Adversarial Attacks. Shashank Kotyan and Danilo Vasconcellos Vargas.
> Debate Panel - Paper Discussants: Xiaowei Huang, Jose Hernandez-Orallo

Invited Talk: John McDermid and Yan Jia. Safety of Artificial Intelligence: A Collaborative Model.

Achieving and assuring the safety of systems that use artificial intelligence (AI), especially machine learning (ML), poses specific challenges that require unique solutions. However, that does not mean that good safety and software engineering practices are no longer relevant. This talk shows how the issues associated with AI and ML can be tackled by integrating them with established safety and software engineering practices. It sets out a three-layer model, going from top to bottom: system safety/functional safety; "AI/ML safety"; and safety-critical software engineering. This model provides both a basis for achieving and assuring safety and a structure for collaboration between safety engineers and AI/ML specialists. It is argued that the model is general and should underpin future standards and guidelines for the safety of this class of system, particularly because it can facilitate collaboration between the different communities.

Session 2: DNN Testing, Analysis and Runtime Monitoring - Chair: Xiaowei Huang

* Is Uncertainty Quantification in Deep Learning Sufficient for Out-of-Distribution Detection? Adrian Schwaiger, Poulami Sinhamahapatra, Jens Gansloser and Karsten Roscher.
* A Comparison of Uncertainty Estimation Approaches in Deep Learning Components for Autonomous Vehicle Applications. Fabio Arnez, Huascar Espinoza, Ansgar Radermacher and François Terrier.
* Bayesian Model for Trustworthiness Analysis of Deep Learning Classifiers. Andrey Morozov, Emil Valiev, Michael Beyer, Kai Ding, Lydia Gauerhof and Christoph Schorn [Poster Paper]. 
* DeepSmartFuzzer: Reward Guided Test Generation For Deep Learning. Samet Demir, Hasan Ferit Eniser and Alper Sen. 
> Debate Panel - Paper Discussants: Seán Ó hÉigeartaigh, Huascar Espinoza

Update Report: Richard Mallah (Future of Life Institute). The AI Safety Landscape Initiative

This talk presents an update on the AI Safety Landscape initiative. The aim of the Consortium on the Landscape of Artificial Intelligence Safety (CLAIS) is to bring together the disciplines, initiatives, and organizations interested in collaborating on a map of knowledge about the safety, assurance, robustness, and trustworthiness of AI and autonomous systems. A key ambition of the initiative is to align and densely interconnect the distinct models of this space held by diverse communities, and to leverage their confluence to generate guidance artifacts relevant to a broad range of public stakeholders.

Session 3: AI Planning, Decision Making and Monitoring - Chair: Mauricio Castillo

* Increasing the Trustworthiness of Deep Neural Networks via Accuracy Monitoring. Zhihui Shao, Jianyi Yang and Shaolei Ren.
* Extracting Money from Causal Decision Theorists. Caspar Oesterheld and Vincent Conitzer [Poster Paper]. 
* Safety Augmentation in Decision Trees. Sumanta Dey, Pallab Dasgupta and Briti Gangopadhyay.
* Towards Safe and Reliable Robot Task Planning. Snehasis Banerjee [Poster Paper].
> Debate Panel - Paper Discussants: Jose Hernandez-Orallo, Huascar Espinoza

Invited Talk: Nathalie Baracaldo (IBM Research). Security and Privacy Challenges in Federated Learning

In this talk, I will first discuss the potential vulnerabilities that arise in federated learning, including membership inference and poisoning attacks. I will then discuss some of the solutions that the team I lead at IBM Research has been working on. These solutions include the use of differential privacy and multi-party computation during the training process, and I will explain their effect on the final accuracy of the global model. Finally, I will outline some research directions in this area.
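For context on one of the defenses the talk mentions, below is a minimal sketch of differentially private federated averaging: each client's update is L2-clipped and Gaussian noise is added before server-side aggregation. This is an illustrative assumption of how such a mechanism can look, not the speaker's implementation; the clipping bound, noise scale, and function names (clip_update, dp_federated_average) are hypothetical choices.

import numpy as np

CLIP_NORM = 1.0   # assumed per-client L2 clipping bound
NOISE_STD = 0.01  # assumed Gaussian noise scale (privacy/accuracy trade-off)

def clip_update(update, clip_norm=CLIP_NORM):
    # Scale the update so its L2 norm is at most clip_norm, bounding
    # any single client's influence on the aggregate.
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def dp_federated_average(client_updates, noise_std=NOISE_STD, rng=None):
    # Average the clipped updates and add Gaussian noise, so the result
    # reveals less about any individual client (e.g. to membership inference).
    rng = rng or np.random.default_rng()
    mean = np.mean([clip_update(u) for u in client_updates], axis=0)
    return mean + rng.normal(0.0, noise_std, size=mean.shape)

# Example: three clients submit mock updates for a 4-parameter model.
updates = [np.random.default_rng(i).normal(size=4) for i in range(3)]
print(dp_federated_average(updates))

Raising NOISE_STD strengthens the privacy guarantee but degrades the accuracy of the global model, which is the trade-off the talk examines.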

Session 4: Ethical and Value-Aligned Learning and Planning - Chair: Richard Mallah

* Classifying Choice Set Misspecification in Reward Inference. Rachel Freedman, Rohin Shah and Anca Dragan.
* Ethically Compliant Planning in Moral Autonomous Systems. Justin Svegliato, Samer Nashed and Shlomo Zilberstein [Poster Paper]. 
* Aligning with Heterogeneous Preferences for Kidney Exchange. Rachel Freedman.
> Debate Panel - Paper Discussants: Mauricio Castillo, Xin Cynthia Chen