
PROGRAMME

AISafety has been planned as a two-day workshop, with general AI Safety topics on the first day and AI Safety Landscape talks and discussions on the second day.


August 11, 2019 (Room Sicily 2401 - Venetian Macao Hotel Resort)

(Image: Day 1 programme)

August 12, 2019 (Room Sicily 2401 - Venetian Macao Hotel Resort)

(Image: Day 2 programme)

INVITED SPEAKERS


BEST PAPER AWARD

Partnership on AI (PAI) sponsored a US$1,000 Best Paper Award for the best submission to AISafety 2019.


The Programme Committee (PC) designated three to five papers as candidates for the AISafety Best Paper Award. PC members holding direct positions at PAI (other than participating in work via a PAI partner organization) were recused from judging to avoid conflicts of interest.


The selected candidates were:

(Image: Best Paper Award candidate papers)

The AISafety 2019 Best Paper Award was granted to:


Andrea Loreggia, Nicholas Mattei, Francesca Rossi and Kristen Brent Venable for Metric Learning for Value Alignment.

The best paper was selected based on the votes of the workshop’s participants; all participants were able to vote for the best paper during the workshop.

 

The authors of the winning paper received the US$1,000 prize and a certificate bearing the name of the award, the title of the paper, and the names of its authors at the workshop’s closing.



ORGANIZING COMMITTEE

  • Huáscar Espinoza, Commissariat à l'Energie Atomique, France

  • Han Yu, Nanyang Technological University, Singapore

  • Xiaowei Huang, University of Liverpool, UK

  • Freddy Lecue, Thales, Canada

  • Cynthia Chen, University of Hong Kong, China

  • José Hernández-Orallo, Universitat Politècnica de València, Spain

  • Seán Ó hÉigeartaigh, University of Cambridge, UK

  • Richard Mallah, Future of Life Institute, USA

PROGRAMME COMMITTEE

  • Stuart Russell, UC Berkeley, USA

  • Victoria Krakovna, Google DeepMind, UK

  • Peter Eckersley, Partnership on AI, USA

  • Riccardo Mariani, Intel, Italy

  • Brent Harrison, University of Kentucky, USA

  • Siddartha Khastgir, University of Warwick, UK

  • Emmanuel Arbaretier, Apsys-Airbus, France

  • Martin Vechev, ETH Zurich, Switzerland

  • Sandhya Saisubramanian, University of Massachusetts Amherst, USA

  • Alessio R. Lomuscio, Imperial College London, UK

  • Mauricio Castillo-Effen, Lockheed Martin, USA

  • Yi Zeng, Chinese Academy of Sciences, China

  • Brian Tse, Affiliate at University of Oxford, China

  • Sandeep Neema, DARPA, USA

  • Michael Paulitsch, Intel, Germany

  • Elizabeth Bondi, University of Southern California, USA

  • Hélène Waeselynck, CNRS LAAS, France

  • Rob Alexander, University of York, UK

  • Vahid Behzadan, Kansas State University, USA

  • Simon Fürst, BMW, Germany

  • Chokri Mraidha, CEA LIST, France

  • Fuxin Li, Oregon State University, USA

  • Francesca Rossi, IBM and University of Padova, Italy

  • Ian Goodfellow, Google Brain, USA

  • Yang Liu, Webank, China

  • Ramana Kumar, Google DeepMind, UK

  • Javier Ibañez-Guzman, Renault, France

  • Dragos Margineantu, Boeing, USA

  • Joanna Bryson, University of Bath, UK

  • Heather Roff, Johns Hopkins University, USA

  • Raja Chatila, Sorbonne University, France

  • Hang Su, Tsinghua University, China

  • François Terrier, CEA LIST, France

  • Guy Katz, Hebrew University of Jerusalem, Israel

  • Alec Banks, Defence Science and Technology Laboratory, UK

  • Gopal Sarma, Emory University, USA

  • Lê Nguyên Hoang, EPFL, Switzerland

  • Roman Nagy, BMW, Germany

  • Nathalie Baracaldo, IBM Research, USA

  • Toshihiro Nakae, DENSO Corporation, Japan

  • Peter Flach, University of Bristol, UK

  • Richard Cheng, California Institute of Technology, USA

  • José M. Faria, Safe Perspective, UK

  • Ramya Ramakrishnan, Massachusetts Institute of Technology, USA

  • Gereon Weiss, Fraunhofer ESK, Germany


RECORDED SESSIONS

Co-sponsored by the Assuring Autonomy International Programme (AAIP) and the Centre for the Study of Existential Risk (CSER)

Towards an AI Safety Landscape, Introduction by Workshop Chairs - Xin Cynthia Chen (University of Hong Kong)

On behalf of the workshop chairs, Cynthia summarized the main motivation and objectives of the AI Safety Landscape initiative: to build consensus and focus on generally accepted knowledge. She also presented the proposed Landscape categories. The chairs recognize the complexity of establishing a generally acceptable classification, especially when the intent is to cover different kinds of systems/agents, application domains, and levels of autonomy/intelligence.


Creating a Deep Model of AI Safety Research - Richard Mallah (Future of Life Institute)

Richard represented the Future of Life Institute (FLI), which fostered the creation of a Landscape of AI Safety and Beneficence Research for research contextualization and in preparation for brainstorming at the Beneficial AI 2017 conference. The Landscape has a strong focus on AI-based systems, where the main concern is to ensure that machine intelligences, as they become more and more general and broad in their capabilities, remain beneficial to humanity. In this sense, both “AI” and “safety” cover very broad problems, including AGI and superintelligent agents as well as ethics and security.


Towards a Framework for Safety Assurance of Autonomous Systems - John McDermid (University of York)

John is Director of the Lloyd’s Register Foundation-funded Assuring Autonomy International Programme (AAIP). His talk addressed the challenges of safety assurance of autonomous systems and proposed a novel framework for safety assurance that, inter alia, uses machine learning to provide evidence for a system safety case and thus enables the safety case to be updated dynamically as system behaviour evolves. AAIP is developing a Body of Knowledge (BoK) intended to become a reference source on the assurance and regulation of Robotics and Autonomous Systems (RAS).


Panel 1: The Challenge of Achieving Consensus - Chair: Xiaowei Huang - Discussants: Richard Mallah, John McDermid

During this session, Richard and John discussed their experience with the consensus-building initiatives they lead in related fields and the challenges of achieving such consensus in AI Safety. They also discussed to what extent consensus can be reached in AI Safety and which priorities should be considered when building consensus around an AI Safety Landscape. Finally, they described the kinds of mechanisms they deem essential for finding consensus in the safety-critical systems domain, considering AI and autonomy aspects.


AI Safety and The Life Sciences - Gopal Sarma (Broad Institute of MIT and Harvard)

Gopal discussed the need to consider the Life Sciences in engineering future safe AI-based systems. He anticipates that the narratives surrounding biotechnology controversies will become intertwined with concerns related to AI and AI safety. From a public policy and public relations standpoint, this will create many novel challenges in crafting a set of national priorities that address both the concerns of elite scientists (such as the AI safety community) and the many fears the general public will have about the interplay between artificial intelligence and synthetic biology.


Formal Methods in Certifying Learning-Enabled Systems - Xiaowei Huang (University of Liverpool)

Xiaowei discussed the risks of using DNNs in safety-critical systems and the use of formal methods to guarantee robustness and safety in those systems. He summarized safety risks as relating to robustness, generalisation, understanding, and interaction, and noted that current verification efforts focus mostly on robustness, arguing that the other areas need attention too. He also mentioned the need to develop better run-time monitoring and enforcement approaches for operational-time errors.


AI Safety and Evolutionary Computation - Joel Lehman (Uber AI Labs)

Joel described the broad aspirations of evolutionary computation (EC) and the intersection of AI safety and evolutionary computation. Some communities within EC focus not on optimization of a fixed objective, but on understanding the algorithmic nature of evolution’s divergent creativity, i.e. algorithms that are capable of continually innovating in an open-ended way. These kinds of evolutionary algorithms (EAs) offer a bottom-up path towards human-level AI (HLAI), one where HLAI emerges as a byproduct of a larger open-ended creative project, as occurred in biological evolution.


Panel 2: The Need for Paradigm Change - Chair: Seán Ó hÉigeartaigh - Discussants: Gopal Sarma, Xiaowei Huang, Joel Lehman, Nadisha-Marie Aliman, Fredrik Heintz

This panel discussed how AI/ML/DL are stretching the (technical and non-technical) limits of traditional systems engineering disciplines, both in present-day intelligent systems and in more capable future AI-based systems. The discussants gave their view of the challenges of bringing new paradigms into AI Safety, discussed which research and development priorities should be considered to do so, and considered to what extent regulatory frameworks should change to tackle the challenges of safe AI-based intelligent autonomous systems.


AI Safety for Humans - Virginia Dignum (University of Umeå)

Virginia emphasised the socio-technical perspective of AI Safety. She looked at ways to ensure that the behaviour of artificial systems is aligned with human values and ethical principles. Given that ethics depends on the socio-cultural context and is often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders and to make these explicit, leading to better understanding of, and trust in, artificial autonomous systems. She particularly focused on the ART principles for AI: Accountability, Responsibility, Transparency.


Towards Trustworthy Autonomous and Intelligent Systems - Raja Chatila (Sorbonne University)

This talk focused on how to make autonomous systems trustworthy so that they reliably deliver the expected correct service. As decisions usually devoted to humans are increasingly delegated to machines, sometimes running computational algorithms based on learning techniques and data, and operating in complex and evolving environments, new issues have to be considered. Raja discussed new technical and non-technical measures to be considered in the design process and in the governance of these systems, and emphasised the IEEE and EU work in this area.


AI Principles and Ethics by Design - Jeff Cao (Tencent Research Institute)

Jeff presented the work of the Tencent Research Institute, which is involved in the transportation and healthcare sectors, where ethics is important. He mentioned that there are three levels of AI safety: technical, physical, and social. Tencent researchers have both found problems with systems such as Tesla’s and hacked them, and have produced a research report on ‘tech ethics’. He talked about the ARCC principles: available, reliable, comprehensible, controllable, and stressed the need for multi-level governance: laws and regulations; industry self-regulation; and education and awareness raising.


Panel 3: Towards More Human-Centered and Ethics-Aware Autonomous Systems - Chair: Richard Mallah, Discussants: Virginia Dignum, Raja Chatila, Jeff Cao

This panel discussed which aspects of ethics and human-centred disciplines are of high priority when dealing with safety-critical AI-based systems. At a minimum, developers and operators should have principles and guidelines for their organisations, starting from existing legal mechanisms. As for the incentives for an organisation to follow ethical guidelines, positive differentiation and customer trust are key differentiators that will influence success in the marketplace. However, making systems trustworthy may make them more expensive, so an overarching regulatory framework may also be needed.


Specification, Robustness and Assurance Problems in AI Safety - Victoria Krakovna (Google DeepMind)

Victoria presented DeepMind’s categories for AI Safety as a first (DeepMind) attempt to map AI safety knowledge, covering both near-term and long-term AI safety issues. She discussed the three areas of technical AI safety: specification, robustness, and assurance, with particular focus on specification (ideal, design, and revealed specification). DeepMind considers that these three areas cover a sufficiently wide spectrum to provide a useful categorisation for ongoing and future research; it has made progress in some of them, but many open problems remain.


Panel 4: Building an AI Safety Landscape: Perspectives and Future Work  - Chair: John McDermid, Discussants: Richard Mallah, Seán Ó hÉigeartaigh, Xiaowei Huang, Andrea Aller Tubella

This panel focused on the questions of gaining consensus, terminology, and the connections needed to build the landscape: safety engineering, legal and ethical disciplines, cognitive science, etc. It was suggested that formal methods expertise is needed, as well as an understanding of human factors and consideration of the long-term monitoring of systems; certification also needs to be addressed. The DeepMind team’s work lies more in foundations, specification, and modelling, but scaling is still largely missing. Other open issues include legal questions, open versus closed worlds, and the importance of system modelling, which needs to be included in the landscape.
