
RECORDED SESSIONS

Co-sponsored by the Assuring Autonomy International Programme (AAIP) and the Centre for the Study of Existential Risk (CSER)

Towards an AI Safety Landscape, Introduction by Workshop Chairs - Xin Cynthia Chen (University of Hong Kong)

On behalf of the workshop chairs, Cynthia summarized the main motivation and objectives of the AI Safety Landscape initiative: to build consensus around, and focus on, generally accepted knowledge in the field. She also presented the proposed Landscape categories. The chairs recognize the complexity of establishing a generally acceptable classification, especially when the intent is to cover different kinds of systems/agents, application domains, and levels of autonomy/intelligence.


Creating a Deep Model of AI Safety Research - Richard Mallah (Future of Life Institute)

Richard represented the Future of Life Institute (FLI), which fostered the creation of a Landscape of AI Safety and Beneficence Research for research contextualization and in preparation for brainstorming at the Beneficial AI 2017 conference. It has a strong focus on AI-based systems where the main concern is to ensure that machine intelligences, which are becoming ever more general and broad in their capabilities, remain beneficial to humanity. In this sense, both “AI” and “safety” cover very broad problems, including AGI and superintelligent agents as well as ethics and security.


Towards a Framework for Safety Assurance of Autonomous Systems - John McDermid (University of York)

John is Director of the Lloyd’s Register Foundation funded Assuring Autonomy International Programme (AAIP). His talk addressed the challenges of safety assurance of autonomous systems and proposed a novel framework for safety assurance that, inter alia, uses machine learning to provide evidence for a system safety case, thus enabling the safety case to be updated dynamically as system behaviour evolves. The AAIP is developing a Body of Knowledge (BoK) intended to become a reference source on the assurance and regulation of Robotics and Autonomous Systems (RAS).


Panel 1: The Challenge of Achieving Consensus - Chair: Xiaowei Huang - Discussants: Richard Mallah, John McDermid

During this session, Richard and John discussed their experience of seeking consensus through the initiatives they lead in related fields, and the challenges of reaching such consensus in AI Safety. They also discussed to what extent consensus is achievable in AI Safety, and which priorities should guide consensus-building for an AI Safety Landscape. Finally, they described the kinds of mechanisms they deem essential for finding consensus in the safety-critical systems domain when AI and autonomy are involved.


AI Safety and The Life Sciences - Gopal Sarma (Broad Institute of MIT and Harvard)

Gopal discussed the need to consider the Life Sciences when engineering future safe AI-based systems. He anticipates that the narratives surrounding biotechnology controversies will become intertwined with concerns about AI and AI safety. From a public policy and public relations standpoint, this will create many novel challenges in crafting a set of national priorities that address both the concerns of elite scientists (such as the AI safety community) and the many fears the general public will have about the interplay between artificial intelligence and synthetic biology.


Formal Methods in Certifying Learning-Enabled Systems - Xiaowei Huang (University of Liverpool)

Xiaowei discussed the risks of using DNNs in safety-critical systems and the use of formal methods to guarantee robustness and safety in those systems. He grouped safety risks under robustness, generalisation, understanding, and interaction, and observed that current verification effort is focused almost exclusively on robustness; the other areas need attention too. He also noted the need to develop better run-time monitoring and enforcement approaches for operational-time errors.
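
To make the robustness notion concrete, the following sketch (our illustration, not taken from the talk) shows interval bound propagation (IBP), one simple formal technique for certifying local robustness of a small ReLU network; the network, weights, and epsilon are hypothetical. If the lower bound on the true class's logit exceeds the upper bound of every other logit across the whole L-infinity ball, the network is provably robust at that input.

    import numpy as np

    def ibp_bounds(weights, biases, x, eps):
        # Propagate the interval [x - eps, x + eps] through affine + ReLU layers.
        lo, hi = x - eps, x + eps
        for i, (W, b) in enumerate(zip(weights, biases)):
            W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
            new_lo = W_pos @ lo + W_neg @ hi + b   # sound lower bound of W @ x + b
            new_hi = W_pos @ hi + W_neg @ lo + b   # sound upper bound of W @ x + b
            if i < len(weights) - 1:               # ReLU on hidden layers only
                new_lo, new_hi = np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)
            lo, hi = new_lo, new_hi
        return lo, hi

    def certified_robust(weights, biases, x, label, eps):
        # Provably robust if the worst-case logit of the true class still beats
        # the best-case logit of every other class over the perturbation ball.
        lo, hi = ibp_bounds(weights, biases, x, eps)
        return lo[label] > max(hi[j] for j in range(len(hi)) if j != label)

    rng = np.random.default_rng(0)                 # toy 2-layer network
    weights = [rng.normal(size=(8, 4)), rng.normal(size=(3, 8))]
    biases = [np.zeros(8), np.zeros(3)]
    x = rng.normal(size=4)
    print(certified_robust(weights, biases, x, label=0, eps=0.01))

A True answer is a formal guarantee; a False answer is inconclusive, since IBP over-approximates the reachable logits, which is why tighter verification methods are an active research area.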


AI Safety and Evolutionary Computation - Joel Lehman (Uber AI Labs)

Joel described the broad aspirations of evolutionary computation (EC), and the intersection of AI safety and evolutionary computation. Some communities within EC focus not on optimization of a fixed objective, but on understanding the algorithmic nature of evolution’s divergent creativity, i.e. algorithms that are capable of continually innovating in an open-ended way. These kinds of evolutionary algorithms (EAs) offer a bottom-up path towards human-level AI (HLAI), one in which HLAI emerges as a byproduct of a larger open-ended creative process, as occurred in biological evolution.
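
For a concrete flavour of this divergent creativity, here is a toy novelty-search loop (our sketch, not Joel's code; the genome-to-behaviour mapping is hypothetical). Instead of optimizing a fixed objective, individuals are selected for how far their behaviour lies from everything in an archive of previously seen behaviours, so the search keeps diverging rather than converging:

    import numpy as np

    rng = np.random.default_rng(0)

    def behaviour(genome):
        # Hypothetical genome-to-behaviour map; in practice this would run,
        # say, a robot controller and record where it ends up.
        return np.tanh(genome[:2] * genome[2:])

    def novelty(b, archive, k=5):
        # Mean distance to the k nearest previously archived behaviours.
        if not archive:
            return 1.0
        d = np.linalg.norm(np.array(archive) - b, axis=1)
        return float(np.mean(np.sort(d)[:k]))

    pop = [rng.normal(size=4) for _ in range(20)]
    archive = []
    for gen in range(50):
        ranked = sorted(pop, key=lambda g: novelty(behaviour(g), archive), reverse=True)
        archive.extend(behaviour(g) for g in ranked[:3])   # remember the most novel
        parents = ranked[: len(pop) // 2]                  # select for novelty, not fitness
        pop = [p + rng.normal(scale=0.1, size=4) for p in parents for _ in range(2)]
    print(f"behaviours archived: {len(archive)}")

Because selection pressure here rewards being different rather than being good at a fixed task, such systems raise distinctive safety questions about where an open-ended search may wander.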


Panel 2: The Need for Paradigm Change - Chair: Seán Ó hÉigeartaigh - Discussants: Gopal Sarma, Xiaowei Huang, Joel Lehman, Nadisha-Marie Aliman, Fredrik Heintz

This panel discussed how AI/ML/DL are stretching the (technical and non-technical) limits of traditional systems engineering disciplines, both in present-day intelligent systems and in more capable future AI-based systems. The discussants gave their views on the challenges of introducing new paradigms into AI Safety. They also discussed which research and development priorities should be pursued to bring new paradigms into AI Safety, and to what extent regulatory frameworks should change to tackle the challenges of safe AI-based intelligent autonomous systems.


AI Safety for Humans - Virginia Dignum (University of Umeå)

Virginia emphasised the socio-technical perspective of AI Safety. She looked at ways to ensure that the behaviour of artificial systems is aligned with human values and ethical principles. Given that ethics are dependent on the socio-cultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders, and to make these explicit, leading to better understanding of, and trust in, artificial autonomous systems. She particularly focused on the ART principles for AI: Accountability, Responsibility, Transparency.


Towards Trustworthy Autonomous and Intelligent Systems - Raja Chatila (Sorbonne University)

This talk focused on how to make autonomous systems trustworthy so that they reliably deliver the expected, correct service. As decisions usually reserved for humans are increasingly delegated to machines, sometimes running learning-based algorithms trained on data and operating in complex, evolving environments, new issues have to be considered. Raja discussed new technical and non-technical measures to be considered in the design process and in the governance of these systems, emphasising the IEEE and EU work in this area.


AI Principles and Ethics by Design - Jeff Cao (Tencent Research Institute)

Jeff presented the work of the Tencent Research Institute, which is involved in the transportation and healthcare sectors, where ethics is important. Jeff distinguished three levels of AI safety: technical, physical, and social. Tencent researchers have, for example, found vulnerabilities in Tesla systems and demonstrated attacks on them, and have produced a research report on ‘tech ethics’. He talked about the ARCC principles: available, reliable, comprehensible, and controllable. He stressed the need for multi-level governance: laws and regulations; industry self-regulation; and education and awareness raising.


Panel 3: Towards More Human-Centered and Ethics-Aware Autonomous Systems - Chair: Richard Mallah, Discussants: Virginia Dignum, Raja Chatila, Jeff Cao

This panel discussed which aspects of ethics and human-centred disciplines are of high priority when dealing with safety-critical AI-based systems. Developers and operators, at a minimum, should have principles and guidelines for their organisations, and existing legal mechanisms are a natural starting point. What are the incentives for an organisation to follow ethical guidelines? Positive differentiation: customer trust is a key differentiator that influences success in the marketplace. But making things trustworthy may make them more expensive, so an overarching regulatory framework may also be needed.


Specification, Robustness and Assurance Problems in AI Safety - Victoria Krakovna (Google DeepMind)

Victoria presented DeepMind's categories for AI Safety as a first (DeepMind) attempt to map AI safety knowledge, covering both near- and long-term AI safety issues. She discussed the three areas of technical AI safety: specification, robustness, and assurance, with particular focus on specification (ideal, design, and revealed specifications). DeepMind feels these three areas cover a sufficiently wide spectrum to provide a useful categorisation for ongoing and future research. DeepMind has made progress in some of these areas, but many open problems remain.
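
As a minimal illustration of the specification gap (our toy example, not DeepMind's code), consider a hypothetical 2x5 gridworld where the design specification, i.e. the reward we actually wrote down, forgets the vase penalty that the ideal specification contains. A proxy-optimal agent then walks straight through the vase:

    VASE, GOAL, STEP_COST = (0, 2), (0, 4), 0.1   # hypothetical grid layout

    def ideal_return(path):
        # What we actually want: reach the goal, never break the vase.
        return 10.0 * (path[-1] == GOAL) - 5.0 * path.count(VASE) - STEP_COST * len(path)

    def design_return(path):
        # What we wrote down: the vase penalty was forgotten.
        return 10.0 * (path[-1] == GOAL) - STEP_COST * len(path)

    through_vase = [(0, c) for c in range(5)]                          # shortest path
    detour = [(0, 0), (1, 0), (1, 1), (1, 2), (1, 3), (1, 4), (0, 4)]  # avoids the vase

    for name, path in [("through vase", through_vase), ("detour", detour)]:
        print(f"{name:13s} design={design_return(path):5.2f}  ideal={ideal_return(path):5.2f}")

The design reward prefers the path through the vase (9.50 vs 9.30) while the ideal reward prefers the detour; the agent's actual behaviour under the design reward is what the "revealed specification" exposes.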


Panel 4: Building an AI Safety Landscape: Perspectives and Future Work - Chair: John McDermid, Discussants: Richard Mallah, Seán Ó hÉigeartaigh, Xiaowei Huang, Andrea Aller Tubella

This panel focused on the questions of gaining consensus, agreeing terminology, and making the connections needed to build the landscape: safety engineering, legal and ethical disciplines, cognitive science, etc. It was suggested that formal methods expertise is needed, as is an understanding of human factors and of long-term monitoring of systems; certification also needs to be considered. The DeepMind work leans towards foundations, specification, and modelling, but scaling is still missing. Other topics included legal issues, open vs. closed worlds, and the importance of system modelling, which needs to be included in the landscape.
