The AISafety workshop seeks to explore new ideas on safety engineering, as well as broader strategic, ethical and policy aspects of safety-critical AI-based systems.
You are invited to submit:
- Full Technical Papers,
- Proposals for Technical Talks,
- Position Papers.
Important dates:
- Abstract: May 6, 2021 (extended)
- Proposal Submission: May 10, 2021 (extended)
- Acceptance Notification: May 30, 2021
- Camera-Ready Version: June 15, 2021
This initiative aims to define a “view” of the current needs, challenges, and state of the art and practice in the AI safety field, as a step towards developing an AI Safety body of knowledge.
The Assuring Autonomy International Programme is delighted to support the AI Safety Workshop 2021. The Programme is advancing the safety of robotics and autonomous systems (RAS) across the globe. It is a £12 million partnership between Lloyd’s Register Foundation and the University of York, working with an international community of developers, regulators, researchers and others to ensure the public can benefit from the safe, assured and regulated introduction and adoption of RAS. The Programme addresses core technical issues underlying the assurance of RAS, supports industrial demonstrator projects, delivers training and education, and is creating an online Body of Knowledge that will reflect the evolving state of practice in assuring and regulating RAS.
The Centre for the Study of Existential Risk (CSER) is an interdisciplinary research centre at the University of Cambridge dedicated to the study and mitigation of existential risks posed by present or future technology.
The CEA (Commissariat à l'énergie atomique et aux énergies alternatives) is a French public, government-funded research organisation working in the areas of energy, defense and security, information technologies, and health technologies. Its LIST Institute works on four main themes: the factory of the future, cyber-physical systems, artificial intelligence, and digital health.