Prof. Dr. Simon Burton

Prof. Dr. Simon Burton graduated in computer science from the University of York, where he also received his PhD on the verification of safety-critical software in 2001. Simon has a background in a number of industries but has spent the last two decades focusing mainly on the automotive sector, working on research and development projects as well as leading consulting, engineering service and product organisations. Most recently, he held the role of Director of Vehicle Systems Safety at Robert Bosch GmbH where, amongst other things, his efforts were focused on developing strategies for ensuring the safety of automated driving systems.
 
In September 2020, he joined the leadership of Fraunhofer IKS as research division director, where he steers research strategy on “safe intelligence”. His personal research interests include the safety assurance of complex, autonomous systems and the safety of machine learning. In addition to his role at Fraunhofer IKS, he is an honorary visiting professor at the University of York, where he supports a number of research activities and interdisciplinary collaborations.
KEYNOTE: Safety, Complexity, AI and Automated Driving - Holistic Perspectives on Safety Assurance

Assuring the safety of autonomous driving is a complex endeavour. It is not only a technically difficult and resource-intensive task: autonomous vehicles and their wider sociotechnical context also demonstrate characteristics of complex systems in the stricter sense of the term. That is, they exhibit emergent behaviour, coupled feedback, non-linearity and semi-permeable system boundaries. These drivers of complexity are further exacerbated by the introduction of AI and machine learning techniques. All these factors severely limit our ability to apply traditional safety measures at both design time and operation time.
 
In this presentation, I show how considering AI-based autonomous vehicles as complex systems could lead us towards better arguments for their overall safety. In doing so, I address the issue from two different perspectives. Firstly, I consider the topic of safety within the wider system context, including technical, management, and regulatory considerations. I then discuss how these viewpoints lead to specific requirements on AI components within the system. Residual inadequacies of machine learning techniques are an inevitable side effect of the technology. I explain how an understanding of the root causes of such insufficiencies, as well as of the effectiveness of measures during design and operation, is key to constructing a convincing safety assurance argument for the system. I will finish the talk with a summary of our current standardisation initiatives in this area as well as directions for future research.