
Dr. Konstantin Dmitriev
Technical University of Munich

Konstantin Dmitriev graduated with distinction from the Moscow Institute of Physics and Technology in 2006 with a Master's degree in Aerospace Engineering. Since then, he has worked on various programs worldwide, specializing in functional safety and certification, primarily in the aviation domain but also in automotive, railway, and medical certification projects.
In 2019, Konstantin started a PhD research program at the Technical University of Munich, focusing on Machine Learning Safety and Certification. Concurrently, he has been working part-time at MathWorks in the Certification and Standards team.
At the same time, he became a member of the newly established EUROCAE WG-114 working group, which supports the development and certification of aeronautical systems that implement AI and ML technologies. Within WG-114, Konstantin co-leads the subgroup SG-23, which is dedicated to the design assurance chapter of the new ML certification standard, ARP6983.
The outcomes of his university research have been published as research papers and open-source datasets. These contributions are used in WG-114's work and serve as the foundation for ML certification examples created by MathWorks.
Additionally, Konstantin is involved in the EUROCAE WG-127 and SAE S-18 working groups, which are developing new standards for aviation.


More details about his professional journey can be found on his LinkedIn page: https://www.linkedin.com/in/konstantin-dmitriev-752b2068/



The Evolution of AI/ML Aviation Regulations and Illustration of Some Practical Aspects through an End-to-End Certification Case Study

The rapid advancement of Artificial Intelligence (AI) and specifically Machine Learning (ML) technologies has necessitated the evolution of aviation regulatory frameworks to ensure the safe integration of AI/ML into aircraft systems. In recent years, aviation authorities, standardization bodies, and industry stakeholders have collaborated to establish a regulatory framework for certifying airborne ML applications. This talk explores the dynamic landscape of AI regulations in the aviation domain, highlighting key elements that have already been established as well as ongoing efforts.

To illustrate the practical application of these emerging regulations, we present a case study of an ML-based Aircraft Emergency Braking System (AEBS) that employs a Deep Neural Network for the visual detection and classification of airport signs. The presentation will focus on ML-specific aspects, including the ML development assurance process, architectural mitigation strategies, data management practices, unintended behaviors of ML models, and the impact of ML performance limitations and variability on safety assessments. The AEBS case study is used within the EUROCAE WG-114 / SAE G-34 joint working group “Artificial Intelligence in Aviation” as an example to facilitate discussion and drive convergence on various regulatory topics.
