Prof. Dr. The Anh Han
The Anh Han is a professor of computer science at Teesside University. His research interests include behavioural modelling, evolutionary game theory, and agent-based simulations. He has published over 80 peer-reviewed articles in top-tier AI conferences and high-ranking scientific journals. His research has been funded by the Future of Life Institute, the Leverhulme Trust, and FWO Belgium. He regularly serves on the programme committees of top-tier conferences (e.g., AAAI, IJCAI, AAMAS) and on the editorial boards of international journals (e.g., PLoS One, Adaptive Behavior).
INVITED TALK: Modelling and Regulating Safety Compliance: Game-Theory Lessons from Analyses of AI Development Races
The field of Artificial Intelligence (AI) is going through a period of great expectations, introducing a certain level of anxiety in research, business, and policy. This anxiety is further energised by an AI race narrative that makes people believe they might be missing out. Whether real or not, belief in this narrative can be detrimental, as some stakeholders will feel obliged to cut corners on safety precautions, or to ignore societal consequences, just to "win". In our recent works, we examine this problem theoretically, resorting to a novel innovation dilemma in which technologists can choose a safe (SAFE) versus a risk-taking (UNSAFE) course of development. Companies are assumed to race towards the deployment of some AI-based product in a domain X. They can either carefully consider all data and AI pitfalls along the way (the SAFE players) or take undue risks by skipping recommended testing so as to speed up the process (the UNSAFE players). Overall, SAFE strategies are costlier and take more time to implement than UNSAFE ones, allowing UNSAFE strategists to claim significant further benefits by reaching technological supremacy first. We show that the range of risk probabilities in which the social dilemma arises depends on many factors, the most important among them being the time-scale to reach supremacy in a given domain (i.e., short-term vs long-term AI) and the speed gained by ignoring safety measures. Moreover, given the more complex nature of this scenario, we show that incentives such as reward and punishment (for example, for the purpose of technology regulation) are much more challenging to supply correctly than in cooperation dilemmas such as the Prisoner's Dilemma and the Public Goods Game. These results are directly relevant to the design of governance and regulatory policies that aim to ensure an ethical and responsible AI technology development process.
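As a rough illustration of how such a dilemma can arise, the sketch below computes expected payoffs in a hypothetical two-player, one-shot version of the race. The parameterisation (a prize B for reaching supremacy first, an extra safety cost c, a speed multiplier s for skipping safety, and a probability p_safe that an UNSAFE product avoids disaster) is our own simplifying assumption for exposition, not the model analysed in the talk.

```python
# Illustrative sketch of a SAFE/UNSAFE innovation dilemma.
# Parameter names (B, c, s, p_safe) are assumptions for this example,
# not the notation used in the cited works.

def race_payoffs(B, c, s, p_safe):
    """Expected payoffs in a two-player, one-shot race.

    B      : prize for reaching technological supremacy first
    c      : extra cost of developing safely
    s      : speed multiplier gained by skipping safety (s > 1)
    p_safe : probability an UNSAFE product avoids disaster
             (a disaster wipes out that player's own payoff)
    """
    # Both SAFE: equal speed, so each wins half the time; both pay c.
    safe_safe = B / 2 - c
    # SAFE against UNSAFE: the slower SAFE player wins with prob 1/(1+s).
    safe_unsafe = B / (1 + s) - c
    # UNSAFE against SAFE: wins with prob s/(1+s), discounted by risk.
    unsafe_safe = p_safe * B * s / (1 + s)
    # Both UNSAFE: each wins half the time, discounted by risk.
    unsafe_unsafe = p_safe * B / 2
    return safe_safe, safe_unsafe, unsafe_safe, unsafe_unsafe

def is_social_dilemma(B, c, s, p_safe):
    """True when UNSAFE strictly dominates yet mutual SAFE pays more."""
    ss, su, us, uu = race_payoffs(B, c, s, p_safe)
    unsafe_dominates = us > ss and uu > su
    mutual_safe_better = ss > uu
    return unsafe_dominates and mutual_safe_better
```

Under this toy parameterisation the dilemma appears only in an intermediate band of risk: with B=10, c=1, s=2, taking undue risks is not worthwhile when disasters are very likely (small p_safe), and when disasters are rare (large p_safe) mutual UNSAFE development outperforms mutual SAFE development, so no dilemma arises; in between, UNSAFE dominates individually while everyone would prefer mutual safety.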