
PRESENTATIONS

The presentations of the 2023 edition will appear here. For now, we show the presentations of the previous edition; they will remain available in the "Previous Editions" tab.

Invited Talk 1 - Elizabeth Adams (Stanford University Institute for Human-Centered AI, USA), Leadership of Responsible AI – Representation Matters

People of color are adversely affected by artificial intelligence (AI) bias. The effects of AI bias have been noted in facial recognition technology, mortgage lending, and algorithms used to determine healthcare treatments. People impacted by AI bias are rarely represented in the development of AI technology (Atker et al., 2021). To prevent AI bias, including diverse perspectives in the creation of Responsible AI (RAI) artifacts, which shape policies, procedures, and governance models, could address potential problems in the development of AI.

RAI is an emerging business discipline that examines legal, ethical, and moral standpoints of technology development to help reduce AI bias (Barredo et al., 2020; Taylor et al., 2018). By tying impacted people to innovation and incorporating their ideas as stakeholders, innovations gain substantive and symbolic support from those who are most affected by them (Boon et al., 2021). My motivation is to explore broader employee stakeholder participation in IS, AI, and Organizational Learning. Therefore, I seek to answer the following research question: "How does the participation of African American employee stakeholders in the creation of Responsible AI 'shaping artifacts' reduce bias in AI?"

Session 1 - AI Ethics: Fairness, Bias, and Accountability - Chair: Gabriel Pedroza (CEA-List, France)

>Debate Panel - Session Discussants: Elizabeth Adams, Gabriel Pedroza.

Invited Talk 2 - Luis Aranda (Organisation for Economic Co-operation and Development, OECD), Enabling AI governance: OECD's work on moving from Principles to practice

Three years later, how far have we gotten in putting the OECD AI Principles into practice?

Governments and other stakeholders have been working to implement the OECD AI Principles to make artificial intelligence trustworthy for people and planet. This talk is a timely occasion to highlight work to date and discuss future priorities.

The talk will showcase recent initiatives developed by the OECD Working Party on Artificial Intelligence Governance (AIGO) and the OECD.AI Network of Experts, including the OECD.AI Policy Observatory, a catalogue of tools for trustworthy AI, a user-friendly framework for classifying different types of AI systems, and a global AI incidents tracker. The discussion will seek to highlight good AI policy practices related to AI governance.

Session 2 - Short Presentations - Safety Assessment of AI-enabled systems - Chair: Douglas Lange (Naval Information Warfare Center Pacific, USA)

>Debate Panel - Session Discussants: Luis Aranda, Douglas Lange, Mattias Brännström (Umeå University, Sweden)

Keynote 1 - Gary Marcus (Scientist and Author of "Rebooting AI", Canada), Towards a Proper Foundation for Robust Artificial Intelligence

Large pretrained language models like GPT-3 and PaLM have generated enormous enthusiasm, and are capable of producing remarkably fluent language. But they have also been criticized on many grounds, and described as "stochastic parrots." Are they adequate as a basis for general intelligence, and if not, what would a better foundation for general intelligence look like?

Session 3 - Machine learning for safety-critical AI - Chair: John Burden (University of Cambridge, UK)

>Debate Panel - Session Discussants: Gary Marcus, John Burden, Gabriel Pedroza.

Special Session - TAILOR: Towards Trustworthy AI - Chairs: Francesca Pratesi (CNR, Italy), Umberto Straccia (CNR, Italy), Annelot Bosman (Leiden University, Netherlands)

Keynote 2 - Thomas A. Henzinger (ISTA, Austria), Formal Methods meet Neural Networks: A Selection

We review several ways in which formal methods can enhance the quality of neural networks: first, to learn neural networks with guaranteed properties; second, to verify properties of neural networks; and third, to enforce properties of neural networks at runtime. For the first topic, we discuss reinforcement learning with temporal objectives in stochastic environments; for the second, decision procedures for reasoning about quantized neural networks; for the third, monitoring learned classifiers for novelty detection and fairness, and shielding learned controllers for safety and progress.

Session 4 - Short Presentations - ML Robustness, Criticality and Uncertainty - Chair: Fernando Martinez Plumed (Universitat Politècnica de València, Spain)

>Debate Panel - Session Discussants: Thomas A. Henzinger, Fernando Martinez Plumed.

Invited Talk 3 - Simos Gerasimou (University of York, UK), SESAME: Secure and Safe AI-Enabled Robotics Systems

Deep Learning (DL) has become a fundamental building block of learning-enabled autonomous systems. Notwithstanding its great potential, employing DL in safety- and security-critical applications, including robots providing service in healthcare facilities or drones used for inspection and maintenance, raises significant trustworthiness challenges. Within the European project SESAME, we develop a model-based approach supporting the systematic engineering of dependable learning-enabled robotic systems. In this talk, we will overview recent advances made by the project team to provide assurances for the trustworthy, robust and explainable operation of DL, focusing particularly on techniques for deep learning testing and uncertainty analysis.

Session 5 - AI Robustness, Generative models and Adversarial learning - Chair: Gabriel Pedroza (CEA-List, France)

>Debate Panel - Session Discussants: Simos Gerasimou, Bowei Xi (Purdue University, USA), Gabriel Pedroza.

Invited Talk 4 - Zakaria Chihani (CEA-List, France), A selected view of AI trustworthiness methods: How far can we go?

While still debating some questions (such as the likelihood of achieving Artificial General Intelligence), the AI community in particular, and most related stakeholders in general, seem to be more or less convinced that "AI winters" are a thing of the past and that the current "summer" will never end. Indeed, this rapidly evolving field, especially through recent Deep Learning advances, is too good at particular useful tasks to simply disappear. There is little doubt, for example, that neural networks are poised to permeate a growing number of everyday applications, including sensitive software where trust is paramount.

But as these artifacts move from fad status to stably ubiquitous components, their deepening interweaving with different aspects of society demands special attention to the development of methods and tools for adequately characterizing AI trustworthiness. This colossal quest is made difficult by the intrinsic opacity of neural networks and their increasing size, making any method that can bring us closer to trustworthiness a precious commodity. In this talk, we take a retrospective look at some of these methods and discuss their current added value to safety, as well as the promises they hold for the future of AI trustworthiness.

Session 6 - AI Accuracy, Diversity, Causality and Optimization - Chair: Jose Hernandez-Orallo (Universitat Politècnica de València, Spain)

>Debate Panel - Session Discussants: Zakaria Chihani, Jose Hernandez-Orallo.
