
Yonah Welker
EU Commission projects, Yonah.org

Yonah Welker is a technologist and public expert in algorithms and policy, a former advisor to ministries and authorities on AI and data, and a visiting lecturer. His contributions and work have been featured in and added to acts, reports, and frameworks, highlighted by the White House PCAST, the World Economic Forum, the OECD, UNESCO, and the WHO, and have supported AI and digital acts, intergovernmental treaties, ontologies of assistive systems, and AI, robotics, health, education, and accessibility systems, programs, and lectures.

Yonah Welker has made over a hundred appearances and published commentary to raise awareness of human-centered technologies and policies. He has provided technology and policy commentary for the public and authorities (e.g., on AI and data, telecommunications, and economic and social development, and for technology and research institutions and consulting companies), served as an evaluator and expert for EU Commission-funded projects, contributed to research, development, and adoption frameworks and MOOCs spanning digital ecosystems, workplaces, and educational and public spaces, and curated and served on boards of AI-for-humanity summits and cross-national initiatives. Prior to that, Yonah Welker co-founded the Hardwaretech think tank, co-created tech ventures and projects, and served as an innovator in residence, evaluator, and expert for technology transfer and innovation ecosystems.



Ability-Centered AI And Policy (Transatlantic Safety Dialogue And Designated Groups)

The proposed talk covers existing issues in the AI Acts and the challenges of high-risk and unacceptable-risk systems through the lens of individuals with facial asymmetry, different gestures, gesticulation, communication styles, and behavior and action patterns, in particular people with disabilities, cognitive and sensory impairments, and autism spectrum disorders. It also covers statistics addressing misuse and silos (including categories of algorithms and policing and city systems), proposed actions and criteria (six for facilitating assistive technology and disability-centered AI systems, eight for safety and preventing misuse), and audit and compliance frameworks. (Following a public letter signed by 150 EU organizations, including the EU Disability Forum.)

Similar to how AI systems may discriminate against people of a particular origin or skin tone, systems such as computer vision, facial recognition, speech recognition, and hiring or medical platforms may discriminate against individuals with disabilities. Facial differences or asymmetry, different gestures, gesticulation, speech impairment, or different communication styles may lead to inaccurate identification or discrimination.

For instance, Workday's AI hiring system faced allegations from an older Black man with a disability, who claimed that the algorithm potentially hindered his job search. It has also been reported that people with disabilities face specific and disproportionate risks from police or security systems, since autonomous systems may not correctly recognize assistive devices or may target individuals with mental health conditions. Other examples include speech recognition systems that can be less accurate for individuals with speech impairments, leading to misinterpretation, and automated decision-making systems used in education that may not account for the diverse learning styles and needs of students with disabilities or neurodivergent individuals.

These challenges point to the necessity of "disability-centered" or "neurodiversity-centered" research, development, and audit frameworks that ensure fairness, transparency, explainability, human-centeredness, and privacy and security for these groups.
