Celebrating the Trailblazers in AI Safety

As we navigate the evolving world of technology, we consistently strive for innovation and cutting-edge solutions to real-world problems. One of the most pressing of these is the safety of AI as the technology advances at a rapid pace.

AI safety involves making sure that AI systems don’t harm humans or our environment. While many prominent voices discuss this issue, the real heroes are the researchers and engineers working tirelessly to build an environment where humans and AI can thrive together.

At Zeki, we’ve identified the emerging talent in this field using our unique technology that finds tomorrow’s science leaders today. These early-career rising stars may not be household names yet, but they’re doing incredible work, and we expect them to make significant contributions in the future. Meet a few of them below.

Abeba Birhane

Meet Abeba, a cognitive scientist and Senior Fellow in Trustworthy AI at the Mozilla Foundation. Her research focuses on human behaviour, social systems, and responsible, ethical AI.

In collaboration with Vinay Prabhu, Abeba revealed that large-scale image datasets used in AI development, such as ImageNet and 80 Million Tiny Images, contained racist and misogynistic labels and offensive images.

You can read her PhD thesis here and discover more of her publications here.

Boris Ivanovic

Boris is a Senior Research Scientist and Manager in NVIDIA’s Autonomous Vehicle Research Group, where he researches behaviour prediction and how it interacts with the other components of an autonomous system. He combines prediction with perception and planning to improve the overall safety of autonomous vehicles.

He has a background in computer vision, natural language processing, and data science, and often draws on these areas in his current research. You can discover more of his work on his website here.

Aaron Gokaslan

Aaron Gokaslan recently won the 2023 PyTorch Award for his excellent code-review work in AI technology. Aaron has had a significant impact on the PyTorch community, particularly in code review, which plays a critical role in the growth and strengthening of AI infrastructure.

He’s a PhD student and AI researcher who focuses on generative models, robotics, and deep learning. Currently, he collaborates with Assistant Professor Volodymyr Kuleshov at Cornell Tech’s Jacobs Technion-Cornell Institute, researching open and efficient generative models, specifically looking at how to bring down the cost of training and deployment.

You can read his published works here.

Asia J. Biega

Asia J. Biega is a faculty member and research group leader at the Max Planck Institute for Security and Privacy, where she leads the Responsible Computing group. Her research aims to translate principles of responsible computing, data governance, ethics, and digital well-being into practical applications.

She uses computational interpretations of legal requirements and concepts from social sciences and humanities. This approach enhances our understanding of how people interact with online systems, and aids in creating new technical solutions that reduce systemic and individual harm.

You can read more about Asia’s work here.

Deborah Raji 

Deborah Raji, named to TIME’s list of the 100 most influential people in AI, is a Fellow at the Mozilla Foundation. In 2017, while working with Clarifai, a machine learning (ML) company, she found that a content-moderation model was unfairly flagging content featuring people of colour. This realisation led her to shift her attention from startups to AI research. She now focuses on how AI companies can prevent their models from causing harm, especially to communities overlooked during development.

You can follow Deborah Raji on LinkedIn and X, and read her works on Google Scholar.

Agata Foryciarz 

Agata Foryciarz is a PhD student in Stanford’s Computer Science Department, part of the Health Policy Data Science Lab, and a graduate fellow at the Center for Comparative Studies in Race and Ethnicity.

She researches how the statistical properties of ML models used in clinical decision-making affect health equity. She’s currently studying the effects of using algorithmic “fairness” approaches to create models that predict the risk of cardiovascular disease.

Additionally, Agata leads an interdisciplinary group at Stanford called Computing and Society, which encourages a more critical approach to understanding the social impact of computer science.

You can discover more about Agata and read her published works on her website here.

Celestine Mendler-Dünner

Celestine Mendler-Dünner is a Principal Investigator at the ELLIS Institute in Tübingen. She also collaborates with the Max Planck Institute for Intelligent Systems and the Tübingen AI Center, both of which are leading centres for purposeful AI in Europe.

She researches the intersection of ML, social contexts, and digital economies. She’s committed to developing both theoretical and practical tools to make ML safe, reliable, and socially beneficial.

Currently, she’s exploring performative prediction: how predictions, when used in societal systems, can influence individual actions and reactions. This in turn can alter the behaviour of the broader system, a dynamic effect often overlooked by traditional ML.
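To give a flavour of this dynamic, here is a minimal toy sketch in Python (our illustration, not Celestine’s code; the data-generating process and the reaction strength EPS are assumptions made purely for the demo). A model is repeatedly refit on data that reacts to the previously deployed model, and the loop settles at a “performatively stable” point rather than at the optimum of a fixed distribution.

import numpy as np

# Toy performative prediction: the deployed parameter theta changes
# the data that the next round of training sees.
rng = np.random.default_rng(0)
EPS = 0.5  # assumed strength of the world's reaction to the model

def sample_data(theta, n=10_000):
    # Base relationship y = 2x, minus a reaction to the deployed model.
    x = rng.normal(size=n)
    y = 2.0 * x - EPS * theta * x + rng.normal(scale=0.1, size=n)
    return x, y

theta = 0.0
for t in range(20):
    x, y = sample_data(theta)           # the world reacts to the current model
    theta_next = (x @ y) / (x @ x)      # one-dimensional least-squares refit
    print(f"round {t}: theta = {theta_next:.3f}")
    if abs(theta_next - theta) < 1e-2:  # approximately performatively stable
        break
    theta = theta_next

Each refit targets 2 - EPS × theta, so the sequence converges to roughly 2 / (1 + EPS) ≈ 1.33 rather than the naive coefficient of 2: exactly the kind of feedback effect that pipelines assuming a fixed data distribution can miss.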

You can read her research and learn more about her on her website here.

Mohammad Yaghini

Mohammad Yaghini is a PhD student at the University of Toronto’s CleverHans Lab and a Meta PhD Fellow.

His research focuses on the intersection of ML and privacy, with an emphasis on trustworthy ML. Specifically, he investigates issues related to ML governance. Recently, he delved into how to protect the intellectual property of ML models by identifying and preventing model extraction.

In addition, Mohammad has worked on identifying and addressing failures of trustworthiness measures. One such issue is "fairwashing," where explainability is misused to justify unfairness.

You can discover more about Mohammad’s work and follow him on his website here.

These trailblazers have the skills and novel ideas to propel the field of AI safety forward, and we hope to elevate and connect more young, global science talent to the best funding and opportunities available. You can find out more about our work and purpose at Zeki here.
