Key leaders and voices to follow for insights on AI safety

Artificial intelligence has made significant progress in recent years, and it's starting to touch every aspect of our lives. From self-driving cars to medical diagnoses, AI is changing the way we live and work.

However, as we move forward with AI development, it's vital to keep safety in mind. Advocates and top voices for AI safety are working tirelessly to ensure that we use this technology ethically and responsibly. Here are just a few influencers in AI safety who are making a difference in the field and are worth a follow.


Allie K. Miller has played a huge role in advancing AI in business. Previously, she was in the driver's seat as the Global Head of Machine Learning Business Development for Amazon. Earlier in her career, she broke the glass ceiling at IBM by becoming the youngest woman to develop an artificial intelligence product. Allie is a champion for the ethical use of AI, and her mission is to make AI and ML more accessible and beneficial to all businesses, regardless of their size or resources.

Allie's contributions extend far beyond her professional roles. She is transforming the AI landscape on a global scale, whether through delivering insightful talks around the world, advising on global AI public policy, or authoring guidebooks to help businesses harness the power of AI. Her game-changing efforts have earned her recognition such as "AI Innovator of the Year" by AIconics, LinkedIn Top Voice for Technology and AI, and more.

To stay up to date with Allie K. Miller's insights on AI, you can follow her on X and LinkedIn. Her remarkable contributions and public speaking engagements can be viewed on her official website.


Lila Ibrahim is the COO of DeepMind, a Google-owned company that develops AI technologies. With over 25 years of experience, she is responsible for guiding and developing teams across engineering, virtual environments, program management, and operations.

A strong advocate for AI safety, Lila plays a key role in spearheading work on critical issues at the intersection of AI and society, which includes chairing DeepMind's cross-functional responsibility and AI for Good initiatives.

You can stay updated with Lila Ibrahim's work and insights on AI safety by following her on X @lilaibrahim. For more in-depth information about her initiatives and projects at DeepMind, check out her profile on the DeepMind website and read her contributions on the DeepMind blog.


In the world of academia, few scholars have captured the public's attention as much as Os Keyes. A non-binary researcher with a background in computer science and social justice, Keyes has made significant contributions to the field of human-computer interaction, specifically in the domain of algorithmic fairness. 

Their research spans human-computer interaction, algorithmic fairness, and critical computing, examining the ways in which systems of power and oppression intersect with technology. One of their main contributions to the field has been the development of the concept of "data violence", which refers to the ways in which data collection and analysis can perpetuate harm and injustice, especially against the LGBTQ+ community. Their work also includes research on using technology to improve mental health for queer and trans people, creating tools to support non-binary individuals, and studying the impact of algorithmic bias on trans people's access to healthcare.

For more information about Os Keyes's work and to stay updated on their latest research, you can follow them on X or visit their personal website.


As co-founder of the Alignment Research Center, Paul Christiano is focused on making AI safer. He previously ran the language model alignment team at OpenAI and continues to dedicate his research to ensuring AI is aligned with human goals. He believes that by understanding and addressing alignment issues, we can create a safer and more beneficial future for humanity.

Christiano's expertise in reinforcement learning has been crucial in addressing alignment challenges, which have become increasingly important as AI technology advances at a rapid pace. His research and contributions have helped shape the conversation around alignment and inspire others to work towards the same goal.

He also advises several organisations focused on AI safety and ethics. For those interested in following Paul Christiano's work closely, he maintains an active presence on various platforms: he regularly contributes to the Alignment Forum, where he discusses AI alignment and safety issues, and writes on his AI Alignment blog.


Founder and Executive Director of the Distributed AI Research Institute (DAIR), co-founder of the Black in AI initiative, and former research scientist at Google AI, Timnit Gebru has conducted groundbreaking research in AI fairness and translational AI, with a particular interest in solving problems that align with the broader goals of AI ethics.

Her work has been published in several top-tier machine learning and computer vision conferences, and she has participated in major ethical AI policy-making events. She has worked extensively on defining and examining the role racial biases and stereotypes play in automated systems, showing how these biases produce algorithms that deliver discriminatory results for minorities and other underrepresented groups. In 2021, she co-authored a paper on the ethical implications of large language models, which raised awareness of ethical concerns in the AI community and led to further research.

A strong advocate for fair and ethical AI practices and a trailblazer in the field, Timnit Gebru regularly shares her thoughts on AI, fairness, and ethical issues on X at @timnitGebru. On LinkedIn, she posts updates on her recent activities, research, and presentations. Additionally, to engage with the broader community she co-founded, check out the Black in AI initiative.


David Dalrymple was a Research Fellow in technical AI safety at Oxford before becoming a programme director at ARIA, the UK government's Advanced Research and Invention Agency. He now works on mobilising resources around mathematical approaches to guaranteeing safe and reliable AI.

David has significant industry experience in AI, mathematics, and machine learning, and has been involved in various research projects with organisations like Google, DeepMind, and OpenAI. His work focuses on the intersection of mathematics and AI safety, exploring ways to prove the reliability and safety of advanced AI systems.

To follow David Dalrymple's insightful work in AI, mathematics, and machine learning, connect with him on LinkedIn. His research papers and articles are accessible on his GitHub, and you can follow him on X for regular updates on his latest projects and viewpoints on AI safety and reliability.


Mustafa Suleyman is the CEO of Inflection AI and a co-founder of DeepMind, the leading AI company acquired by Google in 2014. Mustafa brings years of experience as an entrepreneur and expert in AI. He's actively involved in AI ethics and has co-founded organisations like DeepMind Ethics & Society and the Partnership on AI, and he has been a vocal advocate for responsible AI development, promoting transparency, fairness, and accountability.

In addition to his work in AI, Mustafa is passionate about using technology for social good. He has worked with organisations like the UN and Save the Children to develop AI tools for disaster response and humanitarian aid. He is also a strong advocate for diversity and inclusion in the tech industry, working to create a more inclusive and equitable environment for underrepresented groups.

To keep up with Mustafa Suleyman's groundbreaking work in the AI industry, follow him on LinkedIn, where he frequently shares updates on his latest endeavours and thought leadership on AI ethics. For more in-depth information about his projects and collaborations, visit the Inflection AI website; for insights on his work with DeepMind and his contributions to AI ethics, visit DeepMind's website.


Listed in TIME's 100 most influential people in AI, Jan Leike leads the Superalignment team at OpenAI. Jan has been involved in the development of InstructGPT and ChatGPT, and in the alignment of GPT-4. He has also shaped OpenAI's approach to alignment research and co-authored the Superalignment team's research roadmap.

Before his time at OpenAI, Jan was an alignment researcher at DeepMind, where he prototyped reinforcement learning from human feedback and worked on secure, privacy-preserving AI algorithms and formal verification tools. Jan is a strong advocate for responsible AI and has given talks on the topic at various events and conferences.

For those interested in deep dives into AI safety, Jan’s talks are often available on the OpenAI YouTube channel.


An esteemed researcher at Harvard University, Hima Lakkaraju actively explores ethical and fairness issues in machine learning. Her expertise in AI has led to the creation of models and algorithms that solve complex business problems. With a commitment to fairness principles, she strives to eliminate biases in domains such as hiring, lending, and criminal justice, developing algorithms that account for their impact on marginalised communities, including people of colour and LGBTQ+ individuals. She has been recognised as one of MIT Technology Review's "35 Innovators Under 35" and is a recipient of numerous prestigious awards for her groundbreaking research.

To delve deeper into Hima Lakkaraju's progressive work and ideas, you can follow her on LinkedIn. Her extensive research, articles, and papers can be found on her GitHub. For daily insights and updates on her ongoing projects and perspectives on fairness in AI and machine learning, follow her on X.


As president of the Patrick J. McGovern Foundation, Vilas Dhar leads the foundation's work to promote ethical and responsible AI development, advocating for a social compact that prioritises individuals and communities in product development, creates economic and social opportunity, and empowers the marginalised. Dhar also advises various AI research institutes and organisations working towards ethical AI development, and he is a strong advocate for diversity and inclusion in the tech industry, particularly in AI, where ethical considerations are crucial.

You can follow Vilas Dhar on X at @VilasDhar to stay abreast of his latest thoughts and initiatives in AI ethics and policy. More about his work at the Patrick J. McGovern Foundation can be found on the foundation's website.


Dario Amodei is the CEO of Anthropic, an AI safety and research company. Previously, Dario was vice president of research at OpenAI, where he set the organisation's overall research direction and led several teams focused on long-term safety research, including work on making AI systems more interpretable and on embedding human preferences and values in future powerful AI systems.

Dario is also a leading figure in the field of AI ethics and has published several influential papers on the topic. He continues to work towards creating ethical guidelines for the development and implementation of AI technology.

For in-depth insights into the projects and vision of Anthropic, visit the company's website.


Alex Hanna is the Director of Research at DAIR and a vocal advocate against algorithmic bias, especially around gender identity and sexual orientation. A sociologist by training, her personal experiences as a trans woman of colour significantly inform her research, which focuses on the data used in new computational technologies and the ways in which these data exacerbate racial, gender, and class inequality. She has also been an active contributor to ongoing discussions around ethical AI and data ethics.

Hanna's insightful work and commentary on critical issues surrounding AI and data ethics can be found on LinkedIn and X. For a more in-depth understanding of her research and publications, check out her profile on Google Scholar, and visit the DAIR website to learn more about her projects and collaborations.


A professor of computer science at the University of Montreal and one of the most influential researchers in the field of AI, Yoshua Bengio is perfectly positioned to help build the talent pipeline in AI safety research. He is renowned for his groundbreaking contributions to deep learning and received the 2018 A.M. Turing Award, often called the "Nobel Prize of Computing", which he shared with Geoffrey Hinton and Yann LeCun. He also received the prestigious Killam Prize in 2019, and in 2022 he became the most cited computer scientist in the world.

Bengio is a strong advocate for responsible AI development and has been outspoken about the importance of considering ethical implications in AI research. His work focuses on understanding the basic principles of learning and cognition, as well as developing more effective algorithms for machine learning.

His work and insights can be found on his website, and for more regular updates, follow him on LinkedIn. He is also affiliated with Mila, the Montreal Institute for Learning Algorithms, where you can find details about his projects and collaborations.

AI is set to become a significant part of our future, and it's essential that we develop this technology ethically and responsibly. The work of these AI safety influencers is instrumental in ensuring that we do so. Their insights, research, and advocacy are essential to building a safe and sustainable future for everyone. By following their work, we can stay up to date with the latest thinking in AI safety and contribute to the conversation about how to develop this technology responsibly.

