As we see AI integrate with and shape almost every area of our lives, at work, at home and online, there is a pressing need for more scientists to work on AI Safety. This talent is crucial for several reasons:
As AI systems become more complex and prevalent, it's essential that they behave as intended and don't cause harm. AI safety scientists work to develop strategies and systems to prevent unwanted behaviours in AI, contributing to the overall reliability and trustworthiness of these systems.
AI safety is also about ensuring AI systems align with human values and ethical principles. This involves a careful balancing act between technological advancement and ethical considerations.
With the rise of AI, new challenges in regulation and governance have emerged. Experts in AI safety can contribute to strategy research and governance, helping to navigate these uncharted territories.
A career in AI safety can have substantial ethical value. As AI continues to influence various aspects of life and society, ensuring its safe use will have significant implications for our long-term future.
To map this talent, our team looked at the increasing number of scientists working on Generative Adversarial Networks (GANs). Introduced in 2014, GANs train two neural networks in opposition: a generator that produces synthetic data and a discriminator that learns to tell it apart from real data. The adversarial techniques developed through this line of work now underpin efforts to make AI models more secure and dependable, which makes GANs research a critical area of AI Safety.
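The adversarial setup can be sketched in a few dozen lines. The toy example below is a minimal illustration, not drawn from the research described here: a one-parameter generator and discriminator play the adversarial game on 1-D Gaussian data, with hand-derived gradients; all names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data: samples from a Gaussian with mean 3 and std 0.5
def real_batch(n):
    return rng.normal(3.0, 0.5, size=n)

# Generator: g(z) = a*z + b, mapping standard-normal noise to samples
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), scoring how "real" x looks
w, c = 0.1, 0.0

lr, n = 0.05, 64
for step in range(2000):
    z = rng.normal(size=n)
    fake = a * z + b
    real = real_batch(n)

    # Discriminator ascent step on E[log D(real)] + E[log(1 - D(fake))]
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent step on the non-saturating loss E[log D(fake)]
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# The generator's output distribution should drift toward the real one
samples = a * rng.normal(size=10000) + b
print(samples.shape, np.isfinite(samples).all())
```

Each side improves against the other: the discriminator sharpens its test for real data, and the generator shifts its samples to pass that test, which is the dynamic the original 2014 work formalised.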
Initially, Canada was the primary hub for discovery, with GANs first developed in 2014 at Mila, a prominent AI research institute in Montreal. Since then, however, research on this topic has spread worldwide, and the number of scientists working on it has grown exponentially.
In 2016, the USA emerged as the leader in the field, but UK scientists have since surged. The UK has held second place firmly since 2019 and today has nearly 10,000 researchers making strides in GANs research, more than twice the total of Germany, which holds the third spot. Canada follows closely behind in fourth position in the global landscape of GANs research.
The UK is at the very centre of collaborative work on AI Safety. The exponential growth in researchers working on GANs in the UK started in 2017, the year the government published its first AI strategy and committed to spending £1 billion on AI between 2017 and 2022.
The strategy aimed to promote the responsible and safe development of AI, encourage innovation and position the UK as one of the leaders in AI research and development. With responsible development at the forefront of investment, GANs research and AI Safety work was a crucial element.
Apart from significant government investment in AI, it's important to highlight that UK researchers identified the potential of GANs ahead of many of their international counterparts. This foresight allowed the UK to establish specialised centres for those interested in GANs research early on.
While the US continues to be the undisputed leader in this field, scientists based in the UK enjoy the advantage of well-established ties between US and UK institutions. These robust connections provide ample opportunities for UK scientists to delve into this domain in collaboration with US researchers.
The bar chart illustrates the increasing number of scientists working on GANs since 2014, and how their geographical distribution has evolved.
The data is structured to show where GAN scientists are based; the number of scientists was determined by collating the scientific collaborations that GAN pioneers have fostered over the last 10 years.
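The cumulative counts behind the chart can be computed in a straightforward way. The sketch below assumes a hypothetical record format of (scientist, country of affiliation, year of first GAN-related collaboration); the records and field names are illustrative, not the actual dataset.

```python
from collections import defaultdict

# Hypothetical records: (scientist id, country of affiliation,
# year of first GAN-related collaboration). Illustrative values only.
records = [
    ("s1", "USA", 2015), ("s2", "USA", 2016), ("s3", "UK", 2017),
    ("s4", "UK", 2018), ("s5", "UK", 2019), ("s6", "Germany", 2018),
    ("s7", "Canada", 2014),
]

def cumulative_by_country(records, years):
    """Cumulative number of distinct scientists per country per year."""
    counts = {}
    for year in years:
        per_country = defaultdict(set)
        for sid, country, first_year in records:
            # A scientist counts in every year from their first
            # collaboration onwards, hence the cumulative totals.
            if first_year <= year:
                per_country[country].add(sid)
        counts[year] = {c: len(s) for c, s in per_country.items()}
    return counts

table = cumulative_by_country(records, range(2014, 2020))
print(table[2019])  # {'USA': 2, 'UK': 3, 'Germany': 1, 'Canada': 1}
```

Counting distinct scientists (a set per country) rather than publications avoids double-counting researchers who collaborate on several GAN papers in the same period.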
Cumulative number of scientists by Country of Affiliation.
Note: Data does not include China.
“The government has plans to make the UK the best destination in the world for researchers, attracting the brightest and best from the UK and from overseas, cementing its status as a science superpower.”
UK Government, R&D People and Culture Strategy, July 2021