Discover the future of AI: ensuring safety and security in the age of Generative Artificial Intelligence
Briefing

The rise of generative Artificial Intelligence (AI) has sparked important questions about our security and the future of society. How can we trust these innovative technologies? What is AI Safety, and why should it matter to us? With the UK hosting its first AI Safety Summit on November 1 and 2, it's time to delve into the world of making Machine Learning models safe. 

In essence, AI Safety is a fairly recent field within AI that focuses on making Machine Learning models safe to use. With the rapid advancement of powerful AI systems and their deployment in high-stakes scenarios, it's crucial to address the potential risks and ensure their responsible use.

For instance, language and image models could potentially manipulate reality by creating false quotes from influential figures or depicting events that never happened. As a light-hearted example, remember that viral image of Pope Francis in a puffer coat? It was actually generated by AI.

Image: AI-generated picture of Pope Francis in a white puffer jacket (credit: Pablo Xavier)

To tackle these concerns, AI Safety researchers are dedicating their efforts to four key areas:

1. Developing AI models that remain robust and safe when faced with unexpected situations and with adversarial challenges posed by other AI models (a minimal code sketch follows this list).

2. Implementing monitoring systems to detect malicious use of AI models, track their predictions, and flag unexpected behaviour.

3. Designing AI models that align with ethical human values, although deciding what those values should be is a societal question rather than a purely technical one.

4. Utilising AI systems to address systemic safety issues, such as protecting against cyberattacks that involve AI models.
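
To make the first of these areas more concrete, here is a minimal sketch of the classic "fast gradient sign method", a standard way of stress-testing a model's robustness by nudging an input just enough to change a classifier's answer. It uses PyTorch with a tiny, untrained stand-in classifier, purely as an illustration rather than anyone's production code.

```python
import torch
import torch.nn as nn

# Tiny untrained classifier standing in for a real image model (illustration only).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

def fgsm_attack(image, label, epsilon=0.1):
    """Fast Gradient Sign Method: perturb each pixel slightly in the
    direction that most increases the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step every pixel by +/- epsilon according to the sign of its gradient.
    return (image + epsilon * image.grad.sign()).detach()

# A random "image" and label, just to show the mechanics.
x = torch.rand(1, 1, 28, 28)
y = torch.tensor([3])
x_adv = fgsm_attack(x, y)
print("Largest pixel change:", (x_adv - x).abs().max().item())  # bounded by epsilon
```

Robustness research asks how models can be trained so that perturbations like this, whether crafted by people or by other AI systems, do not change their behaviour.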

The potential real-life impact of these research areas is staggering. Existential threats to humanity, though speculative at this stage, dominate the headlines. The AI Safety Summit will inevitably discuss them in terms of “frontier” AI: powerful general-purpose systems that excel at a wide variety of tasks, and whose implications for our future are still being worked out.

However, it's crucial to separate speculation from reality. Current AI models work by learning the statistical distribution of their training data, so what they generate reflects what was common in that data. As a curious example, image generators struggle with the simple instruction to produce a picture of a watch showing 1:00, instead drawing the more aesthetically pleasing time of 10:10, most likely because the marketing photographs they were trained on traditionally set watch hands to that position.

Image: AI-generated images of watches showing the time as 10:10, despite being asked to show 1:00
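
The underlying behaviour can be mimicked with a deliberately crude toy. The snippet below (hypothetical captions, not data from any real system) builds a “generator” that simply reproduces the most common watch time in its made-up training set, so whatever time you request, it answers with the time that dominated the data.

```python
from collections import Counter

# Hypothetical training captions; real stock photos overwhelmingly show 10:10.
training_captions = [
    "wristwatch showing 10:10", "luxury watch at 10:10", "chronograph at 10:10",
    "watch face at 10:10", "vintage watch showing 3:25", "smartwatch at 10:10",
]

# "Training": learn the distribution of times seen in the data.
time_counts = Counter(caption.split()[-1] for caption in training_captions)

def generate_watch_image(requested_time: str) -> str:
    # A purely statistical generator ignores the request and
    # reproduces the most likely time from its training data.
    most_likely_time, _ = time_counts.most_common(1)[0]
    return f"image of a watch showing {most_likely_time}"

print(generate_watch_image("1:00"))  # -> image of a watch showing 10:10
```

Real generative models are far more sophisticated and do condition on the prompt, but when a detail is weakly specified they still fall back on whatever their training data made most probable.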

While existential threats capture attention, the misuse of AI models is already a disturbing reality. From information manipulation to fraud and security risks, the scope of misuse is extensive. Lack of transparency, bias and discrimination, ethical dilemmas, concentration of power, and job displacement are just a few of the notable risks facing society today.

So, how can Zeki help navigate this landscape?

The risks of AI models are real, and designing safe AI is a significant challenge. However, the intensity of the research effort addressing those risks may not be obvious to concerned citizens, or even to scientists in other fields. Furthermore, the institutions that rely on the best researchers in this field often struggle to find and engage with them effectively.

Enter Zeki. Our mission is to bridge the gap between emerging talent in Deep Tech and the institutions and agencies that can support their endeavours, including in AI Safety. Using our innovative technology, we track where new talent is emerging and can draw conclusions and predictions about the future of AI Safety, identifying current and future trends, and the individuals most likely to make a significant impact if properly supported.

As a case in point, we recently mapped the talent landscape for Generative Adversarial Networks (GANs) to see how scientists in this field collaborate and how the field is evolving. GANs, which pit two neural networks against each other during training, are crucial for making AI models robust and safe to use, and constitute a central topic of research in AI Safety.
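
For readers curious about the mechanics behind the name, here is a minimal, hypothetical sketch of a GAN training loop in PyTorch, using tiny made-up networks and random stand-in data rather than anything from the research described above: a generator learns to produce convincing samples while a discriminator learns to tell them apart from real data.

```python
import torch
import torch.nn as nn

# Tiny generator and discriminator over 2-D "data", purely for illustration.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(100):
    real = torch.randn(32, 2) + 3.0        # stand-in "real" data
    fake = G(torch.randn(32, 8))           # generator turns noise into samples

    # Train the discriminator: label real data 1 and generated data 0.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator: try to make the discriminator call its samples real.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(f"Final losses - discriminator: {d_loss.item():.3f}, generator: {g_loss.item():.3f}")
```

It is precisely this adversarial tug-of-war, one network generating content and another scrutinising it, that makes GANs so relevant to safety work on robustness and on detecting manipulated media.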

The line chart below shows how the number of scientists working on Generative Adversarial Networks (GANs) in selected countries has evolved since 2014, the year these models were introduced. The data is organised geographically, and the cumulative numbers were determined by collating the collaborations that GAN pioneers have fostered over the last decade.

Chart: The Growth of the Study of Generative Adversarial Networks

As the chart shows, Canada was initially the primary hub for GAN research, as the technique was first developed at Mila, a prominent AI research institute in Montreal. However, the landscape shifted in 2018, when the UK surpassed Canada and established itself as the leading centre for GAN research outside the US and China. Read more about how the UK has evolved as a centre of excellence for AI Safety talent here.
