SHAPING THE FUTURE OF AI


Report by Emily Claessen
Defence and Security Forum

Speakers:
Professor Sir Nigel Shadbolt,
Alex van Someren,
Lady Olga Maitland,
Fergus Hay.

The dual nature of AI

Artificial Intelligence (AI) is changing the world at an incredible pace, reshaping everything from national security to cybersecurity and geopolitics. It is a double-edged sword: on one hand, it powers cutting-edge military tools and strengthens defences by helping detect threats and respond to them faster. On the other hand, it gives malicious actors the tools to exploit systems with greater precision than ever before. AI’s impact is everywhere, from conflict zones to critical infrastructure, forcing us to grapple with the challenges and responsibilities that come with this powerful technology.

Artistic representation of artificial intelligence

A legacy of innovation

Artificial intelligence is not a new discipline. Its roots stretch back to Alan Turing’s 1950 paper, which introduced the concept of machine intelligence and the Turing Test. Over the decades, AI has transitioned from theoretical constructs to practical applications. Events like IBM’s Deep Blue defeating world chess champion Garry Kasparov in 1997 or the rise of machine learning algorithms in the 2010s have showcased AI’s potential.

Today, advances in machine learning, deep neural networks, and computational power have led to AI systems that can analyse vast datasets, identify patterns, and make predictions with remarkable accuracy. Yet, these “stochastic parrots” remain fundamentally limited, excelling in specific tasks but lacking genuine understanding or consciousness.

This evolution, while remarkable, has introduced vulnerabilities. Because AI systems rely on vast computational resources and datasets, missteps in their training, such as reliance on flawed or recycled (AI-generated) data, risk “model collapse”: models repeatedly trained on their own outputs gradually lose fidelity, diminishing their effectiveness and reliability over time.
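The recycling dynamic behind “model collapse” can be sketched with a toy simulation (a hypothetical illustration, not drawn from the report): here the “model” is simply a normal distribution fitted to its training data, and each new generation is trained only on samples produced by the previous generation’s model, so detail in the tails tends to erode over generations.

```python
import random
import statistics

def resample_generations(n_samples=50, n_generations=30, seed=0):
    """Toy model-collapse demo: fit a Gaussian, then retrain each
    generation solely on the previous generation's synthetic output."""
    rng = random.Random(seed)
    # Generation 0 trains on "real" data drawn from N(0, 1).
    data = [rng.gauss(0, 1) for _ in range(n_samples)]
    sigmas = []
    for _ in range(n_generations):
        mu = statistics.mean(data)
        sigma = statistics.stdev(data)
        sigmas.append(sigma)
        # The next generation never sees real data, only model output.
        data = [rng.gauss(mu, sigma) for _ in range(n_samples)]
    return sigmas

spreads = resample_generations()
print(f"first generation spread: {spreads[0]:.2f}, "
      f"last generation spread: {spreads[-1]:.2f}")
```

Because each generation can only reproduce what the last one generated, estimation noise compounds and the fitted spread drifts rather than staying anchored to the original data, which is the essence of the reliability loss the text describes.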

AI and warfare

The integration of AI into geopolitics and national security has transformed military strategies. The conflict in Ukraine is a case in point, where cyberattacks, drone technology, and precision-guided systems became critical tools. AI has enabled advanced surveillance, streamlined communication, and real-time threat analysis, reshaping the battlefield.

However, these innovations come with significant risks. AI-powered autonomous systems in warfare – such as fighter jets using AI for missile defence – showcase the speed and precision AI can bring. But relying on AI for life-and-death decisions raises profound ethical and practical concerns: machines cannot apply the nuanced judgement and moral reasoning that high-stakes contexts demand.

Moreover, adversaries exploit AI’s capabilities to destabilise systems, manipulate information, and launch cyberattacks. AI-driven disinformation campaigns, for example, can erode trust in democratic institutions, influence elections, and spread discord.

In the face of escalating cyber threats, AI could turbocharge attacks such as distributed denial-of-service (DDoS) campaigns launched from specific geographies, destabilising critical infrastructure. A troubling view is emerging in which certain nations, particularly the US, could treat the development of superintelligent AI as a “Manhattan Project” aimed at overwhelming potential adversaries. This raises concerns about a future in which AI-driven warfare becomes more common, creating new vulnerabilities that are harder to address.

The AI arms race

AI’s ability to detect, predict, and respond to threats offers defenders powerful tools to protect systems and mitigate vulnerabilities. For instance, anomaly detection and automated threat response mechanisms enable organisations to counteract attacks more effectively than ever before.
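The anomaly-detection idea above can be sketched with a minimal statistical baseline (a hypothetical example, not any specific product’s method): flag readings whose z-score exceeds a threshold, as a real AI-driven system would do with far richer models and features.

```python
import statistics

def detect_anomalies(values, threshold=2.5):
    """Return indices of values whose z-score exceeds the threshold.

    A deliberately simple stand-in for the anomaly-detection
    mechanisms described in the text.
    """
    mu = statistics.mean(values)
    sigma = statistics.pstdev(values)
    if sigma == 0:
        return []  # a perfectly flat series has no outliers
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Requests per minute: a steady baseline with one suspicious spike.
traffic = [120, 118, 125, 119, 122, 121, 950, 117, 123, 120]
print(detect_anomalies(traffic))  # flags index 6, the spike
```

An automated threat-response pipeline would then act on the flagged indices, for example by rate-limiting the offending source, which is the “respond” half of the detect-and-respond loop the paragraph describes.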

Yet, these same capabilities empower cybercriminals. AI enables more precise phishing campaigns, where attackers exploit behavioural patterns to craft convincing scams. A striking case involved AI-generated deepfake videos used to impersonate a Hong Kong finance executive, convincing an employee to transfer $25 million to a fraudulent account. These incidents illustrate how AI can be weaponised to deceive even the most cautious professionals.

Synthetic media – like AI-generated child exploitation material – exemplifies AI’s darker uses. Although some argue such content prevents harm by avoiding real victims, it fuels harmful behaviours and normalises criminal intent, creating ethical and societal dilemmas.

Young people and talent

Young people are particularly vulnerable in cyberspace. Many teenagers begin their digital activities innocently, exploring gaming or coding. However, criminal organisations exploit these skills, grooming youth into cybercrime networks.

At the same time, traditional approaches to recruiting cybersecurity talent often fail to resonate with younger generations. Young people may not listen to organisations like the FBI or government agencies, but they are heavily influenced by popular culture: Netflix shows, TV series, and social media.

We must inspire and train young minds to pursue careers in cybersecurity. A key challenge lies in the current recruitment model, which often requires candidates to have a university degree and two years of prior experience. This approach limits the pool of potential talent. Instead, we need to expand our recruitment efforts and consider unconventional pathways to identify and hire emerging talent.

One effective way to do this is through “Capture The Flag” (CTF) competitions and hacking challenges. These events can help connect young talent with the cybersecurity job market, offering a platform to showcase skills and attract the attention of companies eager to fill roles. By engaging young people in these hands-on activities, we can create a pipeline of future cybersecurity professionals to meet the growing demand for skilled workers.

The UK’s role in AI leadership

As AI’s influence grows, the UK is uniquely positioned to lead. With world-class universities, a strong tradition of ethical governance, and a deep pool of STEM graduates, the UK has the tools to shape AI development responsibly.

Initiatives such as the Bletchley Park AI Safety Summit highlight the UK’s role in fostering international collaboration. However, to maintain its leadership, the UK must address challenges, including retaining top talent and scaling technological innovations.

The UK’s intelligence services, such as GCHQ, have a long history of integrating technology into national security. Their expertise in cyber defence and counterintelligence positions the UK as a strong player in developing global AI norms.

Global coordination for a safer AI future

To address AI’s dual nature, international collaboration is crucial. The rise of autonomous systems capable of launching cyberattacks or disrupting civilian infrastructure underscores the urgency of such measures. However, there are geopolitical barriers to achieving consensus, especially in a world marked by increasing fragmentation. Mutual self-interest could drive nations to collaborate on AI regulation, much as they have with other global challenges like nuclear non-proliferation.

Norms, treaties, and ethical standards for AI, alongside forums like the Bletchley Park Summit and partnerships among democratic nations, are initial steps toward consensus. The UK, with its ethical governance traditions and technological expertise, can play a leadership role in this effort.

Looking ahead

As AI continues to evolve, its potential to both harm and help society will grow. Policymakers, technologists, and citizens must manage this duality with care, fostering innovation while mitigating risks. Ensuring that AI serves humanity rather than undermining it will require vigilance, collaboration, and a commitment to ethical principles. While the battle between offence and defence in cybersecurity is far from over, with the right strategies and safeguards, the promise of AI can be harnessed to build a safer, more equitable world. The choices made today will shape AI’s legacy – whether as a tool for progress or a force for division.
