Fieldtrip to the Future

An exploration into how AI could change diplomacy in ten years' time

Imagining the future can be a challenging exercise.

For example, consider how your life might look in 2035. Which parts of that future feel different or uncertain? And how might the world around you look, feel, or act differently?

Thinking about the distant future can stimulate creative and strategic thinking beyond our current preoccupations, while still informing the choices we make today. As renowned futurist Jane McGonigal says:

“When we imagine 10 years out, we tend to imagine things that are really core to our most authentic values and hopes and needs. We don't have anything on our to-do list for 10 years from today, so we can really choose things that relate to our biggest meaning, our purpose, or the things that bring us joy.”

It’s this space of imagination and divergence, of drawing out the implications of different future scenarios, that we can use to evaluate our preferences and inform our decision-making today.

Now, think about diplomacy. How could diplomacy look different in ten years' time? 

Perhaps not much has changed in the past ten years, which might lead us to think things will broadly stay the same. But remember that in roughly half that time we witnessed a global pandemic that transformed, among other things, how teams think about work culture. Future trends and events could fundamentally shift how things are done, and quickly.

At the Frontier Tech Hub, we believe that fast-paced technology change, particularly in artificial intelligence, will fundamentally change how nations conduct diplomacy, as well as the day-to-day activities of diplomats around the world.

We’ve been working with the FCDO to think about the future of diplomacy because we believe it will look very different, in ways no one can fully anticipate, and these potential changes require urgent strategic thinking.

In 2023, we asked FCDO staff how they felt about AI and other developing technologies. Many felt excited but also anxious and overwhelmed. They wanted a way to cut through the noise and navigate the current ‘hype-phase’. So we embarked on a set of immersive activities to help them harness their imagination and think about the future of AI and diplomacy, using proven foresight methods that have been adopted by other parts of the UK Government.

We used these foresight methods to:

  • Understand the trends and developments in AI tools.

  • Imagine what’s possible: How could technology change the day-to-day work of diplomatic entities? 

  • Support staff to take action to be resilient in the face of fast-paced technological change.

The work culminated in a rapid workshop, an online "Fieldtrip to the Future", where staff put on their metaphorical backpacks and visited two scenarios from the year 2034.

You can find out more about the trends and the scenarios we developed below.

Five key trends in AI that will impact diplomacy

The Frontier Tech Hub has been exploring how fast-paced technology change - particularly in artificial intelligence (AI) - might alter the face of international diplomacy over the next ten years.   

Through horizon scanning and driver mapping, we’ve highlighted five key trends in how AI, as it grows in sophistication and maturity, is likely to influence diplomats around the world over this period.

Whilst the themes contained within these trends are probable, their magnitude and implications are far from certain. 

Dig into each trend below.

  • The rapid advancement of Generative AI technology (tools like ChatGPT and the thousands of models like it) has led to an unprecedented surge in the creation and dissemination of misinformation, posing significant challenges for diplomatic missions. AI-powered tools can support the mass production of convincing fake content, which could complicate diplomatic communications and decision-making, and reduce the ability of international organisations to provide a trusted message to the global public. On the other hand, AI could also be used to help identify dangerous and misleading content.

    ⭐ Early signal: a 2025 UK Government inquiry into the role of disinformation campaigns by state and other actors

    Key drivers to look out for: 

    • The extent to which AI-generated disinformation campaigns can create highly sophisticated, believable fake content that challenges diplomatic verification processes and strains international trust. 

    • The degree to which state-sponsored AI disinformation campaigns can manipulate public sentiment and sway international diplomatic outcomes by targeting multilateral discussions and negotiations. 

    • The speed at which legal and regulatory frameworks can adapt to govern AI-generated disinformation, such as deepfakes, and provide effective protections. 

    • The degree to which AI-driven misinformation can exploit psychological biases to amplify its spread, and the resulting need for enhanced digital literacy and resilience among global populations.

    What governments should consider: 

    • Enhance verification processes for incoming information, particularly during critical negotiations or crises. 

    • Collaborate with tech companies and research institutions to develop and implement AI-powered tools for detecting and countering disinformation. 

    • Engage in capacity-building initiatives to improve digital literacy and critical thinking skills among staff, partner organisations and the general public. 

    • Advocate for international cooperation in addressing the challenges of AI-generated misinformation, potentially through UN agencies or other multilateral forums. 

    • Develop rapid response mechanisms to quickly identify and counter disinformation campaigns targeting national interests or broader international cooperation efforts.

  • Today, diplomatic staff need to stay up to date with changing news and policy developments, as well as information on states’ positions and their use of specific language. These activities dominate diplomats’ time and can often feel like information overload. AI-powered tools have the potential to transform diplomatic information retrieval by sifting through vast stores of unstructured data, including internal documents, diplomatic resolutions and global media. This could completely change the nature of roles within diplomatic missions, with staff spending significantly less time finding, distilling, and analysing information.

    ⭐ Early signal: in 2023, the non-profit organisation DiploFoundation launched a tool that conducted live AI reporting of a United Nations Security Council debate.

    Key drivers to look out for: 

    • The degree to which AI-driven knowledge retrieval systems can secure sensitive diplomatic information and protect against potential breaches and misuse. 

    • The ability of AI systems to accurately interpret nuanced diplomatic language and context-specific terminology.

    • The extent to which diplomats are expected to be engaged in understanding, monitoring and providing feedback to AI tools used by their office.

    • The level of adherence to ethical standards and protocols to mitigate biases within AI tools, ensuring objective and fair information in diplomatic decision-making. 

    • The consistency of international AI governance frameworks and their influence on collaboration and trust among diplomatic missions using AI for knowledge retrieval. 

    • The flexibility of AI systems to be scalable and adaptable to the unique requirements of different diplomatic missions, while retaining universal applicability.

    What governments should consider: 

    • Ensure the protection of sensitive diplomatic information if deciding to implement knowledge retrieval and advisory systems. 

    • Assess the extent to which AI systems understand the nuances of diplomatic norms and language, ranging from the use of specific terms to historical decision-making.

  • Advanced AI models are enhancing the prediction of conflicts, natural disasters, economic trends, and other events crucial to international relations. These technologies offer new capabilities in anticipating and responding to global challenges, improving diplomatic decision-making and crisis management. 

  • The use of AI tools in day-to-day diplomatic operations presents significant potential for improved efficiencies, whether through AI-generated summaries of meetings, the creation of briefings, or the development of sophisticated, domain-specific AI tools for analysis. However, all of these raise important considerations about diplomatic norms, as well as data security, accuracy, and the changing nature of diplomatic work more generally.

  • The way diplomatic negotiations are conducted might change through the increased role of AI systems as advisors, coaches, and agents in negotiation. AI can offer new tools for preparation and analysis, including real-time support based on emerging information. It can also provide strategic advice, particularly on the potential interests of counterparts, and on human-behaviour questions such as how to maintain a productive dialogue given a particular set of constraints. AI can also simulate the routes a negotiation could take, and use these simulations to help teams practise and prepare for different scenarios.

    ⭐ Early signal: released in 2018, the IBM-produced Cognitive Trade Advisor was set up to augment trade negotiations by answering specific questions about existing trade rules that could support a negotiating party.

    Key drivers to look out for: 

    • The reliability and trustworthiness of AI-driven insights, particularly their ability to provide accurate, relevant information that supports complex, high-stakes negotiations effectively. 

    • The balance between AI assistance and human judgment, and the extent to which AI will be used to support rather than dominate diplomatic negotiations. 

    • The establishment of ethical standards and transparency measures to govern AI-driven negotiation processes, addressing fairness, data security, and bias to maintain credibility. 

    • The level of acceptance and adaptation among diplomatic teams, as the trust in AI-assisted negotiations will depend on AI’s demonstrated accuracy, transparency, and respect for diplomatic norms. 

    What governments should consider: 

    • Ensure AI-driven insights in negotiations are accurate and reliable, with validation processes to build trust in AI-assisted decision-making during high-stakes discussions. 

    • Maintain a careful balance between AI support and human judgment, and understand under what circumstances one may be preferred over the other.

    • Establish and enforce ethical standards that address fairness, transparency, and data security in AI-driven negotiations, minimising bias and protecting the integrity of the negotiation process.

    • Build trust within diplomatic teams by providing training on AI tools, ensuring these technologies are understood, respected, and used effectively in alignment with diplomatic norms.

Now, choose your future…

Working with fictional, imagined scenarios can help teams understand and evaluate the implications of different trends and their underlying drivers, and anticipate, strategise and, as best they can, future-proof themselves against those trends.

The two scenarios below transport you to the year 2034, where things may look, for better or for worse, very different.

Find out more

A selection of readings from across the web on AI and diplomacy