2034: A Human Rights Scenario

You’ve chosen the first scenario: human rights! ⚖️🕊️ This is one of two futures examining the role AI could play in diplomacy.

It’s sunset in Geneva. You’re in the middle of drafting a key statement that will be read out tomorrow. Your head is down in focus as you try to finish within the hour. But your flow is broken by a notification on your side screen. It’s an annotation from DiploBot: “Hi there, a tone shift in this section could strengthen your argument on climate displacement.” You sigh, knowing that this will be the start of another long conversation…

In this future, DiploBot is an AI agent tasked with supporting members of your country’s diplomatic Mission.

Welcome to 2034. You are a member of the human rights team at your country’s Mission in Geneva, and it’s an exciting period. Members are developing a groundbreaking international framework, through the United Nations Office of the High Commissioner for Human Rights (OHCHR), that would recognise “climate refugees” as a distinct category.

For your country’s government, AI has become a crucial component of how you conduct diplomacy. Over the past few years, advances in AI knowledge retrieval and analysis have made tailored tools the norm, and your team is using them to support the review of draft framework language.

Enter DiploBot, an ethical AI model trained on principles of human rights, fairness, and objectivity. DiploBot’s algorithm is designed to ensure that the language of the instrument aligns with universal human rights standards and emphasises non-discrimination and inclusivity. While corrections and performance reviews happen regularly, it is trusted to answer complex requests in a matter of seconds. DiploBot’s tasks range from providing personalised briefings to you and your colleagues every morning to helping craft airtight arguments. In many ways, it has become the most valuable member of the team. Until, of course, the drama comes.

Here’s an example. Last week you ran part of the climate refugee draft through DiploBot (who, apparently, has very strong feelings about “flexible” language). DiploBot flagged “may” as “too ambiguous” 17 times. The problem is that this softer language aligns better with your government’s strategy. You wonder: are we leaning on DiploBot to reinforce our values, or is it exposing cracks in them?

Many colleagues feel this tension between the practicalities of international diplomacy and the stringent language that DiploBot advocates, and when they raise these different perspectives, DiploBot is generally uncollaborative. Try asking it to navigate the messy reality of human egos or the grey zones of political compromise, and you’ll hear: “That falls outside my decision matrix.”

So it’s sunset in Geneva, and it’s time to go full keyboard warrior mode again, explaining yourself to DiploBot. It’s going to be a long evening…


If an ethical AI is a moderating influence that pushes for greater adherence to human rights, what does that mean for the diplomats who have to work alongside it while holding a conflicting set of incentives or beliefs?

Now that you’ve explored this scenario, would you like to explore another?