AI tools can be used in multilateral negotiations to promote greater equity among participants, particularly youth delegates. They can eliminate repetitive tasks and provide powerful ways to summarize and break down complex content; however, they come with risks of data bias, “hallucinations” that present untruths as facts, and reflections of human prejudice. Speakers at this event explained how negotiators can use AI ethically and consciously—for example, through retrieval models that work only with the source material provided by the user, and through better writing of AI prompts. Better use of AI can help address the inequities faced by smaller delegations and non-native speakers of English.
Marie-Claire Graf, Co-Founder, Youth Negotiators Academy, explained that the purpose of the session was to introduce the use of AI negotiation tools and technologies.
David Dao, Founder and Executive Director, GainForest, presented on the benefits, risks, and limitations of AI platforms. Dao first presented an overview of how AI systems work. He cautioned that the use of technology should not “take away our voice” but should empower it, and therefore it is important to use AI tools well, while being aware of their limitations. Through demonstrating a process with the web-based tool, Teachable Machine, he showed how AI applies image classification, and is unable to classify images that differ from what it has seen before. He noted that the biggest ethical concern about AI is its data bias; for example, 85% of the world’s biodiversity data is from North America, and only 5% from the Amazon.
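The classification point can be illustrated with a toy sketch: a classifier trained on two classes must still force an unfamiliar input into one of them. The feature vectors and the nearest-centroid rule below are purely illustrative stand-ins, far simpler than the model behind Teachable Machine.

```python
# Toy illustration: a classifier trained only on "cat" and "dog"
# examples has no way to say "neither" when it sees something new.
# Vectors stand in for images; the rule is nearest-centroid.

def centroid(vectors):
    """Average the training vectors of one class."""
    return [sum(x) / len(vectors) for x in zip(*vectors)]

def classify(sample, centroids):
    """Return the label of the nearest class centroid (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# "Training" data: two known classes.
centroids = {
    "cat": centroid([[0.9, 0.1], [0.8, 0.2]]),
    "dog": centroid([[0.1, 0.9], [0.2, 0.8]]),
}

# An input unlike anything in training is still assigned to a known class.
print(classify([0.5, 0.5], centroids))
```

The same limitation underlies Dao's caution: a model can only reproduce patterns present in its training data, which is why skewed data coverage becomes an ethical problem.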
Dao explained the differences between closed-source AI, such as ChatGPT, Claude 3.5 Sonnet, and Gemini, and open-source AI, such as Llama 3, adding that he uses Llama 3 in the context of working with Indigenous Peoples in the Amazon. He explained how Large Language Models (LLMs) are trained by humans but may still be subject to “hallucinations” whereby they provide answers that are coherent, but untrue. He warned that, despite being taught by humans not to draw on illegal, dangerous, or toxic content on the Internet, AI models, just like people, can be persuaded to do so.
In response to questions from participants about data privacy and the security of personal devices, Dao noted that social media platforms such as Instagram use photos posted by users to train AI on facial recognition. He added that some companies harvest and sell data on user locations to other companies via the data brokerage market. In the context of multilateral negotiations, Graf recommended caution about uploading confidential data to AI tools, advising participants to upload only publicly available documents, such as COP decisions that are already published. Dao urged participants to always check text produced by AI for accuracy.
Dao then demonstrated how youth negotiators can use AI to support their work. He noted that a survey by the Youth Negotiators Academy highlighted common challenges, namely: the amount of time needed for information gathering; difficulty in understanding the technical language, especially for non-native speakers; and challenges in being taken seriously in high-stakes environments.
With the use of Google’s NotebookLM model, Dao demonstrated how a draft decision can be uploaded so the tool can create summaries, answer questions about the content, help draft recommendations, and even create a podcast from the content. The value of this tool, he explained, is that it is a “retrieval model” that only works closely with source material.
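The “retrieval model” idea can be sketched in a few lines: rather than answering from its training data, the tool first finds the passage of the user-supplied document most relevant to the question, and grounds its answer there. The chunking and word-overlap scoring below are illustrative stand-ins for NotebookLM's actual design, and the sample decision text is invented.

```python
# Minimal sketch of retrieval over user-supplied source material:
# the answer is grounded only in the uploaded document.
import re

def chunk_text(text, size=12):
    """Split a document into overlapping word chunks (half-size stride)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size // 2)]

def retrieve(question, chunks, top_k=1):
    """Rank chunks by word overlap with the question, a crude stand-in
    for the embedding similarity a real retrieval model would use."""
    q_words = set(re.findall(r"\w+", question.lower()))
    def score(chunk):
        return sum(1 for w in re.findall(r"\w+", chunk.lower()) if w in q_words)
    return sorted(chunks, key=score, reverse=True)[:top_k]

# Invented example text standing in for an uploaded draft decision.
decision_text = (
    "The Conference of the Parties requests the secretariat to prepare a "
    "report on land restoration targets. It invites Parties to submit "
    "national progress updates before the next session."
)
chunks = chunk_text(decision_text)
best = retrieve("What does the COP request the secretariat to do?", chunks)
print(best[0])
```

Because the model quotes and summarizes only from the retrieved passages, it is far less prone to the hallucinations described earlier.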
Dao then presented an open-access AI tool originally built for youth climate negotiators, which is now also being used by youth land negotiators. He demonstrated how to use the tool by uploading an example of a draft decision and showing how it can break down the content in different ways to make it easier to understand. Pointing out that COP processes do not have to be “dry,” he showed a prompt which said: “Break it down in Arabic like I am 10 years old with emojis.”
One participant asked how to balance excessive use of AI tools with personal productivity. Graf noted that the goal is not to allow AI to replace humans, but to use the tools in a smart way. For example, she said it can free us from very repetitive tasks such as scanning long documents for specific references.
Dao, for his part, warned against relying exclusively on the tool to create interventions at negotiations because of the risk of data hallucination. The model, he said, is helpful for summarizing positions, thus helping negotiators to generate first drafts that can then be checked and further developed by humans.
Finally, he demonstrated ways to write good prompts for AI, highlighting examples of: role engineering, where the model was asked to write text from the perspective of an Indigenous representative; chain-of-thought engineering, where the model was asked to undertake deep thinking; and quote prompting, where the model was asked to provide direct quotes from the negotiating documents to support its recommendations.
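The three prompt patterns can be shown as simple templates. The wording below is illustrative, not the prompts Dao demonstrated; any LLM client would receive these strings as the user message.

```python
# Illustrative templates for the three prompting patterns described above.

def role_prompt(task):
    """Role engineering: ask the model to adopt a specific perspective."""
    return ("You are an Indigenous representative at a land negotiation. "
            f"From that perspective, {task}")

def chain_of_thought_prompt(task):
    """Chain-of-thought: ask the model to reason step by step."""
    return f"{task} Think through the issue step by step before answering."

def quote_prompt(task):
    """Quote prompting: require direct quotes from the source document."""
    return (f"{task} Support every recommendation with a direct quote "
            "from the negotiating document, in quotation marks.")

task = "Summarize the key asks in this draft decision."
for build in (role_prompt, chain_of_thought_prompt, quote_prompt):
    print(build(task))
    print()
```

Quote prompting in particular complements the retrieval approach: requiring verbatim citations makes hallucinated recommendations easier to spot when checking the output.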
Responding to a follow-up question after the close of the session, Graf noted that these tools were developed to help youth negotiators overcome inequalities, which are inherent in a system that can be overwhelming for first-time participants in terms of the complexity and volume of the content.
Organizer: Youth Negotiators Academy, Future Leaders Network
Contact: Marie-Claire Graf, marie-claire.graf@youthnegotiators.org
Website: youthnegotiators.org
To receive free coverage of global environmental events delivered to your inbox, subscribe to the ENB Update newsletter.
All ENB photos are free to use with attribution. For UNCCD COP 15 Side Events, please use: Photo by IISD/ENB | Angeles Estrada Vigil