The rapid development of any technology surfaces a wide variety of risks which need to be carefully considered and mitigated before rolling out solutions at scale. This is particularly important where the pace of change in that technology is fast and hard to keep track of, as it currently is with LLMs. By failing to consider the societal impacts of these technologies, we risk locking in business models, inequalities and societal harms which could shape the impact of LLMs for years to come.
This is particularly important in international development, where these risks can significantly affect the lives of the world’s poorest. Moving fast and breaking things is not an option. We need to move with care, to ensure that these technologies are meeting people’s needs and not causing harm. Later in the module, we’ll offer a more comprehensive outline of these risks and some of the strategies practitioners are currently taking to mitigate them.
Misinformation
In this pilot, one of the key risks the team are focused on addressing is the potential for the solution to produce misinformation. In healthcare, misinformation is a serious problem, and one which the Avatr solution was explicitly designed to address.
What is the problem of misinformation, and why does it arise for language models? As Weidinger et al. explain, language models learn to generate text based on patterns discerned in their training data. Drawing on these patterns, they predict which word is most likely to come next, given the preceding text. Part of the challenge is that these training sets contain factually incorrect information, which means the data and patterns the models learn from include both true and false statements. The deeper challenge is that even if the training data were entirely accurate, the problem would still arise. This is because the language model produces text based on the statistical probability of words following one another, and this method does not distinguish between true and false information. The crux of the challenge is that there is a conceptual difference between statistically predicting the most likely words to make up a response and relaying accurate information (Weidinger et al., 2021).
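To make this point concrete, the toy sketch below reduces next-word prediction to a hand-written probability table. It is not drawn from Weidinger et al. or from any real model; the words and probabilities are invented purely for illustration. The point it shows is that the "model" simply returns the statistically most likely continuation, with no notion of whether that continuation is true.

```python
# Toy sketch of next-word prediction (illustrative only, not a real model).
# The "model" is a hand-written probability table: the choice of next word
# is driven purely by probability, not by factual accuracy.

next_word_probs = {
    ("aspirin", "is", "used", "to", "treat"): {
        "pain": 0.55,        # common continuation, also true
        "fever": 0.30,       # common continuation, also true
        "infections": 0.15,  # plausible-sounding continuation, but false
    },
}

def predict_next_word(context):
    """Return the most probable next word for a given context."""
    candidates = next_word_probs[tuple(context)]
    # The model has no concept of truth; it only compares probabilities.
    return max(candidates, key=candidates.get)

print(predict_next_word(["aspirin", "is", "used", "to", "treat"]))  # -> "pain"
```

If the false continuation happened to be the more statistically likely one, the same code would return it just as confidently, which is exactly the conceptual gap the paragraph above describes.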
How are Inavya addressing this in the development of their system? While RAG does not completely remove the challenge of misinformation with LLMs, it can help to mitigate it. As we’ve explained, Avatr uses RAG to supplement the text generation of the LLM with accurate information retrieved from a database of verified factsheets and peer-reviewed medical information. This means that the healthcare team at each hospital controls the content that the LLM draws on to generate responses to people’s queries, and can ensure that the content being shared is accurate. As such, the solution has been designed to retrieve verified information rather than synthesise it unaided, which greatly reduces the risk of misinformation.
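The sketch below illustrates the general retrieval-augmented generation pattern described above. The factsheet list, the keyword-overlap retriever and the generate() call are illustrative assumptions, not Inavya's actual components, data or prompts.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# Everything here is a stand-in: a real system would use a proper vector
# search over clinician-approved content and a production LLM call.

verified_factsheets = [
    "Beta blockers reduce heart rate and are commonly prescribed after a heart attack.",
    "Patients with heart failure should weigh themselves daily and report sudden gains.",
]

def retrieve(query, documents, top_k=1):
    """Naive keyword-overlap retrieval from the verified factsheets."""
    def overlap(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(documents, key=overlap, reverse=True)[:top_k]

def answer(query, generate):
    """Ground the model's response in retrieved, clinician-approved content."""
    context = retrieve(query, verified_factsheets)
    prompt = (
        "Answer the patient's question using ONLY the verified information below.\n"
        f"Verified information: {' '.join(context)}\n"
        f"Question: {query}"
    )
    # generate() is whatever LLM call the system uses; it is passed in here
    # so the sketch stays self-contained.
    return generate(prompt)
```

The design point is that the hospital team, not the model, decides what goes into verified_factsheets, so the model's answers are constrained to content that has already been checked.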
Data Privacy
Ensuring proper mechanisms for data privacy and security is an essential aspect of all technology use-cases. As Andersen explains, a few challenges are unique to international development use-cases. AI implementations in international development involve a layered network of organisations: donor agencies, foreign and local implementing organisations, private sector software developers, and the beneficiaries the solutions are designed for. This has the potential to create ambiguity around who is responsible for the ethical management of people’s data. This is particularly important in countries with corruption and weak data governance, where leaders could exploit personal data to cause harm to the intended beneficiaries of a system (Andersen, 2019). For a more comprehensive analysis of these challenges, you can read Artificial Intelligence in International Development: Avoiding Ethical Pitfalls (Andersen, 2019).
With Inavya’s solution, all of the sensitive medical information used in the system is stored by Inavya on secure servers, protected by the privacy controls of Microsoft Azure’s hosting platform (MS Azure, n.d.). As such, the data is not passed between multiple organisations, which reduces the potential for corruption and misuse. Additionally, all of the sensitive medical information about patients, including the details of their condition and care plan, is protected by a role-based access system. Access to the data is limited to the doctors whom patients have given explicit permission to view it; Inavya, as the developers of the system, are not able to see it. They also ensure that their data storage complies with key regulatory frameworks such as GDPR and relevant cybersecurity clearance protocols.
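The sketch below illustrates the kind of role-based check described above. The roles, record structure and permission logic are assumptions made for illustration; they are not Inavya's actual access-control code.

```python
# Rough sketch of role-based access control for patient records
# (illustrative assumptions, not Inavya's implementation).

class PatientRecord:
    def __init__(self, patient_id, care_plan, authorised_clinicians):
        self.patient_id = patient_id
        self.care_plan = care_plan
        # Only clinicians the patient has explicitly authorised are listed here.
        self.authorised_clinicians = set(authorised_clinicians)

    def read_care_plan(self, user_id, role):
        """Return the care plan only to explicitly authorised clinicians."""
        if role == "clinician" and user_id in self.authorised_clinicians:
            return self.care_plan
        # Developers, administrators and unauthorised clinicians are refused.
        raise PermissionError(f"{user_id} is not authorised to view this record")

record = PatientRecord("patient-001", "Daily beta blocker, weekly check-in", ["dr_akello"])
print(record.read_care_plan("dr_akello", "clinician"))   # permitted
# record.read_care_plan("developer-01", "developer")     # would raise PermissionError
```

The key design choice this illustrates is that permission is attached to the patient's record itself, so a system developer holds no role that grants them a path to the underlying medical data.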