The best way to understand the tool is to understand why the team needed to develop a fine-tuned, bespoke model rather than falling back on the capabilities of a general-purpose tool like ChatGPT.

As we explored in the previous case study, one reason relates to the information they wanted the tool to retrieve. With a bespoke model, you can limit the databases the tool searches to only the sources relevant to your use case. In this instance, that meant the USAID programme reports and resources that referred to tech use cases.
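To make the idea concrete, here is a minimal, hypothetical sketch of source-restricted retrieval. This is not the team's actual code; the field names and the `allowed_sources` filter are illustrative assumptions, and a real system would use semantic search rather than keyword matching.

```python
# Illustrative sketch: restrict a search to a curated set of sources,
# so results can only come from the relevant corpus (e.g. programme reports).

def search(query_terms, documents, allowed_sources):
    """Return documents from an allowed source that match any query term."""
    results = []
    for doc in documents:
        if doc["source"] not in allowed_sources:
            continue  # skip anything outside the curated corpus
        text = doc["text"].lower()
        if any(term.lower() in text for term in query_terms):
            results.append(doc)
    return results

# Hypothetical corpus mixing curated reports with general web content.
corpus = [
    {"source": "usaid_programme_reports",
     "text": "Mobile money pilots improved aid delivery in the region."},
    {"source": "general_web",
     "text": "Mobile money is popular worldwide."},
]

hits = search(["mobile money"], corpus,
              allowed_sources={"usaid_programme_reports"})
```

Only the curated report is returned; the general-web document is filtered out before matching even happens, which is the point of constraining the search space.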

The other reason is that foundational models like ChatGPT aren’t as adept at processing the domain-specific nuances of language in particular sectors as they are at handling more generic requests.

Consider this example:

Take the word ‘resilience’. In the humanitarian sector, there is plenty of debate among researchers and experts about how resilience should be understood. Grasping what it means for a community to be resilient to natural hazards and political-economic disasters requires experience working in these contexts and seeing firsthand what resilience means for people.

By building a domain-specific LLM trained with input from experts in the field, the team could create a model better able to process requests and provide contextually appropriate, information-rich responses. Running sophisticated searches over a database and using the results for tasks like generating briefs for Congress requires this nuance to avoid the boilerplate responses that foundational models currently offer.
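The step from retrieval to output can be sketched as well. Below is a hypothetical example of grounding a generation request in retrieved, expert-curated passages; the function name and prompt wording are assumptions for illustration, not the team's implementation.

```python
# Illustrative sketch: assemble a prompt that grounds brief generation in
# retrieved, domain-curated passages, rather than asking a general-purpose
# model to answer from its generic training data alone.

def build_brief_prompt(topic, passages):
    """Combine a topic with sourced passages into a grounded prompt."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        f"Using only the sourced passages below, draft a brief on {topic}.\n"
        f"Sourced passages:\n{context}"
    )

prompt = build_brief_prompt(
    "digital cash transfers",
    ["USAID programme report: mobile money reduced delivery costs substantially."],
)
```

The design choice is that the model is instructed to rely on the curated passages, which is how a domain-specific pipeline pushes responses beyond boilerplate toward the nuance the sector demands.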