As we’ll explore throughout the module, there are myriad ethical considerations around the application of AI in international development. What needs to be considered will differ widely depending on the use case and, crucially, the type of AI being developed. This module does not offer a comprehensive survey of the ethical issues around applying AI in international development (interested readers can find a summary of more comprehensive resources in our appendix). Instead, we surface different approaches to developing responsible AI solutions: solutions which are safe, accountable, and rooted in respect for the communities they impact.
Algorithmic decision-making
One of the main ethical considerations when creating AI systems, especially for something as serious as detecting forest fires, is striking an effective balance between the work done by humans and the work done by AI. As Andersen explains, algorithmic decision-making involves AI systems making consequential decisions that affect people’s lives (Andersen, 2019). The risk is that these systems can be biased, built on imperfect data, and subject to error. Accordingly, practitioners need to move carefully when deciding the degree of autonomy in the systems they develop, with a keen eye toward the unique complexities and risks of a given use case.
In developing their solution, LUMS decided that the AI should not have any autonomy in making decisions. Rather, the images and classifications produced by the computer vision system would aid operators from the forest department. The system filters through all the events occurring in the forest and flags potential fire events to operators, who receive text messages linking to the captured images containing smoke or fire and decide what to do.
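To make this human-in-the-loop design concrete, here is a minimal sketch of how such a flagging flow might be wired up in Python. It is not LUMS’s actual code: the function names (classify_frame, send_sms), the threshold value, and the SMS mechanism are illustrative assumptions.

```python
# Hypothetical sketch of a human-in-the-loop flagging flow: the model only
# flags candidate fire events; human operators make every decision.
# classify_frame() and send_sms() are illustrative placeholders, not a real API.

FLAG_THRESHOLD = 0.2  # deliberately low: better to over-flag than to miss a fire

def classify_frame(image_bytes: bytes) -> float:
    """Return the model's smoke/fire probability for one camera frame.
    Placeholder for the trained computer-vision model."""
    raise NotImplementedError

def send_sms(operator_phone: str, message: str) -> None:
    """Placeholder for a call to an SMS gateway."""
    raise NotImplementedError

def process_frame(image_bytes: bytes, image_url: str, operator_phone: str) -> None:
    score = classify_frame(image_bytes)
    if score >= FLAG_THRESHOLD:
        # The system never acts on its own: it only notifies a human operator,
        # who opens the linked image and decides what to do.
        send_sms(operator_phone, f"Possible fire (score {score:.2f}): {image_url}")
```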
In addition, the system is tuned to be deliberately cautious so that it doesn’t miss any potential fires. This means that while it flags many false positives – cases where no fire is present – it identifies every case that ought to at least be checked by operators on the ground. This strategy balances two demands: it meets the intended outcome of helping operators identify fires earlier by providing them with images from across the forest that could include fires, and it does so in a way that ensures all potential fire events are flagged. Separately, the system keeps synchronizing its events with those reported by NASA FIRMS (which are typically delayed by up to three hours) to evaluate its own performance.
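Two of these ideas lend themselves to a short illustration: choosing a decision threshold low enough to preserve recall on known fires, and checking a detection against later FIRMS reports. The sketch below is ours, not LUMS’s; the data formats and the use of a three-hour matching window are assumptions.

```python
# Illustrative only: (1) lower the threshold until the desired fraction of
# true fires is flagged (recall-first tuning), and (2) self-evaluate by
# matching detections against NASA FIRMS records, assumed here to be
# (timestamp, lat, lon) tuples. Spatial matching is omitted for brevity.

from datetime import datetime, timedelta

def pick_threshold(scores, labels, target_recall=1.0):
    """Return a threshold that flags at least `target_recall` of true fires."""
    fire_scores = sorted(s for s, y in zip(scores, labels) if y == 1)
    if not fire_scores:
        raise ValueError("need at least one labelled fire example")
    # Flag everything scoring at or above the k-th lowest fire score;
    # the k fires below it are the ones we accept missing.
    k = int((1.0 - target_recall) * len(fire_scores))
    return fire_scores[k]

def matched_by_firms(detection_time: datetime, firms_events,
                     window: timedelta = timedelta(hours=3)) -> bool:
    """True if FIRMS reports a fire within `window` of our detection."""
    return any(abs(t - detection_time) <= window for t, _lat, _lon in firms_events)
```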
Gathering data while protecting privacy
One of the main ethical challenges in developing effective AI systems is the scope of data required to train an accurate model. As we’ll explore later, this can involve gathering private user data which must be properly protected so that holding it does not infringe on the rights of those users. In the context of computer vision, this challenge manifests in the gathering of images that include people and their homes. The strategy LUMS took to mitigate this risk was to limit the view of the cameras so that they could not be rotated 360 degrees in areas that included villages and people’s homes. The cameras were set to face only the forest and could not be rotated to capture images that would infringe privacy rights.
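As a rough illustration of how such a constraint might be enforced in software, the sketch below clamps pan commands to an allowed azimuth arc facing only the forest. The specific angles and the idea of clamping at the control layer are assumptions for illustration, not details from the LUMS deployment.

```python
# Hypothetical privacy guard for a pan-tilt camera: pan commands are clamped
# to an arc that faces only the forest, so the camera can never be steered
# toward nearby villages. The arc would come from a per-site survey.

ALLOWED_PAN_RANGE = (30.0, 150.0)  # degrees of azimuth covering forest only (assumed)

def safe_pan(requested_deg: float) -> float:
    """Clamp a requested pan angle into the privacy-preserving arc."""
    lo, hi = ALLOWED_PAN_RANGE
    return max(lo, min(hi, requested_deg % 360.0))

# e.g. safe_pan(250.0) -> 150.0: a request pointing toward a village is
# pulled back to the edge of the permitted arc.
```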
You have now finished exploring our first case study about Computer Vision. Click continue to move on to the next case study: Using AI in asset management.