By Alastair Spray


Artificial intelligence is already a reality in international development. It is present in many of the tools researchers use on a daily basis to aid data analysis, translation, and speech-to-text transcription. Services like Google Translate and Otter are significantly cutting the time spent on previously labour-intensive tasks. AI can also support the data collection phase of research, particularly in resource-constrained environments, and can help researchers make predictions, identify challenges ahead of time, and inform interventions.

Beyond research, advances in existing AI models are expected to have a significant impact on many areas of development, for example in predicting disease outbreaks, droughts, and famines. Remote medical diagnosis and disease outbreak management show great promise, as AI can help health workers target and plan treatments more effectively by identifying patterns in disease transmission. Access to satellite imagery and advances in image recognition can also assist farmers in maximizing crop yields, with models that suggest optimal timings for planting, fertilizing, watering, and harvesting.

Clearly, AI has already begun to make its mark on international development. It is now crucial to consider its responsible implementation.

Risks and implications: fairness and bias

Issues of informed consent, privacy, and data protection become even more complex when AI is involved. Researchers must carefully navigate these ethical challenges to ensure the protection of participants’ rights and the responsible use of data.

There is also a risk of overreliance on AI and the assumption that it can replace human judgment entirely. While AI can assist in data analysis and decision-making, human expertise and contextual understanding are – thankfully – irreplaceable. It is essential to strike a balance between using AI as a tool and maintaining human oversight and critical thinking in the research process.

One key issue is fairness. Unfair systems are those that have unequal effects on different groups, particularly when the results disproportionately impact and reinforce existing patterns of marginalization. These unfair systems often stem from bias. AI can exhibit bias at both the system level and the data level, leading to biased outputs. This, in turn, creates a negative feedback loop, where increasingly biased results are produced over time.

System-level bias refers to developers intentionally or unintentionally building their own biases into the parameters or labels of an AI system. Bias can also emerge at the data level, where biases inherent in the data used to train machine learning systems are replicated in the outputs. For example, a system used to recommend candidates for senior roles at a company in a sector that lacks diversity, such as UK banking, and trained on data about its current successful employees, may favour older white men while discriminating against women and ethnic minorities.
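To make data-level bias concrete, here is a minimal, purely hypothetical sketch (the group labels and hire rates are invented, and this is not any real recruitment system): a toy scorer "trained" only on which candidates were hired in the past reproduces that historical skew when it scores new, equally qualified candidates.

# A toy illustration of data-level bias, using invented data.
# Historical senior hires are skewed towards "group_a"; a scorer that
# learns only from this history passes the skew on to new candidates.

history = (
    [{"group": "group_a", "hired": 1} for _ in range(80)]
    + [{"group": "group_a", "hired": 0} for _ in range(20)]
    + [{"group": "group_b", "hired": 1} for _ in range(20)]
    + [{"group": "group_b", "hired": 0} for _ in range(80)]
)

def train(records):
    """'Train' by learning the historical hire rate for each group."""
    rates = {}
    for group in {r["group"] for r in records}:
        rows = [r for r in records if r["group"] == group]
        rates[group] = sum(r["hired"] for r in rows) / len(rows)
    return rates

def score(model, candidate):
    """Score a new candidate purely from the learned group rates."""
    return model[candidate["group"]]

model = train(history)

# Two otherwise identical candidates receive very different scores,
# because the model has only learned past patterns of who got hired.
print(score(model, {"group": "group_a"}))  # 0.8
print(score(model, {"group": "group_b"}))  # 0.2

If those scores are then used to make the next round of hiring decisions, the future training data becomes even more skewed, which is the negative feedback loop described above.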

Risks and implications: accountability and objectivity

Another major challenge is accountability. AI systems make recommendations and inform decisions that affect people’s lives. When something goes wrong, who should be held responsible? Several issues contribute to this lack of accountability. Firstly, AI systems themselves are incredibly complex and lack transparency. Without understanding how a system reaches its decisions, holding anyone accountable for them becomes difficult.

This issue of accountability is by no means new. As early as the 1970s, an internal IBM document stated that “a computer can never be held accountable, therefore a computer must never make a management decision”. More recently, however, governments and legal experts have been grappling with the question of liability for crashes involving self-driving cars. The current position in many countries, including the UK, is that when an autonomous car is driving itself, the manufacturer is ultimately responsible, not the person sitting in the driver’s seat.

Another issue is the perception of objectivity and excessive confidence in the results these systems generate. What if the system makes a mistake? Can individuals appeal the decision? All machine learning systems have error rates, and even where these are low, in a tool with millions of users they can affect thousands of people. Additionally, people directly impacted by these systems often remain unaware that they are being evaluated by algorithms; without knowledge of the system’s existence, accountability becomes non-existent.
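As a rough, purely illustrative calculation (the error rate and user numbers below are invented), even a very low error rate translates into a large absolute number of people at scale:

# Hypothetical figures for illustration only.
error_rate = 0.001          # a 0.1% error rate
users = 5_000_000           # five million people assessed by the system
affected = error_rate * users
print(f"{affected:,.0f} people receive an erroneous result")  # prints 5,000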

Finally, there is the inability of AI systems to feel empathy, a crucial element in almost anything that involves people, and especially important for those seeking to understand and address complex issues. Human involvement remains essential to ensure that compassion and empathy drive decision-making processes for effective and inclusive development outcomes.

Conclusion

The growth of AI in international development brings both potential and risks. While it can be a powerful tool, it requires responsible implementation. Ethical considerations like bias, transparency, and accountability must be carefully addressed. We must also remember that algorithms lack empathy, underscoring the need for human involvement to ensure compassionate and inclusive decision-making in tackling complex societal issues.

Alastair Spray is a Consultant – Projects and Research. He joined INTRAC in 2020. In his research capacity he undertakes in-house research on INTRAC’s strategic themes and provides support to colleagues applying a range of qualitative and quantitative research methods.
