International development is undergoing a shift: once-speculative talk about artificial intelligence (AI) is beginning to crystallize into working applications. A session at the 2024 Australasian AID Conference showcased this intersection of technology and humanitarian work, demonstrating how AI is being used to sharpen development policy and programming, with real implications for aid organizations grappling with complex problems.
Two organizations working in this space, Dragonfly Thinking and the Development Intelligence Lab, presented approaches showing that structured AI tools can complement traditional human analysis while avoiding common pitfalls of AI integration.
At the forefront of this work is Miranda Forsyth, co-founder of Dragonfly Thinking and a professor at ANU's School of Regulation and Global Governance (RegNet). Forsyth, who candidly describes herself as "one of the most non-technological people around," has gone from AI skeptic to advocate. With Professor Anthea Roberts, she has built an AI-driven platform that guides users through complex development challenges.
Central to their approach is the "Triple R Framework," which analyzes risks, rewards, and resilience. It extends conventional tools such as SWOT or cost-benefit analysis by adding a third dimension, resilience, which acknowledges that systems change over time. In the framework, risks arise from the combination of threats, vulnerabilities, and exposure, while rewards emerge from the interplay of opportunities, capabilities, and access. Resilience comprises absorptive (immediate responses), adaptive (real-time strategic shifts), and transformative (system-wide changes) capacities.
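The framework's components map naturally onto a simple data model. The sketch below is purely illustrative (the class and field names are my own, not Dragonfly Thinking's actual schema), but it captures how each risk combines a threat, vulnerability, and exposure; each reward an opportunity, capability, and access; and resilience its three capacities:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """A risk arises from the confluence of a threat, a vulnerability, and exposure."""
    threat: str
    vulnerability: str
    exposure: str

@dataclass
class Reward:
    """A reward arises from an opportunity, a capability, and access."""
    opportunity: str
    capability: str
    access: str

@dataclass
class Resilience:
    """Three capacities operating at different timescales."""
    absorptive: list[str] = field(default_factory=list)      # immediate responses
    adaptive: list[str] = field(default_factory=list)        # real-time strategic shifts
    transformative: list[str] = field(default_factory=list)  # system-wide changes

@dataclass
class TripleRAnalysis:
    """A full Triple R analysis of a single development issue."""
    issue: str
    risks: list[Risk] = field(default_factory=list)
    rewards: list[Reward] = field(default_factory=list)
    resilience: Resilience = field(default_factory=Resilience)
```

The point of the structure is that no element stands alone: a threat without exposure is not yet a risk, and an opportunity without capability is not yet a reward.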
What sets Dragonfly Thinking apart from off-the-shelf AI tools is a cognitive architecture layered over large language models such as ChatGPT. Rather than simply generating responses, the system walks users through a structured analytical process, integrating diverse viewpoints much as a dragonfly's compound eyes provide near-360-degree vision.
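Conceptually, such a cognitive scaffold can be pictured as a loop that poses each analytical stage from several viewpoints before synthesis. The following is a hypothetical sketch only: `ask_llm`, the perspective list, and the prompts are invented stand-ins, not Dragonfly Thinking's implementation:

```python
# Hypothetical sketch of a structured, multi-perspective analysis loop
# layered over an LLM. Everything here is illustrative: `ask_llm` is a
# placeholder for any chat-completion call, and the perspectives and
# stage prompts are invented, not Dragonfly Thinking's actual design.

PERSPECTIVES = ["economic", "legal", "community", "human rights", "technology"]

STAGES = [
    "List the key risks: threats, vulnerabilities, and exposure.",
    "List the key rewards: opportunities, capabilities, and access.",
    "Assess resilience: absorptive, adaptive, and transformative capacity.",
]

def ask_llm(prompt: str) -> str:
    """Placeholder for a real model call (e.g. a chat-completion API)."""
    return f"[model response to: {prompt[:40]}...]"

def analyse(issue: str) -> dict[str, list[str]]:
    """Run every stage from every perspective, so that no single
    viewpoint dominates the resulting analysis."""
    results: dict[str, list[str]] = {}
    for stage in STAGES:
        results[stage] = [
            ask_llm(f"From a {p} perspective on '{issue}': {stage}")
            for p in PERSPECTIVES
        ]
    return results
```

The design choice the sketch illustrates is the "compound eye" idea: the structure forces breadth first, leaving synthesis and judgment to the human analyst.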
Forsyth demonstrated the platform on a sobering case: sorcery accusation-related violence in Papua New Guinea. The tool helps unpack social media's role in exacerbating or alleviating such violence, letting users examine immediate threats (such as viral footage of violence) alongside exposure factors and vulnerabilities, while identifying opportunities for human rights advocates to counter harmful narratives.
Meanwhile, the Canberra-based Development Intelligence Lab has spent the past year testing whether pairing "analog analysts" (humans) with AI yields better policy analysis. CEO Bridi Rice described an experiment in which analysts with no background in development economics, working under time pressure, analyzed middle-income traps in Southeast Asia. Within two hours, using the Dragonfly platform, they produced insights that development economists judged significant. The AI-assisted analysis was more holistic than some human analysts would produce on their own; it also revealed that individuals tend to focus narrowly on either risks or opportunities depending on their academic background.
Rice emphasized that the tool's real strength lies not in efficiency but in elevating cognitive processes, enabling users to "think better" and, ultimately, make more informed decisions. While acknowledging concerns about AI bias, particularly its tilt toward Western epistemologies and the marginalization of local insights, she remained optimistic that AI could also mitigate some entrenched biases that traditionally hinder policy deliberations.
The experiences of both organizations offer lessons for development organizations considering AI. First, although AI can streamline routine tasks such as document synthesis, its greatest promise lies in improving analytical quality and decision-making. Second, while bias remains a pressing concern, structured analytical frameworks may surface, and partly correct, institutional and disciplinary biases.
One particularly promising avenue is using the technology to amplify local voices and perspectives in development discussions, a prospect that advances in AI-driven translation could further. The Development Intelligence Lab's "Southeast Asia Pulse Check" project, which aggregates insights from a diverse cohort of experts across Southeast Asian nations, shows how AI can bridge the gap between localized knowledge and policy-making. This could be especially valuable for smaller nations that lack robust research and policy analysis infrastructure.
The platform's ability to synthesize multiple viewpoints may also address a common critique of development initiatives: that they reflect donor biases rather than authentic local realities. By systematically incorporating varied perspectives and knowledge systems, AI-assisted analysis could help development practitioners genuinely understand and respond to local contexts.
As both organizations outlined their plans for 2025, including the launch of an "AI in development" discussion group and collaborative opportunities for tackling specific policy challenges, they committed to diversifying the knowledge sources and perspectives integrated into their systems, including exploring how to effectively capture oral and Indigenous knowledge.
Yet several critical questions remain about AI's role in development. Chief among them are managing AI's substantial energy demands (an AI-driven search uses roughly ten times more energy than a standard internet query) and ensuring the thoughtful integration of Indigenous and local insights. Both presenters underscored that AI should complement, not replace, meaningful consultation and active engagement with local communities throughout the design and execution of development programs.
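The ten-fold figure is easy to sanity-check with commonly cited ballpark estimates: roughly 0.3 Wh for a conventional web search and about 3 Wh for an LLM-backed query. Both numbers below are rough assumptions for illustration, not measurements:

```python
# Back-of-envelope comparison of search energy use.
# The per-query figures are rough, commonly cited estimates, not measurements.
CONVENTIONAL_WH = 0.3  # assumed energy per conventional web search, in Wh
AI_WH = 3.0            # assumed energy per LLM-backed query, in Wh

ratio = AI_WH / CONVENTIONAL_WH
queries_per_day = 1_000_000  # hypothetical organization-wide query volume
extra_kwh = queries_per_day * (AI_WH - CONVENTIONAL_WH) / 1000

print(f"An AI query uses ~{ratio:.0f}x the energy of a conventional search")
print(f"Extra energy at 1M queries/day: {extra_kwh:,.0f} kWh/day")
```

Even with generous error bars on both estimates, the gap is large enough that query volume, not just per-query cost, drives the sustainability question.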
Moreover, with AI evolving rapidly, maintaining appropriate human oversight is a substantial consideration. Rice noted that while some policymakers are already impressed by the quality of AI-supported analyses, the goal remains to augment, not replace, human judgment in critical decision-making. Ultimately, AI is an assistive technology that enhances the considerable value of human experience and discernment.
The development community, like nearly every non-corporate sector, must also grapple with AI access and equity. As these tools become vital resources for policy analysis and program design, it matters how smaller organizations and those in developing nations can use them without being further disadvantaged by existing resource disparities.
The work of Dragonfly Thinking and the Development Intelligence Lab paints an optimistic picture: thoughtful AI adoption, focused on elevating rather than replacing human capabilities, can equip development practitioners to navigate the increasingly complex challenges ahead.