Opening the Loop — Enhancing Humanitarian Learning through Humanitarian and AI Collaboration

IFRC GO
Nov 6, 2024


In the world of humanitarian response, every insight gained in the aftermath of a crisis holds the potential to save lives. Over the past two years, our team has been working to develop a system that does just that — condensing and transforming raw, distributed information into bite-sized, actionable knowledge for responders around the world. Like pieces of a vast mosaic, each lesson learned fits into a larger picture, offering critical context and clarity.

This blog post shares the latest chapter in that journey. We initially set out to “close the loop”, ensuring learning feeds back to those who need it most. Over the last year of development, we’ve realised the value of “opening the loop” to enable a continuous, accessible and sustainable cycle of collaboration between humanitarian workers and AI. We think it holds great promise to improve absorption of learnings across our network.

An IFRC Field Assessment and Coordination Team (FACT) moves from one displaced persons camp to another to assess emergency humanitarian needs in Côte d’Ivoire. Credits: Sophie Chavanel, FACT, IFRC

Building on Strong Foundations: The Operational Learning Database

Expanding from the original focus on lessons emerging from the IFRC Disaster Response Emergency Fund (DREF) operational reports, we have set out to create a centralized operational learning database. Rather than starting from scratch, we looked for existing IFRC resources, agreed frameworks and tables, “standing on the shoulders of giants” to build a database that connected learning insights to different disaster types, PER components, and sectors. This approach meant we weren’t just aggregating data — we were leveraging the collective experience of prior institutional systems.
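To make the connections described above concrete, here is a minimal sketch of what a single record in such a database might look like. The field names are assumptions for illustration, not the actual GO schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class LearningRecord:
    """Illustrative shape of one operational-learning entry (assumed fields)."""
    text: str                          # the extracted learning or challenge
    appeal_code: str                   # e.g. a DREF or Emergency Appeal code
    country: str
    region: str
    disaster_type: str
    per_components: List[str] = field(default_factory=list)  # PER framework tags
    sectors: List[str] = field(default_factory=list)
    finding_type: str = "lesson"       # "lesson" or "challenge"
    validated: bool = False            # flipped to True after human review
    report_date: Optional[date] = None
```

Linking each record to disaster types, PER components, and sectors is what lets the same snippet surface in many different operational views later on.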

To make these insights accessible, we developed a queryable API endpoint, where users can filter the data by region, country, disaster type, and more, customizing their view to specific operational needs. Alongside the API, we launched an admin interface for validation, allowing team members to verify and refine the automatically tagged data, ensuring every piece of information is validated before ingestion to other systems. This structure laid the groundwork for a platform that can evolve alongside the different use cases of our responders.
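A client-side query against such an endpoint might look like the sketch below. The URL and parameter names are assumptions for illustration, not the documented GO API.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Assumed endpoint, for illustration only.
BASE_URL = "https://goadmin.ifrc.org/api/v2/ops-learning/"

def build_query(region=None, country=None, disaster_type=None):
    """Assemble the filter query string, skipping any unset filters."""
    filters = {"region": region, "country": country, "disaster_type": disaster_type}
    return urlencode({k: v for k, v in filters.items() if v is not None})

def fetch_learnings(**filters):
    """Fetch filtered learning records from the (assumed) endpoint."""
    with urlopen(f"{BASE_URL}?{build_query(**filters)}", timeout=30) as resp:
        return json.load(resp).get("results", [])
```

The same filter combination a user picks in a front-end maps one-to-one onto query parameters, which is what makes the views customisable without any server-side changes.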

Distribution of learning per region, country and operation

Automating the Flow of Knowledge: DREF Ingestion

The next step was to connect the sections from the digitalised IFRC-DREF final reports to our system. Unlike traditional reports, these entries are already structured because they are processed through GO, allowing us to tap into existing fields with little extra processing. We set up a cron job to automatically pull these learnings into our database on a regular schedule and tag them according to the IFRC’s National Society Preparedness (PER) framework, ensuring a flow of fresh, relevant insights.
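The ingestion step the cron job runs can be sketched roughly as follows. The helper names, keyword table, and report fields are illustrative stand-ins; the real tagging against the PER framework is considerably richer than a keyword match.

```python
# A tiny illustrative subset of PER component labels (assumed, not the framework).
PER_COMPONENTS = {
    "logistics": "Supply chain and logistics management",
    "cash": "Cash and voucher assistance",
    "evacuation": "Evacuation",
}

def tag_per_components(text: str) -> list:
    """Naive keyword tagger; the production system uses richer matching."""
    lowered = text.lower()
    return [label for keyword, label in PER_COMPONENTS.items() if keyword in lowered]

def ingest(final_reports):
    """Pull the lessons-learned field from each structured report and tag it."""
    records = []
    for report in final_reports:
        for lesson in report.get("lessons_learned", []):
            records.append({
                "text": lesson,
                "per_components": tag_per_components(lesson),
            })
    return records
```

Because the DREF reports arrive already structured through GO, the job only has to read known fields and attach tags, rather than parse free text.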

We also added a scheduled email reminder for the teams responsible for closing the final reports from which we extract the learnings, helping streamline the process and avoid delays. This combination of automated data ingestion and human reminders has enabled a consistent flow of valuable information, ready for validation and subsequent frontline application.

Extracting Learnings from Unstructured Text: Emergency Appeals Ingestion

Incorporating emergency appeal documents proved more challenging due to their unstructured format. PDFs from different years followed varied templates and naming conventions, but our team saw this as an opportunity to innovate. A colleague from the National Society Development Team developed a custom script to extract learnings and sectoral tags directly from these documents.

The result was a solution that not only categorized each insight as a learning or a challenge but also linked it to its specific sector, preserving the original context. Testing environments were set up to validate each iteration of the script, keeping data extraction accurate as the script evolved. Additionally, we implemented a quarterly human validation process to confirm tagging accuracy, achieving a balance between automation and human oversight.
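In spirit, the extraction works something like the sketch below. It assumes the PDF has already been converted to plain text (for example with a PDF-to-text library), and the section headings and sector names are made up for illustration; the actual script handles the varied templates described above.

```python
import re

# Illustrative sector headings; real appeal documents vary by year and template.
SECTORS = ["Health", "Shelter", "WASH", "Livelihoods"]

def extract_findings(report_text: str):
    """Split out lessons/challenges and attach the sector they appear under."""
    findings = []
    current_sector = None
    for line in report_text.splitlines():
        line = line.strip()
        if line in SECTORS:
            current_sector = line          # remember which section we are in
            continue
        match = re.match(r"(Lesson|Challenge):\s*(.+)", line)
        if match:
            findings.append({
                "type": match.group(1).lower(),   # "lesson" or "challenge"
                "text": match.group(2),
                "sector": current_sector,         # preserves original context
            })
    return findings
```

Carrying the enclosing sector forward with each finding is what preserves the original context once the snippet is stored on its own.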

Number of operations with learnings in the database by appeal type and year (cream: DREF, orange: EA)

Summarizing at Scale: Leveraging GPT-3.5 for Actionable Insights

For summarization, we initially explored open-source models, but the high compute costs proved unsustainable for our use case. Ultimately, we adopted Azure OpenAI’s GPT-3.5 model, which offered a viable pay-as-you-go solution within our regional constraints. With careful consideration, we balanced quality and cost, setting up a pipeline that is ready to scale to GPT-4o once we have confirmed the solution is not cost-prohibitive for the potential number of global users.

Our summarization pipeline includes three key steps: prioritization, contextualization, and prompting. In prioritization, we filter learnings by topic and relevance, adapting the scope based on region or country to capture the highest-priority insights. Contextualization adds essential metadata — year, location, disaster type — ensuring that each summary remains anchored in its operational context.
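The first two steps can be sketched as below, with assumed record fields and a deliberately simplistic relevance filter standing in for the real prioritization logic.

```python
def prioritize(records, topic, limit=50):
    """Keep the records most relevant to a topic (here: a simple keyword hit)."""
    relevant = [r for r in records if topic.lower() in r["text"].lower()]
    return relevant[:limit]  # cap the volume passed to the model

def contextualize(record):
    """Prefix each learning with its operational metadata before prompting."""
    return (f"[{record['year']} | {record['country']} | "
            f"{record['disaster_type']}] {record['text']}")
```

Prepending the metadata inline means the model sees year, location, and disaster type alongside every snippet, so a summary cannot silently drift away from its operational context.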

The prompting phase employs a carefully crafted system message and instructional steps developed through prompt-engineering sessions with a team drawn from IM, PER, DREF, Learning, and Operations. Following the humanitarian analysis flow, the model is instructed to describe, explain, and interpret the data, aggregating information and selecting the top three findings to make each summary relevant and immediate. Output formatting follows a JSON structure, with custom fields for title, summary, confidence level, and source linkage, maintaining both clarity and consistency.
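A sketch of assembling such a request payload is below. The system message is a heavily condensed stand-in for the engineered prompt, and the deployment name is an assumption; in production this payload would be sent to an Azure OpenAI chat-completions deployment.

```python
# Condensed stand-in for the actual engineered system message.
SYSTEM_MESSAGE = (
    "You are a humanitarian analyst. Describe, explain and interpret the "
    "learnings provided, then select the top three findings."
)

def build_payload(contextualized_learnings, deployment="gpt-35-turbo"):
    """Assemble a chat-completions payload; field names mirror those in the text."""
    user_prompt = (
        "Return JSON with fields 'title', 'summary', 'confidence_level' and "
        "'sources' for the top findings:\n\n"
        + "\n".join(contextualized_learnings)
    )
    return {
        "model": deployment,                            # assumed deployment name
        "temperature": 0.2,                             # favour consistent output
        "response_format": {"type": "json_object"},     # enforce JSON structure
        "messages": [
            {"role": "system", "content": SYSTEM_MESSAGE},
            {"role": "user", "content": user_prompt},
        ],
    }
```

Requesting a JSON object with named fields is what keeps every summary machine-readable, so the front end can render titles, confidence levels, and source links uniformly.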

Distribution of learning snippets by PER areas

Bringing It to Life: User-Centred Front-End Design

Our front-end design is built with users’ needs in mind, enabling them to explore insights through multi-select filters for region, country, disaster type, sector, PER component and date. The open text search feature allows users to pinpoint specific learnings quickly. Additionally, each summary links back to the original report, providing both high-level takeaways and direct access to the full context — bridging the gap between actionable data and its source.

To support global accessibility, the platform offers automatic translation into English, Spanish, and French. All of this is displayed in the Learn section of the GO platform, the IFRC’s recognised response platform, aiming for better visibility and adoption across our network.

Egyptian Red Crescent volunteers conducted an awareness session in a school in the city of Samannud. Credits: Egyptian Red Crescent Society

Next Steps: Enhancing the Platform and Expanding the Data Scope

As we continue to refine the system, three main areas of focus lie ahead:

1. Focussed User Consultation: We’re conducting usability testing by observing responders as they use the site and gathering their feedback. This will guide us as we integrate visual figures, plots, and potentially customisable summaries for specific milestones and workflows, making insights even more actionable and decision-ready.

2. Ingesting Evaluation Reports: Next, we plan to incorporate evaluation documents, adding another layer of unstructured data to our database. These reports bring rich, historical context, contributing to a fuller understanding of the operational landscape.

3. Continuous Quality Assessment: To ensure the summaries remain accurate and reliable, we’re conducting both automated and human evaluations. Using metrics like relevance, coherence, consistency, and fluency, we’ll have humans systematically evaluate the AI-generated summaries against consistent benchmarks, refining the outputs for even higher standards.
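As a small sketch of how such human ratings might be aggregated, the snippet below averages the four quality dimensions named above across raters. The 1-to-5 scale, record shape, and threshold are assumptions for illustration.

```python
from statistics import mean

# The four quality dimensions used for evaluation.
METRICS = ("relevance", "coherence", "consistency", "fluency")

def score_summary(ratings):
    """Average each metric across raters for one AI-generated summary."""
    return {m: mean(r[m] for r in ratings) for m in METRICS}

def flag_low_quality(scored, threshold=3.5):
    """Flag a summary whose weakest dimension falls below the threshold."""
    return min(scored.values()) < threshold
```

Flagging on the weakest dimension (rather than the overall average) surfaces summaries that are, say, fluent but inconsistent, which an average would hide.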

A New Node in the Humanitarian Data Ecosystem

We’ve described how, by transforming distributed data into accessible, context-rich summaries, we’ve built a system that we think closes the loop on humanitarian response learning: responders can systematically access summarised, contextualised insights when facing similar decisions. By opening the door for them to quickly reach those insights and link back to the source, we’re enabling faster, more effective action, an outcome that could make all the difference in their critical work.

Our position within the IFRC’s data ecosystem enables us to benefit from scale across our 191 member National Society network. GO surfaces these snippets of learning on the platform, but we see further potential value in integrating the operational learning service into other systems and processes. The GO Wiki and API provide the explanation and access other teams need to benefit from this unique resource. Please get in touch (im@ifrc.org) to let us know how you’d like to use this data in your work.

Whether you’re a data enthusiast or part of our humanitarian team, we invite you to join us in this journey of continuous improvement. Together, let’s keep applying novel technologies to bring potentially life-saving insights to people who need them most.
