Opening the black box — building systems which help us learn together
The explanation for success hinges, in powerful and often counter-intuitive ways, on how we react to failure. In this blog, we explore an approach taken to systematically collect operational issues and present them on the GO platform, making them available for analysis across the RCRC Movement. This is a story of persistence, honesty, a commitment to learning, and a machine named Bert.
Black box thinking
The aeronautics industry is a benchmark in terms of reaction to failure. Each plane carries a black box, constantly recording pilot actions, with the aim of never repeating the same accident twice. The result is a remarkable safety record: on average, one accident every 2.4 million flights. The healthcare industry, in contrast, has always approached failure differently: an accident is viewed more as “something that unfortunately happens…”. The result? The number of avoidable deaths in hospitals, in the USA alone, is equivalent to two plane crashes per day.
Every year around 80 National Societies access funding from the DREF to deliver emergency services to those affected by, or at risk of, a wide variety of disasters and crises. As part of the accountability for the use of the funds, final reports are submitted for each operation. As well as reporting achievements, National Societies share their challenges and lessons. These operational learnings have been buried in the IFRC’s reports database, rarely and unsystematically used.
Hundreds of individual insights per year, collected from across the global IFRC network and voiced by disaster responders on the front line, collecting digital dust.
How can we unlock the value of this feedback, hitherto hidden in a ‘black box’, to help us learn and improve? What would a system look like that aggregates, analyses and feeds back these insights, making them actionable for frontline responders across the IFRC?
In short, how can we ensure all those lessons are systematically captured and fed back to inform future actions, ensuring they are learnt, at scale?
A systematic approach to learning
The DREF team have been tackling this challenge through what has become known as the ‘operational learning initiative’. The learnings were recorded in a semi-structured manner, allowing for honest examination of issues and challenges. To realise the value of this information, the team needed to transpose the insights from hundreds of PDF reports into a structured dataset.
The team realised that they would need to apply an approach that allows comparability and in turn identification of trends and patterns. This was done by tagging each individual learning in the database. Some of the tags are rather basic, such as the geography, timeframe or hazard the operation was responding to. In addition, to provide more depth, the team used an analytical framework, which is a great way to make sense of data that might initially look ad hoc and disconnected, by sorting and categorising based on predefined structures.
The Preparedness for Effective Response (PER) mechanism is one such framework, which looks at the resources, skills, policies, strategies, procedures and systems that allow a National Society to respond effectively to emergencies. By categorising all operational learning using the components and areas of the PER, we allow the learnings to speak to each other, identify patterns, and make it easier to see and prioritise which areas might need to be strengthened. Are the challenges due to issues of coordination with authorities, or within the Movement? Are there common issues around procurement, or capacity to conduct needs analysis? Do certain hazards create recurrent problems in sectoral response strategies?
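To make the idea concrete, here is a minimal sketch of how tagged learnings become comparable. The record structure, field names and example tags are illustrative assumptions, not the IFRC’s actual schema: once every learning shares a common tag vocabulary, simple aggregation surfaces recurring problem areas across operations.

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical record for a single operational learning, tagged with
# basic metadata plus PER framework categories (illustrative only).
@dataclass
class Learning:
    text: str
    country: str
    hazard: str
    year: int
    per_areas: list = field(default_factory=list)

learnings = [
    Learning("Procurement delays slowed relief distribution.",
             "Country A", "Flood", 2019, ["Logistics and supply chain"]),
    Learning("Coordination with local authorities was unclear.",
             "Country B", "Cyclone", 2018, ["Coordination"]),
    Learning("Framework agreements would speed up procurement.",
             "Country C", "Flood", 2019, ["Logistics and supply chain"]),
]

# Because the tags come from one shared framework, counting them
# immediately shows which areas recur and might need strengthening.
counts = Counter(tag for l in learnings for tag in l.per_areas)
print(counts.most_common(1))  # [('Logistics and supply chain', 2)]
```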
Applying Machine Learning to enable scale up
Even with a process and analytical framework in place, data won’t just jump into each of the corresponding boxes by itself. The IFRC has been, with humanitarian partners, at the forefront of developing a platform to support humanitarian analysis, known as the Data Entry and Exploration Platform, or the DEEP. The DEEP provided a means to “tag” the operational learning extracted from the DREF final reports against the PER framework.
With support from several volunteers, the task of tagging learnings from all DREF final reports from 2018 and 2019 began, leading to an exploratory analysis dashboard, added to the GO platform in late 2020, that enables access to and visualisation of the information.
However, the tagging process was proving time-consuming and, with the Covid-19 pandemic redirecting volunteers to operational priorities, manual tagging was becoming unsustainable.
To ensure that the learnings can continue to be surfaced with minimal human effort, we started to look at technologies that could provide scalable support for the tagging process. With the support of Norwegian Red Cross and Innovation Norway funding, as well as pro-bono support from Amesto Nextbridge, a firm focussing on delivering AI solutions, the IFRC worked through an iterative process to build such a system.
The solution we have now implemented improves the workflow by first splitting, or parsing, each PDF into segments ready for the tagging process. The second step uses Google’s BERT natural language processing model to propose appropriate PER tags. The overall effect is a tremendous reduction in human processing time, and we continue to optimise the system’s performance.
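The two-step workflow above can be sketched in a few lines. This is an illustrative stand-in, not the production system: real reports are parsed from PDF layout, and the tag suggestions come from a fine-tuned BERT classifier, whereas here plain text and a hypothetical keyword lookup play those roles.

```python
import re

# Hypothetical keyword -> PER area mapping, standing in for the
# BERT model's learned associations (illustration only).
PER_KEYWORDS = {
    "procurement": "Logistics and supply chain",
    "coordination": "Coordination",
    "volunteer": "Human resources",
}

def split_into_segments(report_text: str) -> list:
    """Step 1: split extracted report text into sentence-level segments
    ready for tagging (the real parser works on PDF structure)."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", report_text) if s.strip()]

def suggest_tags(segment: str) -> list:
    """Step 2: propose PER tags for a segment. In production this is a
    BERT classifier; keyword matching is a stand-in for illustration."""
    lower = segment.lower()
    return [area for kw, area in PER_KEYWORDS.items() if kw in lower]

report = ("Procurement of relief items was delayed. "
          "Coordination with local authorities improved over time.")

# A human reviewer would then confirm or correct each suggestion.
tagged = [(seg, suggest_tags(seg)) for seg in split_into_segments(report)]
for segment, tags in tagged:
    print(segment, "->", tags)
```

The key design point the sketch preserves is that the machine only *proposes* tags; the reduction in human effort comes from reviewing suggestions rather than tagging from scratch.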
“Learn from the mistakes of others, you can’t live long enough to make them all yourself.” — Eleanor Roosevelt
While we invite and offer support to the IFRC network to explore and analyse the data, we want to go much further to integrate this into IFRC planning, response and long term programming decision-points, such as DREF appeal and contingency planning processes. By integrating this data in a feedback loop to decision-makers, we aim to reinforce constant iteration and improvement, and treat all failure as an opportunity to learn.
This approach of opening the black box to enable honest analysis of failure is vital to a healthy, self-regulating organisational culture. By adding together these marginal improvements, even when they seem unimportant, we believe we can make a huge difference to the end result: our collective ability to meet the needs of people affected by disasters and crises.
The “Black Boxes” are deceptive, in that they are rarely black. Flight recorders are usually painted in bright colours, such as orange or yellow, to ensure they are easy to find. Likewise, the “black box” approach and the analysis it contains shouldn’t be hidden, but openly and transparently embraced throughout our Movement.