Over the past months I have had the privilege of evaluating logistics responses in two of the major crises of our time, Syria and South Sudan. The experiences could not have been more different. While both regimes share a certain liking for imposing strict standards and rules, the effects are markedly different. In Syria, rules and regulations for the importation of humanitarian goods are applied consistently, if strictly, and actors who make the effort to inform themselves can use them to their advantage – their strict application can be relied upon, which favours the prepared. In South Sudan, on the other hand, rules and regulations appear to be applied more randomly. The difference seems to lie in the application: in Syria it is consistent and can therefore be ‘relied upon’; in South Sudan it is inconsistent and not applied in an efficient manner. The result is obvious – while in Syria, despite open and internationalised conflict, humanitarian goods get into the country almost as easily as into any country at peace, in South Sudan it is a nightmare even to ship goods within government-held territory. This goes to show that one cannot equate authoritarian regimes with difficulties in providing aid – at least as far as getting things in is concerned. Rather, the deciding factor is the strength of the administration. So, to draw a lesson for Brexit, which seems to be slowly shaping into a potential humanitarian crisis in Europe: rules are not the problem, but how efficiently they are applied – and how well their subjects are prepared for them. Food for thought…
I recently analysed the performance of humanitarian funding from the Central Emergency Response Fund (CERF) for a client. The CERF provides an interesting insight into the performance monitoring of humanitarian money. It is managed by UNOCHA, which has gradually improved its performance monitoring over recent years. All the data is publicly available and can be used by anyone to get a very detailed insight into the mechanics of humanitarian funding (CERF Performance and Accountability). This matters because the purpose of the CERF is to provide immediate, kick-start assistance to humanitarian operations, and its contributors should be able to know whether this ‘spirit’ of the CERF is adhered to on the ground.
What is interesting, though, is the very specific focus on cash throughput, to put it bluntly. That focus is important, as a number of recipients apparently struggle to spend CERF money as fast as it is provided. The CERF itself actually excels in the fast provision of funding: I found that the time between an application under the Rapid Response window reaching the CERF secretariat and the disbursement of the funds is in most cases around a week, which is very fast compared to other humanitarian donors. The recipients – directly only UN agencies, and indirectly NGOs – then have to use the funds as soon as possible, and at the latest within six months of disbursement. This is often a bottleneck, and there are many examples in the publicly available data where agencies have spent the funds nowhere near the speed at which they were approved and received. So ‘burn rate’ monitoring definitely has a role until expenditure speeds up across the board.
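To make concrete what ‘burn rate’ monitoring boils down to, here is a minimal sketch in Python. The grants, amounts and dates are invented for illustration, and the six-month window is simplified to a fixed number of days; none of this is taken from actual CERF data.

```python
from datetime import date

# Hypothetical grants: (name, disbursement date, amount USD, spent so far USD).
# All figures are invented for illustration, not real CERF data.
grants = [
    ("Agency A / WASH",   date(2014, 1, 10), 2_000_000, 1_800_000),
    ("Agency B / Health", date(2014, 1, 10), 3_000_000,   600_000),
]

WINDOW_DAYS = 183  # simplified stand-in for the six-month expenditure window

def burn_rate_report(grants, today):
    """Flag grants whose spending lags behind the elapsed share of the window."""
    report = []
    for name, disbursed, amount, spent in grants:
        elapsed = (today - disbursed).days / WINDOW_DAYS  # share of window used
        burn = spent / amount                             # share of funds spent
        on_track = burn >= elapsed
        report.append((name, round(burn, 2), round(elapsed, 2), on_track))
    return report

for row in burn_rate_report(grants, date(2014, 4, 10)):
    print(row)
```

A real monitor would of course read the published CERF datasets rather than hard-coded figures, but the comparison of the two ratios is the essence of it.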
What seems to be lacking so far, though, is output-level and outcome-level performance monitoring. This is not easy to achieve and measure, as it requires harmonisation of indicators; at present it basically comes down to what each agency reports for each grant. For one grant, the individual agency reports can therefore differ markedly, both in level of detail and in degree of quantification. One reason is that the CERF, due to its limited volume, contributes only a very small proportion of an agency’s total funding for a particular response, and is often mixed with other funds to support one joint response. It is therefore not easy to isolate the contribution of the CERF to a particular part of the operation.
Standard indicators would be an option: on application, an agency would commit to the indicators it will report against (ideally SPHERE indicators). But this needs prior agreement, possibly negotiation, and that would delay the currently very fast disbursal. Also, some agencies cannot isolate in their systems the contribution of a particular grant once it is mixed with others, effectively rendering them ‘blind’ to what a particular pot of money bought in a particular operation.
There is therefore a long way to go, in the process of humanitarian evolution and the harmonisation of systems for managing emergency response, before we can better understand what humanitarian money actually and specifically achieves on the ground. For now, we only really know what each agency achieves (which is good and valuable), but not whether the ‘spirit’ of the CERF was preserved and its funds were really spent on the most immediate and pressing needs in any particular emergency.
I am looking at humanitarian information management for a client at the moment. What intrigues me is the variety of systems that are around – from “home-grown” (i.e. internal) developments to open source and even crowdsourcing systems (Ushahidi being the most prominent of the latter). Great to see such variety and drive to really make information management more efficient and streamlined.
What I am missing, however, is THE aggregator. I am a big fan of Feedly for my RSS feeds (and thanks – with hindsight – to Google for burying Google Reader; Feedly is so much better). What I miss is an aggregator for humanitarian and development information. A colleague and friend just pointed me to AidData (http://aiddata.org/). I hadn’t heard of it before, and it is exactly such an aggregator. At first glance it looks impressive and worthy of support.
There is a lot to be done, however, on aggregation at a lower level. Say there is a crisis in Country X: Ushahidi deploys, UN OCHA runs its information management, and various other organisations set up their own information systems. Some feed into the OCHA systems, some remain distinct, and most likely not all use the same baseline or situation data, or even the same indicators. These are all “real life”, non-IT problems, but they affect the IT systems just as they affected the pre-IT work.
So how can we aggregate humanitarian data better? How do we really know how many people we have reached across all organisations across all sectors? There remains a lot of work to be done, and while IT will help, we need to fix a lot of intrinsic non-IT information management problems first.
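The counting problem can be made concrete with a small sketch. Assuming, purely hypothetically, that organisations could report against shared beneficiary identifiers, deduplication across organisations and sectors would be trivial; without shared identifiers, simple addition overstates reach. All names and figures below are invented.

```python
# Hypothetical reporting: each organisation reports beneficiary IDs per sector.
# Organisations, sectors and IDs are invented for illustration.
reports = {
    "Org1": {"WASH": {"hh-001", "hh-002", "hh-003"}},
    "Org2": {"WASH": {"hh-002", "hh-003"}, "Health": {"hh-004"}},
}

def naive_total(reports):
    """What you get by simply adding up each organisation's own figures."""
    return sum(len(ids) for org in reports.values() for ids in org.values())

def deduplicated_total(reports):
    """Unique people reached across all organisations and sectors."""
    reached = set()
    for org in reports.values():
        for ids in org.values():
            reached |= ids
    return len(reached)

print(naive_total(reports))         # 6 – double-counts households served twice
print(deduplicated_total(reports))  # 4 – unique households actually reached
```

The gap between the two numbers is exactly the non-IT problem: without agreed identifiers and baselines, no IT system can compute the second figure.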
AidData is a promising development. Let’s hope there will be more.
Remote Monitoring in Somalia and Eastern Burma: a Comparative Analysis
Mona Fetouh, United Nations
Volker Hüls, Making Aid Work
Christian Balslev-Olesen, Danish Church Aid
Abstract: As aid agencies contend with increased risks and diminishing humanitarian space globally, they have had to adopt more flexible methods of aid delivery, monitoring, and evaluation. This paper examines experiences of UNICEF, International Rescue Committee, and The Border Consortium in Somalia and Eastern Burma – environments characterized by restricted access and heavy reliance by international aid organisations on local partners. Because of these factors hindering direct access for monitoring visits, these agencies tested and developed a number of familiar and new channels to collect and validate monitoring information in remote management situations. The presentation comes from a practitioner’s perspective, and contributes to the small but growing knowledge base on M&E in emergency contexts.
I will post the link to the paper when it is out.
I just read this article on Big Data in the new issue of Foreign Affairs. While reading, I wondered if there is something we can learn for IT in humanitarian response. I can see some potential in getting information that is helpful for early warning or even actual response from using big data in addition to specific assessments and surveys. An example I can think of is correlating mobile payment stats (mobile money is now pretty universal in countries like Kenya and used by virtually everyone) with livelihood early warning systems, as it could give an early indication of livelihoods changing (e.g. more money being sent from the city to the countryside could indicate a worsening situation).
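As a rough sketch of what such a correlation check could look like, here is a plain-Python Pearson correlation between two invented monthly series: an urban-to-rural mobile transfer index and a livelihoods stress indicator. Both series and the alert threshold are purely illustrative assumptions, not real data.

```python
# Invented monthly series for illustration only:
# an index of urban-to-rural mobile money transfers, and a
# hypothetical livelihoods stress indicator from an early warning system.
transfers = [100, 102, 105, 118, 135, 150]
stress    = [1.0, 1.0, 1.1, 1.4, 1.8, 2.1]

def pearson(x, y):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(transfers, stress)
if r > 0.8:  # illustrative threshold
    print(f"strong co-movement (r={r:.2f}) - worth a closer look on the ground")
```

In practice one would want lagged correlations (does the money flow move first?) and far more careful statistics, but the point is that a cheap, passively collected signal could flag where a proper assessment is needed.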
I was just sent the link to this blog post on RTM. For a long time we have tried and piloted technology-driven ‘real time’ systems for monitoring. SMS monitoring has been around for some time now, for example, as have electronic systems in health clinics and elsewhere that feed big monitoring databases. Is this going to fix the issues of monitoring in general, i.e. poor data quality, insufficient frequency and so on? At least it should fix the timeliness of data collection, right? But will it help with all the other issues? Will it encourage better monitoring practice?
Granted, we need electronic systems, quantitative methods and statistics skills to process data, but the basic human skill of observation (look and listen) is still the most important on the ground – and is so often not used to its full potential. In that respect, real time monitoring definitely leaves a gap – that of someone ‘checking things out’ which is always much more powerful than just collecting data. We need both, and we need to stop treating monitoring as primarily being a data problem – data are part of it, and needed, but nothing can ever replace simple observation.
Together with Mona Fetouh, now at the UN Office for Internal Oversight, and Christian Balslev-Olesen, now with DanChurchAid, I have worked on remote monitoring in Eastern Burma, based on our experience in Somalia. Mona presented this experience at the 28th ALNAP meeting in April 2013 in Washington, DC – links to the materials are below.
Both environments are characterised by heavy reliance by international aid organisations on local partners. Because of this, and other factors hindering direct access for monitoring visits, a number of familiar and new channels to collect and validate monitoring information in remote management situations were tested and developed. The presentation was made from a practitioner’s perspective and offered learning from testing these new channels for collection and validation.
These are the links to the materials: