Category Archives: Monitoring / Evaluation

How do we measure performance of humanitarian funding?

I recently analysed the performance of humanitarian funding from the Central Emergency Response Fund (CERF) for a client. The CERF provides an interesting insight into performance monitoring of humanitarian money. It is managed by UN OCHA, which has gradually improved its performance monitoring over recent years. All the data is publicly available, so anyone can get a very detailed insight into the mechanics of humanitarian funding (CERF Performance and Accountability). This matters because the purpose of the CERF is to provide immediate, kick-start assistance to humanitarian operations, and its contributors should be able to know whether this ‘spirit’ of the CERF is adhered to on the ground.

What is interesting, though, is the very specific focus on cash throughput (to be blunt). That focus is important, as a number of recipients apparently struggle to spend CERF money as fast as it is provided. The CERF itself actually excels in the fast provision of funding: I found that the time between a grant application under the Rapid Response window reaching the CERF secretariat and disbursement of the funds is in most cases around a week, which is very fast compared to other humanitarian donors. The recipients (directly only UN agencies, and indirectly NGOs) then have to use the funds as soon as possible, and in any case within six months of disbursement. This is often a bottleneck, and the publicly available data shows many examples where agencies have spent the funds nowhere near as fast as they were approved and received. So ‘burn rate’ monitoring definitely has a role until there is faster expenditure across the board.
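To make the two speeds concrete, here is a minimal sketch of what this kind of throughput monitoring boils down to. The grant records, amounts, and field names below are entirely invented for illustration; the published CERF data comes in its own formats.

```python
from datetime import date

# Hypothetical grant records: application receipt, disbursement,
# and cumulative expenditure reported at the six-month deadline.
grants = [
    {"id": "RR-001", "applied": date(2013, 3, 1),
     "disbursed": date(2013, 3, 8), "amount": 500_000, "spent": 475_000},
    {"id": "RR-002", "applied": date(2013, 3, 4),
     "disbursed": date(2013, 3, 12), "amount": 1_200_000, "spent": 600_000},
]

for g in grants:
    lag_days = (g["disbursed"] - g["applied"]).days   # speed of the CERF itself
    burn_rate = g["spent"] / g["amount"]              # speed of the recipient
    print(f'{g["id"]}: disbursed in {lag_days} days, {burn_rate:.0%} spent')
```

The point of separating the two numbers is that a fast donor and a slow recipient can coexist in the same grant, which is exactly the pattern visible in the CERF data.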

What seems to be lacking so far, though, is output-level and outcome-level performance monitoring. This is not easy to achieve and measure, as it requires harmonisation of indicators; at the moment it basically comes down to what each agency reports for each grant. So for one grant the individual agency reports can differ markedly, both in level of detail and in degree of quantification. One reason for this is that the CERF, due to its limited volume, contributes only a very small proportion of an agency’s total funding for a particular response, and is often mixed with other funds to support one joint response. So it is not easy to isolate the contribution of the CERF to a particular part of the operation.

Standard indicators would be an option, i.e. on application an agency would have to commit to the indicators it would report against (ideally SPHERE indicators). But this needs prior agreement, there may be negotiation, and that would delay the currently very fast disbursement. Also, some agencies cannot, in their systems, isolate the contribution of a particular grant once it is mixed in with others, effectively rendering them ‘blind’ to what a particular pot of money bought in a particular operation.

There is therefore a long way to go, in the process of humanitarian evolution and the harmonisation of systems for managing emergency response, before we can really understand what humanitarian money actually and specifically achieves on the ground. For now, we only really know what each agency achieves (which is good and valuable), but not whether the ‘spirit’ of the CERF was preserved and its funds were really spent on the most immediate and pressing needs in any particular emergency.


Information Management and Aggregation

I am looking at humanitarian information management for a client at the moment. What intrigues me is the variety of systems that are around – from “home-grown” (i.e. internal) developments to open source and even crowdsourcing systems (Ushahidi being the most prominent of the latter). Great to see such variety and drive to really make information management more efficient and streamlined.

What I am missing, however, is THE aggregator. I am a big fan of feedly for my RSS feeds (and, with hindsight, thanks to Google for burying Google Reader: feedly is so much better). What I miss is an aggregator for humanitarian and development information. A colleague and friend just pointed me to AidData (http://aiddata.org/). I hadn’t heard of it before, and it is such an aggregator. At first glance it looks impressive and worthy of support.

There is a lot, however, to be done on aggregation at a lower level. Say there is a crisis in Country X: Ushahidi deploys, UN OCHA runs an information management operation, and various other organisations set up their own information systems. Some feed into the OCHA systems, some remain distinct, and most likely not all use the same baseline or situation data, or even the same indicators. These are all “real life”, non-IT problems, but they affect the IT systems just as much as they affected the pre-IT work.

So how can we aggregate humanitarian data better? How do we really know how many people we have reached across all organisations across all sectors? There remains a lot of work to be done, and while IT will help, we need to fix a lot of intrinsic non-IT information management problems first.
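The “how many people have we reached” question has a very concrete failure mode: double counting. A toy sketch, with invented organisation names and household IDs, shows why simply adding up each agency’s figure overstates coverage, and why a shared key (any common beneficiary or household identifier) is what actually makes aggregation possible.

```python
# Hypothetical per-organisation lists of household IDs reached in one crisis.
reports = {
    "OrgA": {"HH-001", "HH-002", "HH-003"},
    "OrgB": {"HH-002", "HH-003", "HH-004"},
    "OrgC": {"HH-005"},
}

# What you get by adding up each organisation's reported figure:
naive_total = sum(len(ids) for ids in reports.values())

# What you get by deduplicating across organisations on a shared ID:
actual_reach = len(set().union(*reports.values()))

print(naive_total, actual_reach)  # 7 vs 5: two households counted twice
```

Of course, in practice the hard part is precisely that organisations rarely share a common identifier, which is a non-IT problem before it is an IT one.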

AidData is a promising development. Let’s hope there will be more.

Paper on Data Quality in Remote Monitoring at Evaluation 2013

Our work on Remote Monitoring will be presented as a paper at the American Evaluation Association Annual Conference, Evaluation 2013:

Remote Monitoring in Somalia and Eastern Burma: a Comparative Analysis

Presenter(s):

Mona Fetouh, United Nations
Volker Hüls, Making Aid Work
Christian Balslev-Olesen, Danish Church Aid

Abstract: As aid agencies contend with increased risks and diminishing humanitarian space globally, they have had to adopt more flexible methods of aid delivery, monitoring, and evaluation. This paper examines experiences of UNICEF, International Rescue Committee, and The Border Consortium in Somalia and Eastern Burma, environments characterized by restricted access and heavy reliance by international aid organisations on local partners. Because of these factors hindering direct access for monitoring visits, these agencies tested and developed a number of familiar and new channels to collect and validate monitoring information in remote management situations. The presentation comes from a practitioner’s perspective, and contributes to the small but growing knowledge base on M&E in emergency contexts.

I will post the link to the paper when it is out.

‘BIG DATA’ – useful in humanitarian response?

I just read this article on Big Data in the new issue of Foreign Affairs. While reading, I wondered whether there is something we can learn for IT in humanitarian response. I can see some potential in using big data, in addition to specific assessments and surveys, to get information that is helpful for early warning or even the actual response. An example I can think of is correlating mobile payment statistics (mobile money is now pretty universal in countries like Kenya and used by virtually everyone) with livelihood early warning systems, as it could give an early indication of livelihoods changing (e.g. more money being sent from the city to the countryside could indicate a worsening situation).
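As a minimal sketch of what such a correlation check might look like: the two monthly series below are entirely invented (urban-to-rural transfer volumes and a food insecurity index), and a real analysis would need far more care with confounders and seasonality, but the mechanics of testing a lagged relationship are simple.

```python
# Toy monthly series (invented numbers): urban-to-rural mobile money
# transfers, and a food insecurity index for the same months.
transfers = [100, 105, 110, 150, 210, 260, 300, 310]
insecurity = [2.0, 2.1, 2.0, 2.2, 2.9, 3.6, 4.1, 4.3]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Does this month's transfer volume track NEXT month's insecurity index?
lagged = pearson(transfers[:-1], insecurity[1:])
print(f"lagged correlation: {lagged:.2f}")
```

A strong lagged correlation in data like this is exactly the kind of signal that could flag a deteriorating situation earlier than a formal assessment cycle would.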

Real Time Monitoring – fixing all the problems of Monitoring?

I was just sent the link to this blog post on RTM. For a long time we have tried and piloted technology-driven ‘real time’ systems for monitoring. SMS monitoring has been around for some time now, for example, as have electronic systems in health clinics etc. that feed big monitoring databases. Is this going to fix the issues of monitoring in general, i.e. poor data quality, insufficient frequency and so on? At least it should fix the timeliness of data collection, right? But is it going to help with all the other issues? Will it encourage better monitoring practice?
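A tiny sketch makes the timeliness/quality distinction obvious. Assume, purely for illustration, that SMS reports arrive in a simple “site;indicator;value” format: the message is real-time, but a plausibility check is still needed, and even that only catches values that are impossible, not values that are merely wrong.

```python
# Hypothetical incoming SMS reports in a "site;indicator;value" format.
messages = [
    "clinic_01;measles_cases;4",
    "clinic_02;measles_cases;-2",    # arrives instantly, still nonsense
    "clinic_03;measles_cases;xyz",   # arrives instantly, not even a number
]

def parse(msg):
    """Parse one report and flag implausible values."""
    site, indicator, raw = msg.split(";")
    try:
        value = int(raw)
    except ValueError:
        return (site, indicator, None, "not a number")
    if value < 0:
        return (site, indicator, None, "out of range")
    return (site, indicator, value, "ok")

for m in messages:
    print(parse(m))
```

Automated checks like this can screen out the impossible; whether four measles cases at clinic_01 is actually true is something only someone on the ground can verify.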

Granted, we need electronic systems, quantitative methods and statistics skills to process data, but the basic human skill of observation (look and listen) is still the most important on the ground – and is so often not used to its full potential. In that respect, real time monitoring definitely leaves a gap – that of someone ‘checking things out’ which is always much more powerful than just collecting data. We need both, and we need to stop treating monitoring as primarily being a data problem – data are part of it, and needed, but nothing can ever replace simple observation.

Data quality in remote monitoring – a comparative analysis

Together with Mona Fetouh, now at the UN Office for Internal Oversight, and Christian Balslev-Olesen, now with DanChurchAid, I have worked on remote monitoring in Eastern Burma, based on our experience in Somalia. Mona presented this experience at the 28th ALNAP meeting in April 2013 in Washington, DC – links to the materials are below.

Both environments are characterised by heavy reliance by international aid organisations on local partners. Because of this, and other factors hindering direct access for monitoring visits, a number of familiar and new channels were tested and developed to collect and validate monitoring information in remote management situations. The presentation was given from a practitioner’s perspective and shared the learning from testing these new channels for collection and validation.

These are the links to the materials:

Summary
Recording
ALNAP 28th Meeting

Are ‘regular’ IMS/MIS useful for monitoring humanitarian response?

I have just concluded an assessment of national monitoring capacity in countries currently or likely to be affected by natural or man-made disasters. A lot of these countries use Management Information Systems/Information Management Systems that have been supported by the international community, most commonly in Health and in Education. How useful are these for humanitarian monitoring, especially when it comes to monitoring response (coverage, timeliness, adequacy)? Is it just a matter of adding more indicators? What about capacity (human/technical) limitations? Do they collect data frequently enough? It seems that while there is potential, when it comes to crises these systems are not capable of delivering the necessary frequency and range of data.
Also, electronic systems, while attractive in terms of speed of transmission and standardization of data, leave out a lot of interesting information. For a start, “client satisfaction” with humanitarian assistance remains under-explored: while systems for getting feedback from beneficiaries of aid are increasingly used, most people still have no easy way to feed back to those providing assistance on its adequacy.
We should therefore be cautious about relying too much on these systems, and focus a lot more on what information we should be using. Which, granted, can ultimately feed into an information management system.