How do we measure performance of humanitarian funding?

I recently analysed the performance of humanitarian funding from the Central Emergency Response Fund (CERF) for a client. The CERF provides an interesting insight into performance monitoring of humanitarian money. It is managed by UNOCHA, which has gradually improved its performance monitoring over recent years. All the data is publicly available (CERF Performance and Accountability) and can be used by anyone to get a very detailed insight into the mechanics of humanitarian funding. This matters because the purpose of the CERF is to provide immediate, kick-start assistance to humanitarian operations, and its contributors should be able to know whether this ‘spirit’ of the CERF is adhered to on the ground.

What is interesting, though, is the very specific focus on cash throughput, to put it bluntly. This is important, because a number of recipients apparently struggle to spend CERF money as fast as it is provided. The CERF itself actually excels at providing funding quickly: I found that the time between an application for a grant under the Rapid Response window reaching the CERF secretariat and disbursement of the funds is in most cases around a week, which is very fast compared to other humanitarian donors. The recipients (directly only UN agencies, and indirectly NGOs) then have to use the funds as soon as possible, and within six months of disbursement. This is often a bottleneck, and the publicly available data shows many examples where agencies have not spent the funds anywhere near the speed at which they were approved and received. So ‘burn rate’ monitoring definitely has a role until there is faster expenditure across the board.
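To make the ‘burn rate’ idea concrete, here is a minimal sketch of how one might flag grants whose expenditure lags behind a straight-line spend over the six-month window. The agency names and figures are invented for illustration only; the straight-line benchmark is my own assumption, not a CERF rule, and real figures sit in the public CERF data.

```python
# Hypothetical sketch: flagging slow 'burn rate' on CERF-style grants.
# All grant records below are invented; only the six-month expenditure
# window comes from the CERF's actual rules.
from dataclasses import dataclass


@dataclass
class Grant:
    agency: str           # recipient agency (hypothetical)
    disbursed: float      # USD disbursed by the CERF
    months_elapsed: int   # months since disbursement
    spent: float          # USD expended so far


def burn_rate(g: Grant) -> float:
    """Fraction of the grant spent so far."""
    return g.spent / g.disbursed


def on_track(g: Grant, window_months: int = 6) -> bool:
    """Assumed benchmark: expenditure should at least keep pace with a
    straight-line spend over the six-month window."""
    expected = g.disbursed * min(g.months_elapsed / window_months, 1.0)
    return g.spent >= expected


grants = [
    Grant("Agency A", 1_000_000, 3, 600_000),  # ahead of straight-line pace
    Grant("Agency B", 2_000_000, 4, 500_000),  # lagging well behind
]

for g in grants:
    print(f"{g.agency}: {burn_rate(g):.0%} spent, on track: {on_track(g)}")
```

Any real monitoring would of course need a defensible benchmark curve (front-loaded spending is common in rapid response), but even this crude check surfaces the pattern visible in the public data.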

What still seems to be lacking, though, is output-level and outcome-level performance monitoring. This is not easy to achieve and measure, as it requires harmonisation of indicators; at present it comes down to what each agency reports for each grant. For one grant, the individual agency reports can therefore differ markedly, both in level of detail and in degree of quantification. One reason for this is that the CERF, because of its limited volume, contributes only a very small proportion of an agency’s total funding for a particular response, and is often mixed with other funds to support one joint response. It is therefore not easy to isolate the contribution of the CERF to a particular part of the operation.

Standard indicators would be one option: on application, an agency would commit to the indicators it would report against (ideally Sphere indicators). But this needs prior agreement, there may be negotiation, and that would delay the currently very fast disbursal. Also, some agencies cannot isolate in their systems the contribution of a particular grant once it is mixed with others, effectively rendering them ‘blind’ to what a particular pot of money bought in a particular operation.

There is therefore a long way to go, in the process of humanitarian evolution and the harmonisation of systems for managing emergency response, before we can better understand what humanitarian money actually and specifically achieves on the ground. For now, we only really know what each agency achieves (which is good and valuable), but not whether the ‘spirit’ of the CERF was preserved and its funds were really spent on the most immediate and pressing needs in any particular emergency.
