
From accountability to learning: rethinking evaluations

Evaluations are a very useful tool because they provide us with tangible evidence of a project’s impact and the results achieved. 

Thelma Brenes Muñoz and Maaike Platenburg

Yet, evaluations are not always seen in a positive light, either because some people see them as an assessment of their personal performance, or because they believe that a lot of effort goes into an evaluation only for it to end up unread in a desk drawer.

Also, evaluations have primarily focused on proving results for the benefit of stakeholders. What has this investment or project achieved, what is the impact, and can you show us the results? These are the standard questions from stakeholders, while less attention goes to what we can learn from evaluations to stimulate change in our organization. In our experience, this can limit their usefulness. Our goal instead is to strengthen their learning opportunities.

We have studied FMO’s journey with investment evaluations since the Evaluation Unit was set up in 2001, and by doing this we have identified six success factors needed to improve our learning.

But first, let’s look at the three phases of the journey, weigh the pros and cons of the different methods, and understand how learning can be brought to the forefront.

The mainstream: scoring performance of investments

In 2001, when the Unit was set up, we decided to use the Good Practice Standards for Evaluation of Private Sector Investment Operations. Investment Officers assessed performance on a scorecard in terms of development, investment outcomes and operational effectiveness, five years after the initial investment.

This approach was thorough and is widely used by multilateral development banks and international financial institutions because it demonstrates ‘results’, and therefore an organization’s effectiveness. However, while it was a robust tool for accountability, it was less strong on learning. It also consumed a significant amount of staff time, limited the potential to explore policy-relevant questions in any depth (e.g. renewable versus non-renewable energy), and left little room for on-site evaluations.

In brief, this approach was useful for validating project outcomes on the ground, but many of the findings were anecdotal and overall the information offered little basis for developing policy. Thus, after nine years of following this approach, FMO decided to revise it.

Midway

In 2010 FMO decided to adopt a sector-based annual evaluation covering the investment departments on a rotating basis. Each year the evaluation assessed a representative sample of investments in the chosen sector. This approach had a fixed structure, focusing on standard questions about development impact, ESG and additionality. While these evaluations did provide learning that led to an improved investment focus on specific sub-sectors and impact measurements, they still leant heavily towards accountability rather than learning.

Furthermore, sector evaluations were conducted independently of other FMO organizational processes, such as the Strategy Cycle, which meant they didn’t focus on relevant strategic questions or the specific needs of the organization. The approach was also time consuming, and the team felt that the ‘added value’ of evaluations could be improved, especially in relation to strategic relevance and organizational learning opportunities.

At the same time, we saw that the growing demand for analytical information that demonstrates impact, together with improvements to our impact systems that readily supply the ‘results’, meant we could pay more attention to the areas where we can learn from more than just the numbers.

Tipping the scale

In 2020, we changed our approach to one that would enhance our learning opportunities. We did this by moving from sector evaluations to ones focused on a specific theme across sectors.

We believe that by sourcing the evaluation themes from the strategic planning cycle, and by feeding the results back into the next planning cycle, the strategic relevance of the evaluation studies will improve. It also means we can do more in-depth analysis of each topic, including literature research, benchmarking against other organizations and interviews with external stakeholders.

We expect to see an improvement in the quality of the recommendations so that they become more actionable. Furthermore, we can create more added value by comparing results across departments, which in turn strengthens internal learning. This can also help break down organizational silos and encourage more inter-departmental collaboration and learning.

Learning: the six factors of success

By looking at our journey with different approaches, we have identified six factors that make for a successful ‘learning’ evaluation.

  1. Define evaluation topics by consultation: When defining the topics for an evaluation, input from different FMO departments is crucial, and the strategy, investment and impact teams should be the starting point for determining its focus.
  2. Assess the available information: Before developing a study, identify what knowledge is already available. Duplicating existing research is a waste of time.
  3. Explicitly state the goals of the evaluation: Evaluators should be explicitly asked to look for ‘learning’. Learning as an objective should be explicit in the Terms of Reference, and asking for actionable recommendations should be an important component in the selection of the evaluator. Many evaluators are used to focusing on accountability, so both internal and external evaluators need to shift to a learning-focused mindset.
  4. Combine multiple projects or investments in one evaluation: By evaluating multiple investments in one evaluation we can see what does and doesn’t work. Basically, comparative analysis leads to stronger conclusions.
  5. Learning as an end product: The implementation of an evaluation’s recommendations should be seen as the end product, rather than the report itself. Making senior management responsible for overseeing the implementation of recommendations helps ensure this shift in thinking happens.
  6. The place of evaluations in the organization: Where evaluations are placed in an organization matters. We have found that placing evaluations within the Strategy Stakeholders and Knowledge management department gives us a closer link with strategy, both in terms of input for new topics and in following up on findings.

Ultimately, change takes time. Certainly, the mindset around evaluations needs to change from a focus on ‘results’ to seeing them as an opportunity to learn. When we achieve this turnaround, all of us at FMO can do an even better job of empowering entrepreneurs to build a better world.