Evaluations are a very useful tool because they provide us with tangible evidence of a project’s impact and the results achieved.
Yet they are not always seen in a positive light, either because some people see them as an assessment of their personal performance, or because they believe that a lot of effort goes into an evaluation only for it to end up in a desk drawer unread.
Evaluations have also focused primarily on proving results for the benefit of stakeholders. 'What has this investment or project achieved?', 'What is the impact?' and 'Can you show us the results?' are the standard questions from stakeholders, rather than 'What can we learn from this to stimulate change in our organization?'. In our experience, this focus can limit their usefulness. Our goal instead is to strengthen the learning opportunities they offer.
We have studied FMO’s journey with investment evaluations since the Evaluation Unit was set up in 2001, and by doing this we have identified six success factors needed to improve our learning.
But first, let's look at the three phases of the journey, weigh the pros and cons of the different methods, and understand how learning can be brought to the forefront.
In 2001, when the Unit was set up, we decided to use the Good Practice Standards for Evaluation of Private Sector Investment Operations. Investment Officers assessed performance on a scorecard in terms of development outcomes, investment outcomes and operational effectiveness five years after the initial investment.
This approach was thorough and is widely used by multilateral development banks and international financial institutions because it demonstrates 'results', and therefore an organization's effectiveness. However, while it is a robust tool for accountability, it is less strong on learning, and it consumes a significant amount of staff time. It limited the potential to explore policy-relevant questions in any depth (e.g. renewable versus non-renewable energy), and there was little focus on on-site evaluations.
In brief, this approach was useful for validating project outcomes on the ground, but many of the findings were anecdotal and the information was of limited value for developing policy. Thus, after nine years of following this approach, FMO decided to revise it.
In 2010 FMO decided to adopt a sector-based annual evaluation covering the investment departments on a rotating basis. The evaluations assessed a representative sample in that sector each year. This approach had a fixed structure, focusing on standard questions about development impact, ESG and additionality. While these evaluations did provide learning that led to improved investment focus on specific sub-sectors and impact measurements, they still leant heavily towards accountability rather than learning.
Furthermore, sector evaluations were conducted independently of other FMO organizational processes, such as the Strategy Cycle, which meant they did not address relevant strategic questions or the specific needs of the organization. The approach was also time-consuming, and the team felt that the 'added value' of evaluations could be improved, especially in relation to strategic relevance and organizational learning opportunities.
At the same time, we saw that, with the growing demand for analytical information that demonstrates impact and with improvements to our impact systems that readily provide the 'results', we could pay more attention to the areas where we can learn from more than just the numbers.
In 2020, we changed our approach to one that would enhance our learning opportunities. We did this by moving from sector evaluations to ones focused on a specific theme across sectors.
We believe that by sourcing the evaluation themes from the strategic planning cycle, and by feeding the results back into the next planning cycle, the strategic relevance of the evaluation studies will improve. It also means we can do more in-depth analysis of each topic, including literature research, benchmarking against other organizations and interviews with external stakeholders.
We expect to see an improvement in the quality of the recommendations so that they become more actionable. Furthermore, we can create more added value by comparing results across departments, which in turn strengthens internal learning. This can also help break down organizational silos and encourage more inter-departmental collaboration and learning.
By looking back at our journey with these different approaches, we have identified six factors that make for a successful 'learning' evaluation.
Ultimately, change takes time. The mindset around evaluations needs to shift from a focus on the 'results' to one of an opportunity to learn. When we achieve this turnaround, all of us at FMO can do an even better job of empowering entrepreneurs to build a better world.