Background

Carefully evaluated

Martha Gutierrez explains how GIZ evaluates its work, what it learns from such evaluations and why no policy area in Germany is more scrutinised.

GIZ is often faced with questions such as: do your projects actually have an impact? And do they benefit anyone? The Evaluation Unit is well placed to give clear answers, because its role is to provide constant scrutiny of GIZ’s work.

And not just a small part of it. Every year, on average, we look at over 80 projects worldwide. A third of all the money spent by GIZ on behalf of the German Federal Ministry for Economic Cooperation and Development (BMZ) is independently evaluated. In addition, there are specific analyses on corporate policy issues and on behalf of other commissioning parties. In fact, development cooperation is the German Government’s most thoroughly evaluated policy area. Moreover, GIZ ranks highly in international comparisons, coming second only to South Korea.

We do all this for two reasons. Firstly, to demonstrate to the public that GIZ uses taxpayers’ money responsibly. It allows us to show where we are successful and where we are not. This is important information, not only for the German Government, but also for our partners and interested members of the general public. But there is also a second function. We use our own evaluations as well as external evidence to help us raise our game. In this way, GIZ can learn from its mistakes and refine its approaches.

Image: Martha Gutierrez

Martha Gutierrez is Director of the GIZ Evaluation Unit, which reports directly to the Management Board.

Another reason we can perform this task so effectively is that we act independently. Our unit is not integrated into GIZ’s operational business, which ensures the distance necessary for constructive criticism. Furthermore, we don’t evaluate the projects ourselves; this is done by independent evaluators. Every three years, we organise a Europe-wide tender procedure, resulting in a pool of around 100 evaluators. It’s important to know that evaluations are always carried out by teams rather than individuals. The teams are made up of national and international experts and always include someone from the partner country. And it is these teams that award the ratings.

Evaluation in line with OECD criteria

The evaluations follow the criteria recognised by the Organisation for Economic Co-operation and Development (OECD): relevance, coherence, effectiveness, efficiency, impact and sustainability. For projects commissioned by BMZ, we have added a decisive quality criterion: if a project scores 4, 5 or 6 on any of the OECD criteria of effectiveness, overarching development results (impact) or sustainability, it is considered ‘unsuccessful’ overall. We therefore treat these three criteria as ‘knock-out’ criteria. Scoring is based on a six-point scale, similar to the scale used for school grades in Germany – 1 is the best grade (excellent) and 6 is the worst (insufficient).
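To make the rating logic concrete, the following is a minimal illustrative sketch only (the function name and data structure are assumptions for illustration, not GIZ’s actual evaluation tooling) of how the six-point scores and the three knock-out criteria combine into an overall verdict:

    # Illustrative sketch only - not GIZ's actual evaluation tooling.
    # Scores follow the six-point scale: 1 (excellent) ... 6 (insufficient).

    KNOCK_OUT_CRITERIA = ("effectiveness", "impact", "sustainability")

    def is_unsuccessful(scores: dict[str, int]) -> bool:
        """A project counts as 'unsuccessful' overall if it scores 4, 5 or 6
        on any of the three knock-out criteria."""
        return any(scores[criterion] >= 4 for criterion in KNOCK_OUT_CRITERIA)

    # Example: good marks elsewhere, but sustainability scores 4 -> 'unsuccessful'
    example = {
        "relevance": 2, "coherence": 2, "effectiveness": 2,
        "efficiency": 3, "impact": 2, "sustainability": 4,
    }
    print(is_unsuccessful(example))  # True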


The evaluations usually take place during the final year of a project. Why? Because changes can then be incorporated into planning for the follow-on phase or applied to the sector. As such, there is a lot more to it than just awarding grades. It is a threefold approach: recording results, communicating results and learning from results.

Although they only represent part of the evaluation work, the ratings always attract particular interest. Projects implemented by GIZ on behalf of BMZ from 2018 to mid-2022 achieved an average rating of between 2.2 and 2.3 on the six-point scale. From mid-2022 to mid-2024, the average was between 2.5 and 2.6. This means that the vast majority of projects are performing well. That’s because they often have a good system in place for monitoring progress towards their objectives, and they can take corrective action if things start to go wrong. The highest scores were achieved in technical and vocational education and training (TVET) and in climate and energy. Only a small fraction of projects are classified as ‘unsuccessful’.

Learning from evaluations

The following examples show what impacts evaluations can have. In Viet Nam, evaluation findings led to the revision of an entire sector strategy and a fundamental reorientation. In future, support for individual vocational schools will involve greater cooperation with the private sector. In addition, the government will receive more advice on how to safeguard these individual projects in the longer term and apply them to other parts of the country. This is how results become sustainable.

Sometimes partner organisations also learn from each other, as in Kyrgyzstan. Here, the evaluation of a mother-and-child project showed that the patient-centred model was particularly effective. The neighbouring countries of Tajikistan and Uzbekistan are now set to benefit from this knowledge as they aim to apply the model in a joint regional project.

We also learned lessons from a cross-cutting evaluation of GIZ’s work in fragile contexts, which is particularly important because the majority of our partner countries fall into this category. Among other things, it became clear that it is vital to maintain a presence in areas outside the capital, especially when conditions are unstable. Elsewhere, a corporate strategic evaluation a few years ago revealed that GIZ’s security and risk management system is fundamentally sound, but was not being implemented to consistent standards across the board. As a result, the Corporate Security Unit was set up to address the issue company-wide.

In short, thanks to the evaluations, which anyone can read online at www.giz.de/knowingwhatworks, we are always subjecting ourselves to scrutiny. This prompts new ways of thinking and improvements that ultimately ensure GIZ’s activities are not an end in themselves, but actually have an impact.
