Program Implementation – The Alphabet Soup of M&E: PDIA, HICD, MM, SCBM, etc.

A friend of mine recently commented on Monitoring & Evaluation (M&E) processes, which made me ponder why the average person finds them so baffling, no matter how many years of experience and education that person may have.

I discovered that many proposal evaluators get confused when reading the proposed M&E section and will acknowledge without compunction that they just could not quite follow what the organization writing the M&E plan was actually proposing.  I have also witnessed intelligent individuals turn glassy-eyed at hearing about the development of the M&E work plan, which covers outputs vs. outcomes, inputs vs. indicators, activities vs. results, and the concept of “iterative adaptation”.

Below I share some of the M&E resources that I found helpful in trying to understand what different donors had in mind when referring to the elusive “monitoring for results” in capacity-building projects.  However, I have yet to find answers to my concerns about conflicts of interest and other problems in M&E and program implementation:

  • Who are the evaluators?
    • Evaluating the competition:  There is an inherent conflict of interest when the evaluators are hired to do M&E work on an implementing entity and they themselves are competitors in the contracting/grant implementation world.  This situation places the implementer in a very vulnerable position, as the competitor/evaluator is in the enviable position of learning proprietary information.
    • Evaluating a former employer:
      • When a disgruntled or aggrieved former employee is hired to evaluate the former employer’s work by the donor, who is aware of the complaints and grievances of this former employee, the integrity and the objectivity of the evaluation are in peril.
      • When a former employee is knowingly hired by the donor to evaluate that former employee’s own work, there is an inherent conflict of interest that taints the evaluation from its very beginning.  How unbiased can that former employee be?
  • How does one ensure true transparency in the M&E process?
    • Learning from failure:
      • Won’t a program implementer shown by M&E to be failing in certain aspects of the project worry about the risk of losing the project to a competitor?
      • Donors face budgetary pressure to fund successful programs, but M&E points out what does not work and what needs improvement.  If the M&E plan is carried out internally, by the implementer itself, conflicts arise between the program experts who want to apply the lessons learned (even if that means revising the program, readjusting it, or removing the parts that do not work) and the administrators who mostly watch the bottom line and do not want to see the program shrink at all.  One could argue the same conflict exists between donor and contractor.  See the tension?
  • How can you guarantee complete accuracy of the data being entered into a database?
    • Self-assessment via an implementer’s internal M&E process relies on the honesty, good faith, and accuracy of the employees providing the data and those entering the data.  However, when the donor is under immense pressure to produce results, the temptation to churn out information that may not be verifiable is real.
    • The same issues apply to third parties hired by the donor to gather the implementers’ data and produce charts and graphs that make beautiful infographics for future publications.  However, who monitors these third parties, who may be using flawed algorithms or erroneous Excel formulas?  A simple independent recomputation, as sketched below, can catch at least the arithmetic errors.
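
The following is a minimal sketch, in Python, of what such an independent recomputation check might look like.  The per-site records, field names, and figures are all hypothetical; the point is only that an aggregate reported by a third party can be recomputed from the raw rows rather than taken on faith.

```python
# Minimal sketch of an independent recomputation check.
# All names and numbers are hypothetical examples.
from dataclasses import dataclass

@dataclass
class SiteReport:
    site: str
    trainees_reached: int  # hypothetical indicator value reported by a site

def verify_aggregate(rows: list[SiteReport], reported_total: int) -> bool:
    """Recompute the total from raw rows and compare it to the reported figure."""
    recomputed = sum(r.trainees_reached for r in rows)
    if recomputed != reported_total:
        print(f"MISMATCH: reported {reported_total}, recomputed {recomputed}")
        return False
    print("OK: the reported aggregate matches the raw rows")
    return True

rows = [SiteReport("Kabul", 120), SiteReport("Herat", 85), SiteReport("Balkh", 40)]
verify_aggregate(rows, reported_total=260)  # raw rows sum to 245, so this flags a mismatch
```

The same idea extends to any derived figure: whoever consumes the chart or infographic should be able to regenerate its numbers from the underlying data.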

So, is M&E really that difficult to understand?  I have my own theory on why Rule of Law/Justice Sector projects are so hard to assess, but that is for another day.  Here is a list of methodologies and other resources so you can decide for yourself:


SIGAR’s Advice on Program Implementation

The Special Inspector General for Afghanistan Reconstruction (SIGAR) produced an audit of a $635 million Department of Defense (DOD) program in Afghanistan, the Task Force for Business and Stability Operations (TFBSO).  The audit yields some self-evident and interesting points:

Taking the following actions might improve such an entity’s ability to implement programming and achieve results:

• Define the entity’s mission, scope, and objectives in clear and measurable terms.
• Authorize the entity for longer than 1-year intervals to reduce uncertainty about its future and allow it time to plan ahead for its projects.
• Direct the entity to:
o Develop contract planning policies that emphasize the importance of understanding host-country or local dynamics and obtaining buy-in from all stakeholders before executing a project;
o Develop and implement action plans to minimize the award of noncompetitive and sole-source contracts;
o Develop and implement action plans to ensure that its staff has adequate training and experience in developing contract requirements and providing contract oversight;
o Work with a single primary contract administration office when developing performance work statements to ensure consistency in drafting requirements;
o Develop management systems to track project metrics, civilian travel, and government-furnished equipment;
o Develop and implement a document retention policy; and
o Develop monitoring, evaluation, and sustainment plans for all projects so that their economic impacts can be accurately measured and sustained, and if necessary, assets can be transferred to an enduring partner.

SIGAR mentions that DOD was given the opportunity to comment on the audit.  Something that struck me was SIGAR’s response to DOD’s comment, which, in my experience, gets at the crux of development aid or foreign assistance (emphasis in bold below is mine):

It is important to understand the difference between projects that met or partially met their contractual deliverables and projects that actually met or partially met their program objectives.  DOD is correct in observing that this report finds the contracts directly supporting 16 TFBSO projects generally met their contract deliverables and that contracts directly supporting 12 projects partially met their contract deliverables (or in one case, met them after significant delay). However, just because some TFBSO contractors met their contract deliverables in whole or in part does not necessarily mean that the projects they supported had successful or sustainable outcomes. For example, there are several documented cases where TFBSO contractors completed construction and equipment of a facility, but TFBSO was unable to locate a private company able to operate and maintain it, leading that facility to fall into a state of disuse or disrepair.  Furthermore, as we note in the report, because TFBSO did not consistently track outcomes data, such as the jobs created and government revenues generated by their projects, TFBSO was generally unable to demonstrate whether its projects met its overall objectives to “reduce violence, enhance stability, and support economic normalcy in Afghanistan.”

At the end of the day, what worries me is that many in government and the private sector voice these concerns; however, the solutions offered are seldom taken into account or, worse still, are quickly forgotten.  In my own experience, these concerns and suggestions have been made for decades.  I have a theory, which I will try to articulate later, as to why we seem to reinvent the wheel…


Resources on Monitoring & Evaluation (M&E)

Below are some resources that I have found helpful.  There are many documents, tools, papers, how-to suggestions, primers, etc., freely available to help beginners and experts alike.  I especially enjoy reading program evaluations and audits of program implementers that identify successes, inefficiencies, and failures.

In my own experience, I have found that one learns most from failures.  They may be hard pills to swallow, but, at the end of the day, failures make one more perspicacious.

INPROL’s Qualitative and Quantitative Approaches to Rule of Law Research

From USAID: The Monitoring and Evaluation Handbook for Business Environment Reform 

USAID’s Project Monitoring, Evaluation and Learning (MEL) Plan

USAID Evaluation Toolkit

World Bank’s Tools, Methods and Approaches 

UNDP’s Handbook on Planning, Monitoring and Evaluating for Development Results

Musings on Monitoring & Evaluation (M&E) of Justice Sector Reform Programs

Some U.S. Government agencies were late to understand the importance of M&E in determining the impact of foreign assistance programs.  In the last few years, I kept hearing that we needed to answer then-Secretary of State Hillary Clinton’s “so what?” question regarding how effective our international aid projects were.

Many multi-million-dollar programs had neither internal nor external M&E experts to provide guidance.  In Afghanistan, for example, the U.S. Embassy’s 2013 rule of law strategy failed to incorporate any performance measures.  (For an interesting report on the M&E problems at the time, I suggest reading the Special Inspector General for Afghanistan Reconstruction (SIGAR) audit.)

Through evaluation tools, M&E programs aim to demonstrate program impact.  This, in turn, provides feedback that guides implementation staff in enhancing future programming: by identifying planned and unplanned results, M&E allows donors, implementers, and host-country beneficiaries to understand what works and what does not, maximize efficiencies, and address issues before they become a problem or a cataclysmic risk.

In government contracts, the Statement of Work (SOW) may provide the indicators to be used.  Sometimes the implementer may develop a series of iterative evaluations as well, which might include a training evaluation and an audit, a trainee-satisfaction survey, a mentoring plan, and, depending on the program, a public outreach component.

Performance indicators may combine the Foreign Assistance Framework Indicators (F-Indicators) with customized indicators, the goal being to develop and use indicators that measure outputs and impact over the short, medium, and long term of the project.  A minimal sketch of how such an indicator might be represented follows.
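
The sketch below models a performance indicator as a small data structure.  The field names, the indicator code, and the targets are hypothetical illustrations, not any donor’s actual schema; the point is that an indicator pairs a definition and a type (output vs. outcome) with a baseline and a target for each time horizon.

```python
# Illustrative model of a performance indicator; all values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Indicator:
    code: str            # a standard F-indicator code or a custom identifier
    definition: str      # precisely what the indicator measures
    indicator_type: str  # "output" or "outcome"
    baseline: float      # value at the start of the project
    targets: dict[str, float] = field(default_factory=dict)  # horizon -> target

judges_trained = Indicator(
    code="CUSTOM-1",  # hypothetical custom indicator
    definition="Number of judges completing the case-management course",
    indicator_type="output",
    baseline=0,
    targets={"short": 50, "medium": 150, "long": 300},
)
print(judges_trained.targets["medium"])  # prints 150
```

Structuring indicators this way makes it straightforward to compare reported values against the target for each horizon as the project progresses.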

Of course, even the most perfect and all-encompassing M&E plan will not work unless donors, implementers, and beneficiaries alike take into account the critical risks inherent in, or arising from, the place of performance, and agree on some critical assumptions that, at the very least, encompass three contexts: political, security, and operational.

What I have learnt is that decision-makers and bureaucrats from both the government side and the corporate side make choices and issue “diktats” without having had the benefit of operating in the environment where the program is being carried out.  I never gave it much thought until I witnessed it first-hand.  Therefore, it is imperative that the “experts” who are hired to handle M&E issues understand that they may be dealing with people who have little or no knowledge of the hurdles the technical staff face day in and day out.

Sometimes the mere fact that electricity is not available or the internet connection does not work means that M&E data cannot be incorporated into a database.  A store-and-forward design, sketched below, is one common way around this.
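
Below is a minimal store-and-forward sketch in Python, assuming a hypothetical upload() callable that pushes one record to the central database.  Records captured in the field are queued in a local SQLite file and pushed only when connectivity returns, so a power cut or a dead connection loses nothing.

```python
# Store-and-forward sketch: queue records locally, sync when a connection exists.
# upload() is a hypothetical network call supplied by the caller.
import json
import sqlite3

def save_locally(db_path: str, record: dict) -> None:
    """Queue a record on local disk so it survives power or network loss."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS outbox (payload TEXT)")
    con.execute("INSERT INTO outbox (payload) VALUES (?)", (json.dumps(record),))
    con.commit()
    con.close()

def sync(db_path: str, upload) -> None:
    """Push queued records; keep anything that fails for the next attempt."""
    con = sqlite3.connect(db_path)
    for rowid, payload in con.execute("SELECT rowid, payload FROM outbox").fetchall():
        try:
            upload(json.loads(payload))  # assume upload() raises OSError when offline
        except OSError:
            break  # still offline; retry on the next sync
        con.execute("DELETE FROM outbox WHERE rowid = ?", (rowid,))
    con.commit()
    con.close()
```

A field officer would call save_locally() for each record as it is collected and run sync() whenever electricity and a connection are available.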

While I applaud the importance of M&E in program management, I see some problem areas:

  1. Who monitors and evaluates the authenticity and the accuracy of the M&E plan and its implementation in-house?  In other words, if I am the donor, would I fully trust the contractor or grantee to monitor and evaluate itself?
  2. If the donor hires a third party to do an independent M&E of a program, how comfortable can the donor and implementer be that the third party will do an unbiased and truly objective M&E assessment?  What are the chances that the M&E firm will have a former implementer employee evaluating the very same program that person put in place?

Rule of law programs are not immune from a myriad of conflicts of interest.  Who pays attention to these things?