Of Interest: SIGAR's Audit Report on the PROMOTE Program in Afghanistan.

The name of the audit says it all, don’t you think?

“Promoting Gender Equity in National Priority Programs (Promote):  USAID Needs to Assess This $216 Million Program’s Achievements and the Afghan Government’s Ability to Sustain Them”

There was so much excitement at the time of the program's announcement by the then USAID Administrator at the U.S. Institute of Peace (USIP), which I remember well.  It was a grandiose $216 million project, with the expectation that other international donors would contribute an additional $200 million.  SIGAR makes three recommendations:

1. Conduct an overall assessment of Promote and use the results to adjust the program and measure future program performance….

2. Provide written guidance and training to contracting officer’s representatives on maintaining records in a consistent, accurate manner. …

3. Conduct a new sustainability analysis for the program.

Of SIGAR's recommendations to USAID above, I find #2 quite sad because, in my experience, record-keeping has deteriorated to the point of oblivion.  Institutional knowledge has waned in many organizations, whether they belong to the private sector or the public one.

Anyone involved in government contracting work ought to read this audit, because it highlights some major flaws in how we run (or fail to run!) multi-million-dollar taxpayer-funded programs.

You can read more about the audit at the Stars and Stripes.

Program Implementation – The Alphabet Soup of M&E: PDIA, HICD, MM, SCBM, etc.

A friend of mine recently commented on Monitoring & Evaluation (M&E) processes, which made me ponder why the average person finds them baffling, no matter how many years of experience and education that person may have.

I discovered that many proposal evaluators get confused when reading the proposed M&E section and will acknowledge without compunction that they just could not quite follow what the organization writing the M&E plan was actually proposing.  I have also witnessed intelligent individuals turn glassy-eyed at hearing about the development of an M&E work plan that includes outputs vs. outcomes, inputs vs. indicators, activities vs. results, and the concept of an "iterative adaptation".  (To take one common stumbling block: the number of judges trained is an output; a measurable reduction in case backlog is an outcome.)

Below I share some of the M&E resources that I found helpful in trying to understand what different donors had in mind when referring to the elusive "monitoring for results" in capacity-building projects.  However, I have yet to find answers to my concerns about conflicts of interest and other problems in M&E and program implementation:

  • Who are the evaluators?
    • Evaluating the competition:  There is an inherent conflict of interest when the evaluators hired to do M&E work on an implementing entity are themselves competitors in the contracting/grant implementation world.  This situation places the implementer in a very vulnerable position, as the competitor/evaluator gains access to proprietary information.
    • Evaluating a former employer:
      • When a disgruntled or aggrieved former employee is hired by the donor to evaluate the former employer's work, and the donor is aware of that former employee's complaints and grievances, the integrity and objectivity of the evaluation are in peril.
      • When a former employee is knowingly hired by the donor to evaluate that former employee’s own work, there is an inherent conflict of interest that taints the evaluation from its very beginning.  How unbiased can that former employee be?
  • How does one ensure true transparency in the M&E process?
    • Learning from failure:
      • If the M&E shows that an implementer is failing in certain aspects of the project, won't that implementer worry about the potential risk of losing the project to a competitor?
      • Donors face budgetary pressures to work on successful programs, but M&E points out what does not work and what needs improvement.  If the M&E plan is done internally, by the implementer itself, there are conflicts between the program experts who want to apply the M&E's lessons learned (even if that means revising the program, readjusting it, or removing the parts that don't work) and the administrators who mostly pay attention to the bottom line and do not want to see the program shrink at all.  One could argue the same conflicts exist between donor and contractor.  See the tension?
  • How can you guarantee complete accuracy of the data being entered into a database?
    • Self-assessment via an implementer's internal M&E process relies on the honesty, good faith, and accuracy of the employees providing the data and of those entering it.  However, when the donor is under immense pressure to produce results, the temptation to churn out information that may not be verifiable is real.
    • The same issues apply to third parties hired by the donor to gather the implementers' data and produce charts and graphs that make beautiful infographics for future publications.  But who monitors these third parties, who may be using flawed algorithms or erroneous Excel spreadsheet formulas?  (Some of these errors are at least mechanically detectable; see the sketch after this list.)
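
None of these accuracy problems can be solved by software alone, but the "flawed algorithms" variety is at least partly detectable by machine.  Below is a minimal sketch, in Python, of the kind of automated consistency checks a donor (or anyone auditing a third party's database) might run over reported indicator data.  Every field name and value in it is hypothetical, invented purely for illustration.

```python
# A hypothetical consistency check over reported M&E data.  The field
# names ("site", "trained_total", etc.) are invented for illustration.

records = [
    {"site": "Site A", "trained_total": 120, "trained_women": 70, "trained_men": 50},
    {"site": "Site B", "trained_total": 90,  "trained_women": 65, "trained_men": 30},  # 65 + 30 != 90
    {"site": "Site C", "trained_total": -5,  "trained_women": 0,  "trained_men": 0},   # negative count
]

def validate(record):
    """Return a list of problems found in one reported record."""
    problems = []
    # Disaggregated counts should add up to the reported total.
    if record["trained_women"] + record["trained_men"] != record["trained_total"]:
        problems.append("disaggregated counts do not sum to the reported total")
    # Counts of people can never be negative.
    if any(record[key] < 0 for key in ("trained_total", "trained_women", "trained_men")):
        problems.append("negative count reported")
    return problems

for rec in records:
    for problem in validate(rec):
        print(f"{rec['site']}: {problem}")
```

Checks like these establish only internal consistency, not truth: they cannot tell an honest reporting error from fabricated-but-tidy data, so the honesty and conflict-of-interest questions above remain.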

So, is M&E really that difficult to understand?  I have my own theory on why Rule of Law/Justice Sector projects are so hard to assess, but that is for another day.  Here is a list of methodologies and other resources so you can decide for yourself:


Resources on Monitoring & Evaluation (M&E)

Below are some resources that I have found helpful.  There are many documents, tools, papers, how-to suggestions, primers, etc., freely available to help beginners and experts alike.  I especially enjoy reading program evaluations and audits of program implementers that identify successes, inefficiencies, and failures.

In my own experience, I have found that one learns most from failures.  They may be a hard pill to swallow, but, at the end of the day, failures make one more perspicacious.

INPROL's Qualitative and Quantitative Approaches to Rule of Law Research

From USAID: The Monitoring and Evaluation Handbook for Business Environment Reform 

USAID’s Project Monitoring, Evaluation and Learning (MEL) Plan

USAID Evaluation Toolkit

World Bank’s Tools, Methods and Approaches 

UNDP's Handbook on Planning, Monitoring and Evaluating for Development Results

Musings on Monitoring & Evaluation (M&E) of Justice Sector Reform Programs

Some U.S. Government agencies were late in understanding the importance of M&E for determining the impact that foreign assistance programs were having.  In the last few years, I kept hearing that we needed to answer then-Secretary of State Hillary Clinton's "so what?" question regarding how effective our international aid projects were.

Many multi-million-dollar programs had no internal or external M&E experts to provide guidance.  In Afghanistan, for example, the U.S. Embassy's 2013 rule of law strategy failed to incorporate any performance measures.  (For an interesting report on the M&E problems at the time, I suggest you read the Special Inspector General for Afghanistan Reconstruction (SIGAR) audit.)

Through evaluation tools, M&E programs aim to demonstrate program impact.  This, in turn, provides feedback that guides program implementation staff in enhancing future programming.  By identifying planned and unplanned results, M&E allows donors, implementers, and host-country beneficiaries to understand what works and what does not, maximize efficiencies, and address issues before they become a problem or a cataclysmic risk.

In government contracts, the Statement of Work (SOW) may provide the indicators to be used.  Sometimes the implementer may develop a series of iterative evaluations as well, which might include a training evaluation and an audit, a trainee-satisfaction survey, a mentoring plan, and, depending on the program, a public outreach component.

Performance indicators may combine the Foreign Assistance Framework Indicators (F-Indicators) with customized indicators, the goal being to develop and use indicators that measure outputs and impact over the short, medium, and long term of the project.
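
To make that concrete, here is a minimal sketch, in Python, of how such a mixed indicator set might be structured.  The indicator names, baselines, and targets are hypothetical and purely illustrative; real F-Indicator definitions would come from the Foreign Assistance Framework, and real targets from the SOW and the approved M&E plan.

```python
# A sketch of a mixed indicator set.  All names, baselines, and targets
# below are hypothetical and purely illustrative.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    source: str           # "F-Indicator" or "custom"
    baseline: float
    target_short: float   # e.g., end of year 1
    target_medium: float  # e.g., end of year 3
    target_long: float    # e.g., end of project

indicators = [
    Indicator("Justice sector personnel trained", "F-Indicator",
              baseline=0, target_short=200, target_medium=600, target_long=1000),
    Indicator("Average months to resolve a civil case", "custom",
              baseline=18, target_short=16, target_medium=12, target_long=9),
]

for ind in indicators:
    print(f"{ind.source:>11} | {ind.name}: baseline {ind.baseline}, "
          f"targets {ind.target_short} / {ind.target_medium} / {ind.target_long}")
```

A structure like this makes it easy to compare reported actuals against each horizon's target, which is where the "so what?" question ultimately gets answered.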

Of course, even the most perfect and all-encompassing M&E plan will not work unless donors, implementers, and beneficiaries alike take into account the critical risks inherent in, or arising from, the place of performance, and agree on some critical assumptions that, at the very least, cover three contexts: political, security, and operational.

What I have learned is that decision-makers and bureaucrats on both the government side and the corporate side make choices and issue "diktats" without ever having operated in the environment where the program is being carried out.  I never gave it much thought until I witnessed it first-hand.  It is therefore imperative that the "experts" hired to handle M&E issues understand that they may be dealing with people who have little or no knowledge of the hurdles the technical staff face day in and day out.

Sometimes the mere fact that electricity is not available or the internet connection does not work may mean that M&E data cannot be entered into a database.

While I applaud the importance of M&E in program management, I see some problem areas:

  1. Who monitors and evaluates the authenticity and accuracy of the M&E plan and its in-house implementation?  In other words, if I am the donor, would I fully trust the contractor or grantee to monitor and evaluate itself?
  2. If the donor hires a third party to do an independent M&E of a program, how comfortable can the donor and the implementer be that the third party will do an unbiased and truly objective M&E assessment?  What are the chances that the M&E firm will have a former implementer employee evaluating the very program that person put in place?

Rule of law programs are not immune from a myriad of conflicts of interest.  Who pays attention to these things?