Most of us have seen it. The grant application has a logframe. The logframe has indicators. The indicators have targets. Box ticked.
Then the program begins, data starts coming in, and nobody is quite sure what to do with it. The monthly collection happens, the numbers land in a spreadsheet, and three months later someone exports that spreadsheet to produce a donor report. The indicators are filled in. The targets are compared. A traffic light system might even be applied.
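In code terms, that traffic light is rarely more than a threshold comparison. A minimal sketch of the usual logic, with illustrative cut-offs (the 95% and 80% thresholds here are assumptions, not a standard):

```python
def rag_status(actual: float, target: float) -> str:
    """Classify indicator performance as Red/Amber/Green.

    Thresholds are illustrative: at least 95% of target is Green,
    at least 80% is Amber, anything below is Red.
    """
    if target <= 0:
        raise ValueError("target must be positive")
    ratio = actual / target
    if ratio >= 0.95:
        return "Green"
    if ratio >= 0.80:
        return "Amber"
    return "Red"
```

Note how little this tells you on its own: a colour, with no owner, no meeting, and no decision attached.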
But the question nobody is asking is this: has this data changed anything we are doing?
That gap, between collecting indicators and actually using them, is where most monitoring systems live permanently. The structure is there. The habit of use is not.
What a monitoring framework actually is
A monitoring framework is not a table of indicators. It is a system. It has three moving parts that most programs skip.
The first is a feedback loop. Data comes in, gets reviewed, and the review produces a decision. Not a report. A decision. Something changes, or someone confirms that nothing needs to change. Either way, the review has a conclusion.
The second is a named audience. Someone specific receives the data and is accountable for doing something with it. Not "the M&E team." A named person in a named role, with a named meeting where the data is discussed.
The third is a cadence that allows course correction. The review happens at regular intervals short enough to respond to problems before they compound. A quarterly review of monthly data is almost useless for program management. By the time you are looking at it, three months of the problem have already happened.
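The arithmetic behind that claim is simple. A rough sketch of worst-case response lag, under the simplifying assumption that a problem can begin the day after a collection round:

```python
def worst_case_response_lag(collection_interval_months: float,
                            review_interval_months: float) -> float:
    """Worst-case months between a problem starting and anyone being
    in a position to act on it.

    A problem that begins just after one collection round can wait up
    to a full collection interval to appear in the data, then up to a
    full review interval before that data is looked at.
    """
    return collection_interval_months + review_interval_months

# Monthly data, quarterly review: up to four months before action.
print(worst_case_response_lag(1, 3))  # 4.0 is not printed; prints 4
# Monthly data, monthly review: up to two months.
print(worst_case_response_lag(1, 1))  # 2
```

The review interval, not the collection interval, is usually the term worth shrinking.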
"Data should not remain static in reports. The review is not the endpoint. The decision is."
Why this matters in practice
We have worked with programs where indicators were being collected correctly, targets were being met on paper, and the program was quietly failing. The indicators were measuring the right things. The system for responding to what they showed did not exist.
One specific example: a nutrition program tracking the number of caregivers attending training sessions. Attendance was consistently above target, and coverage looked good in every report submitted. What the data never triggered was a review of the geographic distribution of that attendance. It turned out that most of the caregivers attending came from two wards out of nine, and those two wards happened to be where the field staff were based. Seven wards were nearly untouched.
The indicator was not wrong. The monitoring system was not asking the right questions of it. Nobody had agreed on who was responsible for reviewing geographic spread, or when, or what they would do if it looked uneven. So it never got reviewed.
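The check that never happened would have been a few lines of code once someone owned it. A minimal sketch, assuming attendance records carry a ward field (the field name and the 5% threshold are hypothetical):

```python
from collections import Counter

def underserved_wards(records: list[dict],
                      all_wards: list[str],
                      min_share: float = 0.05) -> list[str]:
    """Return wards whose share of total attendance falls below
    min_share, including wards with no attendance at all.

    Each record is an attendance row with a "ward" key. The 5%
    threshold is illustrative, not a standard.
    """
    counts = Counter(r["ward"] for r in records)
    total = sum(counts.values())
    if total == 0:
        return list(all_wards)
    return [w for w in all_wards if counts.get(w, 0) / total < min_share]
```

Run against the nutrition program's data, a check like this would likely have flagged the seven neglected wards in month one. The code was never the hard part; the missing pieces were an owner and a standing question.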
The thing programs skip when building monitoring systems
There is a tendency to invest heavily in data collection tools, indicator refinement, and reporting templates, while assuming that the review process will figure itself out once the data starts coming in. It rarely does.
Review processes need to be designed deliberately. Who attends the data review meeting? What are they looking at, exactly, and in what format? What questions are they expected to answer by the end? Who is responsible for following up on the action points before the next review? These are not logistical details. They are the mechanism by which monitoring translates into management.
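One way to force those answers is to write them down as structure before the first data round, so an unanswered question is visible rather than implicit. A sketch, with hypothetical field names:

```python
from dataclasses import dataclass

@dataclass
class ReviewMeeting:
    """A deliberately designed data review, captured as data itself.

    Every field answers one of the design questions above; an empty
    field is a question that has not been answered yet.
    """
    name: str                       # e.g. "Monthly program data review"
    cadence: str                    # e.g. "monthly"
    attendees: list[str]            # named roles, not "the M&E team"
    inputs: list[str]               # exactly what they look at, in what format
    questions_to_answer: list[str]  # what must be decided by the end
    action_owner: str               # who follows up before the next review

    def unanswered(self) -> list[str]:
        """List the design questions that still have no answer."""
        return [name for name, value in vars(self).items() if not value]
```

If unanswered() returns anything, data collection is about to start ahead of the process meant to use it.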
If those questions have not been answered before data collection starts, the data will be collected but not used. This is not a technology problem. It is not solved by a better dashboard or a more comprehensive indicator list. It is a governance and process problem.
What to fix first
If you are building or reviewing a monitoring system, start with this question before anything else: what decision will this data inform, and who will make it?
If the answer is "we will include it in the report," that is not a decision. That is filing.
Build the review meeting first. Decide who attends, what they look at, and what authority they have to change something. Then build the data collection system around that meeting's needs. Most programs do it the other way around, and the data starts arriving before any system exists to use it.
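The same rule can be put in executable form: an indicator with no decision and no decision-maker attached is filing, and it should fail loudly. A hypothetical sketch of that audit (the key names are assumptions):

```python
def filing_only(indicators: dict[str, dict]) -> list[str]:
    """Return the indicators that feed no decision: collected for
    filing, not for management.

    Each value is expected to carry "decision" and "decision_maker"
    keys; the shape is hypothetical.
    """
    return [name for name, meta in indicators.items()
            if not meta.get("decision") or not meta.get("decision_maker")]

indicators = {
    "caregivers_trained": {
        "decision": "Reallocate field staff if ward coverage is uneven",
        "decision_maker": "Program manager",
    },
    "sessions_delivered": {
        "decision": "",  # "we will include it in the report"
        "decision_maker": "",
    },
}
print(filing_only(indicators))  # ['sessions_delivered']
```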
Indicators are inputs. The monitoring framework is the machine that turns them into something useful. You can have all the inputs in the world and still produce nothing if the machine is not running.