3 Ways to Measure What Really Matters (Without Overwhelming Your Team)

Ever feel like your team is reporting more than it’s learning? You’re not alone.

Sheryl Foster

7/25/2025 · 3 min read

In a lot of nonprofits, data collection turns into a ritual. Dashboards get built, numbers get dropped into reports, and no one looks at them again unless there’s a deadline breathing down their neck.

I’ll admit it. I like looking at data. But most nonprofit staff are juggling enough already. They're raising money, running programs, putting out fires, and holding everything together. They can’t be pulled into data collection every week or handed another spreadsheet with vague instructions and a smile.

Evaluation has to earn its place. If it doesn’t help people do their jobs better, it’s just noise.

When done right, though, evaluation is sharp. It clarifies, guides, and tells you what’s actually working, not just what looks good on paper.

In their book Engine of Impact, Meehan and Jonker argue that the problem isn’t a lack of tools. We’ve got plenty. There’s an “ever-expanding ecosystem” of methods, frameworks, and fancy evaluations waiting to be used. The problem? Most nonprofits don’t touch them.

Here’s how to make evaluation useful without burning out your team.

1. Measure what drives decisions, not what’s easiest to count.

Meehan and Jonker put it plainly: nonprofits often “measure what they can count, instead of what counts.” Think of the old joke where someone’s looking for their lost keys under a streetlight. Not because they dropped them there, but because it’s where the light is.

That’s what it looks like when we collect data just because it’s accessible.

Start by asking: What are we trying to decide? What would change if we had better information? If you can’t answer that, hold off on measuring anything.

For example, if you're testing a new outreach strategy, don't just count how many emails you sent. Track who opened them, who took action, and which messages landed. Then use that to improve what’s next.

2. Involve the people doing the work.

Top-down evaluation misses things. The folks on the front lines know what’s really happening. They also know what’s worth measuring, but they rarely get asked.

Too often, evaluation feels like a report card. Worse, a report card someone else designed. Meehan and Jonker note this is structural: funders rarely pay for evaluation, and leaders often avoid it altogether. In one survey, only 9% of nonprofit leaders said their funders “often” or “always” cover the cost.

Ask your team what success looks like in their world. Then figure out how to track it, simply and honestly.

One youth development team ditched basic attendance numbers and tracked something else: whether participants came back and stayed engaged over time. The numbers were less tidy, but far more useful.

3. Track progress, not just outcomes.

Not every win shows up in the final numbers. Meehan and Jonker borrow a quote often pinned to Einstein: “Not everything that counts can be counted, and not everything that can be counted counts.”

They still push organizations to quantify where they can, but with eyes wide open. The process of translating insight into data sharpens your understanding, even if it’s imperfect.

Look for early signals. What behaviors are shifting? What’s happening between “we launched this” and “it worked”? Rigid outcome-only thinking often misses the real movement, or early warning signs.

Helen Keller International, for example, uses one deceptively simple question to evaluate its strategy:
“Are we doing the right thing, for the right people, in the right place, at the right time, and in the right way?”
It’s not a metric in a dashboard. But it’s the kind of question that leads to better metrics.

How to keep this from getting overwhelming

Meehan and Jonker describe evaluation as part of a strategic cycle:
Mission → Theory of Change → Strategy → Evaluation → Learning → Refined Strategy.

If your plan hasn’t changed in three years, something’s off. Either your evaluation isn’t working, or your team’s ignoring it.

So, start small. Pick three to six metrics:
One (or two) that tracks real strategic movement.
One (or two) that’s actually helpful to your staff.
One (or two) that tells your story clearly to the outside world.

Use them in meetings. Revisit them every quarter. Drop them when they stop being useful.

And yes, be prepared for discomfort. As GiveWell’s Elie Hassenfeld once said,
“If you’re hearing some bad news, that’s a good thing. If you’re only hearing good news, that’s a bad thing.”

Let’s talk.

What’s one metric your team stopped measuring and never missed?
Have you found a simple way to measure something “fuzzy”?