The Problem With Best Practices in Nonprofits

Organizations chase “best practices” because the phrase promises simplicity, legitimacy, and speed. Copy it, roll it out, and success should follow. In nonprofit work, we often see the opposite: what worked somewhere else fails here.

Sheryl Foster

1/27/2026 · 4 min read

At heart, a best practice is something that worked somewhere and is assumed to work again. In formal terms, it has clear goals, measurable progress, and proof that it can be copied. That’s the theory. Reality is messier.

Implementation studies have shown that context drives outcomes, not just the idea itself (Damschroder et al., 2009; Nilsen & Bernhardsson, 2019). Culture, leadership, communication, resources, and internal politics decide whether a practice lands or flops.

In practical terms, a practice is welded to the environment where it was built and tested. A health care intervention that thrives in a well-funded urban clinic can sputter in a rural setting with thin staffing and different norms (Greenhalgh et al., 2004). Same method, different results.

Nonprofits are not special in this regard, but they are exposed. Their communities, funding cycles, capacity, and stakeholder demands vary widely, making copying riskier than most leaders admit.

If copying doesn’t work, what does? Fit.

Across implementation research, “fit” means alignment between a practice and the conditions where it will live (Chambers et al., 2013). Education scholars define it as how well an intervention matches the values, needs, skills, and resources of its setting (Horner et al., 2004). Different fields, same warning.

In nonprofit terms, a good strategy fits your mission, your people, your limits, and your community. Without that alignment, even a “proven” practice is brittle. It looks solid until you lean on it.

Copying becomes gambling when you skip diagnosis. The line between smart adaptation and blind imitation is whether someone took the time to understand what they were really adopting.

Start with the problem the practice actually solved in its original home. Not the glossy case study version, but the working reality. Was it built to boost efficiency, placate funders, reduce risk, improve outcomes, or calm internal politics? Many practices get sold as universal when they were built for one narrow pressure in one specific place (Ansari et al., 2010). If you are not facing that same pressure, you may be solving the wrong problem.

Next, surface the assumptions baked into the practice. Every “best practice” carries quiet expectations about staffing, leadership, authority, data systems, and culture. Some presume low turnover and high trust. Others depend on tight compliance or expensive technology. When those assumptions go unchecked, leaders blame execution when the real issue is that the practice belongs to a different organizational ecosystem (Nilsen & Bernhardsson, 2019).

Then ask what breaks if you try it here. This question makes most teams squirm because it drags tradeoffs into daylight. What gets displaced? What gets drained: time, attention, political capital? Where will it rub against how your organization really works, not how it says it works? A strategy that looks neat on paper can unravel fast when it hits informal power, legacy processes, or fragile relationships (Weick & Sutcliffe, 2015).

These questions do not slow strategy. They save it from false confidence. They turn adoption into design and borrowed ideas into something that actually belongs to the organization, instead of sitting there like a badly tailored suit.

Nilsen and Bernhardsson (2019) show that organizations must account for their own financial, social, and relational environments when implementing evidence-based practices. When they do not, fidelity and outcomes slide. A practice can be technically sound and still useless somewhere else.

There is another risk in “proven” solutions. They create a false sense of safety.

Once a practice gets stamped “best,” people stop poking at it. The label suggests that the hard thinking happened somewhere else, by someone smarter, richer, or more successful. That suggestion quietly changes behavior.

Teams start to feel less responsible for results because the decision feels outsourced. If the idea came from a peer organization, a funder, or a glossy report, failure becomes an execution problem, not a design problem. The question shifts from “Is this right for us?” to “Why did we mess it up?” That shift is small but costly.

The “best” label also breeds overconfidence. Teams assume they do not need to adapt because the practice is already proven. What is usually proven, as Ansari et al. (2010) remind us, is that something worked somewhere, under particular conditions, with specific people and systems. Preserve the form and ignore the function, and you lose what made it work in the first place.

Most quietly, best practices dull curiosity. Once a solution is pre-approved, teams stop asking whether it fits their context, community, or capacity. They spend more time defending the choice than questioning it. Strategy looks safer but grows weaker.

“Best practices” shape not only what organizations do, but how they think. When thinking stops, strategy does not get safer. It gets thinner.

What feels like security often turns out to be complacency. A solution that is not adapted to the local context will often be silently decoupled, written into plans but absent from daily work, a pattern Meyer and Rowan (1977) described decades ago.

In nonprofits, decoupling shows up as mandated practices that never take root, or programs that live in reports but not in operations. Leaders see progress on paper while real change stalls.

So if copying logic fails, what should we borrow instead?

Study how organizations made decisions, not just what they decided. Look at the tradeoffs they accepted. Ask what failed before it worked and what they stopped doing once they learned better. Those lessons travel far better than templates.

More importantly, design resources as playbooks, not recipes. Playbooks preserve flexibility and offer principles that teams can adapt to their own limits and conditions (Snowden & Boone, 2007). They respect differences instead of flattening them.

After all, no organization perfectly represents another, even when missions overlap.

The goal is not to find the “best practice.” It is to design the right practice for your mission, in your context, with your people. That takes judgment and a bit of nerve. It also beats copying someone else’s homework.