Founder’s Briefs: An occasional series where Mongabay founder Rhett Ayers Butler shares analysis, perspectives, and story summaries.
“We already knew that.”
I frequently hear from readers who complain that the findings in scientific papers are obvious or mere common sense. And yes, it’s true: science often confirms what we’ve long suspected or seen in practice. By its nature, science is slow and methodical. It seeks to verify, quantify, and understand patterns — often in complex, real-world systems where intuition can be misleading.
But that doesn’t mean we shouldn’t do it.
In fact, the apparent obviousness of a result doesn’t make the evidence any less important. In conservation science, where interventions often affect both ecosystems and human communities, assumptions can lead to ineffective, even harmful, strategies. Systematic evidence helps replace well-meaning guesswork with informed action.
That’s what we set out to explore with Mongabay’s Conservation Effectiveness series several years ago. We wanted to know: What does the science actually say about what works in conservation? To find out, our team dove into six widely used strategies: forest certification, payments for ecosystem services, community-based forest management, terrestrial protected areas, marine protected areas, and environmental advocacy.
These approaches are common in the global conservation toolbox. They’re often portrayed as proven solutions. But our investigation revealed that for many strategies, the evidence base was surprisingly thin. Many of the studies we reviewed lacked the rigor needed to establish causation, offering correlations rather than evidence that the strategy itself produced the observed environmental or social outcome. Some strategies hadn’t been studied much at all.
That doesn’t mean these tools don’t work. It just means we don’t always know for sure how well they work, under what conditions, or why. In a field where resources are scarce and the stakes are high, that uncertainty matters.
Conservation doesn’t happen in a lab; practitioners often rely on local knowledge, trial and error, or strategies that attract funding or political support. These, too, are part of the evidence landscape: observations, anecdotes, and lived experience all have value. But just as we expect medical treatments to be backed by research, we should expect conservation strategies to be informed by the best available science.
Since we published the series, the field has moved forward. Initiatives like the Conservation Evidence Project at the University of Cambridge, U.K., are helping build a stronger, more accessible evidence base. But researchers still note a gap between science and practice. Too many conservation decisions are made without consulting the research, or without monitoring outcomes at all.
And while success stories make for compelling headlines and glossy reports, failures are too often ignored. Yet learning what doesn’t work is just as essential to improving outcomes. Without that learning, we risk repeating the same mistakes — or mistaking “common sense” for effectiveness.
In conservation, the obvious still deserves to be tested. Because lives, livelihoods, and entire ecosystems depend on getting it right.
Banner image: Rhett A. Butler/Mongabay.