
Is Your Advocacy Making a Difference?

If you run an advocacy campaign, you definitely have ideas about the ways it's making progress. You can see the stumbling blocks, and you may be able to point to dramatic and obvious success. If you are documenting and reflecting on all this, you are also engaging in evaluation and learning, even if you don't call it that! But along the way, chances are you have bumped up against these types of questions:

  • How do we know (and show) we’re making progress when the time horizon is long, and we don’t expect to achieve our ultimate outcome for years?

  • How should we think about campaign success when things don't go as planned?

And then there is the fundamental question of how to spend time and money. In the real world of resource constraints, should we spend valuable resources on data collection and analysis when that cuts into the time and money we could spend on the campaign itself?

Let’s take this last question first. The short answer is: yes! More to the point, this may be easier and more useful than you think. It is easier because you have an enormous advantage over any evaluator: you are your own best data source. To understand a campaign and its outcomes, evaluators will usually gather data by talking to the advocates—but you know your campaign inside and out, and can cut out the middle person by documenting your own tactics, strategies, and results. The question then becomes: why would it be useful to track your own data? One obvious answer relates to funding: your funders will want to know about the payoff of their investment in your work, and you can share information about your past work to gain future funding. But more importantly, you can use evaluation as a way to systematically reflect on your campaign and make decisions about strategy and tactics.

But what about the inherent challenges of evaluating advocacy?

The field has made a lot of progress over the past decade in thinking about how to approach advocacy evaluation. One of the most consistent pieces of advice is to focus on interim outcomes. Don't think of success as a yes/no proposition, where either a law passed or it didn't (or a bad law was blocked, or it wasn't). Instead, consider all of the advances made along the way, whether or not the law ultimately passed. Focusing on interim outcomes is helpful both because the time horizon for advocacy is often very long (think here of mitigating the climate crisis) and because things may not go as planned.

On June 20, you can join us for a webinar that will share approaches to identifying interim outcomes, using a framework from Julia Coffman of the Center for Evaluation Innovation. As we introduce this framework, we will discuss how it can help you choose good outcomes to track, and we'll talk about how to address some of the challenges of advocacy evaluation. On July 25, we will continue the discussion, sharing an approach to developing your own customized method for data tracking and analysis, based on interim outcomes derived from Coffman's framework. Our focus will be on utility, relevance, and reflection: we want to help you develop a customized approach that is useful not only for demonstrating results but also for your own internal learning.

This post is reprinted from the GrantSpace Blog.

As Learning for Action's director of research & evaluation, Nancy Latham supports all employees in further developing their technical skills. She coordinates staff trainings, creates toolkits and other resources, and champions systematic feedback loops for organizational learning. She is especially passionate about helping LFA staff develop their skills in research design and statistics, and has written a statistics textbook for internal use. As a senior fellow with the Center for Evaluation Innovation, Nancy created a toolkit for evaluating systems change initiatives.

Topics: Impact Measurement, Advocacy, Nonprofit Advocacy, In-Person Meetings