GuideStar Blog

Measuring Nonprofit Impact, Part IV: Philanthropedia's Survey Process

We hope our previous articles (see the links at the end of this post) have been helpful in explaining the value of expert reviews of nonprofits. Now we'd like to tell you more about Philanthropedia's actual survey process: what questions we ask and how we conduct our research.

Our survey is a two-part process that in total takes experts about 40 minutes to complete. In the first survey, we ask experts to recommend high-impact nonprofits. To frame the survey, we start by defining the cause we're asking experts to consider. We explain the scope of the research, which kinds of nonprofits we aim to include, and which we aim to exclude from consideration. We don't, however, provide experts with a list of nonprofits from which to choose; rather, we allow them to recommend any that they think are having the best outcomes. We're not satisfied with simple recommendations, though. We ask experts to justify their recommendations with real evidence.

Here's the exact language we use when asking for the nonprofit recommendations; this example is for workforce development nonprofits in Minnesota. We ask this series of questions four times, allowing experts to recommend up to four nonprofits:

Please recommend a workforce development nonprofit that you think has had high impact in Minnesota in the last few years. We define a high-impact nonprofit as one that produces lasting improvements that address the core problems in a particular social cause. We'd like you to recommend nonprofits based on their impact rather than their other organizational strengths such as leadership, staff, marketing, operations, and finances.

  1. Based on what IMPACT are you making this recommendation? Please be as specific as you can.
  2. In addition to impact, what are other strengths of this organization (for example: leadership, staff, marketing, operations, finances)? In a few sentences, please share 2 or more strengths of this nonprofit.
  3. Even high-impact nonprofits could further improve. In a few sentences, please share 2 or more areas for improvement for this nonprofit.

Note: experts are not allowed to recommend the organizations for which they work.

We recognize that impact is a backward-looking measure, based on past performance, so this question is biased toward older, more established nonprofits. To identify young and promising start-up organizations, we ask a separate question inviting experts to recommend up to two start-up nonprofits they consider promising or innovative and to explain why.

Once we collect this information, we analyze the results. We find that no matter the cause, the results look the same: a small number of nonprofits (10-20) are mentioned many times, while a much larger number (100-200) are mentioned just once or twice.
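
For readers who like to see the mechanics, here is a minimal sketch of how that first-survey tally could be computed. The names and data are hypothetical, not Philanthropedia's actual code or results; it simply counts how often each nonprofit is mentioned.

```python
from collections import Counter

def tally_first_survey(recommendations):
    """Count how many times each nonprofit was recommended.

    `recommendations` is a flat list of nonprofit names, one entry per
    recommendation (each expert may contribute up to four).
    """
    mentions = Counter(recommendations)
    # Sort so the most-mentioned nonprofits come first.
    return mentions.most_common()

# Hypothetical input: three experts recommending a handful of nonprofits.
survey_one = [
    "Nonprofit A", "Nonprofit B",                  # expert 1
    "Nonprofit A", "Nonprofit C",                  # expert 2
    "Nonprofit A", "Nonprofit B", "Nonprofit D",   # expert 3
]
print(tally_first_survey(survey_one))
# [('Nonprofit A', 3), ('Nonprofit B', 2), ('Nonprofit C', 1), ('Nonprofit D', 1)]
```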

We then run a second survey where we share the top-recommended nonprofits (the 10-20) and ask the participating experts to what extent they agree or disagree with their colleagues' recommendations. We also give experts the opportunity to comment on the nonprofits in the list that they may not have recommended in the first survey. We take this extra step because we want to be sure to identify any outliers or responses that may be distorting the results. By re-vetting the list with the experts, we feel much more confident that most of the expert group members agree with their colleagues' assessment. And in some cases, we remove a top nonprofit from the list because there is not enough agreement from the crowd.

At the end of this process, we produce a ranking that combines the responses from both surveys. A vote in the first survey is equal to one "thumbs-up." An "I agree" vote in the second survey also equals a "thumbs-up," and an "I disagree" vote in the second survey equals a "thumbs-down." So what we end up with is an "up" and "down" count, the difference of which determines the ranking. (If you go into any organization's profile on our site, you will see this thumbs-up and thumbs-down count.)
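
To make that arithmetic concrete, here is a minimal sketch of the scoring rule as described above, assuming simple vote counts as inputs; the nonprofit names and numbers are made up for illustration.

```python
def score(first_survey_votes, agree_votes, disagree_votes):
    """Combine both surveys into thumbs-up and thumbs-down counts.

    A first-survey recommendation and a second-survey "I agree" each
    count as a thumbs-up; an "I disagree" counts as a thumbs-down.
    """
    thumbs_up = first_survey_votes + agree_votes
    thumbs_down = disagree_votes
    return thumbs_up, thumbs_down, thumbs_up - thumbs_down

# Hypothetical nonprofits with made-up vote counts.
results = {
    "Nonprofit A": score(first_survey_votes=12, agree_votes=8, disagree_votes=1),
    "Nonprofit B": score(first_survey_votes=9, agree_votes=10, disagree_votes=4),
}

# Rank by the up-minus-down difference, highest first.
for name, (up, down, net) in sorted(results.items(), key=lambda kv: kv[1][2], reverse=True):
    print(f"{name}: {up} up, {down} down (net {net})")
```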

We then share all of the comments we have collected for all top nonprofits (the 10-20) and all of the comments we collected for all of the other reviewed organizations (the 100-200 others) on our site so donors can review everything.

In order to protect the privacy of our experts, we don't link an expert's name with a specific comment. We do, however, code comments by expert type so donors can sort comments to see only those from funders or only those from researchers. Because we feel it's important for donors to know who participated in our research, we separately publish the names and biographies of every expert who took part.

What's really neat about this methodology is that it allows donors to get a view across an entire sector—not just learn about one or two nonprofits in isolation. That means you really have the chance to learn about how specific nonprofits are doing relative to one another.

Read More from this Series

Read Part I, Spotlight on Philanthropedia

Read Part II, The Value of Experts

Read Part III, A Deeper Look at Philanthropedia's Experts


Erinn Andrews, Philanthropedia
© 2011, Philanthropedia

Erinn has been the chief operating officer of Philanthropedia since the nonprofit's inception in June 2009. She was primarily responsible for developing and scaling Philanthropedia's methodology and conducting the social cause research. As a new member of the GuideStar family, Erinn, now director of data, research, and partner relationships, will continue to oversee Philanthropedia's research but is assuming new responsibilities within GuideStar as well.

Topics: Impact