
GuideStar Blog

Follow-up to Webinar Discussion with Sacha Litman of Measuring Success

 

Below is a follow-up to the questions submitted by participants during the September 29 GuideStar-hosted webinar, "The Seven Steps for Data-Driven Decision-Making." To view a recording of the presentation, please click here.

Q: How do you adjust for discrepancies between what people say on polls and what they do?

This is an excellent question. There are two prevailing theories as to the reason for discrepancies between what people say and what they do:

  1. People honestly answer surveys according to their expressed needs (what they think they need), but they are not conscious of their “latent needs” that emerge differently in their behavior; and
  2. People answer surveys according to how they think the surveyor wants them to answer, because people act differently when they are being observed or their responses are being monitored.

There are many practical examples of situation 2. In Freakonomics, Steven D. Levitt and Stephen J. Dubner point out that researchers noticed a marked difference between how often people said they washed their hands after using a public restroom and how often they actually did when monitored by hidden camera. The lesson: don't use surveys to collect objective information; get that from other sources.

For attitudinal information, it is critical to do everything possible to assure respondents that their answers are confidential. This is why, for many of our clients, we serve as the independent third party collecting the survey data, so that the client never sees the individual responses. We also encourage clients to create a "covenant" with participants promising that their data will never be examined individually, only in the aggregate with other responses.
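The aggregate-only "covenant" described above can also be enforced mechanically. Below is a minimal illustrative sketch (the field names, scores, and five-response threshold are all hypothetical) of reporting a group average only when the group is large enough that no individual response can be singled out:

```python
from statistics import mean

# Hypothetical raw responses, held only by the independent third party
responses = [
    {"department": "programs", "score": 8},
    {"department": "programs", "score": 6},
    {"department": "programs", "score": 7},
    {"department": "programs", "score": 9},
    {"department": "programs", "score": 7},
    {"department": "finance", "score": 4},
    {"department": "finance", "score": 5},
]

MIN_CELL = 5  # never report a group small enough to identify individuals

def aggregate(rows, key):
    """Return mean scores per group, suppressing groups below MIN_CELL."""
    groups = {}
    for row in rows:
        groups.setdefault(row[key], []).append(row["score"])
    return {g: round(mean(s), 1) for g, s in groups.items() if len(s) >= MIN_CELL}

print(aggregate(responses, "department"))  # → {'programs': 7.4}; finance suppressed
```

The minimum-cell-size rule is a standard disclosure-control practice: the client receives the aggregate report, while small groups that could identify individuals are withheld entirely.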

It is trickier to eliminate the bias identified in situation 1. One way we address it is to compare what participants say is important to them in surveys with what a regression analysis of the data suggests are the most critical factors. For example, your participants may say that the care they received from your professional staff was the most important factor in their choosing to participate in your programs, but the regression analysis may show that it made little difference, and that another factor was the real driver.
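One way to operationalize that comparison is a simple regression on standardized factors. The sketch below uses synthetic data (the factor names and effect sizes are invented for illustration) to show stated importance diverging from what the regression reveals:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical 1-5 survey ratings for two program factors
staff_care = rng.integers(1, 6, n).astype(float)
convenience = rng.integers(1, 6, n).astype(float)

# Simulated "truth": participation is driven mostly by convenience
participation = 0.1 * staff_care + 0.9 * convenience + rng.normal(0, 0.5, n)

def z(x):
    """Standardize so regression coefficients are comparable across factors."""
    return (x - x.mean()) / x.std()

X = np.column_stack([z(staff_care), z(convenience), np.ones(n)])
coef, *_ = np.linalg.lstsq(X, z(participation), rcond=None)

stated = "staff care"  # what respondents said mattered most
revealed = ["staff care", "convenience"][int(np.argmax(np.abs(coef[:2])))]
print(f"Stated most important: {stated}; regression points to: {revealed}")
```

Because the inputs are standardized, the coefficient magnitudes can be compared directly, which is what lets you contrast the survey's stated ranking with the data's revealed one.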

Q: What do you see evolving in terms of infrastructure support for educating staff in the use of data — i.e., internal design, ongoing collection & evaluation?

We believe the key to moving nonprofit culture from anecdotal to data-driven decision making is increasing the confidence and knowledge of nonprofit professionals in using data. "Building Data Competency," as we like to call it, is crucial, especially given that many nonprofit professionals do not come from quantitative backgrounds. In fact, many of our client engagements now involve training clients to build their own data skill sets.

Another key change is making data a core competency of nonprofits, most likely by embedding skilled statisticians inside nonprofits to handle the heavier data needs. Just as baseball and many other sports have been transformed by statisticians, so too can nonprofits be. In Moneyball, Michael Lewis illustrated how statisticians had long loved to analyze baseball data, but baseball organizations never paid attention to these "data nerds" when recruiting players and composing their teams. Teams relied instead on the anecdotal views of their scouts as to who "looked like a good baseball player," or on statistics that were not predictive of what mattered in baseball: wins and losses, or even earned run average, are far less predictive of a pitcher's skill than strikeouts, walks, and home runs allowed. When the Oakland A's, at the time a small-market team, hired a statistician to exploit these inefficiencies in player recruitment and identified players all the other teams had missed, people took notice as the A's kept winning their division. Now all baseball teams have in-house statisticians.

We envision the same for nonprofits: nonprofits housing a statistical team capable of helping management and board make data-driven decisions around tricky issues.

Q: How can people learn to “trust” statistics MORE than their intuition, especially for non-numerical types?

Q: Organizational change is difficult. What is a good first step in convincing the Board that change is necessary? I’m looking for a lever.

[answering both of these questions]

The first thing to recognize is that data-driven decision making is not a simple technical fix; it is a cultural change. The best motivator we've seen is to show a board and management team who may not trust the statistics that their intuition is off 80 percent of the time. When we review reports with our clients (survey results, financial analysis, etc.), before we unveil the results we ask the management team or board to state their original hypotheses. Which factors do you feel in your gut make a difference? Which demographic groups of your constituents do you think you serve best (or worst)? We write all these hypotheses on a whiteboard and then check them, one by one, against the data. Invariably, across all our clients, 80 percent of these hypotheses have no basis in the data. The response to this "setup" is usually a mix of shock, vacillation as people come up with new justifications for the findings they've just seen, and some resistance in the form of arguing with the methodology. We try to hold up a mirror to everyone by saying that we are happy to change the methodology (the results almost never change) or to explore new hypotheses or mechanisms (again, those hypotheses are invariably not validated by the data either).
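The hypothesis check described above can be as simple as comparing group averages against a threshold for what counts as a meaningful difference. A minimal sketch with simulated data (the groups, scores, and half-point threshold are all hypothetical):

```python
import random
from statistics import mean

random.seed(1)

# Hypothetical survey records: (age_group, satisfaction on a 1-10 scale),
# simulated so the gut hypothesis ("we serve younger members worse") is wrong
records = (
    [("under_35", random.gauss(7.4, 1.0)) for _ in range(200)]
    + [("35_plus", random.gauss(7.3, 1.0)) for _ in range(200)]
)

by_group = {}
for group, score in records:
    by_group.setdefault(group, []).append(score)

# Gut hypothesis: under-35 satisfaction is substantially lower
gap = mean(by_group["under_35"]) - mean(by_group["35_plus"])
supported = gap < -0.5  # require a meaningful gap, not just noise
print(f"Gap (under_35 minus 35_plus): {gap:.2f}; hypothesis supported: {supported}")
```

Writing each board hypothesis down as an explicit, testable comparison like this is what makes the "check them one by one" exercise possible, and in a real engagement the threshold would come from the organization's own sense of what difference is worth acting on.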

Another important point: we are not saying that anecdote, emotion, and intuition have no place. Read Malcolm Gladwell's books to understand that they are critical, mostly in situations requiring an extremely rapid response, where there is no time to deliberate (e.g., a car suddenly stops in front of you) and your brain can produce an intuitive response faster than conscious thought. Emotion, too, is the reason many of your board members and management team are so committed to your organization. But organizational decision making, which involves deciding what is best for the organization's mission, usually allows plenty of time and often presents novel situations; there, anecdote is usually not helpful, and data is more powerful.

You may continue this discussion with Sacha at sacha@measuring-success.com.

The preceding is a blog post by Lindsay Nichols, Vice President of Marketing and Communications at America’s Charities, the leader in workplace giving and philanthropy. As a member of the organization’s senior leadership team, Lindsay guides and oversees the strategy and execution of all marketing and communications efforts with a major emphasis on strategy and tactics that support increased growth for the organization. Lindsay has been quoted in the New York Times, Wall Street Journal, Chronicle of Philanthropy, NonProfit Times, St. Louis Post-Dispatch, St. Louis Public Radio, Dallas Morning News, and more.

Topics: Events Webinars Nonprofit