In order to measure something, one must have a “standard”: an agreed-upon yardstick, so to speak, by which things are evaluated and compared. When measuring is extended from the concrete (say, the dimensions of a room in a house) to societal applications (such as measuring the social impact of a program on a community), it is very far from neutral. Who has designed the yardstick, who is doing the measuring, and who is doing the evaluating have significant implications for the “outcomes”/“outputs” produced by social impact measurement and evaluation. Indeed, the significance given to such outcomes, the values placed on them, and any analysis brought forward from them are intrinsically culture-laden.
None of us has ever seen or fully experienced a truly equitable organization or society; so, how are we to hold to measures that likely have never captured the full picture of impact or even depicted what that full picture could be? When you add to that the racial makeup of the research community historically, a lack of equity expertise among research practitioners, and the annual demand for outputs from funders, you ensure a recipe for measurement and evaluation that, at best, cannot effectively tell the stories of Black communities, and, at worst, promulgates inaccurate and harmful ones.
One of the first mega-donors to call for accountability in evaluation for social impact work was steel mogul Andrew Carnegie. He lamented that his fellow millionaires were squandering their money on unworthy charitable causes. [1] “It is ever to be remembered,” he wrote, “that one of the chief obstacles which the philanthropist meets in his efforts to do real and permanent good in this world is the practice of indiscriminate giving.” [2] His desire was likely steeped in an aspiration for what he deemed better outcomes and for efficient, effective decisions about the allocation of resources, but it also likely seeded some of the current challenges.
Historically, assessments used by nonprofits and philanthropy have not valued the perspectives of communities—especially those of Black communities, Indigenous communities, and others—but have instead focused on measuring outputs that organizations defined as success. We have centered values often present in white-centric spaces and highlighted what donors request to see. This is slowly shifting, and a focus on developing pro-Black measurement and evaluation processes and tools is burgeoning. We see a beautiful opportunity here to study these tensions in the composition, administration, and analysis of measurement and evaluation, and to design pro-Black approaches to the practice whereby impact equals what Black people need to thrive. [3]
Our understanding of impact and of what equity and justice can look like continues to evolve; consequently, what to look for and how to measure it does, too. This means that previous measures will be insufficient, again and again, as our learning of what’s possible in an equitable world deepens.
As a community of practitioners working to advance racial justice, the three of us have often found ourselves discussing the understanding, importance, and practice of measurement and its impact on our work. We present this article as a reflection of our learning, our hopes, and the opportunities we see when partnering with measurement professionals to center equity, diversity, inclusion, and antiracism (EDIAR) in the work.
According to the Equitable Evaluation Initiative (EEI), “Evaluative work should be designed and implemented commensurate with the values underlying equity work: multi-culturally valid, and oriented toward participant ownership.” [4] Multicultural validity focuses attention on how well evaluation captures meaning across dimensions of cultural diversity, [5] and participant ownership refers to evaluation oriented toward the needs of the program’s or system’s stakeholders. [6] This looks like engaging stakeholders from the community as producers—not just recipients—of outcomes. In a 2020 report on education philanthropy, Alex Cortez writes:
As the author and Mexican political leader Laura Esquivel wrote, “whoever controls information, whoever controls meaning, acquires power.” Measurement is an act of power. We measure what we value, and so what we measure reflects our values. If we are imposing measures of success on communities, we are essentially also then imposing our values and agenda on them. [7]
Rather than only engaging community members when the time comes to review results, engagement should occur further upstream, where critical decisions are made about the initiative or program being evaluated (such as how impact should be defined, what success for the program looks like, and so on). It should then continue through the data collection and analysis phases, and, finally, factor into the recommendations and implementation development process arising from the evaluation. To do this effectively, evaluative efforts must be flexible, and they require a reasonable allocation of resources, opportunities, obligations, and bargaining power for all stakeholders. Once community members are engaged, their knowledge can be leveraged to understand the local context, interpret results, and allow resulting strategies to be adapted to the local environment and culture, thus increasing the likelihood of sustainability.
Essentially, evaluators must recognize and respect the unique insights and assets that community members bring to an initiative, especially in instances when those evaluators are not proximate to the communities involved.
For example, in the wake of the mass school closures in spring 2020 as a result of COVID-19, the Bill & Melinda Gates Foundation funded YouthTruth—an organization that gathers and leverages student perceptions to help educators and education funders accelerate improvement—to administer student perception surveys nationwide so as to capture how these unprecedented events were impacting students. When the fall 2020 survey results came in, YouthTruth found that a greater proportion of students who identify as female (57 percent) or “identify another way” (79 percent)—in comparison to students who identify as male (33 percent)—indicated that feeling depressed, stressed, or anxious was creating an obstacle to learning. [8] YouthTruth engaged the students as active participants in the survey data analysis process, and together they developed hypotheses and conclusions based on the survey data.
Initial interpretations of the data had led some in the organization to consider focusing mental-health supports on female- and nonbinary-identifying students; but when students were brought into the analysis process, they provided feedback about the potential impact of societal gender norms on survey responses. One student posited that students who identify as male may not feel as comfortable divulging feelings of depression and anxiety, because that is often viewed as a sign of weakness, especially within communities of color. This shifted how this and other data points were interpreted, which in turn impacted the conclusions that were drawn and the decisions that were made based on those conclusions. [9]
Centering equity in evaluation requires a shift from the status quo and an emphasis on innovation, both to expand the helpful tools that already exist and to develop new measurement frameworks. In many places, evaluators are partnering with equity leaders, building on traditional evaluation tools and methods, and incorporating an equity frame where possible. In other instances, new frameworks are being developed to fully reinvent how evaluation is conducted, placing equity at the core. For example, Community Responsive Education (CRE), a national nonprofit that provides consulting services to schools and districts to make their pedagogy and curricula more reflective of the youth and families they serve, has been developing a youth wellness index. This index is based on a student survey that focuses on what CRE calls “leading” indicators of students’ well-being, including students’ sense of self-love, empathy, connectedness, and agency. [10]
CRE’s work is grounded in the idea that education’s focus on lagging indicators (signs that only become apparent after what has driven them has passed), such as grades and test scores, diminishes the incentive to address students’ overall well-being as a precondition for success in school. This example is a reminder of our ability to discover new standards by which to define success/impact, and to recognize the continuous evolution possible in measurement and evaluation when we hold a stance of curiosity and focus on learning.
We know that even with the redesign and creation of measurement and evaluation tools focused on racial equity, these tools are not as widely used as they should be, and we must continue to share our learning about what’s possible and how to be more effective. Our focus should always be on developing more effective tools that center community success and treat equitable evaluation as both a process and an outcome.
The events of the past two years have more than laid bare the fact that we know too much now to keep operating in the same ways, and they have awakened calls for pro-Black systems change in how we define and measure success and impact. To be relevant and authentic in capturing the dynamism of our communities and in empowering real, sustainable change within them, those who practice measurement and evaluation must shift the power from funders—and the set of researchers they have historically funded to undertake traditional methodologies—to those who are most closely experiencing the problems we are trying to solve. To effect that power shift, we recommend the following approaches at the individual, organizational, and system levels. We believe each of these levels has unique opportunities for action, and that they are interconnected. By incorporating a multilevel approach that involves all of us contributing from our respective places of influence, we can work toward building a pro-Black measurement and evaluation system.
We acknowledge that none of our organizations is living these approaches out completely, and we challenge you—and ourselves—to take action from our respective roles. We leave you with the following specific recommendations for evaluators, funders, consultants, and intermediaries, based on our work in this space:
1. Katie Cunningham and Marc Ricks, “Why Measure?,” Stanford Social Innovation Review 2, no. 1 (Summer 2004): 44–51.
2. Andrew Carnegie, “The Best Fields for Philanthropy,” North American Review 149, no. 397 (December 1889): 682–98.
3. See Cyndi Suarez, “Going Pro-Black,” Nonprofit Quarterly, January 20, 2022, nonprofitquarterly.org/going-pro-black/.
4. Center for Evaluation Innovation, Institute for Foundation and Donor Learning, Dorothy Johnson Center for Philanthropy, and Luminare Group, “Equitable Evaluation Framework™ (EEF) Framing Paper,” Equitable Evaluation Initiative, July 2017, equitableeval.org.
5. Karen Kirkhart, “Seeking Multicultural Validity: A Postcard from the Road,” Evaluation Practice 16, no. 1 (February 1995): 1–12.
6. Michael Quinn Patton, Utilization-Focused Evaluation, 4th ed. (Thousand Oaks, CA: SAGE, 2008).
7. Alex Cortez, Systems Change and Parent Power (Boston: New Profit, Fall 2020).
8. YouthTruth, Students Weigh In, Part III (San Francisco: YouthTruth, August 2021).
9. Anecdote shared with Titilola Harley during a grantee check-in, which Harley confirmed with YouthTruth’s executive director, Jen Vorse Wilka, in February 2022.
10. This information is based on a talk attended by Titilola Harley, given by one of Community Responsive Education’s codirectors, Jeff Duncan-Andrade, as part of the Equal Opportunity Schools’ (a Bill & Melinda Gates Foundation grantee) 2020
11. Think of Us, COVID-19 MicroCash Grant Application Data Briefing (Washington, DC: Think of Us, March 10, 2021).
ABOUT THE AUTHORS
Angela N. Romans is the founding executive director of Innovation For Equity. She bring…
Candace Stanciel is a principal at The Common Good Agency, and a partner at New Profit.
Titilola Harley is the founder of Harley Consulting Group and a program officer on the K…