Culturally Responsive Teaching: A Promising Approach—But ‘Evidence-Based’?

Over the last two decades, teachers, administrators, and policymakers have taken part in a hard-won campaign to find “evidence-based” solutions that can promote equity in student outcomes. No Child Left Behind (NCLB) launched the movement, increasing pressure on states, districts, and schools to find “what works”—and just in case they missed the message, the phrase “scientifically based research” was famously included in the bill over 100 times. Its successor, the Every Student Succeeds Act (ESSA), extended efforts to promote the use of evidence, becoming the first federal education law to distinguish between levels of evidence and to provide clear guidance on “evidence-based” decision-making.

This effort to promote evidence-based decision-making is reasonable—we should try to make better-informed decisions about interventions to improve student outcomes. But what should decision-makers do when proven interventions are in short supply?

Consider the example of culturally responsive teaching—an approach that prepares educators to work with culturally, ethnically, racially, and linguistically diverse students. There is a growing push for schools to reform their policies and practices to better align with evidence-based culturally responsive practices in order to promote better outcomes for diverse students. Undertaking this charge is crucial, considering the long-standing opportunity gaps these students face. Unfortunately, a new analysis by researchers from the University of Virginia and Johns Hopkins University indicates that empirical research on the effectiveness of school-based culturally responsive interventions is critically lacking.

Consequently, decision-makers seeking to implement a culturally responsive intervention at their district or school – whether a training workshop, coaching session, or policy change – may have no way of distinguishing between interventions that work and interventions that don’t.

The literature review, published in the Journal of Teacher Education, suggests that of the hundreds of studies published about culturally responsive teaching over a span of almost two decades, only a surprisingly small number have measured the impact of educator-level interventions—the type of evidence needed to show that interventions “work.” According to the researchers, out of 179 peer-reviewed empirical studies published from 1998 to 2014 that evaluated culturally responsive interventions in K-12 settings, only two measured the impact of interventions on student outcomes, and only eight measured the impact of interventions on educator outcomes. And of those ten studies, none were able to establish a causal relationship between implementation of the intervention and educator or student outcomes. What’s more, none of the studies employed design features rigorous enough to be included in the What Works Clearinghouse (WWC), the Department of Education’s trusted inventory of evidence-based interventions.

These results become more concerning when stacked up against ESSA’s new tiers of research evidence. The federal government’s guidance document on evidence-based decision-making suggests that to have “strong” evidence – the highest level achievable – an intervention must be supported by at least one experimental study that demonstrates a significant effect and is “well-designed and well-implemented” enough to meet WWC standards without reservations (or is of equivalent quality). None of the studies assessed in the literature review meet these rigorous standards.

According to the same document, to reach the second level of evidence, or “moderate” evidence, an intervention must be supported by at least one quasi-experimental study that is “well-designed and well-implemented” enough to meet WWC standards with reservations. And to be regarded as “promising” – the third level of evidence – an intervention must be supported by at least one correlational study that is also “well-designed and well-implemented” but does not have to meet WWC standards. Again, based on the body of work the researchers assessed, none of the studies met the second or third tier of evidence.

However, federal guidelines do provide a fourth option for interventions that do not meet these top three levels of evidence: those based on a “rationale.” These are interventions that show some evidence of effectiveness based on research and are driven by a strong theory that they can improve student outcomes, provided they are undergoing evaluation. Whether the interventions assessed in the literature review can be regarded as having a sufficient “rationale” therefore depends on whether investments are being made to evaluate them.

Ensuring that culturally responsive interventions are supported by evidence in the top three tiers is imperative because in some cases ESSA restricts states to using only interventions backed by the most rigorous levels of evidence. It requires, for example, that interventions for low-performing schools meet the highest levels of evidence, and it restricts access to Title I school improvement funds to schools that engage in activities that fall within the top three tiers of evidence. And even though fourth-tier interventions can be funded through other funding streams, without a solid empirical base culturally responsive interventions risk being sidelined in the schools with the greatest need.

Such a reality is frustrating because educators (myself included) have witnessed culturally responsive teaching strategies improve classroom culture, student-teacher relationships, student engagement, and academic outcomes—and we have seen these outcomes for students who are culturally, ethnically, racially, and linguistically diverse. But our experiences have not yet been documented, published, or shared. And it’s unlikely that districts currently implementing culturally responsive interventions have the capacity or know-how to engage in rigorous evaluation of their own programs. Fortunately, ESSA may enable us to overcome some of these challenges.

ESSA has generated opportunities to advance culturally responsive teaching by giving states more flexibility to implement and evaluate promising interventions—but the extent to which this approach is scaled and disseminated hinges on states’ commitments to take advantage of these opportunities and engage in rigorous evidence-building. Among states’ priorities should be to foster partnerships with researchers and invest in training so that school and district staff can conduct their own evaluations of culturally responsive interventions. Doing so could prove critical to increasing our understanding of what interventions work, for whom they work, and under which conditions they work.

Equally important is the need for states, districts, schools, and research organizations to facilitate the wide-scale collection, dissemination, and implementation of new findings. Such actions could help spur the wide-scale implementation of culturally responsive teaching—which would allow more students to gain access to responsive, inclusive, and effective learning opportunities.

 

Jenny Muñiz is a Public Policy Fellow for New America’s Education Policy program.