Bodong Chen

Crisscross Landscapes

Notes: Gutiérrez & Penuel (2014). Relevance to Practice as a Criterion for Rigor



Citekey: @Gutierrez2014-qm

Gutiérrez, K. D., & Penuel, W. R. (2014). Relevance to practice as a criterion for rigor. Educational Researcher, 43(1), 19–23.



The authors argue for a reconceptualization of rigor that requires sustained, direct, and systematic documentation of what takes place inside programs to document how students and teachers change and adapt interventions in interactions with each other in relation to their dynamic local contexts. (p. 1)

They close by describing how this new criterion—“relevance to practice”—can ensure the longevity and efficacy of educational research. (p. 1)

the Education Sciences Reform Act in 2002, it called for scientifically based research that would “apply rigorous, systematic, and objective methodology to obtain reliable and valid knowledge relevant to education activities and programs” (Pub. L. No. 107-279, p. 116). That same year, the National Research Council (2002) produced a report and Educational Researcher published a related article, “Scientific Culture and Education Research” (Feuer, Towne, & Shavelson, 2002), written by several of the report’s authors. (p. 1)

For us, consequential research on meaningful and equitable educational change requires a focus on persistent problems of practice, examined in their context of development, with attention to ecological resources and constraints, including why, how, and under what conditions programs and policies work. (p. 1)

However, there was and still remains a concern from the field about the narrow set of criteria used to define rigor. Erickson and Gutiérrez (2002) questioned the publications’ call for a “scientific culture” that prescribed and relied primarily on “gold standard” random assignment studies of program effects as the remedy for the failures of education research to offer credible guidance for policy and practice. As we (Erickson & Gutiérrez, 2002) argued then, rigor in studies that aim to draw causal inferences about policies, programs, and practices requires in-depth qualitative research. In particular, scientifically rigorous research on what works in education requires sustained, direct, and systematic documentation of what takes place inside programs to document not only “what happens” (cf. National Research Council, 2002) but also how students and teachers change and adapt interventions in interactions with each other in relation to their dynamic local contexts. (p. 1)

a study’s relevance to transforming practice (p. 1)

As director John Easton (2013) recently noted, through these and other initiatives, IES is “promoting research use, but not in a unidirectional ‘research to practice’ sense but in a more reciprocal ‘practice to research’ pathway” (Easton, 2013, p. 18). This new research program calls for “empirical tinkering” (Morris & Hiebert, 2011) in which partners collaborate “to fine tune programs, interventions or regimens of activities through iterative processes that rely heavily on measurement, quick studies and refinement” (Easton, 2013, p. 18). (p. 2)

For these new programs to be successful, relevance to practice must be an explicit criterion for judging the quality of research proposals. (p. 2)

Educational systems have multiple layers of infrastructure that have accumulated over time and that must be engaged directly if they are to support, rather than obstruct, transformation (Penuel & Spillane, in press). (p. 2)

Interventions themselves are contested spaces, filled with tensions and resistance from a range of stakeholders. Supporting and engaging more diverse stakeholder engagement in defining the focus of research and development will require researchers and reviewers to rethink the nature of educational interventions. (p. 2)

This dialectic of “resistance and accommodation” in practice is what Pickering terms “the mangle of practice” (Pickering, 2010, p. 10). (p. 2)

These models do not require researchers to specify ahead of time all the elements of an intervention, since practitioners participate in design, and implementation data inform an iterative design process that often transforms interventions. (p. 3)

Some teachers discontinued use of the materials because of perceived policy pressures from within their school or district to adopt different materials and approaches to teaching mathematics. (p. 3)

This is a particularly relevant dilemma for education researchers who rely on extramural funding to support their empirical work. (p. 3)

we understand that review panels need criteria to ensure rigorous, systematic examination of an educational problem with a probability of success in its execution. (p. 3)

In our view, sustaining nearly any robust intervention will require ongoing work, work of the kind that went into making the SimCalc study a success and the program a good temporary fit to the goals of teachers for their students in the study. This includes work to craft professional development, curriculum, and technology into a coherent “curricular activity system” that could be used in a wide variety of classrooms, work to align this system to standards, and work to support implementation of the system in the field (Roschelle, Knudsen, & Hegedus, 2010). (p. 3)

The work of mutual adjustment of powerful interventions and local contexts does not end when the research ends, but sustaining an intervention requires uptake by schools and districts (Coburn, 2003). (p. 3)

For us, this represents an important advance for the agency, because greater weight is given to the importance of generalization from research findings. The challenges to generalization in education research are many (Berliner, 2002); here we highlight two challenges that strike us as particularly challenging for IES, given the kinds of projects the agency has funded in the past. (p. 3)

This requires a shift in focus of research and development efforts, away from innovations designed to be implemented with fidelity in a single context and toward cross-setting interventions that leverage diversity (rather than viewing it as a deficit). It also suggests the need to focus some research and development projects on the design of new organizational routines and infrastructures for improvement (Bryk et al., 2011; Penuel & Spillane, in press). It also implies the need for efficacy and effectiveness research that addresses how to make programs work under a wide range of circumstances and for all groups (Bryk, 2009; Bryk et al., 2011). (p. 4)