
References

Citekey: @Snyder2011-ss

Snyder, P. (2011). Implementing Randomized Controlled Trials in Preschool Settings That Include Young Children With Disabilities: Considering the Context of Strain and Bovey. Topics in Early Childhood Special Education, 31(3), 162–165.

Notes

A commentary on @Strain2011-sn, highlighting its contribution to the “best-available” evidence by running a randomized controlled trial. The history of IES’s emphasis on RCTs is also explained.

Highlights

The intent of this federal support is to increase the number of scientifically valid efficacy or effectiveness evaluations conducted on “early” interventions with the goal of identifying “what works.” It is expected that these studies will help inform and advance practices in the field and contribute to activities related to evidence-based practice, particularly activities focused on identification of the “best-available” evidence (Snyder, 2006). (p. 1)

Several developments have shaped not only how this RCT was designed and implemented, but also how other RCTs are being conducted to evaluate promising interventions for young children with disabilities or the practitioners who support them. (p. 1)

The purpose of this commentary is to describe these developments and to identify issues likely to be encountered when conducting these studies or reviewing findings. (p. 1)

Findings from a review of 450 studies published between 1990 and 1998 that used quantitative group designs revealed a paucity of RCTs conducted during this period (10% of the reviewed studies). In addition, data suggested that the best-available evidence was not uniformly convincing and compelling given issues related to the methodological integrity of the reviewed studies. (p. 1)

In the seminal text Experimental and Quasi-Experimental Designs for Research, Campbell and Stanley (1966) provided an historical account of the enthusiasm and disillusionment with rigorous experimentation in education—dating back to Thorndike. Over the years, our field also has debated the value and feasibility of conducting rigorous experiments, such as RCTs, in applied field settings (e.g., Bricker, Bailey, & Bruder, 1984; Snyder, 2006; White & Pezzino, 1986). (p. 1)

What is relatively new, however, is the availability of federal support from the U.S. Department of Education to conduct scientifically valid efficacy or effectiveness evaluations of interventions relevant for early intervention and early learning in special education. (p. 1)

the National Center for Special Education Research (NCSER) was established within IES as part of the reauthorization of the Individuals with Disabilities Education Improvement Act (IDEA, P.L. 108-446). (p. 2)

From a best-available evidence perspective, these designs generally produce evidence at an individual study level that is less convincing and compelling than experimental designs, particularly randomized controlled trials. (p. 2)

The Act specified, in Section 201, that research conducted under the auspices of NCSER should “conform to high standards of quality, integrity, accuracy, validity, and reliability” and be “carried out in accordance with the standards for the conduct and evaluation of all research and development established by the National Center for Education Research.” (p. 2)

The team acknowledged that the general lack of methodological rigor across the group quantitative studies might not reflect poor science or scientists but realities and resource constraints associated with conducting applied group experimental intervention research in authentic settings. In addition, at this time, emphasis was shifting from using group experimental designs to address “first-generation” questions about the effectiveness of early intervention to using these designs to address second-generation research questions (Guralnick, 1997). (p. 2)

help advance understandings about “what works” (on average) for particular groups under certain (controlled) circumstances. (p. 2)

The Strain and Bovey (2011) study was among the first of the efficacy studies funded by the IES. The authors acknowledge that although 28 peer-reviewed component analysis studies have been conducted to evaluate the Learning Experiences Alternative Program (LEAP) intervention, the “overall efficacy of enrollment in LEAP could not be argued without a randomized trial” (Strain & Bovey, 2011, p. 3). In IES parlance, the Strain and Bovey study was a “Goal 3” study. Goal 3 studies focus on examining intervention efficacy or the replication of intervention findings (IES, 2011). (p. 2)

One major development that has influenced the wider use of RCTs to address second-generation research questions was associated with the passage of the Education Sciences Reform Act of 2002 (PL 107-279), which established the Institute of Education Sciences (IES). IES was charged with supporting rigorous research relevant for education practice and policy. (p. 2)

projects of exploration (Goal 1), development and innovation (Goal 2), scale-up (Goal 4), and measurement (Goal 5). (p. 2)

the Act specified that scientifically valid efficacy or effectiveness evaluations of educational interventions should “employ experimental designs using random assignment, when feasible, and other research methodologies that allow for the strongest possible causal inferences when random assignment is not feasible” (Education Sciences Reform Act of 2002). (p. 2)

Many of the requirements are consistent with recommended practices related to the implementation of rigorous experimental designs, including RCTs. (pp. 2–3)

Third, rather than compare the LEAP intervention to a “business-as-usual” condition, teachers in comparison classrooms received LEAP intervention manuals, videos, and training presentation materials. Obtained effects that favored the intervention condition occurred under circumstances where the two levels of intervention shared comparable components, which often results in smaller obtained effect sizes. By carefully selecting the counterfactual condition (and measuring what occurred in this condition), Strain and Bovey (2011) have information that is useful for informing post hoc interpretations of treatment effects relative to the counterfactual. (p. 3)

Fourth, the social validity of the LEAP intervention was systematically examined, and evidence presented by Strain and Bovey (2011) suggests noteworthy and positive associations between the implementation fidelity scores for the subgroup of teachers in the intervention cohorts at the end of coaching and their social validity ratings. (p. 3)

Conducting these evaluations in authentic preschool settings with specific populations of teachers or children, however, presents opportunities and challenges. Strain and Bovey (2011) described how they addressed several of these opportunities and challenges. (p. 3)

First, “typical” intervention agents were supported through training workshops, provision of materials, and on-site support to implement the LEAP intervention in authentic preschool settings. (p. 3)

Strain and Bovey (2011) also encountered challenges not unfamiliar to others who conduct RCTs in authentic settings: (a) missing data, (b) participant attrition, (c) variations in treatment implementation, (d) variations in treatment effects across individuals and sites, and (e) design “trade-offs” related to available resources and costs. Their acknowledgments of these (and other) limitations help remind us that no single study should or would be expected to offer definitive evidence of what works for whom and under what circumstances in relation to promising interventions. (p. 3)

Teachers’ implementation of LEAP practices likely resulted in shifts in children’s classroom experiences that, in turn, were associated with differential developmental and learning outcomes (Wolery, 2011). (p. 3)

Much remains to be learned about the type, quality, and dosage of implementation supports that might be needed to achieve and sustain implementation fidelity, particularly for multicomponent interventions like LEAP. (p. 3)

watch for these words (p. 3)

Second, fidelity of implementation was evaluated in both experimental conditions using the Quality Program Indicators (QPI) measure. (p. 3)

Despite identified challenges, I believe we should remain optimistic about our ability to conduct sufficient rigorous experimental evaluations to answer important second-generation research questions. Strain and Bovey have made a noteworthy contribution to the “best-available” research evidence, particularly in relation to interventions for young children with autism spectrum disorders. (p. 3)
