Bodong Chen

Crisscross Landscapes

Notes: A review of e-learning in Canada



Citekey: @abrami2006review

Abrami, P. C., Bernard, R., Wade, A., Schmid, R. F., Borokhovski, E., Tamim, R., … others. (2006). A review of e-learning in Canada: A rough sketch of the evidence, gaps and promising directions. Canadian Journal of Learning and Technology/La Revue Canadienne de L’apprentissage et de La Technologie, 32(3).



This review provides a rough sketch of the evidence, gaps and promising directions in e-learning from 2000 onwards, with a particular focus on Canada. (p. 2)

In total, there were 726 documents included in our review: 235 – general public opinion; 131 – trade/practitioners’ opinion; 88 – policy documents; 120 – reviews; and 152 – primary empirical research. (p. 2)

The Argument Catalogue codebook included the following eleven classes of variables: 1) Document Source; 2) Areas/Themes of e-learning; 3) Value/Impact; 4) Type of evidence; 5) Research design; 6) Area of applicability; 7) Pedagogical implementation factors; 8) A-priori attitudes; 9) Types of learners; 10) Context; and 11) Technology Factors. (p. 2)

We found that over half of the studies conducted in Canada are qualitative in nature, while the rest are split in half between surveys and quantitative studies (correlational and experimental). When we looked at the nature of the research designs, we found that 51% are qualitative case studies and 15.8% are experimental or quasi-experimental studies. It seems that studies that can help us understand “what works” in e-learning settings are underrepresented in the Canadian research literature. (p. 2)

We found, generally, that the perception of impact or actual measured impact varies across the types of documents. They appear to be lower in general opinion documents, practitioner documents and policy making reports than in scholarly reviews and primary research. (p. 2)

The impact of e-learning and technology use was highest in distance education, where its presence is required (Mean = 0.80) and lowest in face-to-face instructional settings (Mean = 0.60) where its presence is not required. (p. 3)

Interestingly, among the Pedagogical Uses of Technology, student applications (i.e., students using technology) and communication applications (both Mean = 0.78) had a higher impact score than instructional or informative uses (Mean = 0.63). This result suggests that the student manipulation of technology in achieving educational goals is preferable to teacher manipulation of technology. (p. 3)

professional development was underrepresented compared to issues of course design and infrastructure/logistics; most attention is devoted to general population students, with little representation of special needs, the gifted students, issues of gender or ethnic/race/religious/aboriginal status; the greatest attention is paid to technology use in distance education and the least attention paid to the newly emerging area of hybrid/blended learning; the most attention is paid to networked technologies such as the Internet, the WWW and CMC and the least paid to virtual reality and simulations. Using technology for instruction and using technology for communication are the two highest categories of pedagogical use. (p. 3)

We examined 152 studies and found a total of 7 that were truly experimental (i.e., random assignment with treatment and control groups) and 10 that were quasi-experimental (i.e., not randomized but possessing a pretest and a posttest). For these studies we extracted 29 effect sizes or standardized mean differences, which were included in the composite measure. The mean effect size was +0.117, a small positive effect. Approximately 54% of the e-learning participants performed at or above the mean of the control participants (50th percentile), an advantage of 4%. However, the heterogeneity analysis was significant, indicating that the effect sizes were widely dispersed. It is clearly not the case that e-learning is always the superior condition for educational impact. (p. 3)
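The "approximately 54%" figure follows from converting the mean standardized effect size into a percentile overlap statistic (Cohen's U3): under the assumption of normally distributed outcomes with equal variances, the proportion of treatment participants scoring at or above the control mean is the standard normal CDF evaluated at the effect size. A minimal sketch of that arithmetic (the variable names are mine, not the review's):

```python
from statistics import NormalDist

# Mean effect size (standardized mean difference) reported in the review.
d = 0.117

# Cohen's U3: the expected fraction of e-learning participants performing
# at or above the control-group mean (the control's 50th percentile),
# assuming normal distributions with equal variances.
u3 = NormalDist().cdf(d)

print(round(u3 * 100, 1))  # ~54.7, i.e. "approximately 54%", a ~4-5
                           # percentile-point advantage over the 50th
```

This is a back-of-the-envelope check, not the review's own computation, but it shows why a small effect like +0.117 translates into only a few percentile points of advantage.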