Bodong Chen

Crisscross Landscapes

Notes: Roll - 2015 - Understanding, evaluating, and supporting self-regulated learning using learning analytics



Citekey: @Roll_undated-pn

Roll, I., & Winne, P. H. (2015). Understanding, evaluating, and supporting self-regulated learning using learning analytics. Journal of Learning Analytics, 2(1), 7–12.



In terms of one widely cited model of self‐regulated learning proposed by Winne and Hadwin (1998; Winne, 2011; Greene & Azevedo, 2007), learners exercise agency across four loosely sequenced phases: (1) They scan their environment to identify internal factors (cognitive, motivational, affective) and external features that may influence a task. (2) They frame goals and design plans to approach them. (3) They implement actions to animate their plans, monitor the match between a plan and its actualization and modestly adjust actions as they judge appropriate. And, (4) they re‐examine aspects across these three prior phases to consider major, strategic revisions to understanding and action if progress toward goals is blocked, too slow or in some other way unsatisfactory. (p. 1)

The diversity of papers in this special section offers an interesting snapshot of the current state of research into SRL and learning analytics. Several themes emerge: (p. 3)

- Choices. Rather than looking at the content of learners’ actions, we can look at the choices that learners make. SRL theories describe how students manage their learning, and open-ended environments that gather trace data about these processes allow us to evaluate the types of actions students choose to perform, rather than only their content (Cutumisu; Colthorpe). (p. 3)

- Relativity. Actions are neither inherently good nor bad. Rather, choices students make reflect their perceived challenges, knowledge, prior experiences, and habits. Thus, actions should be interpreted in relation to a learner’s context (Biswas; Nussbaumer). (p. 3)

- Reusable analytics. The volume of learning data gathered in online settings affords new opportunities to explore complex research questions. New methods and analytical techniques that provide insight into the complexity of SRL represented in these spaces are required. Specifically, we see a shift from looking at individual events to looking at sequences of actions (Biswas; Bannert). Developing new tools that can be reused across contexts and settings can contribute to the generality of theories and help support and evaluate the transfer of SRL patterns (Biswas; Colthorpe). (p. 3)
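The shift from individual events to sequences of actions can be made concrete with a minimal sketch. The paper names no specific technique, so as one illustrative instance: counting adjacent pairs (bigrams) of logged study actions to surface recurring patterns in a learner's trace. The action labels below are hypothetical, not drawn from any of the cited systems.

```python
from collections import Counter

def action_bigrams(trace):
    """Count adjacent pairs of actions in a learner's trace log."""
    return Counter(zip(trace, trace[1:]))

# Hypothetical trace data, for illustration only.
trace = ["set_goal", "read", "highlight", "read", "highlight", "quiz", "review"]
bigrams = action_bigrams(trace)
# The pair ("read", "highlight") occurs twice; all other pairs occur once.
```

A sequence-level view like this treats the pattern ("read" followed by "highlight") as the unit of analysis, rather than the raw counts of each action in isolation.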

- Challenge: operationalization. Many of the papers address similar constructs (e.g., planning, revising) and operationalize them in different ways, contextualized in their respective systems. Learning analytics allows us to take abstract constructs and instantiate them in different learning environments. Comparing and contrasting these instantiations can help us evaluate the effect of contextual and environmental factors on learners’ SRL. (p. 3)

The world, however, is stochastic; so methods for detecting patterns – human and machine – must wrestle with noise and indeterminacy. (p. 4)

One approach to this problem requires two distinct moves. The first is gathering an increased variety of types of data such that each type of data is relatively statistically independent of other types. The second move is to massively increase the volume of data. (p. 4)

This two‐pronged approach leads to four implications for learning analytics and research on self‐regulated learning. First, instrumentation needs to be designed that learners can use nearly ubiquitously. This helps to increase the volume of data. Winne and his colleagues’ nStudy platform (e.g., Winne, 2015) and content‐agnostic learning management systems illustrate this step. However, as learning technologies evolve, the footprint of any single system shrinks. Thus, systems need to support better interoperability. New technologies and standards (such as TinCan xAPI) attempt to bridge this gap. (p. 4)
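For context on the interoperability point: xAPI records learning events as actor–verb–object statements, which is what lets trace data travel across systems. A minimal sketch of that statement shape follows; all identifiers and values here are illustrative placeholders, not taken from the paper or from any real learning record store.

```python
import json

# Sketch of the core actor-verb-object structure of an xAPI statement.
# Every identifier below is a placeholder for illustration only.
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Example Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {"id": "http://example.com/activities/srl-planning-task"},
}
serialized = json.dumps(statement, indent=2)
```

Because the statement grammar is fixed while the activity identifiers are open-ended, traces logged by one system can, in principle, be aggregated with traces from another.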

Second, algorithms capable of detecting partitions or clusters in vastly dimensional, big data need to be improved or invented. Presently, analytics that are supposedly general are often tested only in one setting. Researchers should evaluate the generalizability not only of theories, but also of tools and methodologies. (p. 4)
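The paper does not name a particular partitioning algorithm; as one concrete instance of clustering learners by trace features, here is a minimal pure-Python k-means sketch. The feature names and data points are hypothetical, chosen only to show the idea of grouping learners by behavioral profile.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means over tuples of numeric trace features."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        # Move each center to the mean of its assigned points.
        centers = [
            tuple(sum(col) / len(g) for col in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

# Hypothetical per-learner features: (planning actions, revising actions).
points = [(1, 9), (2, 8), (2, 9), (9, 1), (8, 2), (9, 2)]
centers, groups = kmeans(points, k=2)
```

The generalizability concern raised above applies directly: a partitioning that looks meaningful on one system's trace features may not transfer to features logged by another.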

Third, as data are collected more ubiquitously, we need to do a better job of attending to and capturing features of context. (p. 4)

Wait – isn’t this temporal structure fundamentally the same as temporally unfolding SRL wherein an agentic learner reviews her learning history to judge and adjust learning activities? Yes, it is. We surmise a bridge to join the solo to the social context for regulated learning activities lies in using the topic a learner studies as a means to follow how “threads” of content unfold across time in conjunction with patterns of self‐regulated learning. To our knowledge, methods are not yet available that can track patterns in which elements simultaneously represent the topic (subject matter, features of group process, a learner’s affect) in the same unit as a representation of the cognitive operations applied to that topic. We recommend this as a topic worthy of future work. Furthermore, as learning is often a social process, designs for learning analytics need to take account of how a group regulates its learning, i.e., forms of co‐regulated and socially‐shared regulation of learning (Hadwin, Järvelä, & Miller, 2011). The field needs to make progress toward mapping and developing learning analytics for nested models of regulated learning. (p. 5)

Finally, many of the challenges faced by learning analysts who explore self‐regulated learning are shared with colleagues working in the field of data mining (see Winne & Baker, 2013). (p. 5)