
As the second iteration of my Learning Analytics course concludes at UMN, I thought it might be a good time to reflect on various issues that we've explored in this course and that I've been thinking about personally.

Learning analytics is an emerging, highly interdisciplinary field where many disciplines–such as education, computer science, and engineering–intersect. Since its first significant scholarly gathering in 2011, learning analytics has been increasingly mentioned in news, technical reports, academic publications, and grant solicitations. The surge of this nascent field rests on a promise–and also a premise–that digital traces of learning could be turned into actionable knowledge to promote learning and teaching.

What is learning analytics?

The definition of learning analytics is plural and multifaceted due to the kaleidoscopic conceptions of learning and analytics introduced by various disciplines. The most widely adopted definition of learning analytics originated from its first conference: “learning analytics is the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs” (Long & Siemens, 2011, p. 34). This definition captures key components of learning analytics, which are further articulated in a conceptual model comprising seven components: collection, storage, cleaning, integration, analysis, representation and visualization, and action (Siemens, 2013).
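To make the component model concrete, here is a minimal sketch in Python of how such a pipeline might chain together. The function names, the stubbed event data, and the simple action-counting analysis are my own illustrative assumptions, not part of Siemens's model; storage, integration, and action are noted but not implemented.

```python
# A minimal sketch of Siemens's (2013) component model as a data pipeline.
# Function names, the stubbed events, and the action-counting "analysis"
# are illustrative assumptions, not part of the model itself.

from typing import List

def collect(source: str) -> List[dict]:
    """Gather raw clickstream events from a learning platform (stubbed)."""
    return [
        {"learner": "a1", "action": "post"},
        {"learner": "a1", "action": "view"},
        {"learner": "b2", "action": "post"},
    ]

def clean(events: List[dict]) -> List[dict]:
    """Drop malformed records before analysis."""
    return [e for e in events if "learner" in e and "action" in e]

def analyze(events: List[dict]) -> dict:
    """Aggregate events per learner (a deliberately simple analysis)."""
    counts: dict = {}
    for e in events:
        counts[e["learner"]] = counts.get(e["learner"], 0) + 1
    return counts

def represent(results: dict) -> str:
    """Turn results into a human-readable report for stakeholders."""
    return "\n".join(f"{learner}: {n} actions" for learner, n in results.items())

# collection -> cleaning -> analysis -> representation/visualization;
# storage, integration, and action would wrap around these steps in practice.
print(represent(analyze(clean(collect("lms_logs")))))
```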

One approach to understanding learning analytics is to recognize what learning analytics is not. Efforts have been made to distinguish learning analytics from adjacent areas. Academic analytics, an area mostly inspired by business intelligence (Goldstein & Katz, 2005), has been mentioned frequently alongside learning analytics. In contrast to academic analytics, which focuses on institutions and administrators without much attention to pedagogy, learning analytics is more directly concerned with teachers and learners, attending to micro-patterns of learning (Long & Siemens, 2011). Educational Data Mining (EDM), a field that emerged a few years earlier, is also associated with learning analytics. According to Siemens (2013), while learning analytics is more concerned with sense-making and action, EDM is more focused on developing methods specifically for exploring data originating from educational settings. Although the techniques used in both fields are similar, EDM has a more defined focus on reductionist analysis (Siemens & Baker, 2012), while learning analytics attends more to practice (Chatti, Dyckhoff, Schroeder, & Thüs, 2012). Despite these distinctions, however, it is expected that learning analytics, EDM, and academic analytics will continue to intersect and overlap in the future.

With learning and analytics as two pivotal concepts of the field, conceptions of learning analytics may stress one over the other. On the one hand, when a data- or analytic-centric perspective is taken, the phrase “learning analytics” often evokes images of data–especially “big”, quantitative data–in education. For example, the 2013 Horizon report defines learning analytics as “education’s approach to ‘big data,’ a science that was originally leveraged by businesses to analyze commercial activities, identify spending trends, and predict consumer behavior” (Johnson et al., 2013, p. 20). This emphasis on data is not surprising since the abundance of data has been recognized as a key driver of this field (Long & Siemens, 2011). Such a data-centric view can also be observed in conceptual models of learning analytics. For instance, a model proposed by Chatti et al. (2012) includes data collection and pre-processing, analytics and action, and post-processing as three major components of learning analytics–without mentioning pedagogical contexts, sense-making, and intervention designs. This data-centric view of learning analytics is also seemingly magnified by the connection and co-evolution between learning analytics and massive open online courses (MOOCs), since MOOCs have produced “big” learning-related data and have attracted research attention from learning analytics and EDM researchers.

On the other hand, scholars argue that conceptions of learning analytics need to recognize the nuanced aspects of learning, as learning analytics is essentially about learning (Gašević, Dawson, & Siemens, 2015). Suthers and Verbert (2013) stressed that “research on learning analytics may vary in the degree to which it makes technical contributions, but the connection to learning should be present” (p. 2). This point is easily neglected in practical efforts to create learning analytics. Important arguments have been made that we need to contextualize specific analytics in their epistemological stances, pedagogy, and theories of assessment (Knight, Buckingham Shum, & Littleton, 2014). Rather than treating analytics as “pedagogically neutral,” an alternative approach is to recognize the highly nuanced nature of learning analytics and to bring advances in learning and analytics into coordination. Thus, despite the prominent growth of big data in recent years, premature simplification of learning analytics–and over-emphasis on data in particular–could do much “harm” to education without sufficient thinking about learning (Dringus, 2011).

Furthermore, learning analytics deals with educational phenomena at multiple levels. Work in the learning sciences has recognized that learning happens at multiple levels–not only in individual minds, but also in small groups and larger learner communities (Stahl, 2013). For instance, collaborative knowledge building as a group phenomenon depends on contributions from individuals, but cannot be reliably inferred from individual learning (Scardamalia et al., 2012). This recognition urges learning analytics to account for analytic levels beyond the individual, which has been the favored unit of analysis in traditional classroom settings as well as in education regimes where accountability is the major concern (e.g., in most MOOC research). Expanding the focus from learning to the broader scope of education, Buckingham Shum (2012) differentiates macro- (region/state/national/international), meso- (institution-wide), and micro-levels (individual user or cohort) of learning analytics. Traditional scholarly analysis of learning in fields such as the learning sciences has focused on the micro-level. To assist decision-making at different levels, learning analytics also needs to attend to higher-level educational data, as well as to the integration of, and mutual complement among, different levels.

To summarize, the meaning of learning analytics as a term is plural and multifaceted. Work in this field can emphasize learning and/or analytics in many different ways, and at different levels. Such diverse understandings and approaches make it difficult to create a unified definition of learning analytics. Therefore, it is necessary for researchers in this field to provide an operational definition of the learning analytics they employ and to explain how they relate learning and analytics.

Key tensions

The interdisciplinary, cross-level nature of learning analytics has given rise to a variety of tensions that further development in this field needs to take into account. Attending to these tensions is necessary for the effective and ethical design, implementation, and use of learning analytics. Salient tensions I have observed in the literature to date include those (1) among various conceptions of learning, (2) between learning and computer algorithms, (3) between agency and control, and (4) surrounding ethical access to and use of educational data.

Different conceptions of learning

Learning analytics is not only about a set of analytic techniques; it is also concerned with feeding results back to relevant stakeholders to improve educational practice. At the surface level, what we consider as learning dictates the types of data collected for the analysis of “learning.” Going further, researchers may hold different epistemological assumptions–about where knowledge resides and how knowledge grows–and will thus consider learning drastically differently. For instance, a researcher who considers learning as knowledge acquisition in individual minds is more likely to emphasize test scores as hallmarks of academic performance. In contrast, another researcher who considers knowledge as dependent on social interactions would look into group activities for indicators of learning (Buckingham Shum & Crick, 2012). These examples illustrate that different conceptions of learning commonly exist across research communities in education, and the divide is even wider when educational theorists, computer scientists, and engineers gather around the same table. Going forward, any effort to create learning analytics would minimally require researchers and developers to make their conceptions of learning explicit, so that everyone can at least try to stay on the same page.
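As a toy illustration of how epistemology shapes analytics, the sketch below derives two different pictures of learning from the same hypothetical event log: one operationalizes learning as individual knowledge acquisition (test scores), the other as social participation (interaction ties). All field names and values are invented for this example.

```python
# One shared event log, two conceptions of learning, two different "analytics."
# Field names ("type", "score", "to") are invented for illustration.

events = [
    {"learner": "a1", "type": "quiz", "score": 0.9},
    {"learner": "a1", "type": "reply", "to": "b2"},
    {"learner": "b2", "type": "quiz", "score": 0.6},
    {"learner": "b2", "type": "reply", "to": "a1"},
]

def acquisition_view(events):
    """Learning as individual knowledge acquisition: extract test scores."""
    return {e["learner"]: e["score"] for e in events if e["type"] == "quiz"}

def participation_view(events):
    """Learning as social participation: extract interaction ties."""
    return [(e["learner"], e["to"]) for e in events if e["type"] == "reply"]

print(acquisition_view(events))    # {'a1': 0.9, 'b2': 0.6}
print(participation_view(events))  # [('a1', 'b2'), ('b2', 'a1')]
```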

Learning and computer algorithms

Learning analytics is sometimes broadly conceptualized as “translating” digital traces into numbers for interpretation; it has been compared, for example, to immersing a thermometer in water to gauge its temperature. Learning analytics, in this view, is considered “pedagogically neutral.” However, this is hardly the case when making sense of learning from learners’ “digital shadow” in learning environments (Buckingham Shum, 2015). During the process, critical decisions are made about data collection, choice of algorithms, interpretation of results, and possible courses of action based on those results. These decisions are laden with values and beliefs about learning. From a different perspective, the results of computer algorithms are contingent upon informed and contextualized interpretations; real changes in education cannot be brought about by simply handing people volumes of numbers through learning analytics (Macfadyen & Dawson, 2012). Thus, oversimplification of learning may lead to misinformed practice that neglects important aspects of learning. Siemens (2013) cautions learning analysts to “keep human and social processes central in learning analytics activities,” as the learning process is essentially social and creative (p. 1395). A similar caution comes from Gašević et al. (2015), who have problematized the practice of counting certain activities and correlating the counts with academic performance, calling instead for work grounded in coherent theoretical models of learning behaviors. It is especially dangerous if algorithms exist in “black boxes” and produce numbers that those in power use to influence behaviors (Buckingham Shum, 2015), in the name of “optimizing” learning. This tension between learning and algorithms will continue to figure in learning analytics.
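The practice being problematized here is easy to reproduce, which is part of the danger. A minimal sketch with made-up numbers: counting activities and correlating the counts with grades yields a tidy coefficient that, by itself, says nothing about the underlying learning process.

```python
# A minimal sketch of the "count activities, correlate with grades"
# practice problematized above. All numbers are made up.

import statistics

clicks = [12, 40, 33, 7, 25]   # per-learner activity counts
grades = [68, 90, 85, 55, 74]  # per-learner final grades

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

# A high r here says nothing about *why* activity relates to achievement;
# without a theoretical model of learning behavior, the number invites
# misinterpretation -- the "black box" danger noted above.
print(f"r = {pearson(clicks, grades):.2f}")
```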

Agency and control

Learning analytics influences learners’ agency when it is used to shape how learning is interpreted and how learners need to act (Buckingham Shum, 2015). A constant tension between analytics and learners, then, is how much agency is taken away from learners when analytics come into play. Analytics could disempower learners, making them increasingly reliant on institutions to provide continuous feedback rather than developing meta-cognitive and “learning-to-learn” skills and dispositions (Buckingham Shum & Ferguson, 2012). This is especially the case when analytics are contained in “black boxes” and learners are given results to react to without understanding how the algorithms work.

In addressing this tension, agency has been recognized as an important principle in designing learning analytics interventions (Wise, 2014). A special issue of the Journal of Learning Analytics, to appear in 2016, will feature applications of learning analytics to promote higher-order competencies. The balance between control and agency is high-stakes: work in learning analytics should not undo decades of effort in education to promote the high-level agency that is important for people in the knowledge age. In addition to facilitating administration, awareness, reflection, and sense-making, learning analytics should invest in helping learners and other stakeholders make local decisions, rather than taking such power away from them.

Data access and ethics

Ethical issues related to learning analytics are multi-dimensional. First, learning analytics researchers are divided on who owns the data and whether use of certain learning platforms should imply consent to the use of data for analytics purposes. For instance, Macfadyen and Dawson (2012) mention that “data that is gathered through institutional research is subject to the provisions of the Freedom of Information and Protection of Privacy Act,” rather than to the traditional ethics review process for academic research. While it may be unreasonable to require researchers to obtain consent from all learners, research ethics need to be regularly evaluated even when data are already accessible (Boyd & Crawford, 2012).

Ethical issues may figure even more deeply in power relations between educational institutions and stakeholders. In higher education, for instance, power relations between students and universities should be taken into account when designing learning analytics (Slade & Prinsloo, 2013), as analytics may carry important consequences (e.g., graduation) for individuals. In classrooms, ethical use of learning analytics needs to consider the vulnerability of students in order to protect them from possible harm. For example, publicly comparing all students’ performance may undermine the confidence of low-performing students, while presenting an individual’s performance against a class average may demotivate middle- to high-performing students who could actually invest more. The impact of power relations stretches from data collection to how analytics are interpreted and exert real-life influence on individuals.
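One hedged sketch of what acting on this concern could look like: framing feedback against a learner’s own trajectory rather than a public class ranking. The data, the threshold, and the wording below are invented for illustration; a real design would need to be grounded in its pedagogical context.

```python
# A sketch of feedback framed against a learner's own history instead of
# a class average. All data and the +/-2 point threshold are invented.

past_scores = {"a1": [60, 65, 72], "b2": [88, 90, 86]}

def personal_feedback(learner: str, new_score: float) -> str:
    """Compare a new score to the learner's own recent average."""
    history = past_scores.get(learner, [])
    baseline = sum(history) / len(history) if history else new_score
    delta = new_score - baseline
    if delta >= 2:
        return f"Up {delta:.0f} points on your recent average. Keep it going."
    if delta <= -2:
        return f"Down {abs(delta):.0f} points on your recent average. Worth revisiting?"
    return "Holding steady relative to your recent work."

print(personal_feedback("a1", 78))  # improvement framed against own history
```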

The tensions explored so far are interconnected and should not be viewed in isolation. For example, moral practice of learning analytics depends not only on the ethical use of data but also on moral applications that look beyond academic “effectiveness” (Slade & Prinsloo, 2013). Critically exploring different conceptions of learning would lead to more sensible discussion of agency and control when designing learning analytics. Kaleidoscopic as the field is, these tensions will persist as long as multiple disciplines participate. The goal is not to achieve a unified view of the field, but to create a fruitful “common ground” for further development.

(Note: Most references used in this post can be found in my Learning Analytics course. Leave a comment if there is an interesting paper mentioned here that you cannot locate.)

Bodong Chen, University of Minnesota