
(Photo Credit: Dustin Gaffke)

Windchill

I was listening to a science radio program the other night when the host talked about ‘windchill being inaccurate,’ even though many of us have never questioned its accuracy at all. Scientific American has published an article on the same topic, saying:

Wind chill is a mathematically derived number that approximates how cold your skin feels—not how cold your skin actually is. … Wind carries some of that heat away, however, and the faster the wind, the faster the heat loss. Once the wind surpasses 25 mph or so, it whisks away heat more quickly than your body can emit it, leaving your skin exposed to the full low temperature.

Indeed, the windchill number I check every day in winter (yes, I live in Minneapolis) is actually based on a formula, with air temperature and wind speed as the two most critical factors. The central issue discussed in the radio program was that there are a bunch of formulas and none of them is definitively accurate. According to this article from USA Today:

The new formulas are based on greater scientific knowledge and on experiments that tested how fast the faces of volunteers cooled in a wind tunnel with various combinations of wind and temperature.
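For the curious, the formula behind the current National Weather Service wind chill chart (adopted in 2001 from those wind tunnel experiments) takes only air temperature and wind speed as inputs. Here is a minimal Python sketch; the function name and the fallback behavior outside the formula’s range are my own choices, but the coefficients and the stated range of validity follow the NWS definition.

```python
def wind_chill_f(temp_f: float, wind_mph: float) -> float:
    """NWS (2001) wind chill in °F, given air temperature (°F) and wind speed (mph).

    The formula is only defined for temperatures at or below 50 °F and
    wind speeds above 3 mph; outside that range, wind chill is not
    reported, so we simply return the air temperature itself.
    """
    if temp_f > 50 or wind_mph <= 3:
        return temp_f
    v = wind_mph ** 0.16
    return 35.74 + 0.6215 * temp_f - 35.75 * v + 0.4275 * temp_f * v


# A typical Minneapolis winter day: 5 °F with a 20 mph wind
print(round(wind_chill_f(5, 20), 1))  # ≈ -15.4
```

Note that nothing in the formula changes the actual air temperature; it only estimates how cold exposed skin would feel, which is exactly the ambiguity the definition below trips over.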

One may continue to ask: well, there could be a range of factors that make even the newest formulas inaccurate. One is humidity in the air. Another is the volunteers’ perception of coolness, which may be influenced by their body fat and tolerance of cold. The list could keep growing…

Looking at the definition of windchill below, I find my understanding gets even muddier.

A quantity expressing the effective lowering of the air temperature caused by the wind, especially as affecting the rate of heat loss from an object or human body or as perceived by an exposed person. (see Oxford Dictionaries)

Does it mean the actual air temperature gets lowered, or the temperature perceived by a human subject? Do we consider windchill for an object like a chair? How about objects with different levels of humidity or heat? Such a definition, together with a bunch of other conflicting ones (e.g., on Wikipedia), gets me nowhere in understanding what windchill really is.

Nevertheless, people including me still find windchill useful, especially those who grew up in a warmer place and have been fooled by winter days with bright sunshine.

Validity, Accountability, and Integrity in Learning Analytics

I think the debates on windchill are interesting because the concept combines objective reality with human-constructed perception. Even if there were a ‘scientifically’ validated formula that handled the objective part – the effective lowering of temperature through heat loss – validly and reliably, how could scientists, and the public, reach a consensus on the human perception part?

This issue sounds a lot like long-standing “paradigmatic wars” in education research and more recent discussions of validity in learning analytics, none of which are going away anytime soon.

As for learning analytics, broader concerns about accountability and integrity could kick in as well, in addition to an algorithm’s capability to capture the ‘reality’ of learning processes. In a provocative blog post, Simon Buckingham Shum situates this discussion within a broader societal dependence on algorithms (e.g., search engines, social feeds, airport security checks); while he argues for more algorithmic transparency and accountability, he also calls for a more holistic view of the ‘analytics cycle’ and proposes focusing on Analytic System Integrity rather than on algorithms, which constitute only one part of the system.

My interpretation is that for any learning analytic to navigate the nuanced space involving various stakeholders and agents (including algorithms), a unidimensional focus on algorithmic validity will not get us very far. There are numerous ways to err across the learning analytics cycle – data collection, data wrangling (cleaning, integration, transformation), analysis, modeling, representation (visualization), sense-making, action-taking, and so on – so one promising approach is to always remain open to conversations: conversations among stakeholders, data, socio-technical environments, etc.

Another way to think about the discussion of validity in learning analytics is to reflect on the positivist orientation underlying many learning analytics solutions, especially those that aim for predictions with high-stakes consequences (e.g., identifying at-risk students). A positivist orientation may help vendors sell their products, as it supports claims of ‘rigor’ that lead to client buy-in. However, combined with widespread opaque or ‘black-box’ algorithms, this orientation may also throttle important conversations and set the stage for unproductive interrogations of a solution. Therefore, instead of fixating on validity in positivist terms, it could be wise to recognize the problem-solving nature of learning analytics and embrace a pragmatist orientation. In that way, we would look not only for the validity of algorithms, but more broadly for usefulness, credibility, and trustworthiness situated in a holistic analytic system. We would not pretend to invent a solution that fits all contexts, but would engage in tuning an analytic within a local context, more modestly examining transferability (instead of external validity) to near-neighbor contexts, with the dependability of a solution documented to support that transfer.

Perhaps what we can learn from the windchill discussion is that its various formulas – less complex than learning analytics solutions, if I may claim – are transparent and open to conversation. Their usefulness is therefore more likely to be appreciated, rather than dismissed by endless questioning of their accuracy.

Bodong Chen, University of Minnesota