Bodong Chen

Crisscross Landscapes

Notes: Hickey et al. (2011). Reading Moby-Dick in a Participatory Culture



Citekey: @Hickey2011

Hickey, D. T., McWilliams, J., & Honeyford, M. A. (2011). Reading Moby-Dick in a Participatory Culture: Organizing Assessment for Engagement in a New Media Era. Journal of Educational Computing Research, 45(2), 247–263. doi:10.2190/EC.45.2.g



This article identifies discrepancies between traditional instructional practices that emphasize individual mastery of abstract concepts and skills and new media literacy practices that rely upon collaborative, social, and context-specific activity. (p. 1)

This framework, which we call “participatory assessment,” builds on previous work in science and math instruction, as well as in immersive video games, and extends that work into the secondary English language arts classroom. This article describes the curriculum, the approach, and some of the assessment design principles that emerged. (p. 1)

Teachers know that most students spend much of their free time “hanging out, messing around, and geeking out” in digital social networks (Ito, Baumer, Bittanti, boyd, Cody, Herr-Stephenson, et al., 2009). (p. 2)

This volume and other publications (e.g., Coiro, Knobel, Lankshear, & Leu, 2008; Greenhow, Robelia, & Hughes, 2009) summarize the many reasons for including new media practices such as social networking, social bookmarking, and blogging in schools, and teaching so-called “new media literacies” (Jenkins, Clinton, Purushotma, Robinson, & Weigel, 2009). (p. 2)

This strategy transforms student assessment, an established school practice that otherwise presents one of the most daunting challenges. Building directly on the participatory nature of digital social networks (Brown & Adler, 2008; Jenkins et al., 2009), participatory assessment fosters authentic school-based engagement in new media literacies and practices while indirectly (but consistently) yielding improvement in more traditional literacies and practices. (p. 2)

  1. How can we leverage the skills and mindsets young people are developing in their use of digital technologies in order to enhance their engagement with traditional texts?
  2. How can we use innovative assessment practices to do so given the realities of classrooms and schools? (p. 2)


Some observers are beginning to recognize that the social nature of learning and knowledge in digital social networks confounds education’s traditional focus on helping individual students master well-defined bodies of stable knowledge. Digital social networks are continually shaped by shared control (where content and expertise are continually co-created by participants) and transformative interaction (where individual users and groups of users are customizing both the content and format for enjoyment or ease of use) (Xenos & Foot, 2007). The knowledge in these networks is persistent, searchable, and easily replicable (boyd, 2008). In addition, many collaborative online resources rely on appropriation and repurposing of content, complicating the issue of plagiarism and introducing potentially offensive or objectionable content. (p. 3)


By treating conventional assessments and tests as “peculiar but necessary” forms of educational discourse (Hickey & Zuiker, 2005), we align classroom discourse to assessments and tests. However, we do so in a way that protects that discourse from classroom assessments. (p. 4)

Concepts as Formalisms and Boundary Objects (p. 5)

Rather than working backward from the abstract definition of genre in the standards (e.g., Wiggins & McTighe, 2005), we work forward from the communal practices and the social and technological contexts in which students can use the idea of genre to engage with others around creative artifacts. In this sense, we characterize the proficiencies targeted by a particular curriculum and represented in externally developed standards as formalisms (Barab & Roth, 2006), conceptual tools used to make meaning (i.e., enlisted) in particular discourse contexts. Participatory assessment helps students appreciate how core ideas like genre are enlisted differently in different contexts (e.g., genre study vs. remixing). In this sense, participatory assessment reframes literacies and language arts practices as “boundary objects” (Bowker & Star, 1999) that can inhabit multiple activity systems. (pp. 5–6)


Assessment Development Effort and Examples (p. 8)

The distinction between increasingly formal and abstract levels is crucial to participatory assessment. We label these levels immediate, close, proximal, distal, and remote. (p. 8)

Immediate-Level Event Reflections (p. 8)

These assessments concern the enactment of a curricular activity in a particular classroom context. (p. 8)

Two design features illustrate how these prompts are part of a larger, systematic approach. First, they are carefully phrased to focus on the activity, rather than individual understanding of a targeted formalism. In other words, they are carefully worded and aligned with the activity in order to focus students’ attention— and classroom discourse—around the domain-specific nuances that give the formalisms their communicative power. The second feature is their careful alignment with the close-level activity reflections that occur at the end of the activities. (p. 9)

Close-Level Activity-Oriented Reflections (p. 9)

These assessments are oriented toward the designed lesson (i.e., the way the lesson was intended), which is somewhat removed from the way that the lesson was enacted. Activity-oriented reflections consist of reflection questions posed to students after a lesson is completed. While still quite informal, these questions promote communal reflection on the practices (e.g., annotation and ornamentation) and formalisms (e.g., genre, remix, source material) around which the activity was designed. (p. 9)

Returning to the prior discussion of lurking, a core principle in participatory assessment is that prematurely and/or directly interrogating individuals has an evaluative tone and a summative function that undermines crucial initial participation. Immediate-level and close-level prompts focus discourse toward the curriculum and away from individual understanding and proficiency. We contend that deferring individual assessment lets students enlist the big ideas of the unit without the fear of doing so incorrectly; by carefully aligning these communal assessments to subsequent individual assessments, we provide useful motivation for students, feedback for teachers and designers, and evidence for skeptics. (p. 10)

Proximal-Level Artifact-Oriented Reflections (p. 10)

In our initial implementation, the teacher graded both the artifact and the reflections. From a participatory perspective, grading both proved problematic. While the reflective rubric helped direct attention toward the big ideas and away from the specifics of the artifact, grading the artifacts directly still led students to insist on specific examples and guidelines and obstructed students’ ability to engage with the big ideas of the unit. (p. 11)

Distal-Level Standards-Oriented Assessments (p. 11)

Distal-level assessments directly assess an individual student’s understanding of the targeted formalism(s), but they do so in a new and a more abstract context that is more akin to the way those formalisms are represented in content standards. (p. 12)

Remote-Level Achievement-Oriented Tests (p. 12)

Remote-level tests are “achievement-oriented” because they measure gains in achievement of standards across multiple curricula, over longer timescales, and by various groups of students. (p. 12)

Indeed, we worry that the “participation gap” in new media practices (Jenkins et al., 2009) will widen with increased use of integrated curricula and tests of 21st Century Skills. (p. 13)


This is difficult for two very different reasons. On one hand, our approach is solidly rooted in newer situative and participatory views of cognition and learning. These views have yet to receive wider acceptance and appreciation by the broader assessment and measurement community (Moss, Pullin, Gee, Haertel, & Young, 2008). Conversely, because our approach is designed to yield increased scores on externally developed achievement measures, we anticipate puzzlement from innovators and educators who oppose the continued focus on such measures. We contend that participatory assessment can ultimately address the seemingly incompatible goals of these two very different communities. (p. 14)