More about forum participation in education

Another random set of notes about forums this week, I’m afraid, but we’re close to finishing the paper that I’m co-authoring, so normal service will be resumed shortly.

Initially, on-line forums were offered in addition to print-based correspondence courses, and were, alongside email and web-based articles, considered optional “so as not to reduce access to students without internet or computer facilities” (Bates, 2008).

The 2002 paper by Wu and Hiltz sets out possible benefits of forum participation in education:

Online discussions that persist throughout the week should motivate students to be more engaged in their course on a continuous basis […] Secondly, active participation in online discussions, which are student-dominated rather than instructor-dominated, should be enjoyable for the students. It should make learning more active and “fun.”

To test these hypotheses, they surveyed 116 participants in three face-to-face courses (two undergraduate and one graduate) for which active participation in forums was a requirement. It’s important to note that the trial was observational, not a randomised controlled trial, and that the surveys tested perceptions of learning rather than testing learning itself. However, they concluded that students did find that the asynchronous discussion afforded by forums made the course more enjoyable and increased motivation. They also discovered that the amount of previous experience with distance learning courses didn’t appear to affect how enjoyable or motivating students found the on-line discussions on the observed courses. The importance of the instructors’ involvement in setting topics for discussion, offering feedback and guiding discussion was highlighted by the students’ responses, with one saying that instructors should be “online for two to three hours every day.”

Two years later, Biesenbach-Lucas (2004) put forward her interpretation of the benefits of forum participation (particularly in teacher training):

Positive interdependence: Students organize themselves by assuming roles which facilitate their collaboration.

Promotive interaction: Students take responsibility for the group’s learning by sharing knowledge as well as questioning and challenging each other.

Individual accountability: Each student is held responsible for taking an active part in the group’s activities, completing his/her own designated tasks, and helping other students in their learning.

Social skills: Students use leadership skills, including making decisions, developing consensus, building trust, and managing conflicts.

Self-evaluation: Students assess individual and collective participation to ensure productive collaboration.

Her paper only expects the instructor to act as “Observer/evaluator, perhaps some participation”; however, she admits that, over the course of her five-semester experiment with forums, the instructor carried over more and more outputs from the forum into face-to-face sessions.

Vonderwell, Liang and Alderman (2007) explored asynchronous online discussions, assessment processes, and the meaning students derived from their experiences in five online graduate courses, and concluded:

Educators need to look more carefully into the notions of “assessment for learning” as well as “assessment of learning.” Online learning pedagogy can benefit from a notion of “assessment as inquiry” and “assessment of constructed knowledge” in asynchronous discussions.

Kearns (2012) offers a reasonable summary of the challenges, including the sheer number of posts that might need the instructors’ attention:

One problem arising from the asynchronous nature of online discussion is the impact of late posting. For a discussion that runs from Monday to Sunday, for example, students in the discussion group may miss the opportunity to fully engage if some wait until Saturday to begin. On the other hand, even in classes where discussion is sometimes less than robust, students may face the challenge of having to keep up with voluminous postings across multiple groups and discussion forums. As one of the participants pointed out, “Sometimes it’s hundreds of entries.” […] A recurring theme among instructors who participated in this phase of the study was the amount of time and effort involved in providing effective feedback to online students. One source of demand was online discussion. Several instructors reported being overwhelmed with the amount of reading this required. As one instructor remarked, the discussion board became “cumbersome when done every week.” Another demand on an instructor’s time that was raised was having to enter comments on student papers using Microsoft Word rather than being able to handwrite in the margins. For one instructor, this was “time consuming” and “more tedious” than annotating the hard-copy assignment. One instructor mentioned needing a greater number of smaller assessments to oblige students to complete activities that might otherwise be completed during F2F class time. In her words, “If there is not a grade associated with an assignment, it is completely eliminated.” Finally, several instructors commented on having to answer the same questions more than once in the absence of a concurrent gathering of students.

Mentioning peer assessment as a useful strategy for coping with this challenge, she cites Yang and Tsai (2010), which relates an interesting experiment with peer assessment, finding peer-reviewed marks reasonably comparable with those of an external assessor, and measuring the impact on students’ perceptions of and approaches to peer review. Though, in the context of MOOCs, I’m not looking for a robust marking procedure at this stage, I am interested in peer review as a way of tagging posts so that they can be used to create procedural or semi-random narratives.

There is plenty of analysis of that challenge, and some useful ideas, in the packed paper by Meyer (2006). For example:

Many research studies do not use multiple raters to code the content in online discussions. This may be owing to a number of factors, including the instructor’s preference for working alone or a lack of interested colleagues to help with coding. Researchers may not have the time to train other coders or the money to pay them, or perhaps the aim is simply to collect data about the learning of a given set of students rather than to produce reliable findings. In other words, there may be understandable reasons for not using multiple raters, despite the greater reliability that might result from their use.

Crowdsourcing the rating from other students, through peer review, may be an answer.
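
As a very rough illustration of what I have in mind (purely a sketch of my own, not drawn from any of the papers above; the tag names, thresholds and post IDs are all invented), crowd-sourced peer ratings could be aggregated per post and only applied as tags once enough reviewers agree:

```python
from collections import Counter

# Hypothetical sketch: turn crowd-sourced peer ratings into tags for forum posts.
# Tag names, thresholds and post IDs are made up for illustration only.

def tag_posts(ratings, min_reviews=3, agreement=0.6):
    """ratings: list of (post_id, tag) pairs submitted by peer reviewers.
    A tag is applied to a post when at least `min_reviews` reviewers rated it
    and a proportion >= `agreement` of them chose the same tag."""
    by_post = {}
    for post_id, tag in ratings:
        by_post.setdefault(post_id, []).append(tag)

    tagged = {}
    for post_id, tags in by_post.items():
        if len(tags) < min_reviews:
            continue  # not enough reviews to trust the crowd yet
        top_tag, count = Counter(tags).most_common(1)[0]
        if count / len(tags) >= agreement:
            tagged[post_id] = top_tag
    return tagged

if __name__ == "__main__":
    sample = [
        (101, "key-insight"), (101, "key-insight"), (101, "question"),
        (102, "off-topic"), (102, "off-topic"), (102, "off-topic"),
        (103, "question"),  # only one review, so this post is skipped
    ]
    print(tag_posts(sample))  # {101: 'key-insight', 102: 'off-topic'}
```

Posts tagged this way could then be selected and sequenced to drive the procedural or semi-random narratives mentioned above, without any single instructor having to read every entry.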