This is the first of what I hope will be a few posts about a “Politics and the Mass Media” course that I taught in the Winter 2015 semester. I experimented with a few things in that course, at least partly because when I was starting to prepare the course last summer, I was reminded by someone who teaches such courses regularly (and, like me, is in his mid-40s), that today’s “students inhabit a media universe that is not at all like the one we have in mind when we use the term ‘mass media.'”
I learned a lot teaching that course. It was a busy term, so I didn’t have the chance to write about it much at the time, although I did whip off a couple of Twitter essays that I posted to Storify – here and here – as my attempt to catch up to the new media universe.
This post is about an experiment with an assignment that wasn’t necessarily specific to a Politics & Mass Media course: getting students to engage in double-blind peer review. The process was similar to the double-blind peer review that we often undertake as academics, but in this case its purpose was to improve written work, not to screen or filter it.
One more point to set the context: I think that learning to write clearly and persuasively is one of the most important skills that students can get out of university (and particularly a BA degree). And yet, for what I think are good reasons, I am reluctant to mark drafts of students’ essays. I tried this as a way to get around that problem. I’ll briefly describe the process, how student work was assessed, and then say a few things about the pluses and minuses of doing this.
- About a month into the semester, students were to submit a 2-page overview of their proposed essay topic. The assignment gave them quite a bit of latitude (a comparative media analysis on an issue or event of their choosing). I had a marking template for these (below), which included a space for me to include an “author ID number,” as well as feedback on their proposed topic.
- The week after the mid-term break, students had to bring to class 3 copies of a draft essay, with no name or identifying info on the paper except for their author ID number. I organized the redistribution of those essays (recording who got which papers on a spreadsheet), and then handed them out at the beginning of the following week.
- Students had one week to review the three draft papers that they were assigned. The instructions were to provide constructive feedback (not assess or grade), and a template was provided (below). Reviews were submitted to me, and I gave them back to their authors, matching up ID numbers to student names.
- One week after that, each student submitted a package that included at least one of the draft copies of their essay (usually more than one, since most reviewers marked up the drafts), all three of the reviews that they received, and a 1-2 page “response to reviewers.” Like a response to academic reviewers, the assignment was to discuss how the paper was going to be revised in light of the reviewers’ comments.
- With all that paper, I assigned a grade for each of the reviews, marked the response to reviewers, and then returned it all to the authors (emailing people their review marks). Marking the response also gave me the chance to guide authors, if I thought the reviewers had all missed something important, or (more often) if an author wasn’t giving serious enough consideration to a reviewer’s ideas.
- The final paper was due a couple of weeks after that, at the end of the semester.
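For anyone curious about the bookkeeping behind those steps, my spreadsheet boiled down to two lookups: author ID to student (so reviews could be returned to the right person), and reviewer to the IDs they were handed. A toy sketch of that idea, with invented names and IDs (I actually did all of this in a spreadsheet, not code):

```python
# Hypothetical bookkeeping for the double-blind process.
# All names and author IDs below are invented examples.
author_of = {"A17": "Pat", "B03": "Sam", "C42": "Lee"}   # author ID -> student

papers_reviewed_by = {"Pat": ["B03", "C42"],             # reviewer -> author IDs
                      "Sam": ["A17", "C42"],
                      "Lee": ["A17", "B03"]}

def route_review(reviewer, reviewed_id):
    """Given a submitted review, find the author it should be returned to."""
    # Sanity check: the reviewer was actually assigned this paper.
    assert reviewed_id in papers_reviewed_by[reviewer], "unexpected paper"
    return author_of[reviewed_id]
```

Nothing fancy, but keeping both mappings in one place is what lets the instructor preserve anonymity in both directions while still matching everything up at the end.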
Assessment of students’ work:
This was the major assignment for the course, worth a total of 50% of students’ final marks. It was broken down as follows:
- The topic overview was worth 5%
- Draft essays were not graded, but the logistics of this process made it very important that these be submitted on time. So there was a penalty imposed (or at least threatened) for late drafts.
- Reviews were worth a total of 10%. Three reviews were graded out of three marks each, according to a rubric (most got 2 or 2.5 out of 3). One additional point was given if all three reviews were submitted by an early deadline.
- Response to reviewers was worth 10%
- Final paper was worth 25%
At least based on what they told me, students seemed to really enjoy this exercise and found it valuable. One student commented that there are surprisingly few occasions when they actually get to see their peers’ work. I was also pleasantly surprised by both the amount of effort that students put into the peer reviews (often marking up draft essays quite extensively), and by the positive character of the feedback (see below). Students did not use the veil of anonymity to deliver unduly harsh comments. That said, when students know their classmates fairly well, that veil can be pretty thin: author identities can be guessed based on topic choice, writing style/ability, etc.
Reviews generally tended to focus on the mechanics of how the essay was put together: was there a clear thesis statement, did it flow logically, was the writing clear and grammatically correct, and so on. I’m not sure whether I was happy or disappointed to find that students who turn in poorly written work can actually edit the work of others reasonably well. Although I don’t have a clear baseline to compare it to, I think most if not all of the final essays ended up being more polished in terms of clarity of presentation than they otherwise would have been. Some of them were probably improved in terms of content (soundness of analysis, theoretical sophistication, methodological rigour, etc.), but that was more hit-and-miss.
Words of caution:
Overall, this was a good experience, and I will probably do it again, in some classes. But a couple of cautionary notes. First, I think this size of class (38 students) is about the maximum. The logistics of shuffling around that much paper were fairly daunting. 38 students x 3 drafts x 10 pages = a lot to carry around. And re-sorting 100+ papers to distribute to peer reviewers (and then back to authors) required time and quite a bit of floor space. I briefly considered doing it electronically, but based on my skills and what I have available, I think it would have been even more work. In a smaller class (under 20-25) even the pretence of anonymity would be gone.
Also, a process with this many steps necessarily means working to tight timelines, so the scheduling (including scheduling the time to grade, where necessary) needs to be done pretty carefully, in advance, with some cushions built in for unanticipated disruptions. A couple of students’ drafts or reviews were late or didn’t come in at all, which was not surprising, but made things more complicated. I knew it would happen, but couldn’t have known which students would be delinquent, nor whether promises like “I will submit it first thing Monday morning!” would in fact be kept. The process had to be flexible enough to deal with this, and with the fact that delinquent students were also harming their peers.
When assigning peer reviewers, where possible, I tried to have students get at least one reviewer who was writing on a similar topic, since I thought they would be more likely to provide good feedback on the content and not just structure or mechanics. I also wanted to avoid having people paired up (A reviews B’s paper and B reviews A’s paper). And I know that some students are, shall we say, more careful readers than others, and so wanted to try to have each student get reviews that were roughly similar overall in terms of quality. The spectrum of reviewer quality turned out to be narrower than I thought it would be, in a pleasantly surprising way.
As I said, overall, I am happy enough with how this went, that I will probably do it again. Both the iterative writing process, and the opportunity to teach and learn from their peers, are good experiences.