Listening to student conversations during clicker questions: What you have not heard might surprise you!

posted: January 31, 2011 by Stephanie Chasteen

We have our classroom spies.  And they have sent us their report in a forum that will probably not be seen by the students they were observing: in a study released last week in the American Journal of Physics, two authors report on over 300 recorded student conversations during clicker questions in three introductory astronomy classrooms.  And the results are fascinating.

When we write clicker questions, we typically choose some topic or concept that we think will be difficult for students.  We devise answer choices to capture common pitfalls we think they’ll fall into.  We expect students to read the question and discuss it amongst themselves, using critical thinking, logic, and reasoning to weigh the different choices until settling on the best one.

James and Willoughby, however, found that this happened less than half the time (38%).  The rest of the time, students were doing one of the following:

  • Falling into pitfalls that derailed productive conversation (38%) – such as deferring to a confident student, ending the conversation before fully discussing the answer choices, not talking about the reasoning behind answer choices, or simply never getting a productive conversation started.  Note that this was more of a problem in classrooms where students were heavily graded on getting the correct answer!
  • Giving an answer that didn’t really represent their thinking (26%) – like adopting another student’s answer (even when they didn’t agree with it), guessing, or looking for clues in the way the question was phrased
  • Bringing up ideas that weren’t included in the existing answer choices (12%) – like revealing gaps in their fundamental science knowledge or bringing in irrelevant ideas

This is big news, and gives us insight into what is really happening during peer instruction in a way that we could previously only guess at.  To me, this highlights the need for three things during the implementation of a clicker question, things that I’ve always advocated for, but now I have some hard data to back me up:

  1. The instructor should circulate and listen to student conversation. By listening in, the instructor can get some sense of what students are discussing, ask Socratic questions to spur productive conversation, and see how the question might be revised to more accurately capture student thinking.  If the instructor is alone in a large lecture room and can’t cover the whole room, consider using graduate TAs or undergraduate learning assistants to help circulate, facilitate, and listen.
  2. Facilitate a whole-class debriefing conversation at the end of the peer discussion, and discuss the reasoning behind the right answer (and why the wrong answers are wrong), even if there appears to be consensus. If nothing else, this study highlights that students very, very often give the right answer for wrong (or confused) reasons.  Having a discussion about the question, listening to multiple students’ reasoning, and clearly explaining why the instructor favors the right answer and rejects the others is critical.
  3. Provide credit as an incentive, but keep it low-stakes. At Colorado we use Mazur’s suggested method, where we give participation credit for clicking in, plus extra credit (which counts against the exam scores) for getting the right answer.

James and Willoughby offer some additional suggestions:

  1. Encourage students to share ideas that do not match the listed answer choices, either during whole-class discussion or via written feedback at the end of class
  2. Routinely include “none of the above” as an answer choice
  3. Ask students to rate their degree of confidence in their answer
  4. Ask a series of questions, each focusing on one link in a logical chain, to more clearly highlight where students are having difficulties
  5. More clearly guide student interactions during clicker questions (e.g., assess all answer choices, generate your own answer choices if necessary, make note of questions and confusions, ask for help from other students and instructors if you don’t know how to start your conversation).

Categories: Classroom Response Systems, Formative Assessment, Higher Education, Peer Instruction

2 Responses to “Listening to student conversations during clicker questions: What you have not heard might surprise you!”

  1. Peter (@polarisdotca) Says:

    It is interesting to examine their findings about low-stakes (participation point) vs. high-stakes (participation + bonus for correct) classes. They found students much more likely to just give in and vote like their peers, even if they disagree, in the high-stakes classes. It’s like the appeal of that extra point overrides their willingness to stick up for their own ideas. I’ve advocated for the bonus mark – I thought it encouraged engagement – but I’m starting to rethink that. In the high-stakes class, you are, in fact, penalized for failing. I don’t think that promotes effective think-pair-share. We should do more research!

  2. Stephanie Chasteen Says:

    I agree — it is interesting. I’ve also often advocated some small correctness marks, since then students are motivated to actually get the right answer. But it seems that we are particularly sensitive, as humans, to the “get points for right answer” meme, and that goal can easily override all else. I wonder if getting points stimulates the same part of the brain that responds to other reward mechanisms (like winning the lottery, gambling, or cocaine) — and so we’ll unconsciously do whatever it takes to get “paid.” It’s probably less salient than gambling, because the reward is delayed, but there is still the knowledge that “correct answer = reward” so maybe the brain is positively stimulated even just by getting the right answer in that moment when the correct answer is displayed.

    We probably don’t need to do more research — the behavioral economists have already researched this sort of thing to death, and we just need to apply it to our field.
