I want to quickly review my use of online quizzes this summer, returning to a post I started here. As I noted, I use “low-stakes retrieval practice,” i.e. quizzes, regularly in my courses so students can assess their efforts and make any micro-adjustments to their study habits they deem necessary. Instead of diverting class time to taking quizzes, I decided to have students take quizzes online at home as part of their homework. Here are some quick thoughts.
Practice Syllabus Quiz: Not many students are familiar with taking quizzes at home, nor was I familiar with the mechanics of making them. I decided to give my students a syllabus quiz after the first day of class, both to make sure they understood the requirements of the course and to troubleshoot any issues that might arise once the real quizzes started. As enticement, I offered a small amount of extra credit for completing the quiz with a score of 100%.
Multiple Choice (MC) Only: Clearly, this will depend on an individual instructor’s educational goals, but I shifted from the norms of my paper quizzes. I would often add a few short answer questions to the end of in-class quizzes, but found that grading these online took too much time; clicking, loading, scrolling, typing, and saving took far longer than flipping a page. After including short answer questions on my first online quiz, I decided to make all subsequent online quizzes multiple choice only so everything could be graded automatically. If this were not a time-condensed summer class I might have kept the short answers, but I was looking to save some time overall with online quizzes.
Open-Book/Timed: Honestly, I just don’t trust anyone not to use their notes or book when taking a quiz at home, so I decided to make the quizzes open-book. To offset this a little, I made all the quizzes timed, meaning that students would not be able to casually get 100% on every quiz just by flipping through their notes or my slides. I told them they needed to study before starting the quiz. I gave them just over 1 minute per multiple choice question.
MC Strategy I: Crafting good multiple choice questions can be a skill in itself, especially ones that test higher-level thinking (application, analysis, evaluation) as opposed to just memory (recognition and recall). The first question below is an example of simple recognition, and the correct response (“B”) would (or should) be found word-for-word in students’ notes. The second question combines recognition with application, asking the students to apply their knowledge (in this case, to real-world examples). In addition to testing higher-level thinking, the correct answer could not be found word-for-word in students’ notes. Because the main focus of my quizzes is to test simple recognition and recall (midterms and finals are different), I mix in only a handful of MC questions that test higher-level thinking. Yet it is possible in some cases to transform lower-level cognitive MC questions into higher-level ones.
MC Strategy II: While generally not regarded as a good strategy for crafting MC questions, I’ve found that using some “exception” questions also requires students to know (recognize) more about a concept.
Testing Recall: Another method I did not employ this summer is to use “fill in the blank” questions that are automatically graded.
Overall, I will continue to develop my use of online quizzes. For me, the benefits of saving valuable class time and of automatic grading outweigh the drawbacks of limiting (or avoiding) short answer questions.
This is odd to admit, but I run a “coercive classroom.” And there is nothing more coercive, in my mind, than having my students write daily reflections on assigned readings. I have little reason to think they (or anyone, really) would keep up with the readings without a regular assessment of some sort. Of course, open class discussion of the day’s readings can “peer pressure” some into regular reading habits. I’ve found, however, that only a select few are consistently willing to offer their insights, while others are content to simply listen. (Cold-calling students is a craft I have not yet mastered, but will be attempting next semester. I plan to write about it here in another post.)
There are several ways to gauge whether a student has done the reading, but many require a lot of additional effort from the instructor. One may prepare a series of comprehension questions that are handed out just before the reading is assigned. These have the benefit of focusing the student’s attention, but I would argue that this is also their drawback (they tell the student what to find interesting or important). It also takes time to craft thoughtful questions that genuinely move beyond basic fact-finding. I personally tend to save good questions like this for class discussion.
Daily (or surprise) reading quizzes are another means of coercion. I’m not convinced of the value of these either, mainly because the questions have to be “easy,” allowing the student to signal to the instructor that the reading was done even though it may not have been fully comprehended. And grading these can be surprisingly difficult, especially if the question is too easy.
So I’ve veered in another direction, pulling an idea from my time in the Writing Program. I ask my students to respond to every reading by answering the same three questions.
What is New? What is Old? What is Odd?
New, Old, Odd, that’s it. I sometimes jokingly call this my N-O-O assignment. The first question covers an idea they find interesting, something they can be motivated to explore in more depth if need be. The second covers a topic they’ve seen elsewhere, or at least can draw a parallel to; this allows students to build on prior knowledge. The last asks them to critique an author’s point or to pose a clarifying question about a topic.
Currently I have my students post these responses on a Forum in GauchoSpace. After posting they can read other students’ posts, though I have not required them to post comments this time. My practice is to go through the posts shortly before class and, when I have time, to leave a brief comment. Typically I will respond to their questions, but I will also encourage their curiosities. Even when I do not have time to write responses, just browsing the posts gives me ample ideas of where to take my lecture and what to go over in more detail. I have hesitated to call out students by name for their (insightful) comments, but hope to make this a more common part of my practice.
I grade the reflections on a simple “did it” or “didn’t do it” scale, though I’ve contemplated a three-point scale of “outstanding,” “satisfactory,” and “unsatisfactory” (plus “didn’t do it”). I typically give my students a few “days off” as well.
Below are the directions I’ve used this summer (I tweak them for each class I teach).
Directions: When approaching the reading assignments for this course, I want you to pay attention to three critical aspects: what is New to you, what is Old to you, and what is Odd to you. Your written reflection for each of these critical aspects should be at least a few sentences in length. Provide page numbers from the readings as necessary.
Below are some of the types of questions you can ask yourself for each aspect.
1. New – What was something new and interesting? What was particularly useful or insightful? What quote or passage was able to reveal something interesting and/or helpful for you? Why was it so? If anything, clearly locating these sections will make the time you spent reading seem worthwhile.
2. Old – What was familiar? What quote or passage claimed something that you already knew? Was there something that seemed familiar or had a potential parallel to another religious tradition you know? Locating these sections will give you a clear foundation should you encounter other sections that are not as clear to you.
3. Odd – What was confusing or unexpected? What quote or passage did not make sense, or what were you critical of? What problem did you have with it? Locating these sections will help you keep a healthy, critical attitude toward the readings and suggest areas that require further exploration.
Your response will have to be posted before class for you to receive credit.
This summer I’ve tasked my students with writing a final paper that argues for their own definition of “religion” based solely on the Asian traditions we cover in class. As part of this process, I required them to craft a rough draft that was due on the day of our midterm exam. Technically, it was a slightly different, shorter assignment that built toward their final product.
I assigned this shorter piece with three specific goals in mind. The first was to motivate them to think about their project early. The second was to force them, through peer review, to see how their fellow students tackled the same problem and, hopefully, to inspire their own approach. The last was to give students the opportunity to practice the (slowly acquired) skill of good critique.
While this last objective has little to do with the content of my course, I feel it is incumbent on me to teach writing in a Humanities course even when I am not formally teaching writing. (Yes, I have been indoctrinated.) Of equal importance, this provides my students insight into my grading criteria several weeks before they hand in their final project. Consequently, this requires some type of peer-review rubric. In case you haven’t discovered this already: open-ended peer-review sessions, where students are just told to write whatever commentary they desire, are not worth anyone’s time.
One can find peer-review handout templates online, but it is important that your peer-review rubric contains elements that are related to your own grading rubric for the assignment. In fact, there is no reason your peer-review rubric and grading rubric cannot be the same thing!
Overview & Prep
Prep work: Each student had to bring in two printed copies of their short paper: one went to the peer-reviewer, the other was handed in for my commentary. I crafted a reader review rubric that each student had to fill out for the paper they read. In making the rubric, I was also drafting my own grading rubric for later in the term. Consequently, this peer-review exercise was also a means for me to gauge how students were interpreting the prompt and where I should re-examine my evaluation parameters.
Overall, I divided the rubric into three sections: 1) basic requirements, 2) organization & structure, 3) overall quality.
Set-up: The students took the midterm the same day we did reader review (summer sessions are rough!), so there was limited time. I wrote basic instructions at the top of the sheet and read them aloud. I regularly remind my students that there are real human beings reading these comments, so be nice; the tone can be colloquial. I also tell them to offer praise as well as criticism, as long as it is constructive (i.e., I want them to consistently tell the author why they made a specific comment).
In this case, I had the students pass their papers to a random person, and then again to another random person, until they “lost” their paper. In hindsight I should have had them trade with a partner so they could talk about their papers with each other, but I knew time was going to be tight as it was.
Practice: We had about 20 minutes total to do this exercise, which was a bit rushed. After a few minutes for instructions, less than 15 minutes were left for a read-through and written comments. I encouraged marginal comments, but also directed students to read the rubric and fill it out as much as they could. With about 2-3 minutes left in class I had the students hand the papers back to the authors so they could look over their comments and ask any final questions.
Outcome: As I mentioned, I wish I had made time to allow the students to talk to one another about their papers after the review session. Some shouted back a few comments to one another as we ended class. The class seemed engaged and invested. My curiosity overcame me and I asked each student to hand in their rubric with their “clean” paper. I wanted to see the type of comments given and gauge how constructive or helpful this exercise might have been. Overall, the rubric appeared to help focus comments on higher-order issues, like argumentation and organization, not just spelling. At least one conversation with a student revealed to me that exposure to another student’s take was key to her understanding of the assignment.
Buddhist monks chanting at Tayuan Temple 塔院寺, Mt. Wutai, China, summer 2016. Photograph by Peter Romaskiewicz.
Introduction
Grading is one of the toughest parts of teaching. It also gives you immediate feedback on how well students are grasping the material. Thus I’ve come to find that frequent, low-stakes assessment is helpful in preparing students for larger and more complicated tasks.
In my summer course I’m giving weekly quizzes that review the material from the previous week. I’ve decided that the quizzes will mostly be multiple-choice, for a few reasons. First, these assessments make sure students are familiar with basic terms and themes, mostly focused on recognition and recall; these ideas form the basis for more analytical and critical writing assignments in the coming weeks. Second, because of the pace of the course (we meet four times a week), I cannot spend too much time grading. Multiple choice is perfect for that.
The new angle I am trying this summer is online quizzes. In this way, I am also attempting to make this class a hybrid, as I expect to teach a form of it fully online in the near future. The students have to take each quiz before they attend Monday’s class; I allow them a 36-hour window to take it, opening it Sunday morning. To prepare them, and myself, for this new experience, I offered an online quiz on the syllabus (for minimal extra credit) after the first day of class.
I decided on online quizzes after much thought. My main concern with in-class quizzes was losing 10-15 minutes of class to them. I found them useful, even necessary, for low-level learning, but they also ate into lecture and discussion time. In addition, I was hoping that automatic grading would save me time throughout the summer session.
I decided to make the quizzes open book and open note. Perhaps it is poor judgement on my part, but I just do not fully trust my students not to use their notes for an assessment like this (!). This is a concern I have about online quizzes generally, especially of the multiple choice variety.
To counterbalance this, I decided to make the quizzes timed, instructing my students that they would need to study beforehand in order to answer all of the questions. My hope with this set-up was that the students would get used to the type of questions I ask and get a sense of whether the notes they are taking are adequate. (The midterm and final are in-class.) The first quiz was 10 multiple choice questions plus two short answer questions. The total time I allowed for the quiz was 15 minutes, just about the time I would allow in class.
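For the curious, here is a rough back-of-the-envelope sketch of how that 15-minute budget breaks down, assuming roughly a minute per multiple choice question (the pacing I mention above); the split between question types is my own arithmetic, not anything GauchoSpace calculates.

```python
# Back-of-the-envelope time budget for the first quiz: 10 multiple choice
# questions plus 2 short answers in 15 minutes. The one-minute-per-MC pacing
# is my own assumption here, not a platform setting.
mc_questions = 10
short_answers = 2
total_minutes = 15

mc_minutes = mc_questions * 1.0                    # ~1 minute per MC question
short_answer_minutes = total_minutes - mc_minutes  # time left for short answers
per_short_answer = short_answer_minutes / short_answers

print(f"~{mc_minutes:.0f} min for multiple choice, "
      f"~{per_short_answer:.1f} min per short answer")
# ~10 min for multiple choice, ~2.5 min per short answer
```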
I will review the use of online quizzes at the end of the course.