Can you judge a book by its cover? A course by its syllabus?
I’d say many university syllabuses[1] are like end-user license agreements or kitchen blender manuals – in other words, documents I wouldn’t be caught dead reading.
Additions like images can help with student interest in the material
While thoughtful syllabus design is not a major area of research, there is a small but informative pool of literature on the topic.[2] Not surprisingly, a well-designed, engaging syllabus has been shown to prime student motivation and interest in the course more than a poorly designed syllabus.[3] I’d contend that since the syllabus is one of the first ways we interact with our students, it’s a genre that deserves thoughtful planning.
Many of us use the syllabus as a kind of legal contract, sometimes incorporating “legalese” into our course policies and even having our students sign the syllabus to signal a binding agreement.[4] (Or, to test whether students actually read the syllabus, some will hide “Easter eggs” and reward those students who “passed the test.”) Indeed, the expansion of the syllabus over the decades – or “syllabus bloat” – has been mainly due to the growing abundance of policy statements used to settle any potential student grievances.
While I understand the appeal of the contractual model, I’ve never found that it matched my teaching persona or fed into the classroom culture I was trying to establish.[5] Of course, listing course policies is necessary, but the syllabus does not need to be limited to this purpose.[6]
After reflecting on the intended audience and purpose of the syllabus genre, I’ve come to see it as one of the numerous pedagogical tools at my disposal. My syllabus design falls somewhere between a chapter in an introductory textbook, a promotional advertisement, and a monthly newsletter – at least this is my intention; every syllabus is an open-ended project.
One of the first hurdles I had to overcome was fully identifying my students as my target audience, not my colleagues. This may seem commonsensical in hindsight, but this realization immediately impacted my tone, the type of information I incorporated, and the overall graphic design of this new “learner-centered” and “engaging” syllabus [see chart].[7]
I had two specific revelations regarding content – essentially incorporating “hows” and “whys” into the copious amount of “whats.”
I include a section on student motivation (borrowed from Tona Hangen) for class discussion
First, I realized that several comments I would normally make about how to do well in my classes could easily and beneficially be incorporated into the syllabus itself. For example, I still hold a “what’s your motivation?” discussion in class, but now have some additional text in my syllabus that students can refer to during our conversation. I’ve found that it anchors discussion (like any class reading) and helps students make more pointed comments.
Additionally, I decided to include some material about why I designed the course the way I did, helping students to see my intellectual interests and pedagogical motivations. Accordingly, I write about the types of assignments and activities I employ and the value I find for students in assigning them.[8] I also try to reveal the scaffolding I’ve designed into the course, cueing students into the importance of daily foundational activities and how those are intended to build into larger, more complex projects throughout the term. This is intended to help students with metacognition about their own learning behaviors and to see a clear pathway to achieve the learning objectives I’ve set for them (I’ve been inspired by the work of Tracy Zinn on this front).
Because I include a broader array of topics in my syllabus, I also do not go over the whole document on the first day of class. I introduce certain sections when necessary and have students refer back to it throughout the semester. I even have students look at it on one of the last days of the course.
The graphic design of my syllabus is almost entirely a result of the “syllabus makeover” by historian Tona Hangen (most notably, incorporating the trifold division of student success and motivation [9]). Good visual design not only captures student interest, but also models professionalism (even enthusiasm), indicating the entire course will be handled with similar care.[10] Moreover, spending time working with principles of good design makes us aware of important information hierarchies, which can be expressed through visual hierarchies (text color, text size, use of boxes, images, etc.).[11] As a consequence, students have an easier time parsing out the more important information.
Certainly, an outstanding syllabus design will not make up for poor course design, but it might be worthwhile to consider the syllabus as an integral tool in helping students find success in our courses.
Tone: Given that my students are the primary audience, I’ve consciously adopted a friendlier, more positive tone (including using we/us/our, not “students”), incorporating compassion and humor where I can.
Visual Hierarchy: I use color and colored shapes to direct student attention to more important information
Images: To promote some of the topics we will discuss, I try to incorporate images that foreshadow these ideas (I also try to creatively incorporate a reference to the image in adjacent text)
Hows: I include text that helps students reflect on their motivations for taking my course and how they can succeed
Whys: I provide a rationale for the assignments and activities I employ, not simply an explanation of their execution
Quick Reference Sources:
A summary of effective syllabus components (and many useful links):
[1] The plural form of syllabus is quietly debated in the halls of The Academy. I had a professor in grad school who preferred (ahem, actively and regularly insisted upon) syllabuses as the proper plural form, since the original term is derived not from Latin but Greek (it’s a little more funky, actually). Thus, using a proper Latin plural declension (=syllabi) is unwarranted. (The same goes for octopus, coincidentally.) Interestingly, the Google Books Ngram Viewer shows that syllabi is more common than syllabuses. The debate continues…
[2] A list of references can be found at the beginning of the document here [https://cte.virginia.edu/sites/cte.virginia.edu/files/Syllabus-Rubric-Guide-2-13-17.pdf].
[5] While I believe a syllabus should clearly state course policies and try to consider numerous “what ifs,” I do not think the nearly endless implicit agreements between student and instructor need to be made explicit. In my view, signing a syllabus makes the motivation for adhering to policies external (abiding by the law), rather than internal (it’s the right thing to do), and it undermines trust. Of course, this is my personal view. Lastly, some folks are also fans of the “syllabus quiz.”
[6] Description of a syllabus as a mix between a contract, permanent record, and learning tool can be found here [https://jan.ucc.nau.edu/~coesyl-p/syllabus_cline_article_2.pdf].
[7] I am borrowing the “learner-centered” and “engaging” syllabus from the typology in Ludy et al. 2016 (see table above). It also amazes me that scholars might think that a good visual design cheapens the “scholarly” integrity of a text. What one might dismiss as “flash” actually has an integral rhetorical purpose. Research on the impact of syllabus tone can be found in Harnish & Bridges 2011.
[8] Incidentally, I also chose to incorporate these things because I noticed students would rarely (maybe never) take notes about these aspects when we discussed them in class. Upon reflection, I felt that knowing the hows and whys was central to my class, and while I couldn’t test my students on these aspects, they could at least have an easy way to consult them.
[9] It’s interesting to note that the “engaged” syllabus in Ludy et al. 2016, p. 11, also adopted a similar approach, using the categories of “diet” and “life-style change.”
[10] See other insights about good visual design here [http://www.pedagogyunbound.com/tips-index/2014/2/7/make-your-course-documents-visually-engaging].
[11] I am only aware of one experimental study that compared a text-rich syllabus to a graphic-rich syllabus, i.e., Ludy et al. 2016. Here is its principal finding: “Students perceived both types of syllabus positively, yet the engaging syllabus was judged to be more visually appealing and comprehensive. More importantly, it motivated more interest in the class and instructor than the contractual syllabus. Using an engaging syllabus may benefit instructors who seek to gain more favorable initial course perceptions by students.” Ludy et al. 2016: 1.
References:
Doolittle, P. E., & Siudzinski, R. A. 2010. “Recommended Syllabus Components: What do Higher Education Faculty Include in their Syllabi?” Journal on Excellence in College Teaching, Vol. 21, No. 3, pp. 29-61.
Ludy, Mary-Jon; Brackenbury, Tim; Folkins, John Wm; Peet, Susan H.; Langendorfer, Stephen J. & Beining, Kari. 2016. “Student Impressions of Syllabus Design: Engaging Versus Contractual Syllabus.” International Journal for the Scholarship of Teaching and Learning, Vol. 10, No. 2, Article 6.
Harnish, Richard J. & Bridges, K. Robert. 2011. “Effect of Syllabus Tone: Students’ Perceptions of Instructor and Course,” Social Psychology of Education, Vol. 14, No. 3, pp. 319-330.
Perrine, R. M., Lisle, J., & Tucker, D. L. 1995. “Effects of a Syllabus Offer of Help, Student Age, and Class Size on College Students’ Willingness to Seek Support from Faculty.” Journal of Experimental Education, Vol. 64, No. 1, pp. 41-52.
Saville, B. K., Zinn, T. E., Brown, A. R., & Marchuk, K. A. 2010. “Syllabus Detail and Students’ Perceptions of Teacher Effectiveness.” Teaching of Psychology, Vol. 37, No. 3, pp. 186-189.
Zinn, Tracy E. 2009. “But I Really Tried! Helping Students Link Effort and Performance.” Observer, Vol. 22, No. 8, pp. 27-30.
This post primarily deals with how instructors can best use the problematic instrument of student evaluations. A recent, accessible post for administrators who are determining how (or if) to use student evaluations is Elizabeth Barre’s Research on Student Ratings Continues to Evolve. We Should, Too. We both cover many of the same resources.
Introduction
Having just finished another summer of teaching, last week I participated in the seemingly timeless ritual of passing out and administering student course “evals.”
Over four decades of research has shown, however, that student evaluations of teaching (SETs[1]) are poor and often problematic measures of teaching effectiveness. Not only do SETs exhibit systematic bias, the data is also often statistically misused by administrators and instructors themselves. These problems led one recent study to flatly proclaim that “SET should not be relied upon as a measure of teaching effectiveness,” and that “SET should not be used for personnel decisions.”[2]
When we accept that SETs are neither statistically reliable (consistent across measures), statistically valid (testing what they claim to test), nor appropriately applied, we are left to question whether student evaluations offer any viable information for the instructor.
I suggest that student feedback, especially written comments solicited with proper instructions, can open crucial lines of communication with the class and can be used in a limited capacity – in coordination with other measures – for instructors to critically self-assess. I offer end-user advice for university instructors on how best to prepare students to give constructive feedback and how to use that information to become more critically reflective and effective teachers. I draw upon scholarly research and recently implemented institutional initiatives, illustrated with personal practices, to comment on how best to incorporate student feedback into our teaching.
(1) A Broken Instrument – Overview
I take as axiomatic that SETs as currently designed and interpreted are poor proxies for measuring teaching quality or effectiveness.[3] For example, a range of studies have shown that among the best ways to predict student evaluations is to examine the responding students’ interim grades. In other words, students’ anticipation of a good course grade is highly correlated with positive teaching evaluations.[4] This conflation between grade expectation and teaching effectiveness is just one of the reasons the validity of SETs has been called into question. Clearly, instructors can also engineer positive feedback by “teaching to the test” (instead of having students do the more difficult work of learning the skills required to do well on a test) or by having generous grading policies, among other tactics.[5]
One of the more prominent areas of research has focused on the gender and racial biases exhibited in SET data. Dozens of empirical research papers point to statistically significant biases against female instructors, and one recent randomized, controlled, blind experiment at a US institution bolstered these findings. Regardless of student performance, instructors perceived as male (the experiment was performed with an online course) received significantly higher SET scores.[6] This holds true even for measures one would expect to be objective, such as perceived promptness in returning graded assignments – even though both male and female instructors returned assignments at the same time. In aggregate, studies have shown that women and men are equally likely to make biased judgments favoring male instructors.[7]
Additionally, there is evidence suggesting that students who rate their instructor’s effectiveness highly and who take subsequent advanced classes will perform more poorly (i.e., receive worse grades) than students who rated their previous instructor’s effectiveness as low. This means that more effective instructors can actually be evaluated more negatively than their less effective teaching counterparts.[8] Part of this poor evaluation may be due to the use of more challenging active or deep-learning strategies, which have been shown to be more effective teaching techniques but sometimes elicit active student resistance.[9]
Despite their ubiquity on college campuses, SETs have been shown to measure not so much teaching effectiveness as student biases and overall subjective enjoyment.
I do not mean to attempt to convince skeptics of the reliability of this stunning research; there is plenty of research available to comb through to form your own opinion.[10] One can view the online bibliography of gender and racial bias in SETs, regularly updated by Rebecca Kreitzer, here.[11] Additionally, there are at least two peer-reviewed journals dedicated to exploring evaluation more broadly, Assessment & Evaluation in Higher Education (1975-) and Studies in Educational Evaluation (1975-). For a summary overview of SET biases, meta-analyses are offered in Wright & Jenkins-Guarnieri 2012 (which concludes that SETs are apparently valid and largely free from bias when paired with instructor consultations) and Uttl et al. 2017 (which concludes that SETs are invalid).
For the TLDR crowd, I would simply suggest reading Boring et al. 2016, a work of high statistical rigor examining two separate randomized experiments. It also received a fair amount of popular press. There is also a presentation on some of its principal findings by one of the contributing authors available online.
I also make no attempt to argue that one can read course evaluations in a manner that adjusts for student bias – the factors contributing to that bias are so numerous and complex that SETs should not be treated as sole objective measures of teaching quality under any interpretive lens. Recommendations on how best to use SETs in hiring, firing, or tenure decisions have also been discussed in the academic literature. A qualified (and sometimes apologetic) defense of SETs is put forth by Linse 2017, while a point-counterpoint perspective is provided in Rowan et al. 2017. In general, incorporating SETs as part of a much more comprehensive teaching portfolio appears to be the middle ground adopted by many university administrations. (The American Sociological Association also published its suggested guidelines for using student feedback, in light of recent research, this week.)
(2) Finding the Critical Perspective – Brookfield’s Four Lenses
From the perspective of an instructor, we must remember that student feedback constitutes only one window onto our teaching. Stephen Brookfield has developed a method to help instructors become more critically reflective teachers by using four lenses, often simply referred to as Brookfield’s Four Lenses.[12] In the hopes of increasing self-awareness, one must draw from several different vantage points to gain a more comprehensive perspective. These “lenses” include 1) the autobiographical lens, 2) the students’ eyes, 3) colleagues’ perspectives, and 4) theoretical literature. These roughly correspond to self-reflection, student feedback (or SETs), peer evaluation, and exploration of scholarly research.[13]
Among these Four Lenses, arguably the most important is self-reflection, which ultimately encompasses the other three since they all require comparative reflection. This heightened self-awareness forms a foundation for critical and reflective teaching and informs us where adjustments in our teaching may be necessary.
Lens 1A – Annotated Lesson Plans: In terms of the autobiographical lens, on a practical level, I regularly take notes after individual lectures (sometimes, simply, in the time between when one class ends and the next begins), noting things I found pertinent to the effectiveness of conveying the material, such as how long class activities took, good questions asked by students, insightful discussion topics, and sticking points or conceptual hurdles. Undoubtedly, these notes have become the most valuable information I consult when revisiting lectures in later semesters. Specifically, these lecture annotations allow me to adjust future material, activities, discussions, and timing allowances.
Lens 1B – Annotated Syllabus & Journaling: Another helpful self-assessment activity has been annotating my syllabus throughout the semester, culminating in a significant review at the end of the term. The notes I regularly take on readings, class policies, grading procedures, and course organization have assisted me in re-conceptualizing my courses and tracing out new areas to explore. Lastly, I have implemented journaling – primarily in the form of this blog – as a means to reflect upon my experiences in the classroom (both positive and negative) and chronicle my discoveries about teaching.
Lens 2 – Mid-Term Evaluations: From this bird’s eye view, student feedback operates as just one measure of teaching quality and should be balanced against other critical perspectives. Importantly, gathering student feedback should not be reserved for only the end of a course. Informal, anonymous mid-term evaluations can provide actionable ideas that could help correct teaching oversights – or encourage us to continue what we are doing.
Typically, I will ask a pair of subjective questions: 1) “What is working well for you?” and 2) “What is not working well for you?” – both in relation to my teaching of the course material. I will also direct students to think about numerous facets of the course, including the readings, assignments, class activities (group activities or student-led discussions), and lectures – or anything else – for comment. Admittedly, not all of the anonymous feedback is constructive or actionable, but if I see clear patterns in comments I will take them into consideration when planning future classes. I also spend a few minutes at the beginning of the following lecture discussing the feedback with the class and allowing for further discussion. I use this as an opportunity to point out which comments were actionable (positive or negative) and which comments were irrelevant (such as the time of the class, the size of the class, or the temperature of the room). Students need training in providing relevant and actionable narrative commentary, a point I will return to below.[14]
Lens 3 – Teaching Community: As is commonplace in graduate school, I received no formal training in teaching as part of my program, but it was through collegial conversations with peers that my interest, and confidence, in teaching grew.[15] Even if my colleagues did not possess formal training in pedagogy, this informal community functioned as a place to discuss classroom successes and failures and still provided another valuable perspective. In many cases, these conversations revealed the diversity of possible approaches in the classroom and inspired me to take a few pedagogical risks (or what I originally perceived to be risks).
Lens 4 – Scholarship on Teaching and Learning: In order to make sense of the insights drawn from the three lenses of self, student, and peer, instructors should also consult the literature and engage with established theory. This oftentimes provides us with technical vocabulary that can better describe the experiences we all share.[16] Fortunately, most universities offer workshops that instructors can attend to improve the quality and effectiveness of their teaching. Moreover, the Scholarship on Teaching and Learning (SoTL) is quite voluminous, including many journals such as College Teaching, International Journal of Teaching and Learning in Higher Education, Journal of Effective Teaching in Higher Education, and the Journal on Excellence in College Teaching, among others. There are also numerous disciplinary journals dedicated to teaching, including Teaching Theology and Religion and the Journal of Religious Education in my home discipline of religious studies.[17]
(3) Revisiting Student (Written) Feedback – And Hope Remained?
There is significantly more research on the close-ended ordinal scale questions of SETs than on the open-ended “narrative” commentary that often accompanies them. Several studies have noted that written comments can provide more useful and important feedback than statistical reports.[18] Of course, this does not mean that all comments are necessarily relevant to teaching effectiveness, nor should they be assumed to be free of bias.[19] While a lot more research needs to be done in this area, written comments can contain more course- (and instructor-) specific details and provide actual ideas to improve teaching.[20] Because of the potentially actionable and specific nature of written comments, instructors should strategize on how best to administer the written portion of student evaluations.
It is important for instructors to make sure students are aware of the purpose of student feedback and possibly explain how feedback has been used in the past to create better learning environments. In order to help students reflect on the effectiveness of my teaching, I will often revisit the course syllabus and have the students reread the learning outcomes, directing them to think further about how the structure of the course, readings, assignments, and activities helped or inhibited the realization of those outcomes. Focusing student attention on teaching effectiveness and quality can help minimize irrelevant commentary or comments on (perceived) instructor identity.[21]
It is also important to inform students about the value of written comments and invite them to write down their insights. Research shows between 10% and 70% of SETs include written comments, so asking students directly to write commentary is necessary. To ensure the comments are actionable, I also ask students to provide the rationale for their opinions (simply, I tell them to always use “because statements,” e.g., “I (dis)liked this course because…”). Importantly, I also give students ample time to discuss and complete the evaluation task, around 10-15 minutes (I leave the room when students begin the evaluations).
Some institutions have begun initiatives that explain the importance of student feedback to students directly and describe how to provide effective feedback. For example, the Faculty Instructional Technology Center at UC Santa Cruz provides instructions to students about crafting effective comments (with examples) and about the types of comments (emotionally charged and identity-based) to avoid (see here). Moreover, the center provides instructions for instructors on how to craft the most beneficial questions, focusing on specificity, open-endedness, and goal-orientation (see here). (Similar instructions can be found at the UC Berkeley Center for Teaching and Learning, the University of Wisconsin-Madison, and the Vanderbilt University Center for Teaching.)
A more innovative approach was recently taken by the UC Merced Center for Engaged Teaching and Learning, which produced a set of short 3-7 minute videos for instructors to show in class (instructors choose which length to show; the 3-minute version is embedded above). Promoted as “Students Helping Students,” the videos feature university students talking about the importance and purpose of feedback and provide guidelines on crafting useful comments (see here).
After receiving written student feedback, instructors should pay attention to recurring themes or stories that emerge in the commentary. Non-corroborated comments mean very little, especially if they do not align with your own reflections, the observations of colleagues, or insights taken from scholarly literature. In the end, mid-term and end-of-term student feedback, especially written commentary, can offer crucial insights that allow instructors to critically self-assess pedagogical strategies and develop into reflective teachers.
Notes
[1] Given the subjective nature of student evaluations (described below), some institutions and researchers read the acronym SET as “student experiences of teaching.”
[3] “Teaching effectiveness” is generally, though not universally, defined as the instructor’s capacity to facilitate or increase student learning.
[4] Economist Richard Vedder comments that grade inflation in American universities began in the late 1960s and early 1970s, roughly when student evaluations became a common evaluation tool (SETs were first used in the 1920s; see Linse 2017). The classic study on this phenomenon appears to be Johnson’s Grade Inflation: A Crisis in College Education (2003). Irrespective of the title, a large portion of the book is dedicated to analyzing SETs and their relation to course grades. More recent studies include Griffin 2004 and Stroebe 2016. To be fair, some debate the magnitude of the correlation between SETs and grade expectation; see e.g. Gump 2007 and Linse 2017. One can refer to the meta-analyses presented in Wright & Jenkins-Guarnieri 2012 and Uttl et al. 2017 (the latter summarized here [https://www.insidehighered.com/news/2016/09/21/new-study-could-be-another-nail-coffin-validity-student-evaluations-teaching]).
[5] This could also include timely psychological priming, such as telling students they are doing exceptionally well with extraordinarily difficult materials, or giving easy assignments early in the term to set up higher than normal grade expectations.
[6] MacNell et al. 2015. The data was further analyzed in Boring et al. 2016. Most of the empirical research in this area incorporates incomplete censuses of the student population (only the students who return their evaluations) as opposed to truly random samples; thus this is an important study confirming the findings of other reports.
[7] There are numerous other biases that have been detected in SET data, none of which is related to teaching quality, such as age, attractiveness of the instructor, time of day, class size, etc.
[8] See Carrell & West 2010, Braga et al. 2014, and Stroebe 2016.
[9] See Pepper 2010 and Carrell & West 2010. For more varied results on the relationship between low evaluations and active learning, see Henderson 2018. An overview of some of these issues for teaching physics, but relevant to other disciplines, can be found here.
[10] While the empirical evidence is “decidedly mixed” (Peterson et al. 2019), there is undeniable evidence that biases are widespread. Among the resources listed in the Kreitzer bibliography noted above are several research papers that have discovered statistically negligible bias in their SETs, but these seem to be the exception rather than the rule. An overview of the wide range of biases that have been empirically studied in SETs can be found here (University of Dayton: https://www.udayton.edu/ltc/set/faculty/bias.php).
[14] Admittedly, this may seem objectionable to some because it appears like tampering with student opinions of the course. But this approach is modeled on training students to give useful feedback when peer-reviewing papers; students need practice, and need to receive feedback, to learn how to be effectively critical.
[15] I will forever remain indebted to my university Writing Program, which offered formal training in pedagogy, ultimately leading to my work as a consultant with our school’s Instructional Development.
[16] A personal example: While training to teach a first-year composition and rhetoric course, I was given a reading that distinguished between “boundary crosser” students and “boundary guarder” students as pertaining to how they accessed and made use of prior genre knowledge. These categories proved helpful in giving me a conceptual “handle” to understand my experience with several students and a common vocabulary with my peers to discuss different approaches to these students.
[17] An incomplete listing of disciplinary journals on teaching can be found here.
[18] Noted (with references) in Brockx et. al. 2012: 1123.
[19] Perhaps the most cited internet resource to demonstrate bias in written commentary is Gendered Language in Teacher Reviews, run by Ben Schmidt. The site aggregates data from RateMyProfessors.com and allows users to sort the data by keywords.
[21] One recent study (Peterson et al. 2019) has shown, however, that explaining to students the implicit race and gender biases found in SETs significantly mitigated those biases in the evaluations (in comparison to a control group).
Here is the anti-bias language that was used in the experiment: “Student evaluations of teaching play an important role in the review of faculty. Your opinions influence the review of instructors that takes place every year. Iowa State University recognizes that student evaluations of teaching are often influenced by students’ unconscious and unintentional [bolded in original] biases about the race and gender of the instructor. Women and instructors of color are systematically rated lower in their teaching evaluations than white men, even when there are no actual differences in the instruction or in what students have learned. As you fill out the course evaluation please keep this in mind and make an effort to resist stereotypes about professors. Focus on your opinions about the content of the course (the assignments, the textbook, the in-class material) and not unrelated matters (the instructor’s appearance).” Much more research needs to be done exploring this deeply important issue.
References
Boring, Anne, Ottoboni, Kellie & Stark, Philip B. 2016. “Student Evaluations of Teaching (Mostly) Do Not Measure Teaching Effectiveness.” ScienceOpen Research. [DOI: 10.14293/S2199-1006.1.SOR-EDU.AETBZC.v1]
Braga, Michela, Paccagnella, Marco & Pellizzari, Michele. 2014. “Evaluating Students’ Evaluations of Professors.” Economics of Education Review, Vol. 41, pp. 71.
Brockx, Bert, Van Roy, K. & Mortelmans, Dimitri. 2012. “The Student as a Commentator: Students’ Comments in Student Evaluations of Teaching.” Procedia – Social and Behavioral Sciences, Vol. 69, pp. 1122-1133.
Carrell, Scott E. & West, James E. 2010. “Does Professor Quality Matter? Evidence from Random Assignment of Students to Professors.” Journal of Political Economy, Vol. 118, No. 3, pp. 409-32.
Griffin, B.W. 2004. “Grading Leniency, Grade Discrepancy, and Student Ratings of Instruction.” Contemporary Educational Psychology, Vol. 29, pp. 410–25.
Gump, S.E. 2007. “Student Evaluations of Teaching Effectiveness and the Leniency Hypothesis: A Literature Review.” Educational Research Quarterly, Vol. 30, pp. 56–69.
Henderson, Charles, Khan, Raquib & Dancy, Melissa. 2018. “Will My Student Evaluations Decrease if I Adopt an Active Learning Instructional Strategy?” American Journal of Physics, Vol. 86, p. 934. [DOI: 10.1119/1.5065907]
Låg, Torstein & Sæle, Rannveig Grøm. 2019. “Does the Flipped Classroom Improve Student Learning and Satisfaction? A Systematic Review and Meta-Analysis.” AERA Open. [DOI: 10.1177/2332858419870489]
Linse, Angela R. 2017. “Interpreting and Using Student Ratings Data: Guidance for Faculty Serving as Administrators and on Evaluation Committees.” Studies in Educational Evaluation, Vol. 54, pp. 94-106.
MacNell, L., Driscoll, A. & Hunt, A.N. 2015. “What’s in a Name? Exposing Gender Bias in Student Ratings of Teaching.” Innovative Higher Education, Vol. 40, No. 4, pp. 291-303.
Marsh, H.W. & Roche, L.A. 2000. “Effects of Grading Leniency and Low Workload on Students’ Evaluations of Teaching: Popular Myth, Bias, Validity, or Innocent Bystanders?” Journal of Educational Psychology, Vol. 92, pp. 202-28.
Peterson, David A. M., Biederman, Lori A., Andersen, David, Ditonto, Tessa M. & Roe, Kevin. 2019. “Mitigating Gender Bias in Student Evaluations of Teaching.” PLoS ONE, Vol. 14, No. 5. [DOI: 10.1371/journal.pone.0216241]
Rowan S., Newness E.J., Tetradis S., Prasad J.L., Ko C.C. & Sanchez A. 2017. “Should Student Evaluation of Teaching Play a Significant Role in the Formal Assessment of Dental Faculty? Two Viewpoints: Viewpoint 1: Formal Faculty Assessment Should Include Student Evaluation of Teaching and Viewpoint 2: Student Evaluation of Teaching Should Not Be Part of Formal Faculty Assessment.” Journal of Dental Education, Vol. 81, pp. 1362-72.
Stroebe, Wolfgang. 2016. “Why Good Teaching Evaluations May Reward Bad Teaching: On Grade Inflation and Other Unintended Consequences of Student Evaluations.” Perspectives on Psychological Science, Vol. 11, No. 6, pp. 800-16.
Uttl, Bob, White, Carmela A. & Gonzalez, Daniela Wong. 2017. “Meta-Analysis of Faculty’s Teaching Effectiveness: Student Evaluation of Teaching Ratings and Student Learning are Not Related.” Studies in Educational Evaluation, Vol. 54, pp. 22-42.
A few years ago I decided to have all my students do their multiple choice quizzes at home and online.[1] It’s fairly easy to set up these quizzes if your school uses a Learning Management System (LMS) like Moodle (or an institutional adaptation). As I’ve noted before, it saves both precious class time and grading time.
This practice is predicated on the known benefits of giving frequent, low-stakes (low grade impact) assessments. When given early in the term, these quizzes allow students to self-assess before higher-stakes exams occur and also provide valuable feedback to instructors regarding the success of their teaching strategies.[2]
In practice, I make the online quizzes timed, giving students 1-2 minutes per multiple choice question depending on the complexity or cognitive challenge of the question (I earmark less time for recognition questions than application questions, for example). I’ve come to be very explicit in recommending how students should study; I tell them they should read and reorganize their class notes, comparing and incorporating ideas from the readings, class slides (which I also provide), and from what they remember in discussion.[3] Students are ultimately free to use their notes, course readings, and slides when they take the quiz, but the imposed time limit demands they have some understanding of the material or a conceptual organization of resources to know where to look.
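For the curious, this rule of thumb is simple enough to sketch in a few lines of Python (a purely illustrative toy; the category names and helper function are my own invention, not LMS settings). The resulting total is what I would enter as the quiz’s time limit in the LMS:

```python
# Hypothetical helper for budgeting a timed online quiz, following the
# 1-2 minutes-per-question rule of thumb described above. The category
# names ("recognition", "application") are my own labels, not LMS fields.
MINUTES_PER_QUESTION = {"recognition": 1, "application": 2}

def quiz_time_limit(question_categories):
    """Return a total time limit (in minutes) for a list of question categories."""
    return sum(MINUTES_PER_QUESTION[c] for c in question_categories)

# Example: a ten-question quiz with one application question
print(quiz_time_limit(["recognition"] * 9 + ["application"]))  # -> 11
```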
I’ve recently come to realize an even better benefit of online quizzes – the ability to give immediate feedback on student responses. Online quizzes provide fields where instructors can type feedback for individual multiple-choice responses or for the entire question, which students see after submitting.
I now use this space to offer a brief explanation of the structure of the question, especially noting if it required students to apply their knowledge. This would involve, for instance, taking a novel situation and applying it to concepts they’ve studied (for quizzes I try to make only 10-15% of the questions application-based; this percentage increases for higher-stakes assessments). In this way, I help students identify why one multiple-choice question seems harder than others, especially in comparison to one asking for simple recognition, i.e., recognizing the right word among the responses.
I also use this space to explain why the incorrect responses to a multiple-choice question – called “distractors” – might have seemed plausible. For example, a distractor might represent the popular view of a phenomenon instead of the analysis we’ve offered in class. Or a distractor might represent a common conceptual misstep in analyzing the question. With this feedback, students can immediately see where their thought process went off track when answering the question and gain more insight into the topic at hand.
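For instructors writing many of these explanations, the feedback fields can even be filled in programmatically and imported in bulk. Below is a minimal sketch, assuming Moodle’s documented XML import format (the question, answers, and feedback text are invented examples, and the element names should be double-checked against your Moodle version):

```python
# A hedged sketch: generating a multiple-choice question with per-response
# feedback in Moodle's XML import format, using only the standard library.
# All question content below is an invented example.
import xml.etree.ElementTree as ET

def multichoice(name, prompt, answers):
    """Build a <question>; answers is a list of (text, is_correct, feedback)."""
    q = ET.Element("question", type="multichoice")
    ET.SubElement(ET.SubElement(q, "name"), "text").text = name
    questiontext = ET.SubElement(q, "questiontext", format="html")
    ET.SubElement(questiontext, "text").text = prompt
    ET.SubElement(q, "single").text = "true"  # exactly one correct response
    for text, correct, feedback in answers:
        # fraction="100" marks the credited response; "0" marks a distractor
        a = ET.SubElement(q, "answer", fraction="100" if correct else "0")
        ET.SubElement(a, "text").text = text
        # per-response feedback, shown to the student after submission
        ET.SubElement(ET.SubElement(a, "feedback"), "text").text = feedback
    return q

quiz = ET.Element("quiz")
quiz.append(multichoice(
    "zen-founder",
    "According to tradition, which school of Buddhism did Bodhidharma found?",
    [("Chan (Zen)", True, "Correct: a recognition question drawn from lecture."),
     ("Pure Land", False, "A plausible distractor: prominent in East Asia, but a different tradition."),
     ("Theravada", False, "A common misstep: an older school, not one founded in China.")],
))
ET.ElementTree(quiz).write("quiz.xml", encoding="utf-8", xml_declaration=True)
```

Once imported into the question bank, the feedback travels with the question, so the up-front writing effort becomes a one-time cost.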
Ultimately, I believe the best multiple choice exam is one that has been created by students. Creating good multiple choice questions, with plausible “distractors,” requires precisely the higher-order thinking many of us want our students to cultivate. Online quizzes have provided a new way to make this a reality.
One of the biggest issues I have found when having my students craft their own multiple choice questions (for extra credit) is the difficulty of having them craft higher-order (application) questions or provide plausible distractors for all or most of the responses. I believe the feedback I provide can help students see how good questions are crafted, thus helping them study and creatively learn the material. I hope to return to this post after my current summer course.
Notes:
[1] While I ask that the quizzes be taken individually by students, cooperative quizzes are something to explore.
[2] While there is plenty of literature extolling this as a standard best practice, I learned the value of frequent and early assessment the hard way. The first semester I taught at a community college I did not schedule any low-stakes quizzes. The first formal test my students had was a midterm, and while many did fine, there was an abundance of students who did exceptionally poorly (with grades under 30%). I had not provided them the opportunity to gauge their early level of understanding and, consequently, to adapt their learning strategies.
[3] I’ve talked about the importance of note reorganization for study here.
Is there value in having university students draw in a humanities course? Admittedly, this is a charged question. I do not think practicing still-life drawing would benefit students greatly given the normal range of learning outcomes for a humanities course. But, beyond the perceptual skills being practiced while drawing, is there a cognitive benefit that can be leveraged? From this angle, I absolutely think there is pedagogical value in having students draw.[1]
For those wary of blindly joining me in advancing a “drawing across the curriculum” agenda (a bad joke for my writing colleagues), let me restate what I think drawing can be. For me, drawing is not about mimesis, the creation of a real-world replica on paper, but about schematization. Drawing is not merely related to sensation, but also to cognition and meaning-making. Mental schemas function to align a range of perceptual data and convert them into intelligible concepts that can be used. Drawing is simply a physical practice, often overlooked in a non-art classroom, that enables this dynamic intellectual process. (I should note, I am not advocating incorporating drawing activities to speak to “visual learners” – the myth of different “learning styles” has long since been debunked. Schematizing helps everyone.)
Graphic Organizers (Data Visualization)
One of the most immediate applications of drawing is the creation of graphic organizers, which allow for the construction of knowledge in a hierarchical or relational manner. Organization that is non-linear (unlike linear note-taking or outlining) often leads to better retention and recall. Semantic maps, conceptual maps, Venn diagrams, and tree diagrams (even T-charts) can all be implemented effectively in a classroom environment. If students have difficulty developing them on their own, instructors can assist by making handouts with portions of the charts left blank. I will admit, there is a learning curve to creating more complex graphic organizers, but the goal should ultimately be to have your students attempt to create them – doing the conceptual work is where the greatest benefits lie.
My first concept map for my Freshman Composition course. For future iterations I would have students help with much of the work.
Maps and Other Diagrams
More commonly, I have my students draw maps. Instead of showing a map of a region, I will first schematize it on the blackboard – and have my students draw with me.[2] I will then show a proper map after the exercise, mostly to relate what we’ve drawn to what’s on the map. My maps, by choice, are minimalist; I only choose to depict what I think is most pertinent to the content or narrative I am presenting. For example, I often focus on rivers and lakes (the sources of life and centers of human activity), or mountains and deserts (obstacles to human movement), or cosmopolitan centers (where documents are often produced, also the civil antipodes to foreign “barbarism”). I can then draw lines to represent human migration or the movement of ideas. This clearly takes more time than simply showing a map on a slide, but I’ve found it to be more effective in crystallizing ideas for students.[3] I’ve also included drawing these minimalist maps (with clear labels) on student exams.
Along these lines, I’ve also spent time drawing mythic cosmologies with my students (e.g. the Buddhist cakravāla and its dhātus – I call it the Buddhist wedding cake), as well as other diagrams produced in the primary materials we are working with (e.g. the bhavacakra). A lot of meaning is often encoded in these endeavors by the original artists, and I would argue there is value in (selectively) reproducing them, not only looking at or analyzing them.
The mythic Buddhist world. There’s plenty of religious art to draw from!
Drawing Things
I might hear objections at this point – I am not really having students “draw” things. I believe there is room for this as well, although I would make sure we have a good pedagogical purpose for having students engage in this (often) time-consuming endeavor. Luckily, for scholars of religious studies (like myself), various forms of artistic production are often at the core of religious practice. Having students participate in traditional religious practices of “art” making (we should always be mindful that some practices will not be considered “art” in the same way as we might approach it) can lead to meaningful interactions with the material under analysis. It can also be, quite frankly, simply fun.
To provide one clear example, I’ve been having my students draw the important Buddhist figure Bodhidharma, the founder of Zen Buddhism, for several years now. I was inspired when I ran across the contemporary artist Takashi Murakami producing modern art versions of this famous Zen monk in 2007. I started scribbling some of my own portraits for fun and eventually decided to try to incorporate this practice into my teaching. At the time I was still looking for excuses to do fun in-class exercises that ask students to take a step out of their normal comfort zones. I was acutely aware that many people feel drawing is an inborn gift, not a skill, and would be hesitant to participate. Ultimately, I like to think that I fool students into drawing, rather than asking them to draw outright.
The inspiration: Murakami Takashi, I open wide my eyes but I see no scenery. I fix my gaze upon my heart.
On the scheduled day, I will often bring blank copy paper to class and provide two drawing options to my students. I say I will draw Bodhidharma on the blackboard step-by-step and students can choose to follow along, copying my process. Alternatively, students can choose to copy one of several traditional images of Bodhidharma I project on the screen. For those who choose to follow me (typically about half of the class), I do my best Bob Ross impression and try to make drawing non-threatening and, hopefully, fun (let’s draw a happy eyebrow right here…).
The set up with my finished Bodhidharma portrait on the whiteboard. [Southern Shaolin Temple, Putian, China. 2019]
The final pay-off for this activity comes at the end. I’ll have students reflect on the types of facial features we’ve drawn on the portrait and guess why they are important to East Asian artists (essentially, Bodhidharma is a caricature of a non-Chinese monk). This is the pedagogical purpose of the activity, and I make sure to tie the points we make in discussion to those I’ve made throughout the lecture (if students do not do so already). To further draw out the significance of this activity, and to position my students firmly within an actual “Buddhist” artistic tradition, I’ve also created an accompanying reading.
Rather superb renditions by my students for Woodenfish 2019.
The whole process of handing out paper, introducing the activity, drawing, and discussion takes – minimally – 15 minutes. Of course, you could conceive of projects that take much longer (such as over the whole term) or are completed as teams (based on the suggestion of a colleague, I used to do a textual version of Exquisite Corpse in my composition classes).
Final Thoughts
The real challenge is working out the cost-benefit analysis of drawing – you will be spending far more time with the material than if you just showed the pictures, maps, or diagrams. Thus, as always, be judicious and reflect on the exercise afterwards – was it valuable in helping you reach a particular learning objective? If, at the end of the day, all I do is help my students doodle better, I am completely fine with that.
Notes:
*This is part of an ongoing series where I discuss my evolving thought process on designing university courses in Religious Studies. These posts will remain informal and mostly reflective.
[1] Disclaimer: As the son of an art teacher and professional artist, I’ve always challenged myself to have my students draw more. This notwithstanding, there is some interesting research on art and cognition that I’ve only just begun to dive into. A good primer is Thinking Through Drawing: Practice into Knowledge, edited by Andrea Kantrowitz, Angela Brew and Michelle Fava. Furthermore, there is already a copious amount of literature on incorporating drawing into science classrooms.
[2] This means I also have to tell students to bring paper and pens/pencils to class; quite a sizable portion (in my personal experience) take notes solely on computers.
[3] There are clearly good reasons to show, and even focus on, highly detailed proper maps; it all depends on your pedagogical purpose. I’d suggest that if you want maps to be more meaningful, drawing elements of them with students can be helpful.