Esaki’s Pilgrims at the Daibutsu

For nearly three decades after the first Japanese postal cards were issued in 1873, their printing and distribution were strictly controlled by the government. Only with changes to postal regulations in 1900 could private publishers begin printing and selling their own postcards. Importantly, and for the first time, these privately issued cards could bear images on the obverse, and were thus termed “picture postcards” (ehagaki 絵葉書). Previous government-issued specimens were printed blank to accommodate a sender’s written message. Moreover, the growing use of inexpensive collotype printing among Japanese print shops meant photographs could be easily reproduced for this new medium. Many early photographic postcards are reproductions of images originally created and sold in Japanese photography studios, as is the case with the examples here.

Figure 1

  • Title/Caption: DAIBUTSU AT KAMAKURA
  • Year: 1900-1907 (postally unused)
  • Photographer: Esaki Reiji 江崎礼二 (1845-1910)[?]
  • Medium: collotype print on cardstock, hand-tinted
  • Dimensions: 5.5 in X 3.5 in
  • Reverse Imprint: Union Postale Universelle. CARTE POSTALE, 萬國郵便聯合端書

This postcard depicts the Kamakura Daibutsu, scaled to fit in the upper-left corner of the card [Fig. 1]. The blank space on the right side was reserved for a written message; Japanese postal regulations required that the reverse side be reserved solely for the name and address of the recipient. Once messages could be written on the reverse, beginning in 1907, postcard images were regularly scaled to fill the entire obverse side.

For artistic flourish, the publisher of our card employed a subtle trompe-l’œil, making it appear as if the corner of the photographic image is curling off the paper. Visual illusions such as this would make the postcard stand out among a sea of similar imagery. Printed in large block lettering, the caption clearly denotes the subject of the photograph, the “Daibutsu at Kamakura.”

Figure 2

  • Title/Caption: 451 [or 461] DAIBUTSU AT KAMAKURA
  • Year: 1900-1907 (postally unused)
  • Photographer: Esaki Reiji 江崎礼二 (1845-1910)[?]
  • Medium: collotype print on cardstock, hand-tinted
  • Dimensions: 5.5 in X 3.5 in
  • Reverse Imprint: Union Postale Universelle. CARTE POSTALE, 萬國郵便聯合端書

Another postcard employs the same photograph. Here, the image covers a larger portion of the card but lacks the trompe-l’œil effect [Fig. 2]. Additionally, the caption is much smaller and incorporates an identifying stock number, 451 (or possibly 461). Notably, a caption that combines a stock number with a title is characteristic of prints made by Japanese photography studios of the 1880s and 1890s. By comparing this stock number to known lists gleaned from published Japanese studio albums, it appears likely that the original photograph was taken by Esaki Reiji 江崎礼二 (1845-1910), a famed Tokyo-based photographer.[1]

Esaki apprenticed under the pioneering photographer Shimooka Renjō 下岡蓮杖 (1823-1914) in 1870 before opening his own studio in 1871 in Asakusa Park.[2] He soon established himself as a technical master: he was among the first Japanese photographers to adopt the new gelatin dry-plate (zerachin kanpan ゼラチン乾板) technique in 1883, and he executed technically difficult pictures of a naval mine detonating in the Sumida River (1883), night-time exposures of a lunar eclipse (1884), and exploding fireworks (1885). The shorter exposure times of the dry-plate process also allowed Esaki to more easily photograph fidgeting children, an expertise he proudly displayed in a famous collage of more than 1,700 young children and infants (1893).[3]

Figure 3

The photograph of the Daibutsu by Esaki (or one of his studio assistants) depicts the bronze statue from the southwest corner, an uncommon, but not unprecedented, angle. More relevant to the site’s religious heritage, the photograph shows a line of Japanese pilgrims (junreisha 巡礼者) in front of the Daibutsu, easily identified by their broad circular sedge hats and the walking staffs carried over their shoulders [Fig. 3]. The mise-en-scène is more relaxed than reverent. The lead pilgrim, who holds his hat in his hand, appears to read the small rectangular sign perched on the pedestal (which, coincidentally, forbids climbing on the statue), while his fellow travelers casually stand conversing with one another. Only the temple priest by the offering table glances directly towards the camera.[4] This mundane expression of religious piety stands in contrast to the highly orchestrated images of devotion sometimes staged by Western photographers. Significantly, the distinction between Japanese pilgrim and tourist is often blurred, as both can engage in similar activities at a pilgrimage site, including visits to the temple souvenir shop.

Although faded, the hand-tinting is still visible in both cards, with the slate blue colossus overlooking his faithful visitors. The elements in the scene suggest this photograph was taken in the late 1890’s.[5]

Figure 4


The reverse of both cards is bordered by an ornamental, filigree-like design in burgundy ink [Fig. 4]. These are examples of “undivided back” cards, since no line yet separates the areas on the back where the correspondence and address would later come to be written. This now serves as an easy identifier for dating old postcards; these examples date between 1900 and 1907. Since it was not yet common for publishers to imprint their names or trademarks on the back, it is difficult to tell who printed these beautifully rendered cards.

Notes:

*This is part of a series of posts devoted to exploring the development of a visual literacy for Buddhist imagery in America. All items (unless otherwise noted) are part of my personal collection of Buddhist-themed ephemera. I have also published my working notes on identifying publishers of Meiji and early Taishō postcards and establishing a sequential chronology for Kamakura Daibutsu photographs.

[1] Stock lists for Esaki’s studio do not include numbers 451 or 461, but numbers 452 to 460 are all images of Kamakura, specifically Hachiman Temple, the Daibutsu, and the lotus ponds in Kōtokuin (the temple that houses the Daibutsu). See Bennett 2006a: 129. Unfortunately, almost all attributions to Esaki and his studio remain tentative and more work desperately needs to be done on his photographic oeuvre.

[2] For Esaki’s biographical information, see Bennett 2006b: 165 and here and here. Several Japanese resources note his name as “Ezaki,” but I follow the standard English “Esaki,” which is also how he promoted his studio on photographic mounts and in other published materials (the older “Yesaki” can also be found).

[3] This image was also sold in the United States through Sears & Roebuck catalogues.

[4] Closer inspection reveals a young boy towards the far right of the photograph, holding his hat in his hand, also possibly peering towards the camera.

[5] I have seen postcards of this image cancelled in January 1902, setting a firm terminus ante quem for the photograph. I have also seen a third postcard, oriented vertically, bearing this same photograph.

Instructional Design in Higher Education: What is It?

Peter Romaskiewicz

What is this post about?

As a trained historian who has (rather belatedly) developed an interest in Instructional Design, I grew curious about its historical origins and development. Quite frankly, the more I learned about various teaching techniques, the more interested I became in tracing the trajectories of, and relationships between, specific theories and concepts. This is a cursory attempt to make sense of this field of study and place it in relation to the Scholarship of Teaching and Learning, the concept of Active Learning, and the advancements of Educational Technology.

Early Years

The origins of “Instructional Design” (ID) are often traced to the creation of training materials for the military during World War II. Yet, it was not until the 1960s that a more systematic approach to effective teaching began to appear. This included combining aspects of task analysis, learning objective specification, and criterion testing – all hallmarks of modern higher education – into an overarching model for effective instruction. The elementary principles were derived from the works of psychologists such as B.F. Skinner, Benjamin Bloom, Robert Gagné, and Robert Mager.[1] At this stage, there was interest in creating an overarching “systems approach” to teaching and learning, thus creating what may now be considered a specialized field of study.

As a result of these advances, in the early 1970s many universities started funding instructional improvement centers (with names such as the “Instructional Systems Development Center”) to help faculty improve the quality of their instruction. In 1977, the first peer-reviewed journal devoted to ID, the Journal of Instructional Development, was published.[2] Moreover, in the same year, the Association for Educational Communications and Technology (AECT; originating as the Department of Visual Instruction in 1923) proposed a formal definition of ID centered on five core elements: analysis, design, development, implementation, and evaluation (ADDIE).[3] Thus, ID was founded on a specific methodology focused on improving teaching efficacy.

Interest in instructional improvement in higher education faltered in the 1980s, however, and many of the improvement centers were defunded or disbanded. Furthermore, the Journal of Instructional Development ceased publication in 1988 due to “fiscal austerity.”[4] The 1990s proved to be an important juncture for the study and practice of teaching in higher education.

Constructivism and “Active Learning”

One of the important shifts in the 1990s was the growing interest in constructivism as a learning theory. Constructivism, viewed most broadly, has roots in epistemology, psychology, and sociology, and attempts to explain how people come to know the world around them.[5] Constructivist perspectives on learning are oriented around several principles:

  1. learning is an active, adaptive process
  2. knowledge is idiosyncratically constructed through personal filters of experience, beliefs, or goals
  3. knowledge is socially constructed
  4. effective learning requires meaningful, authentic (“real world”), open-ended, and challenging problems for the learner to work through[6]

Overall, the constructivist theory of learning is commonly positioned in opposition to the older behaviorist model, where learning is characterized as a passive stimulus-response to highly controlled surroundings. Following this new theoretical approach, learners are no longer treated as inert, empty vessels to be filled with knowledge, but as active participants who try to develop effective ways to solve novel problems. While some have criticized this simplistic characterization of behaviorism and of “traditional” views of learning, it cannot be denied that this new interest in constructivism in the 1990s spurred a novel wave of research into optimizing the learning environment for this new conception of the “engaged” learner.[7] Since, according to this framework, knowledge (or anything beyond low-order memorization) cannot simply be transferred from one mind to another, we are left with an inconvenient reality, namely, “we can teach, even well, without having students learn.”[8] Consequently, developing a full repertoire of teaching strategies based on sound research became ever more important. This coincided with a broader interest in teaching-related scholarship (discussed below).

It should be remembered that constructivism is not an ID theory. Instructional Design attempts to adopt relevant learning theories and develop a systematic approach to effective teaching. It should also be noted that instructional design aims to develop a range of pedagogical techniques – a proverbial pedagogical toolbox – thus, constructivism is only one learning theory that has been adopted. Nevertheless, the 1990s saw a growth in literature speaking to constructivist-based “active learning” environments, so much so that many of the most common teaching best practices today reflect, or have been reinforced by, the so-called constructivist movement.

One of the most well-known targets of active learning proponents is the classical lecture, now sometimes framed (unjustly) as an out-of-date modality of instruction. In 1991, Charles Bonwell and James Eison, authors of the seminal work Active Learning: Creating Excitement in the Classroom, proclaimed that “the exclusive use of the lecture in the classroom constrains student learning.” Instead, they promoted instructional activities “involving students in doing things and thinking about what they are doing.” Critics will often interpret this to mean that all lecturing activities need to be replaced by student-led learning initiatives. This is a misunderstanding of the application of active learning. The concern of Bonwell and Eison was directed towards the exclusive use of lecturing, where students only take notes and follow directions. To enhance learning, they recommend routinely engaging in activities throughout the lecture where students can reflect upon, analyze, evaluate, or synthesize the material that was presented.

These activities can vary greatly, but the general goal is to have students engage in higher-order thinking through reading, writing, and discussing. These can be very simple activities (such as employing think-pair-share) or more complex (such as providing a new reading to which a recently learned theory needs to be applied). In practice, these activities are not significantly different from what Michael Scriven called “formative evaluation” (or formative assessment) in 1967. These are assessment procedures performed during the learning process, as opposed to “summative evaluation” (or summative assessment), which takes place at the completion of a learning activity (often measured by exams).[9] A wide range of these activities, which provide crucial feedback to students, is often placed under the category of Classroom Assessment Techniques (CATs), a term coined by K. Patricia Cross and Thomas Angelo in 1988.[10]

The point is to punctuate lectures with moments of student activity in order to break the line of one-way transmission in exchange for more interactive moments of cognitive processing. This can occur between instructor and student, between students, or function as an individual reflective exercise. While there is no pre-determined time frame for engaging active learning activities, the common recommendation is to allow time for reflection and processing every 12-20 minutes of lecture. The timing depends on the complexity, density, and novelty of the information as well as the goals of the instructor. In practice, the instructor alternates between two roles in the active classroom setting, functioning first as the “sage on the stage” by providing important information or modeling procedural knowledge, then acting as a “guide on the side” by coaching and providing feedback to assist in the students’ development.

Teaching Informed by Scholarship

A second shift in the 1990s was the interest in developing scholarly literature that focused on teaching and learning in higher education. The theoretical underpinnings were outlined in Ernest Boyer’s Scholarship Reconsidered: Priorities of the Professoriate in 1990. Boyer was the president of the Carnegie Foundation for the Advancement of Teaching and called for faculty to expand beyond their traditional roles as scholars and consider how to better serve college and university missions to educate an increasingly diverse student population. Boyer proposed reconceiving scholarship (“the work of the professoriate”) as four distinctive types: the scholarship of discovery (e.g. traditional research in one’s discipline), the scholarship of integration (e.g. composing introductory textbooks), the scholarship of application (e.g. applied research), and the scholarship of teaching.[11] It was in this last category, the scholarship of teaching, that Boyer argued that teaching should not merely be “tacked on” to the duties of faculty, but should be treated as an active area of intellectual exploration where instructors plan, evaluate, and revise their pedagogical approaches based on a rigorous understanding of the relevant literature. In other words, teaching and research should not be seen as opposing scholarly interests, because research on teaching in higher education is a valuable contribution to scholarship in itself.

Soon after Boyer’s influential publication, other scholars such as Robert B. Barr and John Tagg started to highlight the limitations of perceiving higher education as mere access to instruction and suggested examining the value of improving student learning. The suggestions of Barr and Tagg presumed a constructivist theory of education where the focus was centered on the learner.[12] Consequently, the following president of the Carnegie Foundation, Lee Shulman, formally incorporated “learning” into Boyer’s “scholarship of teaching,” thus creating the field of the Scholarship of Teaching and Learning (SoTL, often pronounced “sō-tul”) as it is known today.[13]

While there are no formal criteria for research to fall under the rubric of SoTL, one working definition has been suggested by Michael Potter and Erika Kustra: “the systematic study of teaching and learning, using established or validated criteria of scholarship, to understand how teaching (beliefs, behaviours [sic], attitudes, and values) can maximize learning, and/or develop a more accurate understanding of learning, resulting in products that are publicly shared for critique and use by an appropriate community.”[14] Generally, however, SoTL is rather broad and encompasses any approach to teaching and learning in higher education that mirrors traditional research, namely having defined goals, appropriate methods, significant results, and appropriate presentation.

I have seen no attempts to define the relationship between ID and SoTL. Even though they have disparate origins, they tend to share common methods and goals. In practice, because of the specific initiatives of the Carnegie Foundation, SoTL appears to represent the scholarly output of disciplinary specialists interested in researching teaching practices in higher education, while ID represents the pragmatic work done in departments found on campuses (e.g. holding workshops, publishing SoTL-oriented journals [see below], managing informative websites on pedagogy, etc.). These departments often appear under a wide range of names, such as the Center for Teaching, the Center for Excellence in Teaching and Learning, and the Teaching Commons.[15] Perhaps the largest distinction is that historically ID has been interested in all levels of education (including training for business and the military), of which higher education was just one dimension. Additionally, ID has historically shown a closer affiliation with instructional media and technology (see below).

Ultimately, SoTL has become a recognized field of research both within individual disciplines and as a stand-alone discipline. Since 1990, there has been rapid growth in the publication of discipline-neutral journals exploring effective teaching. Below is an incomplete list of running publications falling under the SoTL rubric (including publications that existed prior to Boyer’s publication):

Numerous disciplines also have a history of studying how students learn within their fields, such as history (Teaching History, 1969-), sociology (Teaching Sociology, 1973-), or philosophy (Teaching Philosophy, 1975-), among many others. While SoTL research tends to be less tied to specific disciplinary domains, some will include these publications under the SoTL rubric. A helpful list of journal publications devoted to teaching and sorted by discipline is published digitally by the University of Saskatchewan Library (here).

Educational Technology and Instructional Media

A third shift in the 1990s, which I will only discuss briefly here, was the growing interest in using computers, and eventually the internet, for instructional purposes. Historically, we could trace the origins of ID to pre-World War II interest in technological advancements perceived as having an application for teaching. This would include early twentieth-century school museums and new visual media such as magic lantern slides and stereoview cards. In the coming decades, this interest would shift to the use of video and television.[16] This focus on researching (and adopting) the newest technology for the classroom is revealed in the name of one of the oldest professional groups dedicated to ID, the Association for Educational Communications and Technology (AECT), which was started in 1923 as the Department of Visual Instruction. Additionally, many of the earliest journals dedicated to ID, published under the editorial supervision of the AECT, had titles such as A[udio] V[isual] Communication Review, Tech Trends, Media Management, and School Learning Resources. After World War II, Educational Technology (also known as Instructional Media, among other names) was increasingly seen as separate from the research interests of the newly developing field of ID. While these fields clearly overlap, they also covered specific, complementary niches.[17]

Some of the more recent interests of this field are the development and management of university Learning Management Systems (LMS) and the designing of online or blended classrooms, especially for long-distance courses. This has spawned a new major in several American colleges and universities called Learning Design and Technology (LDT).

TL;DR

Many Instructional Development departments in colleges and universities operate useful, information-rich websites. If I had to choose just one on the merits of providing ample, practical information about university pedagogy while also providing some historical context, it would be the Vanderbilt University Center for Teaching (if you navigate to the Center for Teaching home page, look for the Teaching Guides menu).

Notes:

[1] I am indebted to Reiser 2001a and Reiser 2001b for this synoptic history of Instructional Design.

[2] The use and study of various forms of instructional media, such as audio-visual materials, is often treated as a parallel area of study to instructional design, with the recent focus on the use of computers and long-distance education.

[3] For discussion on the various definitions of Instructional Design see Branch & Dousay 2015: 14-8, and Reiser 2001a: 53-4. Robert Reiser offers a more nuanced definition of Instructional Design: “The field of instructional design and technology encompasses the analysis of learning and performance problems, and the design, development, implementation, evaluation and management of instructional and noninstructional processes and resources intended to improve learning and performance in a variety of settings, particularly educational institutions and the workplace. Professionals in the field of instructional design and technology often use systematic instructional design procedures and employ a variety of instructional media to accomplish their goals. Moreover, in recent years, they have paid increasing attention to noninstructional solutions to some performance problems. Research and theory related to each of the aforementioned areas is also an important part of the field.” See Reiser 2001a and 2001b. Some have suggested that instructional development mirrors the scientific method, see Andrews & Goodson 1980.

[4] Higgins et al. 1989: 8. Technically, the Journal of Instructional Development was combined with the Educational Communication and Technology Journal (titled prior to 1978 as A[udio] V[isual] Communication Review) and consolidated as Educational Technology Research and Development. Additionally, the publications Tech Trends, Media Management, and School Learning Resources were also consolidated; see Higgins et al. 1989, and also Dick & Dick 1989: 87.

[5] The figures most prominently associated with constructivism are Lev Vygotsky and Jean Piaget. See, e.g. Owen-Smith 2018: 17-8.

[6] There is no consensus definition of the constructivist theory of learning, but a survey of the literature suggests a polythetic dimension. For delineations of the elements of constructivism see, among others, Fox 2001: 24, Reiser 2001b: 63, and Karagiorgi & Symeou 2005: 18.

[7] For a summary overview of critiques against the novelty of constructivism, see Fox 2001. I agree with Fox that it is highly unlikely any “traditionalist” view of learning proposed the learning process to be entirely passive. For a list of scholarship exploring the relationship between constructivism and more “traditional” approaches, see Reiser 2001b: 63. Furthermore, within the camp of constructivism, there are more radical and more conservative views regarding the relative importance of the external environment and of internal, individual frameworks; see the brief discussion in Karagiorgi & Symeou 2005: 19.

[8] Karagiorgi & Symeou 2005: 18.

[9] The precursors to Scriven’s helpful distinction between formative and summative evaluation are discussed in Cambre 1981.

[10] Their recommendations were published in Classroom Assessment Techniques: A Handbook for Faculty.

[11] This is outlined in Chin 2018: 304.

[12] Barr and Tagg differentiate between the Instruction Paradigm and the Learning Paradigm, see Barr & Tagg 1995.

[13] This is also reflected in the Carnegie Academy for the Scholarship of Teaching and Learning (CASTL) launched in 1998. For Shulman’s analysis of the importance of learning, see Shulman 1999. A brief history of SoTL can be found here (by Mary Huber) and here (by Nancy Chick).

[14] Potter & Kustra 2011: 2.

[15] It seems plausible to say that SoTL has superseded ID as the preferred terminology to name this field of research.

[16] See Reiser 2001a.

[17] See, e.g. Dick & Dick 1989.

References:

  • Andrews, Dee H. & Goodson, Ludwika A. 1980. “A Comparative Analysis of Models of Instructional Design.” Journal of Instructional Development, pp. 161-82.
  • Barr, Robert B. & Tagg, John. 1995. “From Teaching to Learning: A New Paradigm for Undergraduate Education,” Change, Vol. 27, No. 6, pp. 13-25.
  • Bonwell, Charles & Eison, James. 1991. Active Learning: Creating Excitement in the Classroom. ASHE-ERIC Higher Education Reports.
  • Boyer, Ernest. 1990. Scholarship Reconsidered: Priorities of the Professoriate. The Carnegie Foundation for the Advancement of Teaching: Princeton University Press.
  • Branch, Robert Maribe & Dousay, Tonia A. 2015. Survey of Instructional Design Models. Bloomington: Association for Educational Communications and Technology.
  • Cambre, Marjorie A. 1981. “Historical Overview of Formative Evaluation of Instructional Media Products,” Educational Communication and Technology, Vol. 29, No. 1, pp. 3-25.
  • Chin, Jeffrey. 2018. “Defining and Implementing the Scholarship of Teaching and Learning,” in Learning from Each Other: Refining the Practice of Teaching in Higher Education, eds. Michele Lee Kozimor-King, Jeffrey Chin, Berkeley: University of California Press, pp. 304-11.
  • Cross, K. Patricia & Angelo, Thomas. 1988 [1993]. Classroom Assessment Techniques: A Handbook for Faculty, 2nd edition. San Francisco: Jossey-Bass.
  • Dick, Walter & Dick, W. David. 1989. “Analytical and Empirical Comparisons of the Journal of Instructional Development and Educational Communication and Technology Journal.” Educational Technology Research and Development, Vol. 37, No. 1, pp. 81-87.
  • Fox, Richard. 2001. “Constructivism Examined,” Oxford Review of Education, Vol. 27, No. 1, pp. 23-35.
  • Higgins, Norman; Sullivan, Howard; Harper-Marinick, Maria & López, Cecilia. 1989. “Perspectives on Educational Technology Research and Development,” Educational Technology Research and Development, Vol. 37, No. 1, pp. 7-18.
  • Karagiorgi, Yiasemina & Symeou, Loizos. 2005. “Translating Constructivism into Instructional Design: Potential and Limitations,” Journal of Educational Technology & Society, Vol. 8, No. 1, pp. 17-27.
  • Owen-Smith, Patricia. 2018. The Contemplative Mind in the Scholarship of Teaching and Learning. Bloomington: Indiana University Press.
  • Potter, Michael K. & Kustra, Erika D.H. 2011. “The Relationship between Scholarly Teaching and SoTL: Models, Distinctions, and Clarifications,” International Journal for the Scholarship of Teaching and Learning, Vol. 5, No. 1, Art. 23.
  • Reiser, Robert A. 2001a. “A History of Instructional Design and Technology: Part I: A History of Instructional Media,” Educational Technology Research and Development, Vol. 49, No. 1, pp. 53-64.
  • Reiser, Robert A. 2001b. “A History of Instructional Design and Technology: Part II: A History of Instructional Design,” Educational Technology Research and Development, Vol. 49, No. 2, pp. 57-67.
  • Shulman, Lee. 1999. “Taking Learning Seriously,” Change, Vol. 31, No. 4, pp. 11-17.

Working Notes on Japanese Postcard Publishers

Peter Romaskiewicz [Last updated: December 2019]

Introduction

In the ongoing (nay, endless!) attempt to identify the Japanese picture postcards (ehagaki 絵葉書) in my collection, I’ve decided to publish my working notes on identifying Japanese postcard publishers. Where possible, I try to provide historical information about the publisher. Moreover, using Urakawa Kazuya’s 浦川和也 four-period chronology as a foundation, I try to catalog variant designs printed on the reverse (atena-men 宛名面, “address side”) by each publisher as well as some different letterpress captioning styles on the obverse (egara [or shashin]-men 絵柄[写真]面, “design [or photograph] side,” or tsūshin-men 通信面, “communication side”)[1]. The goal is to help identify cards that do not bear a publisher’s name or trademark (shōhyō 商標, rogumāku ロゴマーク). The information below is mostly gleaned from Japanese sources (both print and digital) as well as some personal observations.

Please contact me if you can provide any other information or resources about Japanese postcard publishers: pmr01[at]ucsb[dot]edu.

Publishing Postcards in Meiji and Early Taishō Japan

The commercial market for photography in Japan grew significantly in the 1860s and 1870s with the arrival of globetrotting tourists looking for souvenirs of their exotic travels in Asia. The primary port of entry for travelers entering Japan during the Meiji era was Yokohama, which emerged as the center of this competitive commercial industry. Yokohama shashin 横浜写真, or “Yokohama photography,” came to denote the particular fusion of Western technology and Japanese craftsmanship, as monochromatic prints were hand-colored by artists to produce vibrant, eye-catching scenes. Throughout the 1880s and 1890s, Japanese-owned studios grew in number and significance, displacing their Western counterparts. Moreover, as travel restrictions were lifted for foreigners and domestic interest increased, studios started to successfully populate more diverse urban areas throughout Japan.

Not unrelatedly, the Japanese postal delivery service began in March 1871, and Japan joined the Union Postale Universelle (bankoku yūbin rengō 萬国郵便聯合) in June 1877, thus permitting the sending and receiving of international mail (although several countries maintained foreign post offices in select Japanese cities earlier). The first postal card (hagaki 端書) in Japan was issued in December 1873, but until the start of the twentieth century all cards were government-issued (kansei 官製). These are identifiable through the pre-paid franking printed on the address side, while the obverse was kept blank to accommodate a written message. Changes in postal regulations on October 1, 1900 afforded private companies the opportunity to publish picture postcards (ehagaki 絵葉書) where an illustration or design could be included on the obverse (until April 1907 the sender’s message also had to be written on this side). Two years later, the government started to produce its own commemorative picture postcards. These changes altered the landscape of the postcard market and started a new cultural phenomenon.

For privately issued (shisei 私製) cards, photographic imagery soon became the favored visual expression, and many images from Japanese photography studios were initially used for this new medium. These images were photomechanically reproduced through an inexpensive planographic printing technique known as the collotype (korotaipu コロタイプ), introduced commercially in Japan by Ogawa Kazumasa 小川一眞 (1860-1929) in 1889. Multi-color collotype printing was very difficult to execute; thus, many early twentieth-century postcard publishers employed artists who hand-painted the cards with washes of watercolor (some colors, like red, were pigment-based). Consequently, the aesthetic of Yokohama shashin that developed in the early Meiji period continued into the early Taishō era through this new visual medium.

The Russo-Japanese War of 1904-1905 initiated what is now referred to as a “picture postcard boom” (ehagaki būmu 絵葉書ブーム or ehagaki ryūkō 絵葉書流行). Postcards were sold all throughout Japan, especially in urban centers. One could find postcard specialty shops in cities like Yokohama, Tokyo, Kyoto, or Kobe. Moreover, many other businesses became involved in the lucrative postcard market, including photography studios, printing shops, booksellers, souvenir stores, and even temples. The larger publishers would sell their stock wholesale to other stores, thus canvassing the country with inexpensive photographic images of landscapes, city scenes, portraits of geisha, actors, or the imperial family, daily activities, war scenes, natural disasters, and so forth. At least one publisher, Ueda Photographic Prints Corp., had a wholesale outlet in New York City.

Infrequently, publishers would inconspicuously print their name and address on the card. It slowly became common, though far from standard, for larger publishers to print their signature trademark or logo on the card, most commonly in the stamp box (kitte ichi 切手位置) on the reverse side. While this would aesthetically frame the trademark, once a stamp was affixed it would also render the publisher anonymous. It is also possible to find a publisher’s name or insignia elsewhere on the card, such as in the letterpress caption on the obverse.

In too many cases, however, there is little identifying evidence to ascertain the publisher of a card. (In this industry of mass-production, it goes without saying that identifying the photographer or colorist is almost certainly impossible.) Elsewhere I have described a method to help determine otherwise anonymous publishers, and I consider this entry a further exploration of this endless, though enjoyable, quest.
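
Since the periodization is referenced throughout the catalogs below, it can be boiled down to a simple lookup from one observable feature of the reverse (whether the back is divided) to a candidate date range. The short Python sketch below is only my own illustration of that heuristic, not part of Urakawa’s published scheme: the period boundaries are the ones used in the section headings that follow, and distinguishing Period II from Period III still requires the publisher-specific design cues cataloged under each publisher.

# A minimal sketch of the period-dating heuristic used in the catalogs below.
# This is my own illustration, not Urakawa's published scheme: an undivided back
# places a card in Period I, while a divided back only narrows it to Periods II-III.

PERIODS = {
    "I": ("October 1900", "March 1907"),
    "II": ("March 1907", "March 1918"),
    "III": ("March 1918", "February 1933"),
}

def candidate_periods(divided_back):
    """Return the possible periods given only whether the back is divided."""
    return ["II", "III"] if divided_back else ["I"]

def date_range(periods):
    """Collapse a run of consecutive periods into a single date range."""
    return "{} - {}".format(PERIODS[periods[0]][0], PERIODS[periods[-1]][1])

if __name__ == "__main__":
    # An undivided back (no line separating message and address areas):
    print(date_range(candidate_periods(divided_back=False)))  # October 1900 - March 1907
    # A divided back: further cues (trademarks, headers, fonts) must narrow the range:
    print(date_range(candidate_periods(divided_back=True)))   # March 1907 - February 1933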

Ueda Photographic Prints Corp.

上田写真版合資会社


Ueda Yoshizō

Born in Tokyo, Ueda Yoshizō 上田義三 (1865-?) found employment after college in the oldest German export trading company in the capital, Ahrens & Co. (Ārensu shōkai アーレンス商会), founded by Heinrich Ahrens in 1869. In the mid-1890s, after Ueda toured Europe and America, he returned to Japan to open his first business venture in 1897 (Meiji 30), the Yokohama Photographic Printing Co. 横浜写真版印刷所, first located on Yatozaka Slope 谷戶坂. In 1905 (Meiji 38) the business moved to Okina-chō 3-chōme (No. 131) 翁町3丁目(131番), and around 1913 (Taishō 2) the business was renamed Ueda Photographic Prints Corp. 上田写真版合資会社 (the name “Uyeda” can be found printed on some postcards).

Ueda was highly successful in selling photographs and producing government-issued postcards on his own collotype printing equipment. Importantly, Ueda’s success in printing early landscape and figural picture postcards presaged the Japanese postcard boom after the Russo-Japanese War, and he became recognized as the “Japanese Pioneer of Picture Postcard Manufacturing” 日本元祖絵葉書製造元. Īkura Tōmei 飯倉東明 (1884-?) worked as Ueda’s director of photography in the first decade of the twentieth century. My analysis of Ueda postcards from 1907-1918 can be found here.

Tonboya

トンボヤ

Another postal box signboard on Isezaki-chō

Tonboya’s signboard on Isezaki-chō

Around 1905 (Meiji 38), Yoshimura Kiyoshi 吉村清, the proprietor of the well-known Tokyo-based publisher Kamigataya 上方屋 (in Ginza), started a new venture in Yokohama called Tonboya トンボヤ, or “Dragonfly Studio.”[2] Along with Ueda, Tonboya was one of the most prolific hand-painted postcard publishers in Japan, also opening offices in Tokyo, Kawasaki, and Yokosuka. The original shop was located on Isezaki-chō 2-chōme (No. 16) 伊勢佐木町2丁目(16番), a famous area known among foreigners as Theatre Street (see post frontispiece above). The storefront can easily be located in period photographs due to its distinctive Japanese-style red cylindrical postal box (yūbin posuto 郵便ポスト) sign painted with ehakaki エハカキ [sic], or “Picture Postcards.” The left-hand column of words on the white storefront sign says “photographic collotype printing.”


Kamigataya stamp box trademark

The postal box was also the trademark printed in the stamp box for Kamigataya-issued cards. The precise business relationship between Kamigataya and Tonboya remains obscure.


Postal box signboard in Motomachi

There appears to have been a second office in Motomachi, which also appears in period photographs, here with a signboard reading “postal cards.” In the early Shōwa period, after the Great Kantō Earthquake, the business moved to Isezaki-chō 1-chōme (No. 36) 1丁目(36番). Cards were initially hand-colored, but Tonboya adopted a multicolored collotype process starting in the early Taishō era. Tonboya remained in operation after the Great Kantō Earthquake of 1923.

 

Tonboya Reverse Designs and Obverse Captions

Period I (October 1900-March 1907) – Undivided Back


Type 1: The characteristic dragonfly (tonbo) trademark is placed in the stamp box.


Type 2: A variant dragonfly trademark is placed in the upper left-hand corner.


Type 3: Even without the dragonfly seal, the characteristic serif font in dark or black ink can help identify the publisher. The same serif font, however, was also used by Kamigataya, which is easily identified by the Japanese postal box trademark in the stamp box. With no identifying emblem in the stamp box, it would be difficult to confidently determine whether a card was published by Tonboya or Kamigataya. Tonboya, however, seems to have used black ink on the reverse, while Kamigataya seems to have preferred dark green or maroon. Moreover, Kamigataya cards are sometimes printed with the “UNION POSTALE UNIVERSELLE” header narrower than the “CARTE POSTALE” below it.


The obverse captioning is typically in capital letters and finished with a period. Some include a stock number in parentheses. It should be noted that Kamigataya also used capital letters in this period.

Period II (March 1907-March 1918)


Type 4: Dragonfly trademark with divided back and address lines. Variants exist with the rule lines for the name and address omitted.


Type 5: No dragonfly trademark with divided back, here with rule lines for the name and address.


Type 5 (variant): No dragonfly trademark with divided back, here without rule lines.


Type 5 (variant): No dragonfly trademark with barred divided back and without rule lines.


The obverse captions for the above designs often incorporate a dragonfly facing downwards and to the left. A stock number in parentheses, with a letter code indicating the location of the image, is also sometimes included. Note, however, that this system is not universal; these designs can also carry the older captioning style of capital letters.


Type 6 [Yokohama Jubilee](Japanese): This reverse design was printed for the 1909 fiftieth anniversary jubilee for the opening of Yokohama port. The symbol in the stamp box is the emblem of Yokohama. Notably, the reverse design bears the dragonfly as the ki キ.


Type 6 [Yokohama Jubilee](English): Same design as above with English name and street address. The design also incorporates “MADE IN JAPAN” in the dividing line, suggesting this card was printed with US customs laws in mind. The Yokohama city insignia is also missing.


Obverse captions for the anniversary jubilee designs have a dragonfly facing upwards to the left. Sometimes a stock number in parentheses is incorporated.


Type 7 (blue): Around September 1909, “UNION POSTALE UNIVERSELLE” is removed and a new bilingual header is introduced. The dotted dividing line may or may not contain “MADE IN JAPAN.” Notably, the reverse design bears the dragonfly as the ki キ.


Type 7 (umber): The reverse design is printed in blue, umber, or black.


Type 7 (black): A barred line variant is also common.


Type 7 (blue): Here with address lines.


The obverse captions for the above designs most typically have a dragonfly facing upwards with a stock number and letter identification in parentheses. There are, however, exceptions. The letterpress is commonly in italic print, but not always. At some point, the letter identification is printed in lower case. Confusingly, the Sakaeya publishing house 栄屋商店 lion is sometimes incorporated (the reverse design bears the dragonfly as the ki キ).

Period III (March 1918-February 1933)


Type 8: A bilingual header with a centered dividing line.

 


Type 8: A minor variant of the above.

 

Hoshinoya

星野屋

Yoshioka Chōjirō

Yoshioka Chōjirō 吉岡長次郎 arrived in Yokohama in 1904 (Meiji 37) with postcards purchased in Tokyo, hoping to turn a profit by reselling them to foreigners. After receiving numerous orders and making several trips back to Tokyo to restock, Yoshioka opened a shop in Yokohama at Onoe-chō 4-chōme, No. 61 尾上町4丁目(61番).


Possible display of cards at Hoshinoya

By the end of the Russo-Japanese War in the fall of 1905, he had collected many collotype plates of native landscapes and was very successful marketing to both foreigners and Japanese. Hoshinoya emerged as one of the most well-known postcard shops in the port of Yokohama.


Hoshinoya Reverse Designs and Obverse Captions

Period II (March 1907-March 1918)


The Art Nouveau style “Carte Postale” is an easily identifiable characteristic of Hoshinoya cards. I have also seen Nassen & Co. cards with this same font and scalloped stamp box, but cards with the Nassen & Co. imprint are far less common in the secondary market. In any case, a card with this reverse design but no publisher’s mark could be either Hoshinoya or Nassen & Co.


A variant style for “Carte Postale” can also be found. Note the stamp box is vertical.


This reverse design, with a French header and bilingual translation, is frustrating. Here, it is clearly identified by the Hoshinoya stamp box trademark. Without the stamp box insignia, however, this design is fairly common among cards in the secondary market. I had previously believed this was an Ueda back (Type 2), but I also have evidence that it may have been used by Tonboya. In any case, only Hoshinoya’s trademark can be found on the cards I have seen (thus far) of this design. Yet not only is the sans-serif font uncharacteristic of Hoshinoya cards of this period, but this publisher also often used “Union Postale Universelle” as a header, which is not even imprinted on the card here. It remains possible that reverse designs were shared among printers (see Sakaeya below). Unfortunately, I am not sure this will ever be resolved with certainty.

 


See my comments directly above.

Period III (March 1918-February 1933)

 


A Period III back using the Art Nouveau style “Carte Postale.”


Hoshinoya is clearly indicated on the obverse of this card.


I am unsure of this identification, but I take the star at the top of the dividing bar to indicate the Hoshinoya trademark.


I am also unsure of this identification. It seems that Hoshinoya started to use a light blue ink for the reverse designs, so I include this one here.


Hoshinoya relocated to Nikko and started to produce sets depicting this locale. Backs are also printed in Japanese.

 

Sakaeya & Co. 栄屋商店發行


Sakaeya storefront in Motomachi, Kobe

A Kobe-based company with a shop in Motomachi, Kobe. A majority of this publisher’s cards are of Kobe and its environs, but there are other images among its portfolio. Curiously, I have seen Sakaeya’s lion insignia in the captions of images that were printed on cards bearing both Ueda’s and Tonboya’s seals on the reverse. I’d speculate that Sakaeya purchased Ueda and Tonboya cardstock and resold the cards in Kobe with its lion insignia imprinted on the front. Period III cards also bear the insignia of Taisho Hato (see below), a dove with its wings spread open.

 

 

Sakaeya Reverse Designs and Obverse Captions

Period II (March 1907-March 1918)


Ueda publisher back with Sakaeya insignia in the obverse caption.


Tonboya publisher back with Sakaeya insignia in the obverse caption.


 


Sakaeya captions typically use stock numbers with letter identification (most are K for Kobe), sometimes inside parentheses. The letterpress is sometimes italicized. While the lion insignia is printed in the bottom right corner, sometimes the Sakaeya name is also included beside the stock identification number. It is uncommon for landscape postcards to have the Sakaeya name or insignia printed on the reverse.

 

Period III (March 1918-February 1933)


Sakaeya continued to use Ueda card stock for their postcards into Period III.


Eventually, Sakaeya incorporated the lion insignia into the stamp box.


Later period captions sometimes still incorporate the Sakaeya lion insignia, but it is omitted once the name and insignia are incorporated on the reverse. Sakaeya also started to use two lines of letterpress.

 

Other Publishers [also here]

  • Akanishi (Kobe 神戸)
  • Asahidō (Kyoto 京都)
  • Benrido 便利堂 (Kyoto 京都) [no trademark, but uses a distinctive font]
  • Nassen & Co. (Yoshioka-chō, Yokohama) – interlaced N and S atop floral design
  • Naniwaya Co. 浪華屋 (Kanda, Tokyo 東京神田) – [later became Tokyo Design Printing Co. 東京図按(vl. )印刷社; Kuroda Hisayoshi 黒田久吉]
  • Nisshinsha (Tokyo 東京)
  • S.N. Banshiudo 長島萬集堂 [Nagashima Banshūdō] (Shiba, Tokyo 東京芝)
  • Taisho Hato Brand 大正鳩ブランド (Wakayama 和歌山)
  • Tōdai-ji 東大寺 (Nara 奈良)

Notes

[1] The nomenclature for the sides of the postcard derives from their original design, where one side was reserved solely for the address, while the other was reserved for the written message and, eventually, a printed image. These are also known as the reverse (rimen 裏面) and obverse (hyōmen 表面).


Kamigataya storefront displaying postcards for sale.

[2] Some sources name the proprietor as Maeda Tokutarō 前田徳太郎, but I have not seen this name in printed Japanese sources. Some sources note 1907 (Meiji 40) as the date for the founding of Tonboya. A Kamigataya sign and display of postcards can also be found in the Motomachi district of Yokohama.


A Meaningful and Engaging Syllabus Design

Can you judge a book by its cover? A course by its syllabus?

I’d say many university syllabuses[1] are analogous to an end-user license agreement or a manual for your kitchen blender – in other words, things I wouldn’t be caught dead reading either.


Additions like images can help with student interest in the material

While thoughtful syllabus design is not a major area of research, there is a small but informative pool of literature on the topic.[2] Not surprisingly, a well-designed, engaging syllabus has been shown to prime student motivation and interest in the course more than a poorly designed syllabus.[3] I’d contend that since the syllabus is one of the first ways we interact with our students, it’s a genre that deserves thoughtful planning.

Many of us use the syllabus as a kind of legal contract, sometimes incorporating “legalese” into our course policies and even having our students sign the syllabus to signal a binding agreement.[4] (Or, to test whether students actually read the syllabus, some will hide “Easter eggs” and reward those students who “passed the test.”) Indeed, the expansion of the syllabus over the decades – or “syllabus bloat” – has been mainly due to the growing abundance of policy statements used to settle any potential student grievances.

While I understand the appeal of the contractual model, I’ve never found that it matched my teaching persona or fed into the classroom culture I was trying to establish.[5] Of course, listing course policies is necessary, but the syllabus does not need to be limited to this purpose.[6]

After reflecting on the intended audience and purpose of the syllabus genre, I’ve come to see it as one of the numerous pedagogical tools at my disposal. My syllabus design falls somewhere between a chapter in an introductory textbook, a promotional advertisement, and a monthly newsletter – at least this is my intention; every syllabus is an open-ended project.

One of the first hurdles I had to overcome was fully identifying my students as my target audience, not my colleagues. This may seem commonsensical in hindsight, but this realization immediately impacted my tone, the type of information I incorporated, and the overall graphic design of this new “learner-centered” and “engaging” syllabus [see chart].[7]

I had two specific revelations regarding content – essentially incorporating “hows” and “whys” into the copious amount of “whats.”

 

I include a section on student motivation (borrowed from Tona Hangen) for class discussion

First, I realized that several comments I would normally make about how to do well in my classes could easily and beneficially be incorporated. For example, I still hold a “what’s your motivation?” discussion in class, but now have some additional text in my syllabus that students can refer to during our conversation. I’ve found that it anchors discussion (like any class reading) and helps students make more pointed comments.

 

Additionally, I decided to include some material about why I designed the course the way I did, helping students to see my intellectual interests and pedagogical motivations. Accordingly, I write about the types of assignments and activities I employ and the value I find for students in assigning them.[8] I also try to reveal the scaffolding I’ve designed into the course, cuing students into the importance of daily foundational activities and how those are intended to build into larger, more complex, projects throughout the term. This is intended to help students with metacognition about their own learning behaviors and to see a clear pathway to achieve the learning objectives I’ve set for them (I’ve been inspired by the work of Tracy Zinn on this front).

Because I include a broader array of topics in my syllabus, I also do not go over the whole document on the first day of class. I introduce certain sections when necessary and have students refer back to the syllabus throughout the semester. I even have students look at it on one of the last days of the course.

The graphic design of my syllabus is almost entirely a result of the “syllabus makeover” by historian Tona Hangen (most notably, incorporating the trifold division of student success and motivation[9]). Good visual design not only captivates student interest, but also models professionalism (even enthusiasm), indicating that the entire course will be handled with similar care.[10] Moreover, spending time working with principles of good design makes us aware of important information hierarchies, which can be expressed through visual hierarchies (text color, text size, use of boxes, images, etc.).[11] As a consequence, students have an easier time parsing out the more important information.

Certainly, an outstanding syllabus design will not make up for poor course design, but it might be worthwhile to consider the syllabus as an integral tool in helping students find success in our courses.

Here are some notes on my latest summer syllabus for Zen Buddhism: Mind and Material Practices. [PDF: Zen Mind and Material Practices Romaskiewicz Summer 2019]

  • Tone: Given that my students are the primary audience, I’ve consciously adopted a more friendly and positive tone (including using we/us/our, not “students”), with compassion and humor where I can.
  • Visual Hierarchy: I use color and colored shapes to direct student attention to more important information
  • Images: To promote some of the topics we will discuss, I try to incorporate images that foreshadow these ideas (I also try to creatively incorporate a reference to the image in adjacent text)
  • Hows: I include text that helps students reflect on their motivations for taking my course and how they can succeed
  • Whys: I provide a rationalization for assignments and activities I employ, not simply the explanation of their execution



Notes:

[1] The plural form of syllabus is quietly debated in the halls of The Academy. I had a professor in grad school who preferred (ahem, actively and regularly commented upon) the proper plural form as syllabuses since the original term is not derived from Latin, but Greek (it’s a little more funky, actually). Thus, using a proper plural Latin declension (=syllabi) is unwarranted. (The same goes for octopus, coincidentally.) Interestingly, the Google Books NGram Viewer shows that syllabi is more common than syllabuses. The debate continues…

[2] A list of references can be found at the beginning of the document here [https://cte.virginia.edu/sites/cte.virginia.edu/files/Syllabus-Rubric-Guide-2-13-17.pdf].

[3] Harnish & Bridges 2011; Ludy et al. 2016.

[4] Of course, a syllabus is not, and cannot be, an actual binding legal contract. But you could treat your syllabus like one.

[5] While I believe a syllabus should clearly state course policies and try to consider numerous “what ifs,” I do not think the nearly endless implicit agreements between student and instructor need to be made explicit. In my view, signing a syllabus makes the motivation for adhering to policies external (abiding by the law), rather than internal (it’s the right thing to do), and it undermines trust. Of course, this is my personal view. Lastly, some folks are also fans of the “syllabus quiz.”

[6] A description of the syllabus as a mix between a contract, a permanent record, and a learning tool can be found here [https://jan.ucc.nau.edu/~coesyl-p/syllabus_cline_article_2.pdf].

[7] I am borrowing the “learner-centered” and “engaging” syllabus from the typology in Ludy et al. 2016 (see table above). It also amazes me that scholars might think that good visual design cheapens the “scholarly” integrity of a text. What one might dismiss as “flash” actually has an integral rhetorical purpose. Research on the impact of syllabus tone can be found in Harnish & Bridges 2011.

[8] Incidentally, I also chose to incorporate these things because I noticed students would rarely (maybe never) take notes about these aspects when we discussed them in class. Upon reflection, I felt that knowing the hows and whys was central to my class, and while I couldn’t test my students on these aspects, they could at least have an easy way to consult them.

[9] It’s interesting to note that the “engaging” syllabus in Ludy et al. 2016, p. 11, also adopted a similar approach, using the categories of “diet” and “life-style change.”

[10] See other insights about good visual design here [http://www.pedagogyunbound.com/tips-index/2014/2/7/make-your-course-documents-visually-engaging].

[11] I am only aware of one experimental study that compared a text-rich syllabus to a graphic-rich syllabus, i.e. Ludy et al. 2016. Here is its principal finding: “Students perceived both types of syllabus positively, yet the engaging syllabus was judged to be more visually appealing and comprehensive. More importantly, it motivated more interest in the class and instructor than the contractual syllabus. Using an engaging syllabus may benefit instructors who seek to gain more favorable initial course perceptions by students.” Ludy et al. 2016: 1.

References:

  • Doolittle, P. E. & Siudzinski, R. A. 2010. “Recommended Syllabus Components: What Do Higher Education Faculty Include in Their Syllabi?” Journal on Excellence in College Teaching, Vol. 21, No. 3, pp. 29-61.
  • Ludy, Mary-Jon, Brackenbury, Tim, Folkins, John Wm., Peet, Susan H., Langendorfer, Stephen J. & Beining, Kari. 2016. “Student Impressions of Syllabus Design: Engaging Versus Contractual Syllabus.” International Journal for the Scholarship of Teaching and Learning, Vol. 10, No. 2, Article 6.
  • Harnish, Richard J. & Bridges, K. Robert. 2011. “Effect of Syllabus Tone: Students’ Perceptions of Instructor and Course.” Social Psychology of Education, Vol. 14, No. 3, pp. 319-330.
  • Perrine, R. M., Lisle, J. & Tucker, D. L. 1995. “Effects of a Syllabus Offer of Help, Student Age, and Class Size on College Students’ Willingness to Seek Support from Faculty.” Journal of Experimental Education, Vol. 64, No. 1, pp. 41-52.
  • Saville, B. K., Zinn, T. E., Brown, A. R. & Marchuk, K. A. 2010. “Syllabus Detail and Students’ Perceptions of Teacher Effectiveness.” Teaching of Psychology, Vol. 37, No. 3, pp. 186-189.
  • Zinn, Tracy E. 2009. “But I Really Tried! Helping Students Link Effort and Performance.” Observer, Vol. 22, No. 8, pp. 27-30.

Opening Pandora’s Box: Student Evaluations of Teaching

[Update: This post mostly deals with how instructors can best use the problematic instrument of student evaluations. A recent, accessible post for administrators who are determining how (or if) to use student evaluations can be found at Research on Student Ratings Continues to Evolve. We Should, Too. We both cover many of the same resources.]


Having just finished another summer of teaching, last week I participated in the seemingly timeless ritual of passing out and administering student course “evals.”

Over four decades of research has shown, however, that student evaluations of teaching (SETs[1]) are poor and often problematic measures of teaching effectiveness. Not only do SETs exhibit systematic bias, the data are also often statistically misused by administrations and by instructors themselves. These problems led one recent study to flatly proclaim that “SET should not be relied upon as a measure of teaching effectiveness,” and that “SET should not be used for personnel decisions.”[2]

When we accept that SETs are not statistically reliable (consistent across measures), statistically valid (testing what they claim to test), or appropriately applied, we are left to question whether student evaluations offer any viable information for the instructor.

I suggest that student feedback, especially written comments solicited with proper instructions, can open crucial lines of communication with the class and can be used in a limited capacity – in coordination with other measures – for instructors to critically self-assess. Below I offer end-user advice for university instructors on how best to prepare students to give constructive feedback and how to use that information to become more critically reflective and effective teachers. I draw upon scholarly research and recently implemented institutional initiatives, illustrated with personal practices, to comment on how best to incorporate student feedback into our teaching.

(1) A Broken Instrument – Overview

I take as axiomatic that SETs as currently designed and interpreted are poor proxies for measuring teaching quality or effectiveness.[3] For example, a range of studies have shown that among the best ways to predict student evaluations is to examine the responding students’ interim grades. In other words, students’ anticipation of a good course grade is highly correlated with positive teaching evaluations.[4] This conflation of grade expectation and teaching effectiveness is just one of the reasons the validity of SETs has been called into question. Clearly, instructors can also engineer positive feedback by “teaching to the test” (instead of having students do the more difficult work of learning the skills required to do well on a test) or by having generous grading policies, among other tactics.[5]

One of the more prominent areas of research has focused on gender and racial biases exhibited in SET data. Dozens of empirical research papers point to statistically significant biases against female instructors, and one recent randomized, controlled, blind experiment at a US institution bolstered these findings. Regardless of student performance, instructors perceived as male (the experiment was performed with an online course) received significantly higher SET scores.[6] This held true even for measures one would expect to be objective, such as perceived promptness in returning graded assignments – even though male and female instructors returned assignments at the same time. In aggregate, studies have shown that women and men are equally likely to make biased judgments favoring male instructors.[7]

Additionally, there is evidence suggesting that students who rate their instructor’s effectiveness highly tend to perform more poorly (i.e. receive worse grades) in subsequent advanced classes than students who rated their previous instructor’s effectiveness as low. This means that more effective instructors can actually be evaluated more negatively by students than their less effective counterparts.[8] Part of this poor evaluation may be due to the use of more challenging active or deep-learning strategies, which have ultimately been shown to be more effective teaching techniques but sometimes elicit active student resistance.[9]

Despite their ubiquity on college campuses, it has been shown that SETs do not primarily measure teaching effectiveness, but instead measure student biases and overall subjective enjoyment.

I will not attempt to convince skeptics of the reliability of this stunning research; there is plenty of scholarship available to comb through to form your own opinion.[10] One can view the online bibliography of gender and racial bias in SETs, regularly updated by Rebecca Kreitzer, here.[11] Additionally, there are at least two peer-reviewed journals dedicated to exploring evaluation more broadly, Assessment & Evaluation in Higher Education (1975-) and Studies in Educational Evaluation (1975-). For a summary overview of SET biases, meta-analyses are offered in Wright & Jenkins-Guarnieri 2012 (which concluded that SETs are apparently valid and largely free from bias when paired with instructor consultations) and Uttl et al. 2017 (which concluded that SETs are invalid).

For the TLDR crowd, I would simply suggest reading Boring et al. 2016, a statistically rigorous study examining two separate randomized experiments. It also received a fair amount of popular press. A presentation on some of its principal findings, by one of the contributing authors, is also available online.

I also make no attempt to argue that one can read course evaluations in a manner that adjusts for student bias – the factors contributing to that bias are so numerous and complex that SETs should not be treated as the sole objective measure of teaching quality under any interpretive lens. Recommendations on how best to use SETs in hiring, firing, or tenure decisions have also been discussed in the academic literature. A qualified (and sometimes apologetic) defense of SETs is put forth by Linse 2017, while a point-counterpoint perspective is provided in Rowan et al. 2017. In general, incorporating SETs as part of a much more comprehensive teaching portfolio appears to be the middle ground adopted by many university administrations. (The American Sociological Association also published its suggested guidelines for using student feedback, in light of recent research, this week.)

(2) Finding the Critical Perspective – Brookfield’s Four Lenses

From the perspective of an instructor, we must remember that student feedback constitutes only one window onto our teaching. Stephen Brookfield has developed a method to help instructors become more critically reflective teachers by using four lenses, often simply referred to as Brookfield’s Four Lenses.[12] In the hopes of increasing self-awareness, one must draw from several different vantage points to gain a more comprehensive perspective. These “lenses” include 1) the autobiographical lens, 2) the students’ eyes, 3) colleagues’ perspectives, and 4) the theoretical literature. These roughly correlate to self-reflection, student feedback (or SET), peer evaluation, and exploration of scholarly research.[13]

Among these Four Lenses, arguably the most important is self-reflection, which ultimately encompasses the other three since they all require comparative reflection. This heightened self-awareness forms a foundation for critical and reflective teaching and informs us where adjustments in our teaching may be necessary.

Lens 1A – Annotated Lesson Plans: In terms of the autobiographical lens, on a practical level, I regularly take notes after individual lectures (sometimes simply in the time between when one class ends and the next begins), noting things pertinent to the effectiveness of conveying the material, such as how long class activities took, good questions students asked, insightful discussion topics, and sticking points or conceptual hurdles. These notes have undoubtedly become the most valuable information I consult when revisiting lectures in later semesters. Specifically, these lecture annotations allow me to adjust future material, activities, discussions, and timing allowances.

Lens 1B – Annotated Syllabus & Journaling: Another helpful self-assessment activity has been annotating my syllabus throughout the semester, culminating in a significant review at the end of the term. By regularly taking notes on readings, class policies, grading procedures, and course organization, this information has assisted me in reconceptualizing my courses and tracing out new areas to explore. Lastly, I have implemented journaling – primarily in the form of this blog –  as a means to reflect upon my experiences in the classroom (both positive and negative) and chronicle my discoveries about teaching.

Lens 2 – Mid-Term Evaluations: From this bird’s eye view, student feedback operates as just one measure of teaching quality and should be balanced against other critical perspectives. Importantly, gathering student feedback should not be reserved for only the end of a course. Informal, anonymous mid-term evaluations can provide actionable ideas that could help correct teaching oversights – or encourage us to continue what we are doing.

Typically, I will ask a pair of subjective questions: 1) “What is working well for you?” and 2) “What is not working well for you?” – both in relation to my teaching of the course material. I also direct students to comment on the many facets of the course, including the readings, assignments, class activities (group activities or student-led discussions), and lectures – or anything else. Admittedly, not all of the anonymous feedback is constructive or actionable, but if I see clear patterns in the comments I will take them into consideration when planning future classes. I also spend a few minutes at the beginning of the following lecture to discuss the feedback with the class and allow for further discussion. I use this as an opportunity to point out which comments were actionable (positive or negative) and which were irrelevant (such as the time of the class, the size of the class, or the temperature of the room). Students need training in providing relevant and actionable narrative commentary, a point I will return to below.[14]

Lens 3 – Teaching Community: As is commonplace in graduate school, I received no formal training in teaching as part of my program, but it was through collegial conversations with peers that my interest, and confidence, in teaching grew.[15] Even though my colleagues did not possess formal training in pedagogy, this informal community functioned as a place to discuss classroom successes and failures and provided another valuable perspective. In many cases, these conversations revealed the diversity of possible approaches in the classroom and inspired me to take a few pedagogical risks (or what I originally perceived to be risks).

Lens 4 – Scholarship on Teaching and Learning: In order to make the best sense of the insights drawn from the three lenses of self, student, and peer, instructors should also consult the literature and engage with established theory. This often provides us with technical vocabulary that can better describe the experiences we all share.[16] Fortunately, most universities offer workshops that instructors can attend to improve the quality and effectiveness of their teaching. Moreover, the Scholarship on Teaching and Learning (SoTL) is quite voluminous, including many journals such as College Teaching, International Journal of Teaching and Learning in Higher Education, Journal of Effective Teaching in Higher Education, and the Journal on Excellence in College Teaching, among others. There are also numerous disciplinary journals dedicated to teaching, including Teaching Theology and Religion and the Journal of Religious Education in my home discipline of religious studies.[17]

(3) Revisiting Student (Written) Feedback – And Hope Remained?

There is significantly more research on the close-ended, ordinal-scale questions of SETs than on the open-ended “narrative” commentary that often accompanies them. Several studies have noted that written comments can provide more useful and important feedback than statistical reports.[18] Of course, this does not mean that all comments are necessarily relevant to teaching effectiveness, nor should they be assumed to be free of bias.[19] While much more research needs to be done in this area, written comments can contain more course- (and instructor-) specific details and provide actual ideas for improving teaching.[20] Because of the potentially actionable and specific nature of written comments, instructors should strategize about how best to administer the written portion of student evaluations.

It is important for instructors to make sure students are aware of the purpose of student feedback, and possibly to explain how feedback has been used in the past to create better learning environments. To help students reflect on the effectiveness of my teaching, I will often revisit the course syllabus and have students reread the learning outcomes, directing them to think further about how the structure of the course, readings, assignments, and activities helped or inhibited the realization of those outcomes. Focusing student attention on teaching effectiveness and quality can help minimize irrelevant commentary or comments on (perceived) instructor identity.[21]

It is also important to inform students about the value of written comments and invite them to write down their insights. Research shows that between 10% and 70% of SETs include written comments, so asking students directly to write commentary is necessary. To ensure the comments are actionable, I also ask students to provide the rationale for their opinions (simply put, I tell them to always use “because statements,” e.g. “I (dis)liked this course because…”). Importantly, I also give students ample time to discuss and complete the evaluation task, around 10-15 minutes (I leave the room when students begin the evaluations).

Some institutions have begun initiatives that explain the importance of student feedback directly to students and describe how to provide effective feedback. For example, the Faculty Instructional Technology Center at UC Santa Cruz provides instructions to students about crafting effective comments (with examples) and about what types of comments (emotionally charged and identity-based) to avoid (see here). Moreover, the center provides instructions for instructors on how to craft the most beneficial questions, focusing on specificity, open-endedness, and goal-orientation (see here). (Similar instructions can be found at the UC Berkeley Center for Teaching and Learning, University of Wisconsin-Madison, and Vanderbilt University Center for Teaching.)

A more innovative approach was recently taken by the UC Merced Center for Engaged Teaching and Learning, which produced a set of short 3-7 minute videos for instructors to show in class (instructors choose which length to show; the 3-minute version is embedded above). Promoted as “Students Helping Students,” the videos feature university students talking about the importance and purpose of feedback and provide guidelines on crafting useful comments (see here).

After receiving written student feedback, instructors should pay attention to recurring themes or stories that emerge in the commentary. Uncorroborated comments mean very little, especially if they do not align with your own reflections, the observations of colleagues, or insights taken from the scholarly literature. In the end, mid-term and end-of-term student feedback, especially written commentary, can offer crucial insights that allow instructors to critically self-assess their pedagogical strategies and develop into reflective teachers.

Notes:

[1] Given the subjective nature of student evaluations (described below), some institutions and researchers read the acronym SET as “student experiences of teaching.”

[2] Boring et al. 2016: 2, 11.

[3] “Teaching effectiveness” is generally, though not universally, defined as the instructor’s capacity to facilitate or increase student learning.

[4] Economist Richard Vedder comments that grade inflation in American universities began in the late 1960’s and early 1970’s, roughly the same time student evaluations became a common evaluation tool (SETs were first used in the 1920’s; see Linse 2017). The classic study of this phenomenon appears to be Johnson’s Grade Inflation: A Crisis in College Education (2003). Irrespective of the title, a large portion of the book is dedicated to analyzing SETs and their relation to course grades. More recent studies include Griffin 2004 and Stroebe 2016. To be fair, some debate the magnitude of the correlation between SETs and grade expectation; see, e.g., Gump 2007 and Linse 2017. One can also refer to the meta-analyses presented in Wright & Jenkins-Guarnieri 2012 and Uttl et al. 2017 (the latter summarized here [https://www.insidehighered.com/news/2016/09/21/new-study-could-be-another-nail-coffin-validity-student-evaluations-teaching]).

[5] This could also include timely psychological priming, such as telling students they are doing exceptionally well with extraordinarily difficult materials, or giving easy assignments early in the term to set up higher than normal grade expectations.

[6] MacNell et al. 2015. The data was further analyzed in Boring et al. 2016. Most of the empirical research in this area relies on incomplete censuses of the student population (the students who happen to return their evaluations) as opposed to truly random samples, so this is an important study confirming the findings of other reports.

[7] There are numerous other biases that have been detected in SET data, such as instructor age and attractiveness, time of day, and class size, none of which is related to teaching quality.

[8] See Carrell & West 2010, Braga et al. 2014, and Stroebe 2016.

[9] See Pepper 2010 and Carrell & West 2010. For more varied results on the relationship between low evaluations and active learning, see Henderson 2018. An overview of some of these issues for teaching physics, but relevant to other disciplines, can be found here.

[10] While the empirical evidence is “decidedly mixed” (Peterson et al. 2019), there is undeniable evidence that biases are widespread. Among the resources listed in the Kreitzer bibliography noted above are several research papers that have found statistically negligible bias in their SETs, but these seem to be the exception rather than the rule. An overview of the wide range of biases that have been empirically studied in SETs can be found here (University of Dayton: https://www.udayton.edu/ltc/set/faculty/bias.php).

[11] Link: https://docs.google.com/document/d/14JiF-fT–F3Qaefjv2jMRFRWUS8TaaT9JjbYke1fgxE/mobilebasic?fbclid=IwAR3_W1actb5hg-rf2bbxrwAlal2K16askYDm5EJOTdeRCptkZEFuryrxQAY. A different annotated bibliography, updated by Danica Savonick and Cathy N. Davidson, can also be found here: https://www.hastac.org/blogs/superadmin/2015/01/26/gender-bias-academe-annotated-bibliography-important-recent-studies.

[12] I am grateful to Lisa Berry for informing me of Brookfield’s body of work.

[13] Brookfield first proposed this model in 1995 in Becoming a Critically Reflective Teacher (2nd edition published in 2005). There are numerous online resources summarizing the principal arguments; here is one [https://www.learning.ox.ac.uk/media/global/wwwadminoxacuk/localsites/oxfordlearninginstitute/documents/supportresources/lecturersteachingstaff/resources/resources/CriticallyReflectiveTeaching.pdf].

[14] Admittedly, this may seem objectionable to some because it can look like tampering with student opinions of the course. But this approach is modeled on training students to give useful feedback during peer review of papers; students need practice, and need to receive feedback, to learn how to be effectively critical.

[15] I will forever remain indebted to my university Writing Program which offered formal training in pedagogy, ultimately leading to working with our school’s Instructional Development as a consultant.

[16] A personal example: While training to teach a first-year composition and rhetoric course, I was given a reading that distinguished between “boundary crosser” and “boundary guarder” students with regard to how they accessed and made use of prior genre knowledge. These categories proved helpful in giving me a conceptual “handle” for understanding my experience with several students and a common vocabulary for discussing different approaches to these students with my peers.

[17] An incomplete listing of disciplinary journals on teaching can be found here.

[18] Noted (with references) in Brockx et al. 2012: 1123.

[19] Perhaps the most cited internet resource demonstrating bias in written commentary is Gendered Language in Teacher Reviews, run by Ben Schmidt. The site aggregates data from RateMyProfessors.com and allows users to sort the data by keyword.

[20] Brockx et al. 2012.

[21] One recent study (Peterson et al. 2019) has shown, however, that when the implicit race and gender biases found in SETs were explained to students, those biases were significantly mitigated in the evaluations (in comparison to a control group).

Here is the anti-bias language that was used in the experiment: “Student evaluations of teaching play an important role in the review of faculty. Your opinions influence the review of instructors that takes place every year. Iowa State University recognizes that student evaluations of teaching are often influenced by students’ unconscious and unintentional [bolded in original] biases about the race and gender of the instructor. Women and instructors of color are systematically rated lower in their teaching evaluations than white men, even when there are no actual differences in the instruction or in what students have learned. As you fill out the course evaluation please keep this in mind and make an effort to resist stereotypes about professors. Focus on your opinions about the content of the course (the assignments, the textbook, the in-class material) and not unrelated matters (the instructor’s appearance).” Much more research needs to be done exploring this deeply important issue.

References:

  • Boring, Anne, Ottoboni, Kellie & Stark, Philip B. 2016. “Student Evaluations of Teaching (Mostly) Do Not Measure Teaching Effectiveness.” ScienceOpen Research. [DOI: 10.14293/S2199-1006.1.SOR-EDU.AETBZC.v1]
  • Braga, Michela, Paccagnella, Marco & Pellizzari, Michele. 2014. “Evaluating Students’ Evaluations of Professors.” Economics of Education Review, Vol. 41, pp. 71.
  • Brockx, Bert, Van Roy, K. & Mortelmans, Dimitri. 2012. “The Student as a Commentator: Students’ Comments in Student Evaluations of Teaching.” Procedia – Social and Behavioral Sciences, Vol. 69, pp. 1122-1133.
  • Carrell, Scott E. & West, James E. 2010. “Does Professor Quality Matter? Evidence from Random Assignment of Students to Professors.” Journal of Political Economy, Vol. 118, No. 3, pp. 409-32.
  • Griffin, B.W. 2004. “Grading Leniency, Grade Discrepancy, and Student Ratings of Instruction.” Contemporary Educational Psychology, Vol. 29, pp. 410–25.
  • Gump, S.E. 2007. “Student Evaluations of Teaching Effectiveness and the Leniency Hypothesis: A Literature Review.” Educational Research Quarterly, Vol. 30, pp. 56–69.
  • Henderson, Charles, Khan, Raquib & Dancy, Melissa. 2018. “Will My Student Evaluations Decrease if I Adopt an Active Learning Instructional Strategy?” American Journal of Physics, Vol. 86, p. 934. [DOI: 10.1119/1.5065907]
  • Låg, Torstein & Sæle, Rannveig Grøm. 2019. “Does the Flipped Classroom Improve Student Learning and Satisfaction? A Systematic Review and Meta-Analysis.” AERA Open. [DOI: 10.1177/2332858419870489]
  • Linse, Angela R. 2017. “Interpreting and Using Student Ratings Data: Guidance for Faculty Serving as Administrators and on Evaluation Committees.” Studies in Educational Evaluation, Vol. 54, pp. 94-106.
  • MacNell, L., Driscoll, A. & Hunt, A.N. 2015. “What’s in a Name? Exposing Gender Bias in Student Ratings of Teaching.” Innovative Higher Education, Vol. 40, No. 4, pp. 291-303.
  • Marsh, H.W. & Roche, L.A. 2000. “Effects of Grading Leniency and Low Workload on Students’ Evaluations of Teaching: Popular Myth, Bias, Validity, or Innocent Bystanders?” Journal of Educational Psychology, Vol. 92, pp. 202-28.
  • Peterson, David A. M., Biederman, Lori A., Andersen, David, Ditonto, Tessa M. & Roe, Kevin. 2019. “Mitigating Gender Bias in Student Evaluations of Teaching.” PLoS ONE, Vol. 14, No. 5. [DOI: 10.1371/journal.pone.0216241]
  • Rowan S., Newness E.J., Tetradis S., Prasad J.L., Ko C.C. & Sanchez A. 2017. “Should Student Evaluation of Teaching Play a Significant Role in the Formal Assessment of Dental Faculty? Two Viewpoints: Viewpoint 1: Formal Faculty Assessment Should Include Student Evaluation of Teaching and Viewpoint 2: Student Evaluation of Teaching Should Not Be Part of Formal Faculty Assessment.” Journal of Dental Education, Vol. 81, pp. 1362-72.
  • Stroebe, Wolfgang. 2016. “Why Good Teaching Evaluations May Reward Bad Teaching: On Grade Inflation and Other Unintended Consequences of Student Evaluations.” Perspectives on Psychological Science, Vol. 11, No. 6, pp. 800-16.
  • Uttl, Bob, White, Carmela A. & Gonzalez, Daniela Wong. 2017. “Meta-Analysis of Faculty’s Teaching Effectiveness: Student Evaluation of Teaching Ratings and Student Learning are Not Related.” Studies in Educational Evaluation, Vol. 54, pp. 22-42.


Take-Home Quizzes and the Art of the Distraction

A few years ago I decided to have all my students do their multiple choice quizzes at home and online.[1] It’s fairly easy to set up these quizzes if your school uses a Learning Management System (LMS) like Moodle (or an institutional adaptation). As I’ve noted before, it saves both precious class time and grading time.

This practice is predicated on the known benefits of giving frequent, low-stakes (low grade impact) assessments. When given early in the term, these quizzes allow students to self-assess before higher-stakes exams occur, and they also provide valuable feedback to instructors regarding the success of their teaching strategies.[2]

In practice, I make the online quizzes timed, giving students 1-2 minutes per multiple choice question depending on the complexity or cognitive challenge of the question (I earmark less time for recognition questions than for application questions, for example). I’ve come to be very explicit in recommending how students should study; I tell them they should read and reorganize their class notes, comparing and incorporating ideas from the readings, the class slides (which I also provide), and what they remember from discussion.[3] Students are ultimately free to use their notes, course readings, and slides when they take the quiz, but the imposed time limit demands they have some understanding of the material, or a conceptual organization of their resources, to know where to look.
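To make the time-budget arithmetic concrete, here is a minimal, hypothetical Python sketch of how one might total up a quiz’s time limit from its mix of recognition and application questions. The helper name quiz_time_limit and the exact per-question minute values are illustrative assumptions, not settings pulled from Moodle or any other LMS.

    # Hypothetical helper: estimate a timed quiz's total limit from its question mix.
    # The per-question minutes follow the rough 1-2 minute rule described above;
    # adjust these values to fit your own course and LMS settings.

    QUESTION_MINUTES = {
        "recognition": 1.0,   # simple recall/recognition items
        "application": 2.0,   # items that apply concepts to a novel situation
    }

    def quiz_time_limit(question_types):
        """Return a suggested quiz time limit (in whole minutes) for a list of question types."""
        total = sum(QUESTION_MINUTES[q] for q in question_types)
        return int(round(total))

    # Example: a 10-question quiz with roughly 10% application questions.
    quiz = ["recognition"] * 9 + ["application"] * 1
    print(quiz_time_limit(quiz))  # -> 11 minutes

However you calculate it, entering the resulting limit into the quiz settings is what creates the gentle pressure that rewards organized notes over frantic searching.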

I’ve recently come to realize an even better benefit of online quizzes – the ability to give immediate feedback on student responses. Online quizzes provide fields where instructors can type feedback for individual multiple-choice responses, or for the entire question, which appears after it has been submitted.


I now use this space to offer a brief explanation of the structure of the question, especially noting whether it required students to apply their knowledge. This would involve, for instance, taking a novel situation and applying concepts they’ve studied to it (for quizzes I try to make only 10-15% of the questions application-based; this percentage increases for higher-stakes assessments). In this way, I help students identify why a particular multiple choice question seems harder than others, especially in comparison to one asking for simple recognition, i.e. recognizing the right word among the responses.

I also use this space to explain why the incorrect responses to a multiple choice question – called “distractors” – might have seemed plausible. For example, a distractor might represent the popular view of a phenomenon instead of the analysis we’ve offered in class. Or a distractor might represent a common conceptual misstep in analyzing the question. With this feedback, students can immediately see where their thought process went off track when answering the question and gain more insight into the topic at hand.

Ultimately, I believe the best multiple choice exam is one that has been created by students. Creating good multiple choice questions, with plausible “distractors,” requires precisely the higher-order thinking many of us want our students to cultivate. Online quizzes have provided a new way to make this a reality.

One of the biggest issues I have found when having my students craft their own multiple choice questions (for extra credit) is the difficulty they have writing higher-order (application) questions or providing plausible distractors for all or most of the responses. I believe the feedback I provide can help students see how good questions are crafted, thus helping them study and creatively learn the material. I hope to return to this post after my current summer course.

Notes:

[1] While I ask that the quizzes be taken individually by students, cooperative quizzes are something to explore.

[2] While there is plenty of literature extolling this as a standard best practice, I learned the value of frequent and early assessment the hard way. The first semester I taught at a community college I did not schedule any low-stakes quizzes. The first formal test my students had was a midterm, and while many did fine, an abundance of students did exceptionally poorly (with grades under 30%). I had not provided them the opportunity to gauge their early level of understanding, and consequently to adapt their learning strategies.

[3] I’ve talked about the importance of note reorganization for study here.

Drawing to Think & Thinking through Drawing

Is there value in having university students draw in a humanities course? Admittedly, this is a charged question. I do not think practicing still-life drawing would benefit students greatly given the normal range of learning outcomes for a humanities course. But, beyond the perceptual skills being practiced while drawing, is there a cognitive benefit that can be leveraged? From this angle, I absolutely think there is pedagogical value in having students draw.[1]

For those wary of blindly joining me in advancing a “drawing across the curriculum” agenda (a bad joke for my writing colleagues), let me restate what I think drawing can be. For me, drawing is not about mimesis, the creation of a real-world replica on paper, but about schematization. Drawing is not merely related to sensation, but also to cognition and meaning-making. Mental schemas function to align a range of perceptual data and convert them into intelligible concepts that can be used. Drawing is simply a physical practice, often overlooked in a non-art classroom, that enables this dynamic intellectual process. (I should note, I am not advocating incorporating drawing activities to speak to “visual learners” – the myth of different “learning styles” has long since been debunked. Schematizing helps everyone.)

Graphic Organizers (Data Visualization)

One of the most immediate applications of drawing is the creation of graphic organizers, which allow for the construction of knowledge in a hierarchical or relational manner. Organization that is non-linear (unlike linear note-taking or outlining) often leads to better retention and recall. Semantic maps, concept maps, Venn diagrams, and tree diagrams (even T-charts) can all be implemented effectively in a classroom environment. If students have difficulty developing them on their own, instructors can assist by making handouts with portions of the charts left blank. I will admit, there is a learning curve to creating more complex graphic organizers, but the goal should ultimately be to have your students attempt to create them – doing the conceptual work is where the greatest benefits lie.


My first concept map for my Freshman Composition course. For future iterations I would have students help with much of the work.

Maps and Other Diagrams

More commonly, I have my students draw maps. Instead of showing a map of a region, I will first schematize it on the blackboard – and have my students draw along with me.[2] I will then show a proper map after the exercise, mostly to relate what we’ve drawn to what’s on the map. My maps, by choice, are minimalist; I only choose to depict what I think is most pertinent to the content or narrative I am presenting. For example, I often focus on rivers and lakes (the sources of life and centers of human activity), or mountains and deserts (obstacles to human movement), or cosmopolitan centers (where documents are often produced, and also the civil antipodes to foreign “barbarism”). I can then draw lines to represent human migration or the movement of ideas. This clearly takes more time than simply showing a map on a slide, but I’ve found it to be more effective in crystallizing ideas for students.[3] I’ve also included drawing these minimalist maps (with clear labels) on student exams.

Along these lines, I’ve also spent time drawing mythic cosmologies with my students (e.g. the Buddhist cakravāla and its dhātus – I call it the Buddhist wedding cake), as well as other diagrams produced in the primary materials we are working with (e.g. the bhavacakra). A lot of meaning is often encoded in these images by the original artists, and I would argue there is value in (selectively) reproducing them, not only looking at or analyzing them.


The mythic Buddhist world. There’s plenty of religious art to draw from!

Drawing Things

I might hear objections at this point – I am not really having students “draw” things. I believe there is room for this as well, although I would make sure we have a good pedagogical purpose for having students engage in this (often) time-consuming endeavor. Luckily for scholars of religious studies (like myself), various forms of artistic production are often at the core of religious practice. Having students participate in traditional religious practices of “art” making (we should always be mindful that some practices will not be considered “art” in the same way we might approach it) can lead to meaningful interactions with the material under analysis. It can also be, quite frankly, simply fun.

To provide one clear example, I’ve been having my students draw the important Buddhist figure Bodhidharma, the founder of Zen Buddhism, for several years now. I was inspired when I ran across the contemporary artist Takashi Murakami’s modern art versions of this famous Zen monk in 2007. I started scribbling some of my own portraits for fun and eventually decided to try to incorporate this practice into my teaching. At the time I was still looking for excuses to do fun in-class exercises that ask students to step out of their normal comfort zones. I was acutely aware that many people feel drawing is an in-born gift, not a skill, and would be hesitant to participate. Ultimately, I like to think that I fool students into drawing, rather than asking them to draw outright.


The inspiration: Murakami Takashi, I open wide my eyes but I see no scenery. I fix my gaze upon my heart.

On the scheduled day, I will often bring blank copy paper to class and provide two drawing options. I say I will draw Bodhidharma on the blackboard step-by-step, and students can choose to follow along, copying my process. Alternatively, students can choose to copy one of several traditional images of Bodhidharma I project on the screen. For those who choose to follow me (typically about half of the class), I do my best Bob Ross impression and try to make drawing non-threatening and, hopefully, fun (let’s draw a happy eyebrow right here…).


The setup, with my finished Bodhidharma portrait on the whiteboard. [Southern Shaolin Temple, Putian, China. 2019]

The final pay-off for this activity comes at the end. I’ll have students reflect on the types of facial features we’ve drawn on the portrait and guess why they are important to East Asian artists (essentially, Bodhidharma is a caricature of a non-Chinese monk). This is the pedagogical purpose of the activity, and I make sure to tie the points we make in discussion to those I’ve made throughout the lecture (if students do not do so already). To further draw out the significance of this activity, and to position my students firmly within an actual “Buddhist” artistic tradition, I’ve also created an accompanying reading.

 

 


Rather superb renditions by my students for Woodenfish 2019.

The whole process of handing out paper, introducing the activity, drawing, and discussion takes – minimally – 15 minutes. Of course, you could conceive of projects that take much longer (such as over the whole term) or are completed as teams (based on the suggestion of a colleague, I used to do a textual version of Exquisite Corpse in my composition classes).

Final Thoughts

The real challenge is weighing the costs and benefits of drawing – you will be spending far more time with the material than if you just showed the pictures, maps, or diagrams. Thus, as always, be judicious and reflect on the exercise afterwards: was it valuable in helping you reach a particular learning objective? If, at the end of the day, all I do is help my students doodle better, I am completely fine with that.

Notes:

*This is part of an ongoing series where I discuss my evolving thought process on designing university courses in Religious Studies. These posts will remain informal and mostly reflective.

[1] Disclaimer: As the son of an art teacher and professional artist, I’ve always challenged myself to have my students draw more. This notwithstanding, there is some interesting research on art and cognition that I’ve only just begun to dive into. A good primer is Thinking Through Drawing: Practice into Knowledge, edited by Andrea Kantrowitz, Angela Brew and Michelle Fava. Furthermore, there is already a copious amount of literature on incorporating drawing into science classrooms.

[2] This means I also have to tell students to bring paper and pens/pencils to class, since quite a sizable portion (in my personal experience) takes notes solely on computers.

[3] There are clearly good reasons to show, and even focus on, highly detailed proper maps; it all depends on your pedagogical purpose. I’d suggest that if you want maps to be more meaningful, drawing elements of them with students can be helpful.