There’s a bit of a controversy brewing on social media over a new review essay published in our field’s main peer-reviewed periodical, the Journal of the American Academy of Religion, on the book On Teaching Religion, edited by Chris Lehrich and collecting some of Jonathan Z. Smith’s writings on pedagogy.

The reviewer, a onetime student of Smith’s, reflects on her own experience in his classes as an undergrad at the University of Chicago in the late 1990s, in order to identify the gaps (i.e., inconsistencies or maybe even contradictions) that she now finds, looking back, between the teacher and the writer.

While I’ve expressed my views on problems I find with the essay directly to the author and to the journal’s editor, and while I’ve posted a thing or two in response to social media discussions about it, for the purposes of our Department’s blog (which is obviously read by quite a variety of people, from current and past students to people from who knows where who somehow stumble across it online), I’d like to offer up an assessment not of someone else’s professional skills in service of my argument but of my own — at least as others assess them — as a way into the problem of what one does, as a scholar, with experiential disclosures (in this case, those of former students).

For professors all get teaching evaluations each semester, presumably — but what do you do with them, whether they’re written by a 19-year-old Business major who took your course to earn a general education credit or by someone thinking back on those classes 15 years later?

Of course you can easily find me online among those professors who, at a variety of websites, have been rated by former students. What’s more, it is very easy to find complaints among those reviews — this one (which gets the course number wrong, by the way) is currently the one at the top of the list (click to enlarge):

[teaching evaluation screenshot]

But dig a bit, through the ones that, on the contrary, tell you my tests were far too easy or that recommend going to my classes (sage advice), along with the one that reveals that I’m a bullshitter who loves the sound of my own voice, and you will find this:

[teaching evaluation screenshot]

So, I’ll repeat my question: what does one do with all this? Or, better put, what does one make of other people’s claims of experience? Does all this tell you something about me and my competencies or, with a nod to reader response theory, is it all about the student, the reviewer, and his/her competencies? And what do I make of the gap — not the one that might exist between what I may practice or preach (for I’m likely terribly inconsistent on who knows how many scores) but, instead, the one between competing, even contradictory experiences of me as a teacher?

Looking through those teaching evaluations that we are each given at the end of every semester — you know, the ones that can range from detailed comments (whether positive or negative) to remarks on how we dress (in my experience as a Chair, male profs get those pretty infrequently) or even complete blanks when it comes to the qualitative comments — what does one do with the comments, the reports of experience, the gaps in perception and, dare I say, the varying levels of customer satisfaction…? Given the number of academic friends on Facebook who annually seem to share the following utterly sarcastic article from The Onion

[screenshot of The Onion article]

— I may have my answer…

But let me press on nonetheless: are we offended, pleased, or (as with those posting that 1996 Onion article) apparently indifferent — because we know it’s not about us — if we fail to earn a chili pepper because we’re not judged “hot”?

[“hot” chili pepper rating screenshot]

Answering that question may help me figure out what to do when, as a scholar reading a peer-reviewed journal in my field, I come across an essay reviewing a book — a classification made by the journal, I’ve learned, not by the review’s author — based on a memory of undergraduate experiences of a teacher who also writes on teaching.