Evidence, Please

I have said and written a number of dumb things over the years, but the worst statement of mine to appear in print came after the 2016 primary. I vote early in the morning and, if I remember correctly, voted on my way home from the gym at maybe 7 AM. On the way out, a journalist stopped me to ask for a comment. I growled something about my frustration with the “dangerous rhetoric” on both sides.

This milquetoast comment appeared in the paper the next day.

I stand by the first part of the statement, but regret qualifying it with “both sides.” The tenor of political advertising has reached the point that some of the races in Missouri feature virtually identical attack ads against each candidate, but in the aggregate there is no comparing the political rhetoric being put out by the two major political parties. Both sides use rhetoric; one side is actively undermining the legitimacy of the US government and stoking fear and hatred. And yet, in that moment, I contracted a case of bothsiderism that is rampant in political journalism.

Even as I drove away from the polling location, I regretted what I had said. I had been thinking about Trump et al. when I said it, and yet I not only softened my specific opinion but also suggested that this was a pervasive problem across the aisle. So why did I equivocate even though I have strong, clear political opinions?

It was early and I was asked for an opinion on the spot, but the explanation goes deeper.

In part, I don’t like painting with too broad a brush. I am not a fan of the Democratic Party as an institution, and the nature of regional politics has sometimes meant that Republican candidates in other parts of the country hold political opinions closer to my own than the Democratic candidates on my ballot. Similarly, I am seriously alarmed at the amount and types of money spent in US politics, regardless of party, and am happy to give credit to the handful of Republican office holders more committed to taking the necessary steps during the pandemic than to playing partisan politics with it, even if I also think they are elsewhere complicit in enabling an administration run amok.

Just this weekend I read an article about how one of those Republican governors, Mike DeWine, was the target of a conspiracy to effect a citizen’s arrest because he listened to the scientists about public safety measures, making it at least the second such plot after the conspiracy against Gretchen Whitmer in Michigan.

Another part, I think, was conditioned behavior. I was talking to a complete stranger who was looking for quotes that he could publish and I didn’t want to give him the sense that I had a bias. Is this not also the opinion I, a normal person, am supposed to have with the political elite—that is, sullen disenchantment with a system that largely doesn’t work for me? Certainly, that is what all of the political advertising around here is telling me.

The third part of this triptych is a learned behavior through years of teaching. It has been a right-wing talking point at least since the 1950s that higher education is filled with liberal professors determined to indoctrinate young people into whatever is the cause of the day. Professors often clap back that they need the students to do the reading before they can make any headway on the indoctrination program.

Jokes aside, several things seem to hold generally true:

Teaching is a political act. I make political decisions when determining what content we cover, the order we cover it in, and what readings we use in class. In my classes we talk about issues like slavery, colonization, and wealth inequality (to name a few), but I usually moderate my political opinions in order to focus on the evidence.

Some of this is practical. I’d rather not end up in a position where students send video of my class to a right-wing Facebook group, particularly while I’m working as a contingent faculty member on semester-by-semester contracts.

But some of this is also philosophical. I see my job as a professor as teaching students how to think historically and critically about the world around them. There are things I will not tolerate in my classroom: ad hominem attacks, for instance, or bigotry of any stripe, but these have nothing to do with whether the opinion being expressed is liberal or conservative (which, note, is not equivalent to Republican or Democratic).

“What is the evidence for this?” is one of the most common comments I make on papers, regardless of whether I agree or disagree with the politics of the opinion being expressed. In discussion when I ask questions, students often act like they’re repeating the rote answer they’re supposed to have learned at some point in their lives or that they’re looking for the answer that will please me and end the debate. Those answers get much more difficult when I follow up their statement with “why do you say that?” or “what evidence leads you to that conclusion?”

As I tell my students who often seem like they’re fishing for the specific answer that will please me, everyone is entitled to their own opinion, but that opinion must be grounded in evidence.

These days this isn’t easy. People increasingly live in two different media ecosystems, neither of which offers a whole lot in the way of evidence, even if media typically decried as “liberal” does a somewhat better job. When opinion and anecdote substitute for substance, evidence loses out, and the result is the sort of gulf found in a recent poll: 92% of Democrats believe that African Americans face a lot of discrimination, compared to 52% of Republicans; asked whether white people face a lot of discrimination, 13% of Democrats agreed, while 57% of Republicans did. The gulf was similarly striking when questions asked about protests in the abstract versus when they specifically mentioned African Americans.

Of course, opinion polls are exactly that: opinion. They do not require the respondent to offer evidence or reflect on where that opinion comes from. No one likes to be wrong, and having your beliefs challenged is uncomfortable; there is comfort in media that confirms what you think you know about the world. (Un)fortunately, there is a whole smorgasbord of options with authoritative-sounding voices or names that will offer you talking points for whatever your political position is! Some of them might even be based on evidence after a sort! Consuming these neatly-packaged bites is easy; learning to verify, confirm, and evaluate them is harder because it requires both effort and time.

Four years after I made my original comment, I remain concerned about the tone of political rhetoric, but I now see that tone as inseparable from these other issues. This is a country where one imperfect party seems interested in governing for all Americans while the other seems largely interested in ruling for a few, with many of its candidates denying science, trading in conspiracy theories, and interpreting the Constitution to suit their purposes regardless of what it actually says. Evidence matters only insofar as it is advantageous.

I recently characterized this political cycle as insulting to my intelligence exactly because of its aversion to evidence. Take Missouri’s Amendment 3. This measure marginally changes the rules about lobbyists, but is primarily an underhanded attempt to hand districting power back to the party in power and undo a non-partisan measure that passed with 62% of the vote in 2018. Naturally, the advertising in favor of Amendment 3 is mostly scare-mongering about how the (new) regulations handed power to groups outside Missouri.

This past week I encouraged all of my students to vote. I still don’t see it as my place to preach for a particular candidate or platform, but I suggested that they look beyond the advertising, consider their own values, and learn about the candidates before deciding whom to vote for. The most political statement I made was to suggest that they should be deeply suspicious of anyone who wants to make it harder for them to participate.

Encouraging people to vote is one thing; endorsing particular political platforms is another. Maybe I’m naive, and certainly I have some privileges that other professors don’t have, but I can’t do my job if I directly engage in politics in the classroom. I am just also keenly aware that I don’t want to repeat my mistake of four years ago of being so carefully moderated that I slip into the sort of misleading talking points not supported by the evidence.

An insidious hierarchy

One of the harshest criticisms that a professor can give to a graduate student is that s/he writes “like an undergrad.” PhD students bemoan that MA students do not participate in class discussion. Graduate students and professors alike rend their clothing and tear at their rapidly thinning hair to lament that undergraduates don’t go to class, don’t do the reading, don’t edit, cannot spin out mellifluous prose, and (to hear some people talk) haven’t a solid thought in their airy little heads.

These are stereotypes, and stereotypes contain a kernel of truth. In the case of the last example, it probably comes from the fact that most undergrads are not old enough to drink (legally). People need time to grow up, to learn, to mature. Writing like an undergrad–or acting like an undergrad more generally–is probably influenced in some ways by the college culture and college experience, in the sense that the environment one lives in is going to affect behavior, but it is going to be even more influenced by the student’s age and educational experience. So, too, upper-level undergrads are going to be different from freshmen. And there is no immediate change in newly-minted graduate students from “undergrad” to “grad.” Learning is a process; intellectual development is a process. One hopes that there will be an evolution from the first year through graduation and then continued development through a graduate school career.

Using “undergrad” as a term to imply intellectual retardation, even retardation through youth, is a problem on several levels. First, it implies a sharp division in ability, when there is really only a division in expectations. Second, such comments reinforce an elitist, ivory-tower perception of graduate school. Third, and most problematic for me, it is not a constructive critique. It carries with it a number of implications, but doesn’t actually convey in what ways (analysis, source use, insightfulness) the graduate student needs to differentiate him or herself. One would hope that there would be further comments that would be more constructive, but the comparison to an undergrad doesn’t seem to serve any positive purpose.

The hierarchy implies an unnaturally sharp distinction between the categories. I mostly note this because one of the things I see most frequently on social media w/r/t student exams or papers is that undergrads claim radical historical change happens at unnaturally specific dates. And yet, the act of donning a robe and walking across a stage is a ritual that transforms a high schooler into a college student and a college student into a graduate student? Changed expectations are one thing, but the change in performance is not going to happen when the students walk across that stage.

A few weeks ago there was a John Hodgman quote floating around social media that highlighted how scary learning can be. Admitting ignorance is too often conflated with admitting inadequacy. Ignorance is correctable, but the admission, the struggle, is difficult. The mistake I feel that I am watching on the part of educators is sloppily, haughtily, forgetting how difficult this process actually is. None of us sprang from Zeus’ forehead fully formed. Yes, learning and school come easier to some than to others, but to forget that learning is a process only serves to discourage students. When students are discouraged from learning, we have failed.

Writing this piece reminds me of an incident in high school where one of my friends was called out for hypocrisy over an essay for which she won a prize. I am not trying to excuse myself of wrongdoing, though. I am guilty of contributing to this hierarchy, too. I lament the state of undergrads and their inability to read a short assignment or participate in class, or how they can’t seem to answer all the questions on an exam. I generally make these comments while in the throes of grading. This is a form of venting and, in my experience, doing so makes it easier to continue grading. I do my best to avoid broadcasting these laments on social media or even to too many people. I need to vent, but the jokes and the complaints are not something that most people should hear–or should care about. Instead, I want to be more conscious when making these statements and to caution against, in all our exhaustion, frustration, and stress, using this sort of hierarchical, exclusionary, and unconstructive language.

Here is my main issue with this hierarchy. Whether to cover up their own insecurities or out of a misplaced sense of self-righteousness, academics seem to go over the top with these complaints about “undergrads” (and usually seem to mean “underclassmen” by “undergrad”) and forget that they, too, were once undergrads and were once MA students. I suppose it is possible that all of these other instructors were perfect students back in their day–always going to class, doing the readings, talking in class, editing their papers, having fully-formed and developed thoughts in their work–but I know that I was not. At one point in my college career I regularly skipped class, fell asleep in class, did not edit papers, did not do the reading, and sometimes even turned in assignments that I am now ashamed to have attached my name to. Even when I did turn in work that I was proud of at the time, it was not always great work. That is because I was young. There were some subjects I wasn’t good at, and some that I didn’t care that much about. I fully admit that I was not a particularly good student in college, nor am I a great student even today, and I wonder at the irony that I am now teaching college students and regularly have to give advice on how to study. When I feel myself becoming too myopic about students, I remind myself of this past, that I was once there too.

….

The corollary to what I just wrote is that there will always be a wall of sorts between what the teacher says and what the students hear, there will always be students who give less than their full attention to the instructor, and there will always be an impatience on the part of students to find out their grade–something exacerbated, not created, by the Pavlovian nature of a grade- and standardized-test-based educational system. On the former points, it is frustrating to dedicate hours to preparing for class and to see apathy on the faces of the crowd, but even the best lecturers are going to have to deal with that. On the last point, grading papers is one of those things that is impossible to understand how long it takes unless you have had that experience yourself. Are these things frustrating? Yes, absolutely, yes. But undue venting about these issues is also counter-productive. The type of understanding I have suggested throughout this piece is the understanding David Foster Wallace was talking about in This is Water. “Understanding” and “patience” are not simple solutions to a long-trending institutional problem in education, higher education, and society, but it seems that to do otherwise is to contribute to the problem.

Rethinking the narrative: a return?

Teaching history is at a bit of a crossroads. High school history, particularly with its emphasis on objective forms of testing and standardization, has diluted history and given the sense that there should be some sort of objectively correct answer. Objective testing and a renewed sense of practical applicability in every aspect of education have also led to an increase in apathy about learning history. High school history classes are, for the most part, boring.1 In those classes there is an overemphasis on names and dates, to the extent that many incoming freshmen believe that the names of dead people and past dates are what compose history. One of the uphill battles fought in college history classes is to draw connections and teach them that history has more to do with connections, causation, and movements than with names and dates.

As the fashionability of studying old white men in positions of power has waned, history classes seem to increasingly deal with cultural movements (sometimes without concrete dates), social conditions, and ideas. If fed to a receptive audience, this type of class is more engaging than another dry history class where students are forced to memorize an interminable list of dates and names. At the same time, because this sort of class requires students to actively engage the material (ultimately the goal of history, in my opinion), it does run the risk of falling on deaf ears.

An even greater issue, though, is that for the marginally interested student–one willing to show up to lecture and to read primary documents, but not the textbook2–this sort of course runs the risk of confusion. One of the primary reasons, it seems, that professors assign textbooks is to relieve themselves of needing to provide a thorough, detailed narrative in lecture. Yet, in my as-yet limited experience, unless the students are demonstrably tested on the material from the textbook on a regular basis, they don’t read it. Textbooks are expensive, dense, and boring. As a TA and tutor I can only bring myself to look at them as a last resort. But if the instructor expects the students to read it on a regular basis to fill in the gaps of the lecture and to provide some sort of narrative, then the students need to do the reading before class just in order to follow along.3

The immediate impetus for this discussion is that I gave a quiz today in which I was told that, in addition to the presidency of Lyndon Johnson, the Civil Rights Act of 1964 might have taken place during the presidencies of Wilson, Hoover, Franklin Roosevelt, Kennedy, and Nixon (and one Kennedy/Nixon). Kennedy and Nixon were at least in the right ballpark, and Roosevelt had instituted an earlier liberal domestic agenda, but Wilson was president a half century earlier. Nor was this the only question that revealed such staggering uncertainty about chronology or dates. These results are a small sample and are likely as much due to people not showing up for class as anything, but the lack of a concrete chronology cannot be helping.

One of my committee members mentioned to me in passing that if you are not assigning a book with a good narrative for the time period of your class, you need to make sure that you are providing one for the students. I am wondering if we shouldn’t go even a step further and return to providing a concrete narrative, making sure to draw out the chronology so that the students can see what order events took place in and how they overlapped–to go along with a healthy helping of geography. Ideas, cultural movements, and social issues are all well and good, but if the students confuse the chronology, confuse the causation, and confuse where they took place, none of that matters. As both a student and a discussion leader I prefer seminar style classes, but right now it seems that we need a better way to lay down a chronological foundation if we ever hope to actually engage ideas.


1 I say this recalling that some of my favorite high school teachers were history teachers. Then again, I am now a graduate student working towards a Ph.D. in history.
2 Who could blame this student? I hate textbooks, though I will admit that they are sometimes useful.
3 Nor are all textbooks created equal.

Multiple Choice

Here is a multiple choice question for you:

What is it that multiple choice questions (in humanities and social sciences) actually test?
A) Rote memorization of facts and trivia.
B) Deductive reasoning.
C) Comprehension of key themes from the lecture.
D) Ability to reason and draw connections between events.
E) How closely you read the textbook for facts and trivia.

I would accept A or E, with B being debatable. The problem is that I firmly believe those are not really the purpose of the humanities, at least not at a college level.1 Although I have had multiple students come to me panicked about short answer, identification, and essay questions, claiming that they would be comforted by multiple choice tests, the comfort has more to do with familiarity and the surety of having a “correct” answer than with actual performance on the exams. Moreover, those same students are plagued by the perception that the lectures are utterly incomprehensible because there is a distinct lack of facts and key information. In much the same way that a recent article in the Chronicle of Higher Education discusses the struggles of students to formulate their own paper topics, students seem at a loss as to how to navigate the spaces between assigned reading, PowerPoint presentations, and lecture. To my mind, PowerPoint presentations present the biggest problem: in the discussion section the TAs should be able to find the balance between the lecture and the readings, but in the lecture hall the students are presented with two distinct sources of information and with professors whose practices vary widely, from not using presentations at all to largely testing the students on material from the slides while reading those slides out during lecture.2

The sage wisdom once given to me by my father is that the key to getting a good grade is to discover what the instructor wants and then give it to them. Too often this is the key to getting a good grade, and in navigating the technological obstacle course of higher education, this truism certainly applies.

But I digress.

I do understand the appeal of multiple choice tests from the point of view of the instructors. So long as you don’t have to continually update (or have some means of automatically updating) the answers, the exams are easy to grade and are rather clear-cut in terms of right and wrong, so complaints about grading are relatively limited. Of course, the students who come to complain about grades are generally asking the wrong questions–and so are the professors using multiple choice questions. Multiple choice questions have a limited range of types of information that can be addressed, but a very broad base of information to pick from. The answers are very precise, easy to mistake, and, most importantly, of little actual value. The professor is emphasizing memorization of trivia and esoterica, not skills, logic, or actual learning. Better is to test the learning, logic, and writing, while allowing the trivia to supplement the answers. One of those things prepares people for pub trivia; the other prepares people to take in information and then produce actual thought, which should serve them well beyond the classroom while (if the students apply themselves) also preparing them for pub trivia. One provides an easy criterion on which to evaluate student performance on relatively trivial things; the other provides a more nebulous means of evaluating student performance on much more significant things, and should provide a more meaningful way to gauge student learning and improvement.

In this sense, multiple choice exams, particularly multiple choice-only exams, are practically criminal in higher education.

Like I said, I have had students come to me begging to have multiple-choice exams, the type they are familiar with from standardized tests in high school. There is a clear-cut “right” answer and, if nothing else, there is a sense that they can just guess. But, at least in this instance, I don’t care what the students want. Nonetheless, this insistence on direct and absolute answers is an outgrowth of the societal insistence that the important part of an education is the facts learned (note: No Child Left Behind and the expansion of standardized tests).3 Learning the facts is the surest way to make the grade, which, in turn, is the surest way to achieve the degree, which, in turn, is the surest way to get a job that will make more money, which, as my students usually assure me, is the measuring stick by which society determines your worth.

This calculation is simple, rational, sterile. School exists to enable the career, not to learn anything. Classes are merely the obstacles in the way. Most instructors would disagree with that statement, either because they care about educating students, or because they are defensive about their field of study being worthwhile, or both, but the too-frequent use of multiple choice exams (when even giving prompts in advance and giving writing prompts for class papers seems to be too much direction) undercuts the actual value of the education while reinforcing the misconceptions about what is actually important.

Anyone who gives multiple choice exams on a wide scale is failing the students. The educational industry for high schools as it currently exists is setting the students up for failure, and professors incapable or uninterested in correcting these issues in college are complicit. Fighting against the corporatization of colleges, for-profit colleges, and the societal movement to value the degree over the education is hard enough without professors buying in to the misconceptions and letting the students down. Multiple choice tests are just one example of this phenomenon, one which is threatening to radically alter the shape of college and undercut the ideal of an educated society.


1 I think it is a travesty at lower levels of schooling, too, though high school curricula and evaluation methods are a lot harder to change than those at college. Here “multiple choice” should also apply to similar evaluation methods.
2 PowerPoint and the use of technology in the classroom might be the feature of another post, but I have noticed that students tend to focus on what is on the PowerPoint at the expense of what the lecturer is saying, or, worse, only write down the spare outline presented on the slides and then set down their pens. And the really repugnant part, as far as I am concerned, is that this behavior is condoned or even required by some professors (and of those, not all make the presentations available after the lecture). I can recall one humanities professor using PowerPoint in college (yes, this is a “back in my day” moment from a young man, deal with it), and his usual process was to open PowerPoint, but rather than actually using the presentation feature, he would scroll down the creation screen. And his slides were maps. Students who did not know initially learned quickly that they had to write down what he was saying. Now professors are required to have at least a passing ability to use technology such as PowerPoint in classes, but the technology seems to be an inhibitor to learning, particularly if it is not done with a great deal of care (badly done or overly intensive presentations become the focal point of the class rather than a tool).
3 Curiously, this has recently been matched by the idea of providing students with “job-training” at the expense of the traditional disciplines. These two developments are oxymoronic.