So, if Student Evaluations of Teaching (SETs) do not accomplish their goals, what should we do? This is an important question, and one that requires a sophisticated discussion and response because, in my view, too often the response is part of the problem. Many people to whom I have spoken, for instance, will say "Yeah, I know SETs aren't perfect, but they're the only tool we have, so we have to use them." This is the equivalent of trying to turn a screw with a toothpick. The toothpick may be the only tool you have, but using it to turn a screw will just waste your time and will never accomplish your goal. In this case, using the only tool you have is counterproductive, particularly when there is a better and more straightforward answer: get the right tool.
The other thing some people do when I make statements like the above is to assume that I am suggesting we ditch SETs completely. What I am actually suggesting is that we need to engage in a more complicated procedure, one that recognizes that listening to student voices cannot be reduced to a fill-in-the-boxes survey, as if that alone could convey student voices. I am also suggesting that if we want to listen to students -- and I am arguing we should -- then we should actually listen to students, not to someone who is not a student, who is not answerable to students, and who nonetheless speaks in their name.
So, how do we do this? Despite the complexity of the final result, I don't think that hearing student voices is "rocket science". Let me divide my comments into two sections: practical tips and broader engagements. Today I'll address what I'll call practical tips although, as I want to explain, they are not merely practical. There is a philosophy here to which it is important to pay attention because, if we don't, we are missing what is, in fact, a key opportunity to acknowledge and foster communication between faculty and students.
The first thing that we need to recognize is that faculty, in fact, listen to students all the time. The vast majority of the faculty I know (which is now a large number at a broad range of universities) spend a large part of their time listening to students. They meet students for consultation and extra help, raise issues in class, and talk before and after class. The vast majority of faculty to whom I speak go through their SETs (both the quantitative scores and the qualitative comments) in intricate detail to tease out information. Said differently, faculty are already listening. The idea, then, that we need some sort of mechanism -- perhaps even a coercive mechanism -- to get faculty to listen to students is, in fact, wrong, plain and simple. Everyone has, I am sure, their story of the arrogant prof who thought they were the cat's meow. I have at least one. But we should no more let this anecdotal evidence stand in for the majority than we would in any other case. I did not teach last year because I was on sabbatical, but the year before I spent about 8 hours a week meeting with students. Think about that. If I worked a regular 40-hour week ... that would be 20% of my work time (before I'd given a single lecture, run a single seminar or tutorial, organized a single extra-help session, marked a single paper, etc.) devoted to just talking to students on subjects that they pick.
The second thing we need to recognize is this: faculty are already listening in another way. The fact that SETs are not the be-all and end-all of student voice does not mean that they cannot play a useful role; the vast majority of faculty to whom I speak believe this is so and devote considerable attention to reviewing SET results. We need to consider that role and encourage faculty to make use of it. For instance, with regard to the quantitative evaluations ... are there any red flags? Is a given minority group of students raising consistent concerns about a particular issue, say fairness in evaluation? These scores are then often read alongside the qualitative comments to see if there are issues that need to be addressed. Many faculty now also ask their students to complete voluntary mid-term evaluations, designed to highlight and respond to any emerging problems in a course.
The upshot of points one and two -- to repeat myself -- is this: the idea that faculty are not listening to student voices, and that we therefore need to find some mechanism and potentially force faculty to adopt it -- or dock people's pay if they don't meet a certain standard -- is wrong. As a faculty member, it is, in fact, in my interest to have an on-going conversation with my students. While any one student may or may not see the results of that conversation (more on this in a later blog), that does not mean that I did not listen, think about what you had to say, weigh it against other views articulated by students, consider the fairness of your ideas, etc. I do this -- as do other faculty -- not because I am forced to but because I want to be the best instructor I can be. If you tell me you did not understand lecture X ... well, heck, it is in my interest to help you understand the idea and modify the way I explained X, or I don't meet my own objectives (which include having people learn things).
The Philosophy of Dialogue
In one of my previous blogs, I noted that people like numbers. They like the quantitative assessment of faculty because it is easy to understand and it is relatively easy to draw conclusions from. You can also make comparisons. Nurse got 4.0; Other Faculty Member got 4.2; ergo Other Faculty Member must be better. You can set a mark ... then look at an average and see if that person has met it. If they have ... good, they are good. If not, they need improvement. But, as I said before, that is not really listening to students. It carries with it no inherent response to concerns; it ignores minority groups, as if their voices did not count; and it relies on a mediated articulation in which someone else -- someone who is not a student, has no connection to students, and, in addition, has not been in the course -- determines what students mean by the numbers they filled in. No student is asked what they meant, nor are interpretations confirmed with students, nor will the person making the determination ever have to explain the character and nature of that determination to students. Hence, numbers are not a good way to solicit student voices.
Direct conversation between faculty member and student is better. In fact, even having faculty members interpret and act on SETs is better than relying on the interpretation of a third party, because one mediating step in the chain of communication is removed. Think about that for a second. If you want to communicate with me -- you want to tell me something I am doing right or something I am doing wrong -- what is the most effective way for you to do that? I suspect, at this point, most of you said "tell you directly", and that would be right. A second way would be to fill out a survey. It would be one step removed from direct communication, but as long as I look over the results and treat them seriously ... we are not in bad shape. We start to lose efficiency -- and voice -- however, if we start to say "I'll let someone else speak for me." We lose accuracy if we start to say "I'll let someone else, whom I don't know, who will not check with me, and who was not in the class with me -- that is, who does not share the experiences about which I wish to communicate -- speak for me."
Thus, the idea that faculty listen to students in class, in office hours, in extra help sessions, by talking before and after class, by writing emails (I answer a minimum of two emails from students each day during the school year and often five or six, sometimes up to twenty), is not some sort of way of deflecting the problem. It represents a real and concerted mode of engagement that is intended to provide the most direct form of communication possible and, I might add, the most responsive.
What do I mean by responsive? This: you come and talk to me, saying something like "Professor Nurse, I did not understand thing X." I then say "OK, what about X did you not understand?" You say "I got the first part about 1 and 2, but after that I lost track of what was going on and missed 3 and 4." I say, "Fair enough, it was a tough subject. Let's go over this: let me very quickly recap 1 and 2 and then explain 3 and 4, and see if that clears things up." You see what I mean? Direct communication addressed the problem that this student was having and addressed it then and there. Now, imagine the alternative: mediated communication based on SETs. The student fills out their SET at the end of the semester, remembers that they did not understand X from earlier in the term, and scores a low number on comprehensibility (say, a 2). This then goes into the pile (see my previous blog for how aggregates and averages distort voices) and is averaged with the rest of the class, the vast majority of whom did understand my discussion of X (although why they understood -- because of my explanation or some other cause, like they just happened to know it already -- is never addressed or even considered in this type of quantitative SET), and so my final ranking on this measure is OK (say, 3.94); not wonderful, but not low enough for my dean to raise any real concerns.
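To make the arithmetic concrete, here is a small sketch of how a single dissenting voice vanishes into the mean. The class size of 33 and the score of 4 for everyone else are my own illustrative assumptions, not figures from any real course:

```python
# Hypothetical class of 33: 32 students who understood X give a 4 on
# "comprehensibility"; the one student who was lost gives a 2.
scores = [4] * 32 + [2]

# The aggregate a dean sees is just the mean of all the responses.
average = sum(scores) / len(scores)

print(round(average, 2))  # 3.94 -- "OK", so no concern is ever raised
```

The one student's 2 moves the class average by only about 0.06, which is exactly the point: the number that reaches the dean contains no trace of that student's problem.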
What has happened to the student's voice? It is gone ... lost in the mist of the average. Has the student's problem been addressed; that is, do they understand X any better than before? Well, no. Not in this example. Thus, we have failed on two counts. We have not heard the student and we have not addressed their problem. Now, I ask you ... if you were interested in hearing student voices and addressing problems ... which method of communication would you choose?
I get it. People like numbers. I watched Moneyball. Numbers can be and are useful. No one is debating that. What we are talking about here -- what I am writing about -- is student voice. What is the most effective way to ensure that students have, as a friend of mine recently put it, "their say"? The answer I am giving here might not be popular because it runs against a received wisdom that says only through surveys can students be heard. I am trying to say that I don't see that as the case. I am trying to say that I think most faculty are already listening to students and trying to address student concerns. Moreover, direct communication has merits. It's not just hippie, feel-good, peace-and-love stuff. It has appreciable merits in that it provides a way for each voice to be heard; it operates on an opt-in basis (those who want to use it can; those who don't can ignore it ... the choice remains with the student as to whether or not they want their voice heard); and it is responsive in a way that SETs are not. Moreover, this approach -- direct communication -- has the further merit of not relying on my good graces. I might like to pay attention to students. I might be a nice guy and like to listen to people, but that is not the point. If I don't listen to students, I compromise my ability to do my job well. Students, in other words, don't learn what they are supposed to learn, and hence my own performance as the guy at the front of the room is not what it could be.
I am not saying this argument is perfect. There is more that I will add to it in a future blog. But I am saying that we already have in place the basis of a very good, very effective way of ensuring that student voices are heard.