
Course evaluations: the good, the bad and the ugly

A Q&A with University of Toronto educational developer Pamela Gravestock.

BY LÉO CHARBONNEAU | AUG 21 2013

They’re often seen as the bane of a professor’s existence: student course evaluations. Among the many criticisms that faculty level at such evaluations is that they’re not taken seriously by students, aren’t applied consistently, may be biased and don’t provide meaningful feedback. Guilty on all counts, if they’re designed poorly, says Pamela Gravestock, the associate director of the Centre for Teaching Support and Innovation at the University of Toronto. But that doesn’t have to be the case. Done well, they can be a useful and effective measure of teaching quality, she says.

Dr. Gravestock, whose dissertation focused on how teaching is evaluated for tenure at Canadian universities, is one of the leading experts in Canada on course evaluations. In 2008, she co-authored with colleague Emily Gregor-Greenleaf a report for the Higher Education Quality Council of Ontario, Student Course Evaluations: Research, Models and Trends. It was the first examination of the research on student course evaluations from a Canadian perspective and has become an essential guide for universities looking to redesign their course evaluation policies. Dr. Gravestock is also the project manager behind a total revamp of the course evaluation system at U of T, a process that is still ongoing. She recently spoke with University Affairs about the misperceptions and pitfalls of course evaluations and how to improve them.

University Affairs: Do students take course evaluations seriously?

Dr. Gravestock: What we know from the research, and what we’ve found at our own institution, is that students are woefully uneducated about course evaluations. They don’t know how they’re used and they don’t know where their feedback goes. We’ve heard time and again from students that they think faculty just throw them away or put them in a drawer and never look at them again. Institutions need to do a much better job of telling students why they’re filling them out and then explaining to them how the data that are collected are used – for promotion and tenure processes, for course improvement and all those things. Once they understand that, they take them much more seriously.

UA: Fundamentally, are students able to provide accurate assessments of teaching ability?

Dr. Gravestock: Yes and no. There are definitely certain things that students can provide feedback on, but there are also things that students are not necessarily in a position to provide feedback on. An example of the latter is a question that appears on most course evaluations, asking students to comment on the instructor’s knowledge of the subject area. A first-year undergraduate wouldn’t know whether the instructor is knowledgeable or not. That’s a question that is better suited to a peer evaluation. However, students can certainly comment on their learning experience within the context of a course.

UA: There are lots of myths about course evaluations, such as the notion that easy markers get better evaluations. Is that true?

Dr. Gravestock: A fair number of studies have examined the perception that students provide more favourable feedback when a course is easy, but other studies have countered that claim. Students will respond favourably to a course, even a tough one, if they knew going in that it was going to be tough. If the instructor’s expectations are made clear at the outset of a course, and students understand what is expected of them, they won’t necessarily evaluate the instructor harshly.

UA: I get the impression that many professors aren’t that keen on course evaluations.

Dr. Gravestock: I would agree with you. But, in my experience, talking about the issues and putting them all on the table – explaining to faculty that we need to educate our students, that we need to ask the right questions – helps to alleviate a lot of the fears. Educating faculty about the research as well is really helpful. There have been 40 years of study on course evaluations, and the research shows that students can provide effective feedback and that course evaluations are a useful measure of teaching effectiveness when they are well developed and validated.

UA: Do we even know what effective teaching looks like?

Dr. Gravestock: There is lots of literature on what makes an effective teacher, but many institutions don’t have a clear understanding or definition of what teaching effectiveness is. It often gets boiled down to particular characteristics – communication skills, organization. But ultimately what we should be assessing is learning, and course evaluations are limited in their ability to do that. They assess students’ perceptions of their learning, or their experience of learning in a course, but not whether they’ve actually learned anything. That’s why course evaluations should be only one factor when you’re assessing teaching effectiveness.

UA: Should students be able to see course evaluations?

Dr. Gravestock: My opinion is that they should be available to students; it closes that loop. Students can see that the feedback they’ve provided is being used.

UA: I have heard some frustrated professors say that course evaluations don’t give much direction on what they need to do to improve. What would you say to that?

Dr. Gravestock: I would relate that back in part to the instrument itself. Often the questions are not the right questions. General questions about the instructor’s effectiveness aren’t going to tell you what’s going on. Also, faculty are often just given this information and no one guides them through it. Educational developers are really well-positioned to help instructors in interpreting the data and figuring out next steps – a plug for my profession!

UA: So what are some key components to a well-designed course evaluation?

Dr. Gravestock: I’m quite happy with the instrument we’ve developed here [at University of Toronto] because it’s student-centred. For example, we have a question about whether or not the student found the course intellectually stimulating. There are also questions about whether the assignments and assessment measures used in the course contributed to their learning or allowed them to demonstrate their learning. These are things that students are best positioned to respond to; no one else could. Also, one of the things we did very differently when we developed our new framework and our new questions is that we developed institutional teaching priorities first – what we wanted to see in every student’s experience in a course at the University of Toronto. We used these priorities to develop our questions. That has served us well.

UA: What steps were involved in the redevelopment of the course evaluation process at U of T?

Dr. Gravestock: We started looking at our evaluation processes about four years ago, after Emily and I put out the report for HEQCO. We took the issue to the provost’s office, said there was interest among faculty, and asked whether we could look at it. We spent about a year consulting and examining the processes, and we came up with a report (see above) that’s on our website. We had a series of recommendations, basically that we should move to a centrally managed evaluation system. Prior to this, each faculty had its own form; everyone was doing something very different. We then spent another year developing the framework. We knew that if we were going to move to something centralized, it had to reflect the distinct cultures at U of T – the divisional cultures and the disciplinary cultures.

UA: So is it a standardized form?

Dr. Gravestock: We have eight core institutional questions that appear on all evaluation forms. And then faculties and departments can add their own that reflect their contexts, needs and interests.

UA: Has it been fully implemented?

Dr. Gravestock: We’ve spent the last two years in what we’re calling a phased implementation. We’re working with five faculties right now and we’re adding four more this coming year.

UA: What has the response been so far?

Dr. Gravestock: There’s always a bit of anxiety about change. We work with each division that’s about to come on board for a good year before we actually roll it out in that division. There’s lots of consultation, which has helped. Lots of faculty have been really keen about the change because they were unhappy with the previous system. Some are more hesitant. Students, on the other hand, have been phenomenally receptive. We’ve moved to an online system, and they love the ease of use and the fact that they can do it on their own time.

UA: Are students required to do an evaluation?

Dr. Gravestock: It’s required that an evaluation be administered for every course, but it’s not required that an individual student fill it out.

UA: Does an online form adversely affect participation?

Dr. Gravestock: There were fears here that our participation rates would drop, but we haven’t seen that. We’re on par with where we were before when we were administering it on paper.

UA: And what is that?

Dr. Gravestock: Across the five divisions that we’re working with, we’re seeing a mean response rate of over 50 percent. In some divisions, the mean is 75 percent. You couldn’t ask for anything better.

UA: Any final thoughts?

Dr. Gravestock: Since we started doing this work five years ago, I’ve seen a groundswell of interest from other institutions across Canada. They’re pouring significant amounts of time and resources into this. What’s most interesting for me, in listening to these discussions, is how institutions are linking course evaluations to other types of evaluations and supports around teaching. I’ve been in my job for 11 years at the centre and I have seen a huge shift in terms of how people understand teaching and the importance of teaching. That has allowed us to have these conversations about how we evaluate it effectively. But there’s still a lot more to do.

PUBLISHED BY
Léo Charbonneau
Léo Charbonneau is the editor of University Affairs.
COMMENTS

  1. Keith Kinder / August 21, 2013 at 13:34

    I am particularly interested in Dr. Gravestock’s observation that student participation numbers remained steady when online evaluations were introduced. In our Faculty, when we moved to online evaluations a few years ago, participation crashed and has not recovered.

    I think this proves Dr. Gravestock’s assertion that students do not understand the purpose of such evaluations, and also that we’re not using the right questions.

    We’ll obviously be looking carefully at the HEQCO report.

  2. Dan Jorgensen / August 21, 2013 at 15:01

    Interesting work, and I agree that students ought to have access to evaluations. I was in grad school at UBC when I first saw course evaluations in the form of the student-run and -sponsored *Anti-Calendar*. The reviews were unbuttoned, sometimes clearly vindictive, but more often very illuminating (and it wasn’t hard to tell the difference). They were, I would wager, an excellent guide. Since then the evaluations have been institutionalized and in large measure hijacked by the auditing apparatus of the universities. Their use is mainly to serve as a tool for merit and promotion – but NOT to inform students what the experience of their fellows has been in particular courses, with particular profs. There is, of course, an alternative now in the online “Rate My Prof” system – but I think this is far inferior to what flesh-and-blood on-campus student surveys and evaluations could provide. Pity it seems to be unavailable on most (all?) campuses.

  3. rcg / August 21, 2013 at 18:35

    Student evaluations can indeed be very useful, and, in my experience, many students tend to be quite fair in their assessments of professors and courses. Usually, students seem to be able to differentiate between professors who teach well, work hard and perform their teaching duties at a high level, and professors who may not do as good a job but overcompensate by being perhaps overly generous in their grading. However, again in my experience, the switch from paper course evaluation forms to electronic evaluations drastically cut down student participation in this process, to the point where such participation became almost non-existent. I am aware that in a course with more than 100 students at our university, for example, only 2 electronic student evaluations were received. Prior to the switch to electronic evaluations, the student participation rate used to be better than 70%. In our department, we have maintained the evaluation forms completed by the students in the classroom, and the student participation rate is often better than 80%. Based on my experience, if we are really interested in meaningful student evaluations and a high level of student participation, we should be careful about the switch to electronic evaluations. The switch not only may make the process less accountable, but it also seems to significantly reduce the number of evaluations received, in every course I am aware of across various departments at our university.

  4. Sean Lawrence / August 21, 2013 at 20:35

    I find something quite disturbing about this: on the one hand, Dr. Gravestock admits that student evaluations can be misleading and, in any case, should only be one of several modes of evaluation. On the other, she wants to make them at least appear more rigorous, thereby tempting us all to over-rely on them.

    I suppose the question is whether there’s any way to evaluate evaluations themselves, to test them against something else. Until their findings are disprovable, I’ll have to consider them pseudo-scientific.

    Then again, it’s not a pseudo-science if it makes no pretense to science. I should think that if we abandon the effort to defend class evaluations within a carapace of rigor, we shall be more likely to entertain them as useful communication.

  5. Jan Noel / August 22, 2013 at 09:10

    I too am concerned, as are other commenters, about the very real drop in participation rates that members of my department have experienced due to the move to online evaluations. Heading into exams, only the more motivated students (whether the motive be gratitude or revenge) take the time to respond, making for a less representative survey.

    I agree with Dr. Gravestock about how valuable evaluations are. Can she and others put their minds to ways students might fill out the online evaluations IN CLASS, either on their own devices or by going en groupe to the nearest computer lab?
