They’re often seen as the bane of a professor’s existence: student course evaluations. Among the many criticisms that faculty level at such evaluations are that they’re not taken seriously by students, aren’t applied consistently, may be biased and don’t provide meaningful feedback. Guilty on all counts, if they’re designed poorly, says Pamela Gravestock, the associate director of the Centre for Teaching Support and Innovation at the University of Toronto. But that doesn’t have to be the case. Done well, they can be a useful and effective measure of teaching quality, she says.
Dr. Gravestock, whose dissertation focused on how teaching is evaluated for tenure at Canadian universities, is one of the leading experts in Canada on course evaluations. In 2008, she co-authored with colleague Emily Gregor-Greenleaf a report for the Higher Education Quality Council of Ontario, Student Course Evaluations: Research, Models and Trends. It was the first examination of the research on student course evaluations from a Canadian perspective and has become an essential guide for universities looking to redesign their course evaluation policies. Dr. Gravestock is also the project manager behind a total revamp of the course evaluation system at U of T, a process that is still ongoing. She recently spoke with University Affairs about the misperceptions and pitfalls of course evaluations and how to improve them.
University Affairs: Do students take course evaluations seriously?
Dr. Gravestock: What we know from the research, and what we’ve found at our own institution, is that students are woefully uneducated about course evaluations. They don’t know how they’re used and they don’t know where their feedback goes. We’ve heard time and again from students that they think faculty just throw them away or put them in a drawer and never look at them again. Institutions need to do a much better job of telling students why they’re filling them out and then explaining to them how the data that are collected are used – for promotion and tenure processes, for course improvement and all those things. Once they understand that, they take them much more seriously.
UA: Fundamentally, are students able to provide accurate assessments of teaching ability?
Dr. Gravestock: Yes and no. There are definitely certain things that students can provide feedback on, but there are also things that students are not necessarily in a position to provide feedback on. An example of the latter is a question that appears on most course evaluations, asking students to comment on the instructor’s knowledge of the subject area. A first-year undergraduate wouldn’t know whether the instructor is knowledgeable or not. That’s a question that is better suited to a peer evaluation. However, students can certainly comment on their learning experience within the context of a course.
UA: There are lots of myths about course evaluations, like easy markers get better evaluations. Is that true?
Dr. Gravestock: A fair number of studies have examined the perception that students provide more favourable feedback when a course is easy, but others have countered that claim. Students will respond favourably to a course, even a tough one, if they knew going in that it would be tough. If the expectations of the instructor are made clear at the outset of a course, and students understand what is expected of them, they won’t necessarily evaluate the instructor harshly.
UA: I get the impression that many professors aren’t that keen on course evaluations.
Dr. Gravestock: I would agree with you. But, in my experience, talking about the issues and putting them all on the table – explaining to faculty that we need to educate our students, that we need to ask the right questions – helps to alleviate a lot of the fears. Educating faculty about the research is really helpful as well. There have been 40 years of study on course evaluations, and the research shows that students can provide effective feedback and that course evaluations are a useful measure of teaching effectiveness when the instruments are well developed and validated.
UA: Do we even know what effective teaching looks like?
Dr. Gravestock: There is lots of literature on what makes an effective teacher, but at many institutions they don’t have a clear understanding or definition of what teaching effectiveness is. It often gets boiled down to particular characteristics – communication skills, organization. But ultimately what we should be assessing for teacher effectiveness is learning, and course evaluations are limited in their ability to do that. They’re assessing the student’s perception of their learning or their experience of learning in a course, but not whether they’ve actually learned anything. That’s why they should be only one factor when you’re assessing effectiveness.
UA: Should students be able to see course evaluations?
Dr. Gravestock: My opinion is that they should be available to students; it closes that loop. Students can see that the feedback they’ve provided is being used.
UA: I have heard some frustrated professors say that course evaluations don’t give much direction on what they need to do to improve. What would you say to that?
Dr. Gravestock: I would relate that back in part to the instrument itself. Often the questions are not the right questions. General questions about the instructor’s effectiveness aren’t going to tell you what’s going on. Also, faculty are often just given this information and no one guides them through it. Educational developers are really well-positioned to help instructors in interpreting the data and figuring out next steps – a plug for my profession!
UA: So what are some key components to a well-designed course evaluation?
Dr. Gravestock: I’m quite happy with the instrument we’ve developed here [at University of Toronto] because it’s student-centred. For example, we have a question about whether or not the student found the course intellectually stimulating. There are also questions about whether or not the assignments and assessment measures used in the course contributed to their learning or allowed them to demonstrate their learning in a course. These are things that students, and no one else, are in a position to respond to. Also, one of the things we did very differently when we developed our new framework and our new questions is that we developed institutional teaching priorities first – what we wanted to see in every student’s experience in a course at the University of Toronto. We used these priorities to develop our questions. That has served us well.
UA: What steps were involved in the redevelopment of the course evaluation process at U of T?
Dr. Gravestock: We started looking at our evaluation processes about four years ago, after Emily and I put out the report for HEQCO. We took the issue to the provost’s office, said there was interest among faculty and asked if we could look at it. We spent about a year consulting and examining the processes and we came up with a report (see above) that’s on our website. We had a series of recommendations, basically that we should move to a centrally managed evaluation system. Prior to this, each faculty had its own form; everyone was doing something very different. We then spent another year developing the framework. We knew that if we were going to move to something centralized, it had to reflect the distinct cultures at U of T – the divisional cultures and the disciplinary cultures.
UA: So is it a standardized form?
Dr. Gravestock: We have eight core institutional questions that appear on all evaluation forms. And then faculties and departments can add their own that reflect their contexts, needs and interests.
UA: Has it been fully implemented?
Dr. Gravestock: We’ve spent the last two years in what we’re calling a phased implementation. We’re working with five faculties right now and we’re adding four more this coming year.
UA: What has the response been so far?
Dr. Gravestock: There’s always a bit of anxiety about change. We work with each division that’s about to come on board for a good year before we actually roll it out in that division. There’s lots of consultation, which has helped. Lots of faculty have been really keen on the change because they were unhappy with the previous system. Some are more hesitant. Students, on the other hand, have been phenomenally receptive. We’ve moved to an online system and they love the ease of use and that they can do it on their own time.
UA: Are students required to do an evaluation?
Dr. Gravestock: It’s required that an evaluation be administered for every course, but it’s not required that an individual student fill it out.
UA: Does an online form adversely affect participation?
Dr. Gravestock: There were fears here that our participation rates would drop, but we haven’t seen that. We’re on par with where we were before when we were administering it on paper.
UA: And what is that?
Dr. Gravestock: Across the five divisions that we’re working with, we’re seeing a mean response rate of over 50 percent. In some divisions their mean is 75 percent. You couldn’t ask for anything better.
UA: Any final thoughts?
Dr. Gravestock: Since we started doing this work five years ago, I’ve seen a groundswell of interest from other institutions across Canada. They’re pouring significant amounts of time and resources into this. What’s most interesting for me, in listening to these discussions, is how institutions are linking course evaluations to other types of evaluations and supports around teaching. I’ve been in my job for 11 years at the centre and I have seen a huge shift in terms of how people understand teaching and the importance of teaching. That has allowed us to have these conversations about how we evaluate it effectively. But there’s still a lot more to do.