Assessing Teaching

November 30, 2006

Professors often speak of teaching success in subjective terms: light bulbs appearing overhead, nodding heads signaling understanding. But there are also ways to measure teaching more objectively.

At Duke, the student course-evaluation form has long been the primary method of providing faculty members with feedback about their teaching. The evaluation form consists of twenty questions that ask students to rate the quality of the course and the professor, on a scale of one to five, in various areas. There is also room for open-ended comments. The current form was developed in 2000 by a committee appointed by Robert Thompson, dean of Trinity College and vice provost for undergraduate education, based on recommendations outlined by the Individual Development and Educational Assessment (IDEA) Center at Kansas State University. Among the qualities students are asked to rate are several principles of good teaching—enthusiasm, accessibility, clear expectations, clear feedback—as well as things like intellectual challenge and amount of work required.

Matt Serra, director of assessment for Trinity College of Arts & Sciences, helps process more than 18,000 completed student-evaluation forms per semester and creates summary reports for each course, professor, and department and for the college as a whole.

He says that one of the benefits of relying on student evaluations is the high return rate, about 80 percent. A faculty peer-review system would perhaps give a more accurate picture, he says, but would be prohibitively time-intensive and could conflict with teaching and research obligations.

"You can't reduce the quality of teaching to a number," he says of the student-evaluation forms and summary reports, "but you can give yourself a comparative number." For instance, if a department head sees that the department average is 3.4 or 3.5, and one professor is consistently averaging 2.9, the two would likely meet to discuss performance expectations.

Five years ago—around the time websites like ratemyprofessor.com and pickaprof.com, where students can rate classes and professors, began popping up—there was a call to post course evaluations online, where students could access them. Some faculty members worried that this would result in students' steering away from challenging courses. But Serra and Thompson say the "quality of the course" rating actually correlates highest with "quality of instruction" and "level of intellectual stimulation." As it stands now, course evaluations are posted on the Duke registrar's website alongside course descriptions on an opt-in basis. As of this semester, the opt-in rate is only 17.1 percent. The low rate is attributable, in part, to the fact that professors must agree to have evaluations posted.

Faculty members generally acknowledge that the surveys are good for getting a one-time glimpse of attitudes. But many seek more accurate ways to gauge their own effectiveness. For example, Jeffrey Forbes, an assistant professor of the practice of computer science, surveys students at the beginning and end of each semester to measure their knowledge and expectations, and uses a personal-response system to solicit feedback and gauge understanding during a given lecture. Technology, he says, allows him to alter the pace and direction of the class instantaneously. Julie Reynolds, a Mellon Lecturing Fellow who teaches writing and biology, has reviewed biology-department honors theses to gauge teaching effectiveness, comparing the writing skills of students who wrote their theses in the context of a Writing in Biology class, those who participated in a writing-focused forum, and those who wrote theirs entirely on their own.

The university's Scholarship with a Civic Mission initiative, designed to develop students' academic knowledge, ethical-inquiry skills, and civic-leadership capacities, has included an assessment component from the start. Since it was launched in 2002, project faculty and staff members have collected and analyzed qualitative and quantitative data to gauge learning outcomes for students and teaching and research outcomes for faculty members, among other indicators.

Conferences exploring the scholarship of teaching and learning have become more mainstream over the last ten years, and there is growing interest in the field among professors and administrators at Duke and elsewhere. As more resources are allocated, the means of measuring effective teaching will become more sophisticated. "Traditionally, the evaluation of K-12 teaching has been much more visible," says Doug James, director of academic support programs at the Graduate School. "Regional accrediting bodies are now expanding their focus to the assessment of learning outcomes for undergraduate education."