Exchanges: The Online Journal of Teaching and Learning in the CSU

Course Evaluations at Mid-Term:
Lessons Learned

Diane L. Johnson

Department of Mathematics
Humboldt State University


Faculty mid-term evaluation, often called small group instructional diagnosis (SGID), was introduced at Humboldt State University (HSU) in 1997. The evaluation process in general and its implementation at HSU are discussed. Alternative forms of evaluation and issues addressed by mid-term evaluations are considered. Longitudinal data regarding faculty response to the program are presented, along with interviews with two participating faculty members. Student responses include a list of positive attributes of instructors and suggested course changes.

Faculty Mid-Term Evaluation and Its Alternatives

The process of evaluating courses with the help of a trained, neutral facilitator near the middle of a term is called Faculty Mid-Semester Evaluation at HSU. Because universities on either a semester or a quarter system benefit equally, we will refer to the process as faculty mid-term evaluation. Evidence is presented to suggest that such evaluations improve teaching and learning outcomes for both students and faculty.

Original research on mid-term evaluations (SGID) was done by Joseph Clark (Clark & Redmond, 1982), and other researchers such as Coffman (1998) have outlined the process. It has been recommended for improving instruction at all levels, including that of teaching assistants (Nyquist et al., 1989). Basically, instructors request the help of trained facilitators to learn of student needs and concerns in particular courses, as well as to brainstorm solutions to those issues. Because this information is obtained in the middle of a term, there is still time for the class to benefit from suggested solutions.

There are a great many possibilities for the evaluation and subsequent improvement of a class besides faculty mid-term evaluations. For example, peer reviews of teaching are often done wherein a contributor who works in the instructor’s field does a constructive evaluation (Chism, 1999). Because of the similar academic interests of the evaluator and the professor being evaluated, specific suggestions about course content can be made. Concerns include the impartiality of reviewers, the difficulty of finding peers within the home university, and the expense of outside consultation. Additionally, faculty members often lack the time to perform a peer review.

Some departments encourage an evaluation by a department chair, who may provide support or mentoring and who may directly observe the course in question (Seldin & Associates, 1999). Administrators may not be attuned to student concerns, however, and the students may not feel comfortable expressing their ideas. In addition, while improvement of teaching may be the goal of a department chair, information gained in the process may find its way into the retention and tenure process. A teaching portfolio, developed by the instructor to provide a reflective analysis of a specific course, has also been suggested for improving the teaching and learning in the classroom (Seldin, 1995). The portfolio is then used as a mentoring tool. Again faculty may be unable to make the necessary time commitment.

Some universities use on-line questionnaires at mid-term to determine student reactions. An advantage of this system is that it does not take in-class time and can be anonymous. However, time is required to write the questionnaires unless very open-ended questions are asked. Students lacking the benefit of facilitation or group interaction may give perfunctory, less considered responses. Participation may also suffer if the evaluations are voluntary. There are also so-called fast-feedback forms of evaluation that ask students to email the professor or to fill out a three-by-five card near the end of a session (Davis, 1993). These methods may be less anonymous but provide quick information.

The end-of-term evaluation can be useful in improving a course (Arreola, 2000), but the comments generally do not benefit the students making them. The format of the department student evaluation is usually somewhat generic and may not apply to the specifics of the course in question. Also, the evaluations are administered near the time of students' own final examinations, which may be problematic. Faculty mid-term evaluations may improve end-of-term results by demonstrating the instructor's readiness to listen and adapt to student concerns.

The Evaluation Program at HSU

When a professor initially requests a mid-term evaluation, three appointments are set up: the initial meeting with the facilitator, the classroom interview, and the follow-up contact. During the initial meeting the questions and concerns of the instructor are addressed, and the facilitator obtains information about the course(s) under consideration. If both parties agree that this process is appropriate for the instructor’s needs, the facilitator learns about the students, overall course content, and specific questions that the instructor may have for the students or facilitator.

The classroom interview takes from 20 to 40 minutes, depending on the size and concerns of the class. It is recommended that the evaluation be done at the end of the class period, as it can otherwise distract students from the course material for the remainder of the session. The evaluation follows this sequence:

  1. The facilitator is introduced by the instructor and the instructor leaves the room.
  2. The facilitator briefly describes the process and puts the class into groups of three or four.
  3. Each group is given a feedback sheet asking for three or four things that the instructor is doing well and also three or four things the group would like to see changed. They are asked to work towards consensus and are given about ten minutes.
  4. The facilitator brings the class back together, asks each group for a positive aspect of the instructor’s teaching, and summarizes the responses on an overhead transparency. A general sense of agreement or disagreement in the class as a whole is obtained.
  5. The suggested changes are discussed and the intensity of the student concerns is noted. The facilitator collects the feedback sheets, thanks the students, and dismisses them.

On the basis of the feedback sheets and overhead summary, the facilitator writes a report to be emailed to the instructor within 24 hours of the evaluation. Before the next classroom session, the facilitator meets with the instructor to discuss concerns and possible solutions. Facilitators are trained to ensure that instructors take the strengths of their courses to heart, as faculty are often quick to overlook the positive comments. Based on what is discovered, the instructor creates a response for the students and presents it at the next class meeting. To complete the process, the facilitator contacts the instructor approximately two weeks later to see how the class is going, specifically checking on the atmosphere and improvement of teaching and learning, including instructor and student satisfaction.

At HSU four to five new facilitators are trained every fall, typically during the fifth week of instruction. We currently have a team of six facilitators, two of them veterans from previous years. The experience is very valuable for the new facilitators, as they are exposed to a myriad of teaching styles and philosophies. A few facilitators are part-time instructors or university support staff, but most are graduate students.

Over the years a training manual has developed at HSU, based on the pioneering work of Donald Wulff of the University of Washington. Potential facilitators are given a packet including a timeline of steps necessary for each evaluation. They receive a list of points to cover and questions to ask during their initial interview with the instructor. They also receive a sample script for their presentation to a class. We include blank feedback sheets and overhead transparencies for the in-class discussion. In addition, facilitators receive a timesheet with instructions for payroll requirements.

At HSU, facilitators in 2005-2006 earned $60 for courses with fewer than 60 students and $90 for larger courses. The program coordinator received from three to six units of release time yearly for running the program. Between $2500 and $5000 a year is budgeted for the actual evaluations, with occasional additions if demand warrants. We remain in close contact with the Office of Academic Affairs, our main funding source, as well as with the Faculty Development Committee. We have served 371 instructors in 545 courses in the first eight years, with thousands of students participating.

Instructors at all levels across campus are informed of the program by flyers and email before classes begin and again during the first five to eight weeks of classes. Interested instructors contact the program coordinator, who records contact and course information for matching with potential facilitators. Facilitators contact the instructors and keep the program coordinator informed. The program provides facilitators with necessary supplies and feedback sheets, and our department provides staff assistance with the payroll.

At HSU the actual report written by the mid-term evaluation facilitator is not allowed in a faculty member’s personnel file; this preserves a confidential process by which instructors can receive candid criticism without fear of reprisal. The feedback helps instructors address problems before the end-of-term evaluations are administered. To confirm participation while maintaining confidentiality, a brief letter is sent to each participant at the end of the academic year listing the courses and semesters in which the service was used. On occasion, personnel committees across campus have suggested mid-term evaluations for instructors to help improve teaching and learning. The resulting letter of participation is sufficient to show good intent.

Issues Addressed by Mid-Term Evaluations

One possible result of a faculty mid-term evaluation is that an instructor who lacks confidence may be strongly encouraged by his or her students’ unprompted positive comments (see Table 1).

Table 1. Commonly mentioned instructional strengths

General Category: Comments
Professor: Enthusiastic; has practical experience; good use of humor
Lecture Style: Good explanations; good pace
Discussion: Good discussions
Organization: Use of Internet and PowerPoint; sequential ordering of material
Content: Guest lecturers, videos; application of material
Testing: Fair tests and quizzes
Structure: Group work; peer teaching
Materials: Textbook
Seven courses were chosen from a sample of classes taught by instructors ranging from TAs to full professors. While by no means statistically representative, these students’ responses seemed to crystallize ideas expressed in a majority of evaluations. They were enthusiastic about guest lecturers and videos; appreciated the application of material covered in class and the lively discussions; and were grateful for thoughtful use of the Internet and PowerPoint.

Among the suggested changes for the seven courses, the greatest demand was for more structure and a finer focus in lectures and exams (see Table 2). Along the same lines, some students wanted more specifics or examples in lecture. Contradictory comments can arise, in which case the facilitator can find the predominant sentiment of the class by observing the tenor of the students or by taking a vote.

Table 2. Commonly mentioned instructional changes

General Category: Comments
Lectures: Need more specifics; better, more diverse explanations; connect lecture to relevance of topic; uneven pace; be prepared
Professor: Condescending
Organization: More sequential syllabus
Homework: Need variety in assignments
Discussion: Need more, improved discussion
Evaluation: Provide study guide for exams; link tests to discussion and guest speakers
Structure: More structure, finer focus; more leadership by instructor; improve group learning
Content: Redundancy; bring in guest lecturers
Materials: Textbook (students do not like)

Some student suggestions, such as requests for a TA for the class or for a mid-term change of textbook, simply are not within the instructor’s power to implement. Particular policies may be unpleasant for students but academically advisable; by learning of student dissatisfaction, the faculty member has the opportunity to explain the rationale for them, encouraging students to “buy into” the policies and possibly alleviating stronger criticism in end-of-term evaluations.

What cannot be well addressed by mid-term evaluations is a situation where an instructor is opposed to making any changes. In this case, more harm than good can be done with an evaluation, as students who have carefully considered improvements to the course will fail to see them implemented. If the instructor has a negative attitude towards students, criticism may worsen his or her outlook, making for an even worse atmosphere. Such an instructor probably should not participate in the program.

Faculty Responses and Interviews

At the end of each term (i.e., fall and spring semesters), we initially sent out questionnaires about mid-term evaluations to all participating faculty. Beginning with spring semester 2004, we sent questionnaires to all participating faculty at the end of the academic year. Of 371 questionnaires sent during the period from spring semester 1997 to spring semester 2005, 229 were returned, and with few exceptions the responses about mid-term evaluations were favorable.

Nearly three quarters of respondents preferred both mid- and end-of-term evaluations, while nearly a quarter preferred mid-term evaluations alone. A negligible number preferred only end-of-term evaluations or no evaluations at all. Respondents clearly believed that overall teaching techniques, classroom atmosphere, the instructors’ understanding of students, and perceived student learning were all positively affected by mid-term evaluations (see Table 3). In fact, more than 90% felt that both new and experienced instructors would benefit from mid-term evaluations (see Appendix A).

Table 3. Mid-Term Evaluations Faculty Questionnaire: Perceived Changes in Teaching

The fact that the questionnaires are sent out during the last week of lectures may account for the response rate of only 61.7% of participating faculty: the results of a mid-term evaluation are not known until later in the semester, an exceptionally busy time for faculty, when the questionnaires may not be a priority. Some dissatisfied instructors may have chosen not to return their questionnaires, but we have never had a participating faculty member complain to the offices of academic affairs or of faculty development at HSU. Since 1997 there have been at most two or three faculty members critical of the program, the most outspoken of whom thought that the facilitator lowered the academic rigor of her class.

Individual Debriefing for Qualitative Data

For more qualitative data, two faculty members at HSU were interviewed: a new faculty member who joined the mathematics department during academic year 2003-04 and a senior education professor who had used the service on numerous occasions. The new faculty member’s input was valuable as she was getting acquainted with HSU student attitudes, while the senior member’s perceptions indicated longer-term trends. However, faculty members at all ranks have used the service to their advantage. Both instructors felt that their mid-term evaluations made them more aware of student concerns and made the students more aware of their own learning process. For example, students sometimes focus on grades instead of their own learning; the evaluations help students reflect on and take more responsibility for that learning.

The new faculty member feared that several students had expected her to do everything they suggested in the evaluation and may have been somewhat disappointed when she did not agree to do so. She explained that her refusal to accept late homework was based on the resulting logistical nightmare, and she reminded students that several homework scores were dropped to compensate for missed work. Even so, some students did not accept her explanation, claiming that her policy did not encourage the make-up of missed assignments. She had hoped to convince them that her policy was reasonable but was not entirely successful. Her mid-term evaluation made her more reflective about getting students to “buy into” her reasons for her policies, and a thoughtful facilitator might help her prepare her arguments. She said that she would definitely welcome future mid-term evaluations.

The senior faculty member appreciated having an evaluation facilitated by someone from a different department, as her students were then unlikely to encounter the facilitator as a classmate or an instructor. When students realized that the evaluation was completely voluntary, they saw that their instructor wanted to do her part to improve teaching and learning. More extensive interviews as well as case studies may bring additional insight about the implementation of the mid-term evaluation process.

Student Responses

Over the years, student responses at HSU have remained mixed. Students seem satisfied with the process if they feel listened to and are satisfied with the faculty member’s response. For each issue this may mean implementing student suggestions or else modifying or rejecting them while giving good reasons to do so. During the follow-up interview with the instructor, the facilitator can help to clarify the instructor’s rationale for not taking a specific suggestion.

In almost all cases, however, students report an improved atmosphere in the class. The group experience usually promotes collegiality and a greater sense of personal responsibility. To see a compilation of different student evaluations across disciplines, see Coffman (1998).

Mid-Term Evaluations at Other Institutions

HSU’s academically and regionally diverse student population responds well to mid-term evaluations, suggesting that the process can benefit a wide range of students. Similar programs have been successfully implemented over the years by institutions ranging from research universities such as the University of Washington to community colleges such as College of the Redwoods in Eureka, California.

Based on our experience, we suggest that those considering such programs make faculty participation voluntary and keep the results separate from retention and tenure decisions, so that candid feedback cannot lead to negative personnel consequences. HSU experimented with a volunteer staff of faculty facilitators but discontinued the arrangement because of faculty time constraints and commitments.


HSU’s faculty mid-term evaluation program has provided feedback to hundreds of instructors over its nine years of existence, and the faculty response has been overwhelmingly positive. Faculty members are reminded of the issues and concerns of people at a very different stage of the learning process from their own, and they have time to improve the teaching and learning situation for everyone involved. They can feel supported by the facilitator and can learn new approaches to previously difficult situations. The program is consistent with the student centeredness of HSU and could readily be adopted on other campuses.


References

Arreola, R.A. (2000). Developing a Comprehensive Faculty Evaluation System (2nd ed.). Bolton, MA: Anker Publishing Co.

Chism, N.V.N. (1999). Peer Review of Teaching. Bolton, MA: Anker Publishing Co.

Clark, D.J., and Redmond, M. (1982). Small group instructional diagnosis: Final report. Seattle, WA: University of Washington. (ERIC Document Reproduction Service No. ED 217 954)

Coffman, S.J. (1998). Small group instructional evaluation across disciplines. College Teaching, 46(3), 106-112.

Davis, B.G. (1993). Tools for Teaching. San Francisco: John Wiley and Sons.

Nyquist, J.D., Abbott, R.D., and Wulff, D.H., Eds. (1989). Teaching Assistant Training in the 1990s. San Francisco: Jossey-Bass.

Seldin, P. (1995). Improving College Teaching. Bolton, MA: Anker Publishing Co.

Seldin, P. and Associates (1999). Changing Practices in Evaluating Teaching: A Practical Guide to Improved Faculty Performance and Promotion/Tenure Decisions. Bolton, MA: Anker Publishing Co.

Posted July 7, 2006.

All material appearing in this journal is subject to applicable copyright laws.
Publication in this journal in no way indicates the endorsement of the content by the California State University, the Institute for Teaching and Learning, or the Exchanges Editorial Board.
©2006 by Diane L. Johnson.
