SGIDs available for in-person and synchronous online classes!
Use this link to submit your application for a Fall 2023 SGID: https://forms.gle/SRtmkFkELz5Qer6V9
Small Group Instructional Diagnosis (SGID) is a straightforward method of mid-term course evaluation in which an outside facilitator conducts a discussion with students and provides feedback to the instructor. SGIDs can help strengthen communication between faculty and students, offer an opportunity to make mid-semester course adjustments, and assist in the development of ideas for strengthening a course.
The CETL has adapted the SGID process to make it available for any class that meets in person or synchronously online. SGIDs are announced each semester in the October and February CETL newsletters, and the deadline to apply for Fall 2023 classes is noon on Thursday, October 5th. All Fall 2023 SGIDs will be conducted between October 9th and 27th.
The Small Group Instructional Diagnosis was developed by Joseph Clark and Mark Redmond as a government grant project in 1982. The SGID is a mid-term course evaluation in which a trained facilitator conducts a structured conversation with students to identify their consensus on what is going well and poorly in the class, what the instructor could do to further facilitate students' learning, and what students could do to contribute to their own learning.
There are a number of reasons to use the SGID:
To inquire into one’s course and the students’ learning experiences.
To obtain systematic feedback from students relatively early in the semester (weeks 6-8): useful feedback at the right time. The feedback is also, perhaps, more helpful because it represents a group attitude rather than individual opinions.
To locate what might be changed for improving students’ learning and satisfaction in a course.
To make changes that can improve students' learning experiences during the remainder of the course.
To enhance teaching effectiveness, since the SGID is a formative rather than a summative assessment.
To provide a low-pressure experience for students and faculty: SGIDs are not "official" student assessments and are not used for tenure and promotion decisions.
A few studies have examined the effects of midterm evaluation on improving teaching, and its relationship to student evaluations of teaching (SETs). In her book, Student Ratings of Instruction: Recognizing Effective Teaching (2013), Nira Hativa reports the following:
“Cohen (1980) performed a meta-analysis of 17 studies that examined effects of midterm evaluation on improving teaching. He found that receiving feedback from student ratings administered during the first half of the term was positively related to improving teaching as measured by student ratings at the end of term. Similarly, Murray (2007) showed (on the basis of Murray & Smith, 1989) that midterm feedback with ratings of specific behaviors led to significant improvement of classroom teaching, as indicated by significantly higher ratings of Overall Teaching at the end of term. Murray concluded that under the right conditions, midterm feedback on specific teaching behaviors could significantly improve teaching.”
Cohen, P. A. (1980). Effectiveness of Student-Rating Feedback for Improving College Instruction: A Meta-Analysis of Findings. Research in Higher Education, 13(4), 321-341.
Murray, H. G. (2007). Low-inference teaching behaviors and college teaching effectiveness: Recent developments and controversies. In R. P. Perry & J. C. Smart (Eds.), The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 145-200). Dordrecht, The Netherlands: Springer.
Murray, H. G., & Smith, T. A. (1989). Effects of Midterm Behavioral Feedback on End-of-Term Ratings of Instructor Effectiveness. Paper presented at the annual meeting of the American Educational Research Association, San Francisco.
Other SGID Resources
"The Impact of a Learner-Centered, Mid-Semester Course Evaluation on Students" by Carol A. Hurney, Nancy L. Harris, Samantha C. Bates Prins, & S. E. Kruck
"Use of Small Groups in Instructional Evaluation" by Joseph Clark and Jean Bekey
"Student Ratings: Myths vs. Research Evidence" by Michael Theall