Penn State University


Search Results

How midsemester feedback can help instructors and students, plus suggestions for useful questions to ask.

Instructors are the most important determinant of student participation in course feedback. Students are more likely to submit feedback if they know that instructors value it and use it to improve the course. Below are suggestions for how you might discuss mid-semester feedback with your students.

Few of us would dispute that quality feedback is useful, yet classroom-based research indicates that teachers do not give as much feedback as they think they do (e.g., Ingvarson & Hattie, 2008). This article shares a variety of resources regarding feedback.

The Midterm/Midsemester Class Interview (or Small Group Instructional Diagnosis, SGID) is a process designed to help instructors learn what their students think about how the course is going. Students identify elements of the class that are helping them learn and offer suggestions to strengthen the course. We recommend using this procedure in the middle of the semester, after students have received at least one grade. The process involves three steps: 1) a meeting with an instructional consultant to discuss the instructor's objectives for the process; 2) a class interview with small groups and a whole-class discussion; and 3) a post-interview summary and discussion of the results with the consultant.

Designing Effective Reviews: Helping Students Give Helpful Feedback

This module explores the qualities of effective reviews. Good review prompts help reviewers provide feedback that writers can use to make high-quality revisions.

The module identifies choices that instructors can make while designing review tasks in order to generate helpful feedback. It discusses the qualities of effective review prompts, key design choices, and frameworks for structuring open-ended feedback.

Lam, R. (2010). A Peer Review Training Workshop: Coaching Students to Give and Evaluate Peer Feedback. TESL Canada Journal/Revue TESL du Canada, 27(2), 114-127.

Single-point rubrics give students enough information to know what is expected of them while leaving room for targeted feedback on their work, making grading more efficient and less anxiety-producing for both instructors and students. This recorded presentation in Kaltura requires a Penn State login.

Many instructors feel that the student ratings process is something 'done to' them. Annotation offers a way for instructors to interpret their own ratings rather than relying on others not involved in the course to interpret them accurately. The annotation serves as a cover page for the summary report available to faculty from Student Course Feedback (e.g., SRTEs), the student ratings system (rateteaching.psu.edu). This document identifies the key elements of an effective annotation and provides an example of a one-page annotation for a fictional course.

Penn State’s Teaching Assessment Framework consists of student feedback, peer review, and self-reflection. Consultants from the Schreyer Institute work with individual faculty on formative (non-evaluative, developmental) assessment to support instructors in making regular small adjustments and continuous improvements to their teaching. Summative (evaluative) assessments are conducted by faculty peers and academic unit heads.

Penn State’s Faculty Assessment of Teaching Framework assesses teaching using evidence from three sources: peer review, self-assessment, and student feedback. The framework also identifies four Elements of Effective Teaching, which provide a foundation of understanding, advance a shared language for communication, and serve as standards against which the combined sources of evidence are judged. Academic units may also use the elements as an invitation to discuss other important aspects of effective teaching. This document includes teaching examples by element.

Instructors are the most important determinant of student participation in the Student Educational Experience Questionnaire (SEEQ). Students are more likely to complete the questionnaire if they know that instructors read their feedback and value it as a source of ideas to improve the course.

Components for evaluating faculty-to-student feedback.

The SALG website is a free course-evaluation tool that allows college-level instructors to gather learning-focused feedback from students. It can be used for mid-semester feedback that will help instructors improve student learning in the course.

Large classes are among the most important courses to teach well because many of the students enrolled in them are new to the college experience. The big challenges of teaching large classes include finding ways to engage students, providing timely feedback, and managing logistics. When faced with these challenges, many instructors revert to lectures and multiple-choice tests. There are alternatives. This special report describes alternative teaching and course management techniques that get students actively involved without an inordinate amount of work on the instructor’s part. From The Teaching Professor, Magna Publications.

From UC Berkeley's Center for Teaching and Learning, Considerations for Large Lecture Classes provides six ways to make lectures in a large enrollment course more manageable and effective. The strategies include communicating explicit learning expectations, not trying to "cover" everything, focusing on analysis of issues or problems, engaging students through active learning practices, providing feedback to students, and using clickers to poll students.

Let’s Talk About Power: How Teacher Use of Power Shapes Relationships and Learning
Leslie Frances Reid, Jalal Kawash
Proceedings of the University of Calgary Conference on Postsecondary Learning and Teaching, Vol. 2, 2017
Abstract
Teachers’ use of power in learning environments affects our students’ experiences, our teaching experiences, and the extent to which learning goals are met. The types of conversations we hold or avoid with students send cues regarding how we use power to develop relationships, influence behaviour, and encourage motivation. Reliance on prosocial forms of power, such as referent, reward, and expert, has a positive impact on outcomes such as learning and motivation, as well as perceived teacher credibility. Overuse of antisocial forms of power, which include legitimate and coercive powers, negatively affects these same outcomes. In this paper, we share stories from our teaching experiences that highlight how focusing on referent, reward, and expert power bases to connect, problem solve, and negotiate challenges with our students has significantly enhanced our teaching practice. We provide resources that can be used by teachers to become aware of and utilize prosocial power strategies in their practice through self-reflection and peer and student feedback.

The Role of Interactive Digital Simulations in Student Conversations About Visualizing Molecules
Yuen-ying Carpenter, Erin Rae Sullivan
Proceedings of the University of Calgary Conference on Postsecondary Learning and Teaching, Vol. 2, 2017

Abstract
The visualization of chemical compounds in three-dimensions is a foundational skill in the study and practice of chemistry and related fields, and one which has the potential to be supported by interaction with virtual models. Here, we present a collaborative learning activity piloted in first-year chemistry which investigates if inquiry-driven interactive technology can contribute meaningfully to student conversations around this topic, and how students’ conversations and practices may shift when driven by feedback from an interactive simulation. Our initial observations from this pilot project suggest that students engaged in collaborative sense-making and discussion around key ideas throughout this activity. Students’ post-activity reflections also highlighted their positive experiences and increased confidence with the topic afterwards. The unique dynamics of these interactions lead us to propose a novel framing of interactive visualizations as participants rather than merely as resources in student learning conversations.

Often called “peer observation of teaching” or “peer evaluation of teaching,” peer review of teaching (PRT) involves seeking feedback from an informed colleague for the purposes of improving one’s practice (formative assessment) and/or evaluating it (summative assessment). Texas A&M University's Faculty Performance Evaluation Task Force recommended separate review processes for formative and summative assessment, using multiple sources of data from students, peers, administrators, as well as faculty themselves to evaluate teaching. Includes institutional perspectives and supporting videos from the University of Texas.

This is the second report of the Committee on Assessing Teaching Effectiveness, submitted to Kathy Bieschke, Vice Provost for Faculty Affairs. The report recommends options for improving future evaluation of teaching for tenure, promotion, annual review, and reappointment. The committee's recommendations address the unacceptable over-reliance on student feedback in the evaluation process, specifically the numerical ratings of the Student Ratings of Teaching Effectiveness (SRTE) and the ‘Open Ended Item’ responses, which serve to amplify systemic inequities and hierarchies within our teaching community. The first report of the committee provided recommendations for evaluating teaching for promotion and tenure during the pandemic of 2020.

This is the committee's second report [for Report 1, see Appendix M in the 2020-2021 Administrative Guidelines for Policy AC23 (formerly HR23): Promotion and Tenure Procedures and Regulations].

The Howe Center for Writing Excellence at Miami University Ohio provides a thorough guide to setting up peer writing exercises for a remote or online course. The site includes a map of their overall recommendations on facilitating effective online peer response. It emphasizes the importance of spending time setting up the process to help prepare students and provides prompts and tools for students to give useful feedback.

Best Practices in the Evaluation of Teaching, by Stephen L. Benton, The IDEA Center, and Suzanne Young, University of Wyoming
Effective instructor evaluation is complex and requires the use of multiple measures—formal and informal, traditional and authentic—as part of a balanced evaluation system. The student voice, a critical element of that balanced system, is appropriately complemented by instructor self-assessment and the reasoned judgments of other relevant parties, such as peers and supervisors. Integrating all three elements allows instructors to take a mastery approach to formative evaluation, trying out new teaching strategies and remaining open to feedback that focuses on how they might improve. Such feedback is most useful when it occurs in an environment that fosters challenge, support, and growth. Rather than being demoralized by their performance rankings, faculty can concentrate on their individual efforts and compare current progress to past performance. They can then concentrate on developing better teaching methods and skills rather than fearing or resenting comparisons to others. The evaluation of teaching thus becomes a rewarding process, not a dreaded event.
Keywords: Evaluation of teaching, summative evaluation, formative evaluation, mastery orientation

This document provides methods for classroom assessment (usually ungraded) that help faculty keep students in large classes engaged and that give both faculty and students feedback about students' knowledge of specific concepts.

This is a ready-to-use template for collecting mid-semester or end-of-course open-ended feedback from students.

This FAQ sheet offers many strategies for collecting student feedback in large classes.

This document describes the use of student peers to provide feedback on written assignments by fellow students.
