Penn State University

Tools and Resources


Few of us would argue against the value of quality feedback, yet classroom-based research indicates that teachers do not give as much feedback as they think they do (e.g., Ingvarson & Hattie, 2008). This article shares a variety of resources regarding feedback.

Designing Effective Reviews: Helping Students Give Helpful Feedback

This module explores the qualities of effective reviews. Good review prompts help reviewers provide feedback that writers can use to make high-quality revisions.

The module identifies some of the choices that instructors can make while designing review tasks in order to generate helpful feedback. It will discuss the qualities of effective review prompts, design choices, and frameworks for helping structure open-ended feedback.

Lam, R. (2010). A Peer Review Training Workshop: Coaching Students to Give and Evaluate Peer Feedback. TESL Canada Journal/Revue TESL du Canada, 27(2), 114–127.

Single-point rubrics provide enough information for students to know what’s expected of them, while leaving room for targeted feedback on their work, making grading more efficient and less anxiety-producing for both instructors and students. This recorded presentation in Kaltura requires a Penn State login.

The SALG website is a free course-evaluation tool that allows college-level instructors to gather learning-focused feedback from students. It can be used for mid-semester feedback that will help instructors improve student learning in the course.

Large classes are among the most important courses because many of the students enrolled in them are new to the college experience. The big challenges of teaching large classes include finding ways to engage students, providing timely feedback, and managing logistics. When faced with these challenges, many instructors revert to lectures and multiple-choice tests. There are alternatives. This special report describes some alternative teaching and course management techniques to get students actively involved without an inordinate amount of work on the instructor’s part. From the Teaching Professor, Magna.

From UC Berkeley's Center for Teaching and Learning, Considerations for Large Lecture Classes provides six ways to make lectures in a large enrollment course more manageable and effective. The strategies include communicating explicit learning expectations, not trying to "cover" everything, focusing on analysis of issues or problems, engaging students through active learning practices, providing feedback to students, and using clickers to poll students.

Let’s Talk About Power: How Teacher Use of Power Shapes Relationships and Learning
Leslie Frances Reid, Jalal Kawash
Proceedings of the University of Calgary Conference on Postsecondary Learning and Teaching, Vol 2, 2017
Teachers’ use of power in learning environments affects our students’ experiences, our teaching experiences, and the extent to which learning goals are met. The types of conversations we hold or avoid with students send cues regarding how we use power to develop relationships, influence behaviour and entice motivation. Reliance on prosocial forms of power, such as referent, reward, and expert, have a positive impact on outcomes such as learning and motivation, as well as perceived teacher credibility. Overuse of antisocial forms of power that include legitimate and coercive powers negatively affect these same outcomes. In this paper, we share stories from our teaching experiences that highlight how focusing on referent, reward and expert power bases to connect, problem solve, and negotiate challenges with our students has significantly enhanced our teaching practice. We provide resources that can be used by teachers to become aware of and utilize prosocial power strategies in their practice through self-reflection and peer and student feedback.

The Role of Interactive Digital Simulations in Student Conversations About Visualizing Molecules
Yuen-ying Carpenter, Erin Rae Sullivan
Proceedings of the University of Calgary Conference on Postsecondary Learning and Teaching, Vol. 2, 2017

The visualization of chemical compounds in three-dimensions is a foundational skill in the study and practice of chemistry and related fields, and one which has the potential to be supported by interaction with virtual models. Here, we present a collaborative learning activity piloted in first-year chemistry which investigates if inquiry-driven interactive technology can contribute meaningfully to student conversations around this topic, and how students’ conversations and practices may shift when driven by feedback from an interactive simulation. Our initial observations from this pilot project suggest that students engaged in collaborative sense-making and discussion around key ideas throughout this activity. Students’ post-activity reflections also highlighted their positive experiences and increased confidence with the topic afterwards. The unique dynamics of these interactions lead us to propose a novel framing of interactive visualizations as participants rather than merely as resources in student learning conversations.

Often called “peer observation of teaching” or “peer evaluation of teaching,” peer review of teaching (PRT) involves seeking feedback from an informed colleague for the purposes of improving one’s practice (formative assessment) and/or evaluating it (summative assessment). Texas A&M University's Faculty Performance Evaluation Task Force recommended having separate review processes for formative and summative assessment, using multiple sources of data from students, peers, and administrators, as well as faculty themselves, for evaluating teaching. Includes institutional perspectives and supporting videos from the University of Texas.

The SEEQ instrument is a mid-semester feedback survey that Penn State instructors have been using since the 1990s. Results are available only to the instructor for the semester in which the survey was used.

Although Herbert Marsh originally created the SEEQ as an evaluation survey, Penn State has never used this 40-item instrument that way. The instrument was first offered at Penn State using Scantron bubble sheets. It was later adapted for use in the university's long-time LMS (ANGEL). When the university adopted Canvas as its LMS, the SEEQ could not be adapted for use in Canvas because quiz data are reported inappropriately for a survey. In 2020, university programmers developed an in-house system for offering the SEEQ. At that time, the original name of the instrument was changed to Student Educational Experience Questionnaire.

Many instructors feel that the student ratings process is something 'done to' them. One way that instructors can take control of the process is to approach the ratings and the accompanying written feedback systematically, analyzing the ratings and identifying themes in the feedback. This document provides guidelines for preparing student ratings & feedback for a review, including an example of a one-page annotation for a fictional course to accompany raw data and a template for identifying key themes. At Penn State, this self-assessment cannot be included in the official Promotion and Tenure dossier, but it can guide the administrator's assessment letter or their summary of the written comments for the faculty member.

These guidelines help faculty prepare their SRTEs (student ratings) for review by an academic administrator or a faculty promotion committee. This document includes:
1) guidelines for preparing to undergo a review of your SRTEs & students' written feedback;
2) a sample annotation (i.e., an abstract or summary); and
3) a template for analyzing written feedback into themes.

This document is a fictitious example of a 1-page annotation of a faculty member's SRTEs and written feedback for a single course.

This is the second report of the Committee on Assessing Teaching Effectiveness submitted to Kathy Bieschke, Vice Provost for Faculty Affairs. This report recommends options for improving future evaluation of teaching for tenure, promotion, annual review, and reappointment. The committee's recommendations address the unacceptable over-reliance on student feedback in the process of evaluation, specifically the numerical ratings of the Student Ratings of Teaching Effectiveness (SRTE) and the ‘Open Ended Item’ responses, which serve to amplify systemic inequities and hierarchies within our teaching community. The first report of the committee provided recommendations for evaluating teaching for promotion & tenure during the pandemic of 2020.

This is the committee's second report [for Report 1, see Appendix M in the NEW: 2020-2021 Administrative Guidelines for Policy AC23 (formerly HR23): Promotion and Tenure Procedures and Regulations]

The Howe Center for Writing Excellence at Miami University Ohio provides a thorough guide to setting up peer writing exercises for a remote or online course. The site includes a map of their overall recommendations on facilitating effective online peer response. It emphasizes the importance of spending time setting up the process to help prepare students and provides prompts and tools for students to give useful feedback.

Best Practices in the Evaluation of Teaching, by Stephen L. Benton, The IDEA Center and Suzanne Young, University of Wyoming
Effective instructor evaluation is complex and requires the use of multiple measures—formal and informal, traditional and authentic—as part of a balanced evaluation system. The student voice, a critical element of that balanced system, is appropriately complemented by instructor self-assessment and the reasoned judgments of other relevant parties, such as peers and supervisors. Integrating all three elements allows instructors to take a mastery approach to formative evaluation, trying out new teaching strategies and remaining open to feedback that focuses on how they might improve. Such feedback is most useful when it occurs in an environment that fosters challenge, support, and growth. Rather than being demoralized by their performance rankings, faculty can concentrate on their individual efforts and compare current progress to past performance. They can then concentrate on developing better teaching methods and skills rather than fearing or resenting comparisons to others. The evaluation of teaching thus becomes a rewarding process, not a dreaded event.
Keywords: Evaluation of teaching, summative evaluation, formative evaluation, mastery orientation

Penn State Teacher II, 1997. Compendium of teaching tips and advice from seasoned faculty and graduate students. Includes sections on course design, matching teaching methods with learning objectives, teaching large courses, evaluating student learning, collecting feedback, sample syllabi, feedback questionnaires, grading standards, plagiarism, and teaching philosophies.
Authored by D. Enerson, R. Neill Johnson, Susannah Milner, and Kathryn M. Plank.

This document provides methods for doing classroom assessment (usually ungraded) to help faculty keep students in large classes engaged and to provide feedback about student knowledge of specific concepts to both faculty and students.

Three examples of simple mid-semester feedback questionnaires.

This is a ready-to-use template for collecting mid-semester or end-of-course open-ended feedback from students.

This FAQ sheet offers many strategies for collecting student feedback in large classes.

This document describes the use of student peers to provide feedback on written assignments by fellow students.
