Penn State University

Tools and Resources

Top Downloaded Tools and Resources at Penn State

Item Analysis (a.k.a. Test Question Analysis) is a process that enables you to improve multiple-choice test score validity and reliability by analyzing item performance over time and making necessary adjustments. Knowledge of score reliability, item difficulty, item discrimination, and crafting effective distractors can help you decide whether to retain items for future administrations, revise them, or eliminate them from the test item pool. Item analysis can also help you determine whether a particular portion of course content should be revised or enhanced.
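The two item statistics named above have standard definitions: difficulty is the proportion of students answering an item correctly, and the classic upper-lower discrimination index is the difficulty in the top-scoring group minus the difficulty in the bottom-scoring group. The sketch below illustrates both for a 0/1-scored response matrix; it is a minimal, illustrative implementation (the function names and the 27% group split are assumptions), not a Penn State tool.

```python
def item_difficulty(scores, item):
    """Proportion of students answering the item correctly (the p-value).
    Higher values indicate an easier item."""
    responses = [row[item] for row in scores]
    return sum(responses) / len(responses)

def item_discrimination(scores, item, group_frac=0.27):
    """Upper-lower discrimination index D: item difficulty within the
    top-scoring group minus difficulty within the bottom-scoring group.
    The 27% split is a common convention, assumed here for illustration."""
    ranked = sorted(scores, key=sum, reverse=True)  # rank students by total score
    n = max(1, round(len(ranked) * group_frac))
    upper, lower = ranked[:n], ranked[-n:]
    p_upper = sum(row[item] for row in upper) / n
    p_lower = sum(row[item] for row in lower) / n
    return p_upper - p_lower

# Example: 6 students, 3 items (1 = correct, 0 = incorrect)
scores = [
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 1],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
]
print(item_difficulty(scores, 0))      # 4 of 6 students answered item 1 correctly
print(item_discrimination(scores, 0))  # high D: item separates strong from weak students
```

As a rough rule of thumb, very high or very low difficulty values and low or negative discrimination values flag items worth revising or removing.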

This document was created to provide you with a source of options for gathering data on teamwork assignments and projects. You may choose to adopt one of the examples as is, combine elements from several of the examples, or use the examples to identify characteristics that correspond to particular aspects of your assigned work, course content, or student population.

This document describes a specific strategy that provides a collaborative learning experience for students.

Heavily abridged version of Weinstein, Y., Madan, C. R., & Smith, M. A. (in press). Teaching the science of learning. Cognitive Research: Principles and Implications, prepared for and presented at "Reframing Testing as a Learning Experience: Three Strategies for Use in the Classroom and at Home" on Tuesday, Oct. 3, 2017.

Six key learning strategies from research in cognitive psychology can be applied to education: spaced practice, interleaving, elaborative interrogation, concrete examples, dual coding, and retrieval practice. However, a recent report (Pomerance, Greenberg, & Walsh, 2016) found that few teacher-training textbooks cover these principles; current study-skills courses also lack coverage of these important learning strategies. Students are therefore missing out on mastering techniques they could use on their own to learn effectively. This handout describes the six key learning strategies to address that gap.

This document describes strategies for encouraging and enabling students in large classes to participate in class.

This is a peer-reviewed article published in the journal Studies in Educational Evaluation. Its focus is the accurate interpretation of student ratings data (including Penn State's SRTE) and appropriate use of the data to evaluate faculty. It includes recommendations for use and interpretation based on more than 80 years of student ratings research. Most colleges and universities use student ratings data to guide personnel decisions, so it is critical that administrators and faculty evaluators have access to the cumulative knowledge about student ratings based on multiple studies, rather than single studies that have not been replicated, studies based on non-representative populations, or studies from a single discipline.

The article provides an overview of common views and misconceptions about student ratings, followed by clarification of what student ratings are and are not. It also includes two sets of guidelines for administrators and faculty serving on review committees.

This is a faculty peer evaluation form (peer observation, classroom observation). It has a "checklist" format, rather than a scaled rating (Likert scale) format. The form asks faculty peer reviewers to note the presence of teaching activities/behaviors that have already been established as indicative of high-quality teaching. It is intentionally designed to be shortened by the faculty in an academic unit so that it reflects the unit's teaching values and priorities. It should not be used "as is" because the full form contains too many items for reviewers to evaluate; fewer items per section will make the form easier to use.

The form was created in January 2006 based on information in: Chism, N.V.N. (1999). Chapter 6: Classroom Observation. Peer Review of Teaching: A Sourcebook. Bolton, MA: Anker Publishing.

This document describes criteria for an effective electronic teaching portfolio.

This file is an example of a rubric that can be used to grade a science experiment. The use of a rubric can help instructors to grade more accurately and more quickly.