Penn State University

Tools and Resources

Search Results

This is a 2-page document describing the main statistical indices provided as part of an item analysis. It offers information about how to interpret each index.

This document describes the process of question sampling (item banking).

This file is used by scanning operations to match questions on different test forms.

This sample score report is generated by our paper exam scanning system. The score report is an important tool that will help you evaluate the effectiveness of a test and of the individual questions that comprise it. The evaluation process, called item analysis, can improve future test and item construction. The analysis provides valuable information that helps instructors determine which test questions are the "best" and should be secured and reused on future course assessments, which need review and potential revision before the next administration, and which are the poorest and should be eliminated from scoring on the current administration.
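As a rough illustration (not the scanning system's actual report format), the sketch below computes the two indices such a report typically centers on: item difficulty (the proportion of students answering an item correctly) and item discrimination (the correlation between performance on an item and the total score). The response matrix is made-up data for demonstration only.

```python
import numpy as np

# Hypothetical data: rows are students, columns are items, entries are 0/1 scored responses.
scores = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
])

total = scores.sum(axis=1)  # each student's total test score

# Item difficulty: proportion of students who answered each item correctly.
difficulty = scores.mean(axis=0)

# Item discrimination: point-biserial correlation between each item and the
# total score (for simplicity, the total here still includes the item itself).
discrimination = np.array([
    np.corrcoef(scores[:, j], total)[0, 1] for j in range(scores.shape[1])
])

for j, (p, r) in enumerate(zip(difficulty, discrimination), start=1):
    print(f"Item {j}: difficulty = {p:.2f}, discrimination = {r:.2f}")
```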

These PowerPoint slides accompanied a presentation by Linda Suskie delivered via Zoom on Tuesday, Apr. 25, 2017. Multiple-choice tests can have a place in many courses. If they’re well designed, they can yield useful information on student achievement of many important course objectives, including some thinking skills. An item analysis of the results can shed light on how well the questions are working as well as what students have learned. Viewers will be able to use principles of good question construction to develop tests, develop test questions that assess thinking skills as well as conceptual understanding, and use item analysis to understand and improve both test questions and student learning. Be sure to open the handouts file listed below as you view the presentation!

These handouts (minus quizzes for test security) accompanied a presentation by Linda Suskie delivered via Zoom on Tuesday, Apr. 25, 2017. Multiple-choice tests can have a place in many courses. If they’re well designed, they can yield useful information on student achievement of many important course objectives, including some thinking skills. An item analysis of the results can shed light on how well the questions are working as well as what students have learned. Viewers will be able to use principles of good question construction to develop tests, develop test questions that assess thinking skills as well as conceptual understanding, and use item analysis to understand and improve both test questions and student learning.

The questionnaire includes seven items, which are the same for every instructor. Responses to items 1-4 are available to the instructor and their academic unit head. Items 5-7 are provided only to the instructor. Fall semester 2023 is the first to use this questionnaire. It replaces the SRTE.

This is a faculty peer evaluation form (peer observation, classroom observation). It has a "checklist" format rather than a scaled rating (Likert scale) format. This form asks faculty peer reviewers to note the presence of teaching activities/behaviors that have already been established as indicative of high-quality teaching. This form is intentionally designed to be shortened by the faculty in an academic unit so that it reflects the unit's teaching values and priorities. It should not be used "as is" because the full form is too much to expect reviewers to evaluate; fewer items per section will make the form easier to use.

The form was created based on information in: Chism, N.V.N. (1999). "Classroom Observation" (Chapter 6), in Peer Review of Teaching: A Sourcebook. Bolton, MA: Anker Publishing.

Abstract: Writing multiple-choice test items to measure student learning in higher education is a challenge. Drawing on extensive scholarly research and experience, the author describes various item formats, offers guidelines for creating these items, and provides many examples of both good and bad test items. He also suggests some shortcuts for developing test items. Creating valid multiple-choice items is a difficult task, but it contributes greatly to the teaching and learning process for undergraduate, graduate, and professional-school courses.

Author: Thomas M. Haladyna, Arizona State University

Keywords: Multiple-choice items, selected response, test-item formats, examinations

Writing effective and useful additional questions can be challenging. This document lists some of the most common mistakes in item writing, illustrated with examples of Likert-scale items from the SRTEs. These pitfalls also apply to yes/no questions and open-ended questions. We recommend testing the questions with students before adding them to the Additional Questions section.

This handout was provided at the workshop "Writing High Quality Multiple Choice Questions" (10/26/2018) by Hoi K. Suen

Distinguished Professor Emeritus

Educational Psychology

The Pennsylvania State University

This is the second report of the Committee on Assessing Teaching Effectiveness submitted to Kathy Bieschke, Vice Provost for Faculty Affairs. This report recommends options for improving future evaluation of teaching for tenure, promotion, annual review, and reappointment. The committee's recommendations address the unacceptable over-reliance on student feedback in the evaluation process, specifically the numerical ratings of the Student Ratings of Teaching Effectiveness (SRTE) and the 'Open Ended Item' responses, which serve to amplify systemic inequities and hierarchies within our teaching community. The first report of the committee provided recommendations for evaluating teaching for promotion & tenure during the pandemic of 2020.

This is the committee's second report [for Report 1, see Appendix M in the 2020-2021 Administrative Guidelines for Policy AC23 (formerly HR23): Promotion and Tenure Procedures and Regulations].

Item Analysis (a.k.a. Test Question Analysis) is an empowering process that enables you to improve multiple-choice test score validity and reliability by analyzing item performance over time and making necessary adjustments. Knowledge of score reliability, item difficulty, item discrimination, and crafting effective distractors can help you make decisions about whether to retain items for future administrations, revise them, or eliminate them from the test item pool. Item analysis can also help you to determine whether a particular portion of course content should be revised or enhanced.
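To illustrate how difficulty and discrimination values can feed retain/revise/eliminate decisions, here is a minimal sketch of one possible decision rule. The cutoff values are hypothetical assumptions chosen for demonstration, not recommended standards or Penn State criteria.

```python
def classify_item(difficulty: float, discrimination: float) -> str:
    """Suggest an action for an item based on its difficulty and discrimination.

    Thresholds below are illustrative assumptions only.
    """
    if discrimination < 0.10 or difficulty < 0.20 or difficulty > 0.95:
        # Item barely separates students, or nearly everyone misses/answers it.
        return "eliminate or rewrite"
    if discrimination < 0.30:
        return "review and possibly revise"
    return "retain for future administrations"

# Example usage with made-up index values.
for name, p, r in [("Q1", 0.85, 0.45), ("Q2", 0.55, 0.15), ("Q3", 0.98, 0.05)]:
    print(name, "->", classify_item(p, r))
```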

A teaching portfolio provides materials associated with the experience of teaching and learning. This PDF describes items that might be included in a teaching portfolio, categorized by source: personal material, material from others, and products of good teaching.

This file contains a list of "item-writing rules" that will help you write multiple-choice questions in a way that keeps the test focused on the content and prevents students from guessing the correct answer without knowing the material. The rules were developed by experts in the field of psychometrics, such as those who write questions for the SAT or GRE.

This PowerPoint presentation describes how to use item analysis to determine the efficacy of multiple-choice questions.

Form for assigning weights to exam items when the items are not weighted equally. Used for scanning multiple-choice tests that use bubble sheets.
