Penn State University

Tools and Resources

This document describes the process of question sampling (item banking).

This document suggests a variety of testing models and explains why each is effective. They are alternatives or supplements to the "mid-term and a final" model.

This IDEA paper from the Kansas State IDEA Center Resources provides guidelines for creating effective multiple choice tests.

This IDEA paper from the Kansas State IDEA Center Resources provides many strategies for improving essay tests.

This test blueprint template can be downloaded and manipulated to help instructors effectively map exam questions to learning objectives, topics, modules, or themes.

This file is used by scanning operations to match questions on different test forms.

This sample score report is generated by our paper exam scanning system. The score report is an important tool that will help you evaluate the effectiveness of a test and of the individual questions that comprise it. The evaluation process, called item analysis, can improve future test and item construction. The analysis provides valuable information that helps instructors determine which are the “best” test questions to secure and continue to use on future course assessments; which items need review and potential revision before their next administration; and which are the poorest items, which should be eliminated from scoring on the current administration.

These PowerPoint slides accompanied a presentation by Linda Suskie delivered via Zoom on Tuesday, Apr. 25, 2017. Multiple-choice tests can have a place in many courses. If they’re well designed, they can yield useful information on student achievement of many important course objectives, including some thinking skills. An item analysis of the results can shed light on how well the questions are working as well as what students have learned. Viewers will be able to use principles of good question construction to develop tests, develop test questions that assess thinking skills as well as conceptual understanding, and use item analysis to understand and improve both test questions and student learning. Be sure to open the handouts file listed below as you view the presentation!

These handouts (minus quizzes for test security) accompanied a presentation by Linda Suskie delivered via Zoom on Tuesday, Apr. 25, 2017. Multiple-choice tests can have a place in many courses. If they’re well designed, they can yield useful information on student achievement of many important course objectives, including some thinking skills. An item analysis of the results can shed light on how well the questions are working as well as what students have learned. Viewers will be able to use principles of good question construction to develop tests, develop test questions that assess thinking skills as well as conceptual understanding, and use item analysis to understand and improve both test questions and student learning.

This is a diagnostic survey for undergraduate, non-science majors taking their first astronomy course. It was developed by the multi-institutional Collaboration for Astronomy Education Research (CAER) including, among many others, Jeff Adams, Rebecca Lindell Adrian, Christine Brick, Gina Brissenden, Grace Deming, Beth Hufnagel, Tim Slater, and Michael Zeilik. The first 21 questions are the content portion of the test, while the final 12 questions collect demographic information.

The Dynamics Concept Inventory is a multiple-choice exam with 29 questions. It covers 11 concept areas in rigid body dynamics and several more in particle dynamics. This is one of many concept tests designed to assess students' knowledge of particular scientific concepts.

Concept inventories or tests are designed to assess students' knowledge of particular scientific concepts. This link goes to the University of Maryland Physics Education Research Group, which provides information about how to access a variety of concept inventories, including mathematical modeling, understanding graphs, vector evaluation, the Force Concept Inventory, the Mechanics Baseline Test, and several other physics concepts.

This is one of many concept tests designed to assess students' knowledge of particular scientific concepts. This particular concept test is designed for students who have learned about linear signals and systems.

A customizable tool used for observations of teaching, the Teaching Dimensions Observation Protocol (TDOP) is a protocol that produces robust and nuanced depictions of classroom dynamics between teachers, students, and technologies. Grounded in research-based learning theories, the TDOP has been extensively field-tested and is being used by over 300 researchers, program evaluators, and professional developers to create detailed descriptions of what happens inside classrooms.

Inquiry-Based Learning (IBL) Guides
Discovering the Art of Mathematics includes a library of 11 inquiry-based books freely available for classroom use. These texts can be used as semester-long content for courses, or individual chapters can be used as modules to experiment with inquiry-based learning and to supplement typical topics with classroom-tested, inquiry-based approaches (e.g., rules for exponents, large numbers, proof). The topic index provides an overview of all our book chapters by topic.

Large classes are among the most important to teach well because many of the students enrolled in them are new to the college experience. The big challenges of teaching large classes include finding ways to engage students, providing timely feedback, and managing logistics. When faced with these challenges, many instructors revert to lectures and multiple-choice tests. There are alternatives. This special report describes some alternative teaching and course management techniques to get students actively involved without an inordinate amount of work on the instructor’s part. From the Teaching Professor, Magna.

Abstract: Writing multiple-choice test items to measure student learning in higher education is a challenge. Drawing on extensive scholarly research and experience, the author describes various item formats, offers guidelines for creating these items, and provides many examples of both good and bad test items. He also suggests some shortcuts for developing test items. Creating valid multiple-choice items is a difficult task, but it contributes greatly to the teaching and learning process for undergraduate, graduate, and professional-school courses.

Author: Thomas M. Haladyna, Arizona State University

Keywords: Multiple-choice items, selected response, test-item formats, examinations

Writing effective and useful additional questions can be challenging. Some of the most common mistakes in writing effective items are listed in this document using examples of Likert Scale items from the SRTEs. These pitfalls also apply to yes/no questions and open-ended questions. We recommend testing the questions with students before adding them to the Additional Questions section.

Heavily abridged version of Weinstein, Y., Madan, C. R., & Smith, M. A. (in press). Teaching the science of learning. Cognitive Research: Principles and Implications, prepared for and presented at "Reframing Testing as a Learning Experience: Three Strategies for Use in the Classroom and at Home" on Tuesday, Oct. 3, 2017.

Six key learning strategies from research in cognitive psychology can be applied to education: spaced practice, interleaving, elaborative interrogation, concrete examples, dual coding, and retrieval practice. However, a recent report (Pomerance, Greenberg, & Walsh, 2016) found that few teacher-training textbooks cover these principles; current study-skills courses also lack coverage of these important learning strategies. Students are therefore missing out on mastering techniques they could use on their own to learn effectively. This handout contains the six key learning strategies to address those concerns.

This flyer lists a variety of services provided by our consultants divided into 5 broad categories: Course Design & Planning, Teaching Strategies, Testing & Grading, Research on Teaching & Learning, and Course Evaluation.

In this rationale, Natalie Parker, Director of CETL and Distance Education, Texas Wesleyan University, advocates for replacing high stakes exams with multiple-attempt, low-stakes quizzes. The “testing effect”, in which students recall more information about a topic after testing than after re-reading the material, was first reported by Abbott in 1909. Subsequent studies have confirmed that repeated testing is an effective way for students to recall material.

Item Analysis (a.k.a. Test Question Analysis) is an empowering process that enables you to improve multiple-choice test score validity and reliability by analyzing item performance over time and making necessary adjustments. Knowledge of score reliability, item difficulty, item discrimination, and crafting effective distractors can help you make decisions about whether to retain items for future administrations, revise them, or eliminate them from the test item pool. Item analysis can also help you to determine whether a particular portion of course content should be revised or enhanced.
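The two core statistics named above can be computed directly from a matrix of scored responses. The sketch below is purely illustrative (the sample data and the upper/lower-group method are assumptions, not the algorithm used by any particular scanning system): item difficulty is the proportion of students answering an item correctly, and discrimination compares that proportion between the highest- and lowest-scoring groups of students.

```python
# Illustrative item analysis for a scored multiple-choice test.
# Rows are students, columns are items; 1 = correct, 0 = incorrect.
# (Hypothetical data; the ~27% tail groups are a common rule of thumb.)

def item_difficulty(scores, item):
    """Proportion of students answering the item correctly (the p-value)."""
    responses = [row[item] for row in scores]
    return sum(responses) / len(responses)

def item_discrimination(scores, item, group_frac=0.27):
    """Upper-lower discrimination index D: p(upper group) - p(lower group)."""
    # Pair each student's total score with their response to this item,
    # then sort students from lowest to highest total score.
    totals = sorted((sum(row), row[item]) for row in scores)
    n = max(1, round(group_frac * len(scores)))
    lower = [resp for _, resp in totals[:n]]   # bottom-scoring group
    upper = [resp for _, resp in totals[-n:]]  # top-scoring group
    return sum(upper) / n - sum(lower) / n

scores = [
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]

for i in range(len(scores[0])):
    p = item_difficulty(scores, i)
    d = item_discrimination(scores, i)
    print(f"Item {i + 1}: difficulty p = {p:.2f}, discrimination D = {d:+.2f}")
```

Items with very high or very low difficulty, or with a low (or negative) discrimination index, are the ones a score report would flag for review, revision, or removal.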

This article, from Stanford's teaching and learning center, addresses strategies for improving assessment and grading practices in the classroom.

This document provides an example of a test blueprint, which can be used to help guide test development and ensure that the test questions appropriately reflect the learning objectives of the unit that the test is designed to assess. It can also help students when they study for the test.

This file contains a list of "item-writing rules" that will help you write multiple-choice questions in a way that keeps the test focused on the content and prevents students from guessing the correct answer without knowing the material. The rules were developed by experts in the field of psychometrics, such as those who write questions for the SAT or GRE.

This article "The Agony and the Equity," produced by the Center for Teaching at Stanford University in 1992, addresses issues associated with fair testing and grading.

Form for assigning weights to exam items when the items do not all carry the same weight. Used when scanning multiple-choice tests that use bubble sheets.

This is a true/false quiz to test assumptions about student ratings.

This document is an example of a test blueprint (written for a research methods course), which can be created to help you match your test questions with your learning objectives *and* to help your students study for a test.

This activity involves students pairing up to answer questions about course content and can be used for review of material before a test, for example, or for practice.
