CLASSROOM ASSESSMENT IN ACTION

Mark D. Shermis and Francis J. Di Vesta

ROWMAN & LITTLEFIELD PUBLISHERS, INC.
Lanham • Boulder • New York • Toronto • Plymouth, UK

Published by Rowman & Littlefield Publishers, Inc.
A wholly owned subsidiary of The Rowman & Littlefield Publishing Group, Inc.
4501 Forbes Boulevard, Suite 200, Lanham, Maryland 20706
http://www.rowmanlittlefield.com

Estover Road, Plymouth PL6 7PY, United Kingdom

Copyright © 2011 by Rowman & Littlefield Publishers, Inc.

All rights reserved. No part of this book may be reproduced in any form or by any electronic or mechanical means, including information storage and retrieval systems, without written permission from the publisher, except by a reviewer who may quote passages in a review.

British Library Cataloguing in Publication Information Available

Library of Congress Cataloging-in-Publication Data

Shermis, Mark D., 1953–
Classroom assessment in action / Mark D. Shermis and Francis J. Di Vesta.
p. cm.
Includes bibliographical references and index.
ISBN 978-1-4422-0836-0 (cloth : alk. paper)—ISBN 978-1-4422-0837-7 (pbk. : alk. paper)—ISBN 978-1-4422-0838-4 (electronic)
1. Educational tests and measurements. I. Di Vesta, Francis J. II. Title.
LB3051.S47 2011
371.27—dc22
2010051258

The paper used in this publication meets the minimum requirements of American National Standard for Information Sciences—Permanence of Paper for Printed Library Materials, ANSI/NISO Z39.48-1992.

Printed in the United States of America

Contents

Preface ix
Acknowledgments xi
A Note to the Reader: Theory to Practice xiii

1 Orientation to Assessment 1
   A Definition of Assessment 2
   The Components of Assessment 3
   Who Uses Classroom Assessment and Why? 4
   Policy Makers’ Use of Assessment: The No Child Left Behind Act 11
   How You Will Incorporate Assessment in Instruction 12
   Your Knowledge about Assessment 14
   The Consequences of Poor Assessment Practices 19
   Contemporary Classroom Assessment 22
   Summary 23

2 Planning Assessments 27
   Scores and Their Interpretation 27
   Criterion-Referenced Assessments 28
   Norm-Referenced Assessments 30
   Making Plans for Assessments 34
   Steps in Assessment and Reporting 36
   Creating Blueprints for Specific Assessments 39
   Test Blueprints 46
   Assessment Modalities and the Role of Observation 50
   Summary 51

3 Observation: Bases of Assessment 55
   Making Observations 55
   Direct Observation 60
   Teachers as Observers 63
   Components and Purposes of Observation 70
   Making Observations 73
   Summary 79

4 Formative Assessment: Using Assessment for Improving Instruction 83
   Distinctions between Summative and Formative Assessment 85
   Formal and Informal Formative Assessment 88
   Interpretation of Feedback in Formative Assessment 90
   Using Assessments as Evidence of Progress 94
   The Dynamics of Formative Assessment 95
   Feedback 95
   Asking the Right Questions in Assessment 102
   Implementing Formative Assessment 105
   Designing Appropriate Formative Tests 108
   Summary 112

5 Performance Assessment 119
   Definitions 120
   Performance Assessments 121
   Behavioral Objectives Redux 130
   Creating Rubrics 133
   Advantages of Using Rubrics 140
   Improving Consistency of Ratings 143
   Portfolio Assessment 144
   Summary 149

6 Developing Objective Tests 153
   Considerations in Choosing a Test Format 153
   True-False Tests 157
   Multiple-Choice Tests 161
   Fill-in-the-Blank (Completion) Tests 172
   Matching Tests 176
   The Challenge of Assessing Higher-Order Thinking 179
   Summary 181

7 Developing Subjective Tests 185
   Constructed-Response Tests 185
   Short-Answer Questions 190
   Essay Questions 192
   Evaluating Essays 198
   Summary 207

8 Selecting Standardized Tests 209
   Objectives 210
   Principles for Selecting Standardized Tests 211
   Sources to Guide Selection of Standardized Tests 218
   Buros Mental Measurements Yearbook and Tests in Print 218
   The ETS Test Collection 225
   ERIC 225
   Standards for Educational and Psychological Testing 226
   Standardized Tests and Classroom Assessments 230
   Summary 233

9 Technology in Assessment 237
   Technological Formats for Instruction: Emergence of Formative Assessment 237
   Technology and Assessment 243
   Some Available Testing Software 249
   Measurement and Reports of Problem Solving 253
   Expansion of Computer-Based Assessment Technology 256
   Observations 262
   Summary 264
   Appendix: Resource List of Websites Pertaining to Assessment 266

10 Improving Tests 273
   The Context of Test Improvement 273
   Item Improvement 277
   Testing for Mastery 289
   Keep an Item Bank or Item Pool: Putting It All Together 295
   Some General Guidelines for Improving Test Items 297
   Summary 300

11 Domain-Specific Assessment and Learning 305
   Perspectives on Subject-Matter Instruction and Assessment 306
   Constructivism in Assessment 313
   Assessment at Instructional Phases 324
   Constructing and Using Questionnaires to Measure Dispositions, Metacognitions, and Affect 338
   Summary 342
   Helpful Readings 344

12 Grading 347
   On the Nature of Learning in Public Schools 347
   Technology Applied to Grading 357
   Reporting Grades: The Report Card 361
   Beyond Grades 365
   Summary 367

13 Supplementary Assessments of Individual Differences 371
   A Classroom Model Underlying Assessment 371
   Using Measures of Individual Differences 376
   Categories of Individual Differences 377
   Culture as a Source of Individual Differences 378
   Measuring by Use of Self-Reports: Assessment of Affect and Learner Motivations 378
   Graphic Organizers of Course Content Achievement 388
   Assessment Based on Multiple Intelligences 391
   Self-Reports of Preferences: Assessment of Learning Styles 398
   Response to Intervention (RTI): A System for Instruction-Based Assessment 399
   Dynamic Assessment: Measuring Change and Potential for Learning 404
   Integrating Assessments: The Case Study 409
   Summary 414

14 High-Stakes Testing: Policy and Accountability 419
   An Overview 420
   Policy and Accountability 422
   Assessment for Educational Policy and Accountability: The National Assessment of Educational Progress 427
   Assessment in Policy for Accountability: The No Child Left Behind (NCLB) Legislation 433
   Summary 452

15 Assessment and Best Practices 457
   Purposes Served by Best Practices Studies 458
   Teacher Preparation for Assessment Practices 459
   Benchmarking as a Component of Best Practice Studies 461
   Best Practices Research 463
   Best Practices for Educational Improvement 468
   Using Best Practices in Teacher Education 475
   Summary 479

16 Test Bias, Fairness, and Testing Accommodations 483
   Obtained Scores, True Scores, and Error 484
   Student Characteristics and Normative Comparisons 486
   Bias in the Construction of Assessments 488
   Bias through Question Formats 490
   Bias in the Administration of Assessments 494
   Fairness 499
   Testing Accommodations 502
   Summary 516

Index 523
About the Authors 543

Preface

THIS BOOK EVOLVED because it had to. We had both been teaching assessment courses to young, enthusiastic teacher candidates, but had been subjecting them to one of two kinds of texts.

The first kind of textbook arose out of the traditional psychometric approach. It was chock full of formulas and distinctions that seemed unimportant (or seemingly irrelevant) to the construction of a fifth-grade social studies exam. Sure, classroom teachers need to know something about the consistency of scores their students receive on the assessments they construct, but mastery of this concept probably doesn’t require the memorization of KR-20 or coefficient alpha formulas. Yes, classroom teachers should understand how student scores relate to a test they just administered, but whether these teacher candidates need to comprehend the definition of consequential validity is a slightly different matter. These texts tended to emphasize the content domain of testing.

At the other end of the spectrum were those textbooks that focused primarily or solely on the processes associated with assessment. For example, one challenge for aspiring teachers is to develop expectations as to what constitutes excellent, good, fair, and poor performance levels for the grades they might be teaching. Would they know a good seventh-grade essay if they saw it? Should a fifth grader be expected to incorporate perspective in a pencil drawing? These texts tended to emphasize the relationships between the curricula being taught and how assessments in those areas might be conceived or calibrated, but sometimes at the expense of ignoring the actual mechanics of testing.
What we were looking to create was a text that provided a strong rationale for integrating assessment and instruction into a functional process, one that offered concrete guidelines on how