ERIC ED531639: Technical Adequacy of the easyCBM Reading Measures (Grades 3-7), 2009-2010 Version. Technical Report #1005
Technical Report #1005

Technical Adequacy of the easyCBM Reading Measures (Grades 3-7), 2009-2010 Version

Leilani Sáez, Bitnara Park, Joseph F.T. Nese, Elisa Jamgochian, Cheng-Fei Lai, Daniel Anderson, Akihito Kamata, Julie Alonzo, and Gerald Tindal
University of Oregon

Published by Behavioral Research and Teaching, University of Oregon
175 Education, 5262 University of Oregon, Eugene, OR 97403-5262
Phone: 541-346-3535 • Fax: 541-346-5689
http://brt.uoregon.edu

Note: Funds for the data set used to generate this report come from two federal grants awarded to the University of Oregon by the Institute of Education Sciences, U.S. Department of Education: Reliability and Validity Evidence for Progress Measures in Reading (Award #R324A100014, funded 2010-2012) and Postdoctoral Fellowships on Progress Monitoring in Reading and Math (Award #R305B080004, funded 2008-2012). Support for this work was also provided by the Center on Teaching and Learning (CTL) and Behavioral Research and Teaching (BRT), College of Education, University of Oregon.

Copyright © 2010. Behavioral Research and Teaching. All rights reserved. This publication, or parts thereof, may not be used or reproduced in any manner without written permission.

The University of Oregon is committed to the policy that all persons shall have equal access to its programs, facilities, and employment without regard to race, color, creed, religion, national origin, sex, age, marital status, disability, public assistance status, veteran status, or sexual orientation. This document is available in alternative formats upon request.

Abstract

In this series of studies, we investigated the technical adequacy of three curriculum-based measures used as benchmarks and for monitoring progress in three critical reading-related skills: fluency, reading comprehension, and vocabulary.
In particular, we examined the following aspects of the easyCBM measures across grades 3-7 at fall, winter, and spring time points: internal consistency of the reading comprehension measure, construct and criterion-related validity, and practical utility for classifying students' performance levels. Data were analyzed both as an aggregated grade-level sample (approximately 3,700 students per grade) and within subgroups defined by ethnic background, achievement quartile, special education eligibility, and English Language Learner (ELL) status.

Progress monitoring assessments are a key component of many school improvement efforts, including the Response to Intervention (RTI) approach to meeting students' academic needs. In an RTI approach, teachers first administer a screening or benchmarking assessment to identify students who need supplemental interventions to meet grade-level expectations, then use a series of progress monitoring measures to evaluate the effectiveness of the interventions they are using with those students. When students fail to show expected levels of progress (as indicated by "flat line scores," or little improvement on repeated measures over time), teachers use this information to make instructional modifications, with the goal of finding an intervention or combination of instructional approaches that will enable each student to make adequate progress toward grade-level proficiency and content standards. In such a system, it is critical to have reliable measures that assess the target construct and are sensitive enough to detect improvement in skill over short periods of time.

Conceptual Framework: Curriculum-Based Measurement and Progress Monitoring

Curriculum-based measurement (CBM), long a bastion of special education, is gaining support among general education teachers seeking a way to monitor the progress their students are making toward achieving grade-level proficiency in key skill and content areas.
While reading in particular has received a great deal of attention in the CBM literature, a growing body of work is beginning to appear in the area of mathematics CBM. By definition, CBM is a formative assessment approach. By sampling skills related to the curricular content covered in a given year of instruction, yet not specifically associated with a particular textbook, CBMs provide teachers with a snapshot of their students' current level of proficiency in a particular content area as well as a mechanism for tracking the progress students make in gaining desired academic skills throughout the year. Historically, CBMs have been very brief, individually administered measures (Deno, 2003; Good, Gruba, & Kaminski, 2002), yet they are not limited to the "one minute timed probes" with which many people associate them. In one of the early definitions of curriculum-based measurement, Deno (1987) stated that "the term curriculum-based assessment generally refers to any approach that uses direct observation and recording of a student's performance in the local school curriculum as a basis for gathering information to make instructional decisions… The term curriculum-based measurement refers to a specific set of procedures created through a research and development program… and grew out of the Data-Based Program Modification system developed by Deno and Mirkin (1977)" (p. 41). He noted that CBM is distinct from many teacher-made classroom assessments in two important respects: (a) the procedures reflect technically adequate measures ("they possess reliability and validity to a degree that equals or exceeds that of most achievement tests," p. 41), and (b) "growth is described by an increasing score on a standard, or constant task. The most common application of CBM requires that a student's performance in each curriculum area be measured on a single global task repeatedly across time" (p. 41).
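The growth Deno describes, "an increasing score on a standard, or constant task," is typically quantified by fitting a trend line to a student's repeated scores on that task. A minimal sketch using hypothetical weekly fluency scores (not data from this report):

```python
import numpy as np

# Hypothetical weekly passage-reading scores (words correct) for one student
weeks = np.arange(8)
scores = np.array([52, 55, 54, 58, 61, 60, 64, 67])

# Least-squares trend line: the slope is the average gain per week
slope, intercept = np.polyfit(weeks, scores, deg=1)
print(round(slope, 2))  # ≈ 2.04 words correct per week

# A near-zero slope would be the "flat line" pattern that signals the
# current intervention is not producing adequate progress
```

The slope, rather than any single score, is the decision-relevant quantity in progress monitoring, which is why cross-form equivalence (discussed below) matters so much.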
In the three decades since Deno and his colleagues introduced CBM, progress monitoring probes, as they have come to be called, have increased in popularity, and they are now a regular part of many schools' educational programs (Alonzo, Tindal, & Ketterlin-Geller, 2006). However, CBMs, even those widely used across the United States, often lack the psychometric properties expected of modern technically adequate assessments. Although the precision of instrument development has advanced tremendously in the past 30 years with the advent of more sophisticated statistical techniques for analyzing tests on an item-by-item basis, rather than relying exclusively on comparisons of means and standard deviations to evaluate the comparability of alternate forms, the world of CBMs has not always kept pace with these statistical advances.

A key feature of assessments designed for progress monitoring is that alternate forms must be as equivalent as possible to allow meaningful interpretation of student performance data across time. Without such cross-form equivalence, changes in scores from one testing session to the next are difficult to attribute to changes in student skill or knowledge; improvements in student scores may, in fact, be an artifact of the second form of the assessment being easier than the form administered first. The advent of more sophisticated data analysis techniques (such as the Rasch modeling used in the development of the easyCBM progress monitoring and benchmarking assessments) has made it possible to increase the precision with which we develop and evaluate the quality of assessment tools.

In this technical report, we provide the results of a series of studies to evaluate the technical adequacy of the easyCBM progress monitoring assessments in reading, designed for use with students in grades 3-7.
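The Rasch (one-parameter logistic) model mentioned above relates a student's ability and an item's difficulty, both on a logit scale, to the probability of a correct response; once items are calibrated, developers can check that alternate forms demand comparable performance. A minimal sketch with hypothetical item difficulties, not the actual easyCBM calibrations:

```python
import numpy as np

def rasch_p(ability: float, difficulty: float) -> float:
    """Rasch (1PL) probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(ability - difficulty)))

# Hypothetical calibrated item difficulties (logits) for two alternate forms
form_a = np.array([-1.2, -0.4, 0.1, 0.6, 1.1])
form_b = np.array([-1.1, -0.5, 0.2, 0.5, 1.0])

# Expected raw score for a student of average ability (theta = 0) on each
# form; near-identical expectations indicate the forms are interchangeable
expected_a = sum(rasch_p(0.0, b) for b in form_a)
expected_b = sum(rasch_p(0.0, b) for b in form_b)
print(round(expected_a, 2), round(expected_b, 2))  # ≈ 2.45 vs 2.47
```

Under this model, a gain in a student's raw score across forms built to the same difficulty targets can be attributed to growth in ability rather than to an easier second form.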
This assessment system was developed for use by educators interested in monitoring the progress their students make in acquiring skills in the constructs of oral reading fluency and comprehension. Additional technical reports present the results of similar studies of the easyCBM assessments in mathematics (Anderson et al., 2010; Nese et al., 2010) and in reading with a focus on earlier grades (Jamgochian et al., 2010; Lai et al., 2010).

The easyCBM™ Progress Monitoring Assessments

The online easyCBM™ progress monitoring assessment system, launched in September 2006 as part of a Model Demonstration Center on Progress Monitoring, was funded by the Office of Special Education Programs (OSEP). At the time this technical report was published, there were 92,925 teachers with easyCBM accounts, representing schools and districts spread across every state in the country. During the 2008-2009 school year, the system averaged 305 new accounts registered each week, and the popularity of the system continues to grow; in October 2010 alone, 11,885 new teachers registered for accounts. The online assessment system provides both universal screener assessments for fall, winter, and spring administration and multiple alternate forms of a variety of progress monitoring measures designed for use in K-8 school settings. As part of state funding for Response to Intervention (RTI), states need technically adequate measures for monitoring progress. Given the increasing popularity of the easyCBM online assessment system, it is imperative that a thorough analysis of the measures' technical adequacy be conducted and the results shared with research and practitioner communities. This technical report addresses that need directly, providing the results of a series of studies examining the technical adequacy of the 2009/2010 version of the easyCBM assessments in reading.
Methods

In this section, we describe the setting and participants, measures, research procedures, and data analyses.

Setting and Participants

During the 2009-2010 school year, 18,755 students in grades 3-7 participated in this study, with roughly 3,700 students represented at each grade level (n range per grade = 3,690-3,886). Students were recruited from three school districts (n = 61 elementary, 36 middle, and 10 elementary-middle combined schools) reflecting metropolitan and suburban locations within a Pacific Northwestern state. All general education students in attendance on the days of testing were included in the sample. Nearly half of the students were female (range = 47.4% in fourth grade to 49.8% in seventh grade). Across grades, the majority of students were White (61.9%-65.3%). On average, Hispanic students represented 22% of the sample at each grade except sixth grade (6.7%). In addition, Asian/Pacific Islander students comprised approximately 5% of the sample at each grade (range = 4.7%-5.7%). The American Indian/Alaskan Native, Black, and Multi-Ethnic subgroups each represented less than 5% of the sample at each grade. Students eligible to receive special education (SPED) services represented approximately 17% of each grade-level sample, except seventh grade (27.3%). The number of students identified as English Language Learners (ELL) varied more among the grades but, on average, represented 7% of the sample (10.0% in third grade, 8.1% in fourth grade, 7.3% in fifth grade, 6.2% in sixth grade, and 5.1% in seventh grade). See the "Student Characteristics" tables in the Results section for specific student demographic information pertaining to the particular data analyses conducted.
Measures

The analyses in the current study include measures from the easyCBM battery of reading assessments: Word Reading Fluency (WRF; third grade only), Passage Reading Fluency (PRF), Multiple Choice Reading Comprehension (MCRC), and Vocabulary (VOC). EasyCBM, originally developed in 2006, currently has more than 60,000 users across 50 states. The fluency measures are administered via paper-pencil, and the comprehension and vocabulary measures are administered online. EasyCBM is administered at three time points during the school year (fall, winter, and spring) to identify students at risk for academic difficulties, except for Vocabulary, which is administered only twice. In this report we examine relationships among these measures, both collectively and individually, with the Oregon state assessment in reading, the Oregon Assessment of Knowledge and Skills (OAKS), which is also an online assessment.

Word Reading Fluency (WRF; third grade only). Students read aloud as many decodable and sight words as possible within 30 seconds, moving left to right and then down the rows on a sheet of paper. Errors and skipped words are counted as incorrect, while self-corrections and words read correctly are counted as correct. Students receive one point for every word read correctly (maximum possible = 60 words).

Passage Reading Fluency (PRF). Students read aloud a short, original narrative passage printed on a single side of a sheet of paper within 60 seconds. Examiners follow along on their own test protocol, marking as errors any words skipped or read incorrectly. If a student pauses more than three seconds on a word, the examiner supplies the word and marks it as incorrect; self-corrections are counted as correct. The passages were written to be at a middle-of-the-year reading level for each grade.
The score, total words read correctly, is calculated by subtracting the number of errors from the total words read (maximum possible = approximately 250 words).

Multiple Choice Reading Comprehension (MCRC). Students first read an original narrative passage (approximately 1,500 words), then answer multiple-choice questions based on the story, all online. All passages were developed and reviewed to fit grade level using the Flesch-Kincaid index feature available in Microsoft Word (Alonzo & Tindal, 2008). Reading comprehension is assessed by performance on seven literal, seven inferential, and six evaluative comprehension questions. Students receive credit for each question correctly answered (maximum possible = 20 items). Each question comprises a question stem and three possible answers: the correct answer and two incorrect, but plausible, distractors. There is no time limit for passage reading, and students are allowed to look back at the passage while answering the comprehension questions. Typically, students spend approximately 30 minutes completing this measure.

Vocabulary (VOC). Students were presented with 25 multiple-choice vocabulary items, each with one correct answer and two distractors (one of which was a synonym, but an incorrect choice). Vocabulary words were drawn from a set of word lists developed by vocabulary researcher Edgar Dale at Ohio State University. In the primary grades, the test measures structural word knowledge of high-frequency words; in the intermediate grades, semantic, rather than syntactic, elements of words are assessed. Typically, students spend approximately 15 minutes completing this measure.

Oregon Assessment of Knowledge and Skills in Reading (OAKS). The OAKS is Oregon's computer-adaptive statewide assessment, administered up to three times per year within an eight-month testing window. However, paper-pencil, Spanish, and Braille versions are
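The WRF and PRF scoring rules described under Measures reduce to simple arithmetic. A minimal sketch with hypothetical counts (not data from this report):

```python
def wrf_score(words_correct: int) -> int:
    """Word Reading Fluency: one point per word read correctly
    in 30 seconds, capped at the 60-word maximum."""
    return min(words_correct, 60)

def prf_score(total_words_read: int, errors: int) -> int:
    """Passage Reading Fluency: total words read aloud in 60 seconds
    minus the errors marked by the examiner."""
    return total_words_read - errors

# Hypothetical students
print(wrf_score(48))      # 48
print(prf_score(142, 5))  # 137
```

Because self-corrections count as correct under both measures, only words skipped, misread, or supplied by the examiner after a three-second pause enter the error count.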