ERIC ED562526: Fixing the Academic Performance Index. Policy Brief 13-1

JANUARY 2013
Policy Brief 13-1

Fixing the Academic Performance Index

Morgan S. Polikoff, University of Southern California
Andrew McEachin, University of Virginia

Executive Summary

The Academic Performance Index (API) is the centerpiece of California's state assessment and accountability system. With the recent passage of SB1458 and the pending reauthorization of both state and federal accountability legislation, there is now an unprecedented opportunity to improve the API for next generation accountability in California. In this policy brief Morgan S. Polikoff and Andrew McEachin draw on their own previous work and more than a decade's worth of research on effective accountability policy design to describe the lessons that have been learned and to propose policy changes that would improve the API.

The research literature on accountability systems has produced a number of key findings with regard to API and API-like measures of school performance. For instance, the API is heavily influenced by student demographics, as are other status measures of achievement. Year-to-year changes in API, often heralded by schools and the media, are highly unstable and do not reflect sustained improvement. The API is biased against small schools and high schools and in favor of larger schools and elementary schools. Perhaps most importantly, the API is too narrowly focused on test-based measures of performance. Fortunately, many of these problems are relatively simple to fix, as the operation of current accountability systems in other states and some school districts has made clear.

The authors conclude with a number of recommendations for improving the API, including a) tracking the achievement of individual students across years; b) using multiple years of school-level data to measure growth in achievement; c) incorporating both growth and levels of achievement in identifying schools for intervention/support; and d) exploring alternative measures of school performance. While these solutions will not solve all of California's educational problems, they will certainly help make our state's assessment and accountability systems more effective in determining where to target improvement efforts.

About the Authors

Morgan Polikoff is an Assistant Professor at the University of Southern California Rossier School of Education. He studies standards, assessment, and accountability policies and their influences on teachers' instruction and student learning.

Andrew McEachin is an IES Postdoctoral Fellow at the Curry School of Education and a Research Associate at UVa's Center on Education Policy and Workforce Competitiveness. His research interests include the design and impact of educational accountability policies, teacher labor markets, and math education policy.

Thirteen years ago, the California State Senate passed the Public Schools Accountability Act (PSAA), which created the Academic Performance Index (API) as the primary measure for accountability. The PSAA was intended to hold schools accountable for students' achievement and progress, in order to ensure that all students were prepared to become lifelong learners able to succeed in the 21st century. Shortly after the implementation of PSAA, the federal government reauthorized ESEA, also known as the No Child Left Behind Act of 2001. Since then, schools have operated under a dual-accountability system with both state and federal achievement benchmarks. Although only the federal NCLB system includes real monetary and other sanctions, schools are still very responsive to their performance under PSAA, and specifically to their API scores and growth. The local media, realtors, and other stakeholders look to the API as a measure of school progress. Schools' API scores have been used in the allocation of high-profile interventions and sanctions, such as the Quality Education Improvement Act (QEIA), District Assistance Intervention Teams (DAIT), and School Improvement Grants (SIG). They are sources of pride for local schools and districts, and even a few points worth of gains in API scores are often cause for celebration.

The API serves an important purpose in California's accountability system, but researchers have learned quite a lot about the design and impact of school performance measures in accountability systems since it was initially introduced (e.g., Kane and Staiger, 2002; Linn, 2004; McEachin and Polikoff, 2012). A number of these findings are directly relevant to the API and the future of California's accountability system. These lessons question the underlying assumptions of the API and the utility of the index for helping the state improve underperforming schools; they also challenge stakeholders' ability to effectively use the API as a means of assessing school quality (e.g., parents deciding where to buy a house). Our purpose in this brief is to discuss existing research and present new evidence as to the potential improvement of the API. Beyond merely criticizing the measure, we offer a set of detailed suggestions for revising the API in the next round of California accountability policy.

Governor Brown recently signed legislation (SB1458) that calls for a thorough revision of the API, particularly in the state's high schools. SB1458 requires that student test scores comprise no more than 60 percent of the API in high schools, with the remaining 40+ percent to include other measures of school and student performance. In elementary schools, student test scores must comprise at least 60 percent of the API. SB1458 opens the door to a reconsideration of the way California uses test scores to measure school performance—the main focus of this policy brief—and also to consideration of the kinds of indicators that might be included in the API to measure other important educational goals. The state should capitalize on this moment to make improvements in the PSAA, and specifically in the API.
PSAA and API

The California State Senate passed SB376 in 1997, which created the Standardized Testing and Reporting (STAR) system and the California Standards Tests (CST). The CSTs are offered in grades 2 through 11 in math, English Language Arts (ELA), history, and science. Students receive both a scale score between 150 and 600 and one of five performance levels: Far Below Basic (FBB), Below Basic (BB), Basic (B), Proficient (P), and Advanced (A). A score at the proficient level is the state goal and is considered on grade-level for accountability purposes. The CSTs are the primary assessments used in the API and Adequate Yearly Progress (AYP) performance measures in PSAA and NCLB, respectively.

The API is a weighted average of students' ELA, math, history, and science CST performance levels, which is calculated each year for each public school in the state. Each student's score on each state exam is translated into a point value: 1000 for advanced, 875 for proficient, 700 for basic, 500 for below basic, and 200 for far below basic. Schools receive only a single API score between 200 and 1000, calculated using an algorithm that weights students' performance in each subject, while heavily favoring mathematics and ELA. In the 2009-10 school year, the weights for the grade 6-8 API were 51.4 percent for ELA, 34.3 percent for math, 7.1 percent for science, and 7.1 percent for history. High schools are also evaluated on student performance on the California High School Exit Examination.

The PSAA establishes the API as the primary measure for state accountability. Schools are expected to have an API score of 800; those with an API lower than 800 are expected to make up at least five percent of the difference between their current API and 800 each year. Schools that meet or exceed the API requirements can apply for special recognition as a California Distinguished School, a National Blue Ribbon School, or a Title I Academic Achievement Award School. Schools that do not meet their API goals are subject to interventions and sanctions from the state, although these may not occur in practice due to lack of funding.

Unlike the performance measure used in the federal accountability system (NCLB's AYP), schools under PSAA benefit from moving students to higher performance levels along the entire achievement distribution. However, schools benefit more from moving lower performing students to higher proficiency levels than moving higher performing students. Schools benefit the most from moving students from Far Below Basic to Below Basic, as is evident in the fact that the gap between the points allotted for these two levels is the largest. For example, assume school A is only held accountable for one subject and has 1000 students with 20 percent of students at each level, advanced through far below basic. This school would have an API of approximately 655, with a goal of raising its API score 8 points the following year. As shown in Figure 1, the school would reach this goal if it only moved 26 Far Below Basic students to Below Basic, while it would have to move 62 students from Proficient to Advanced to meet the API growth goal.

[Figure 1. Number of students in a school of 1,000 at each performance level required to move a school 8 points in API in one year. Horizontal axis: transitions from far below basic to below basic, below basic to basic, basic to proficient, and proficient to advanced; vertical axis: number of students.]

Similar to the subgroup provisions in NCLB's AYP system, schools also receive API scores and growth targets for racial and ethnic subgroups, English Language Learners, socioeconomically disadvantaged students, and students with special needs, which are calculated in a similar manner as the school-wide API scores and growth targets. The intent of the subgroup API scores and growth targets is that schools will have incentives to focus on traditionally lower performing subgroups of students, and, in turn, reduce achievement gaps.
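To make the point system concrete, the sketch below works through the arithmetic behind the school A example and Figure 1. It is a simplified, single-subject illustration: it ignores the subject weights quoted above and treats the growth goal as a flat 8 unrounded API points, so it approximates rather than reproduces the official CDE calculation. Figure 1's slightly smaller counts for the outermost transitions (26 and 62 students) reflect the rounding conventions of the official formula.

```python
# Single-subject illustration of the API point system, following the school A
# example above: 1,000 students, 20 percent at each performance level.
# The official CDE algorithm also applies subject weights, so this is only a sketch.
import math

POINTS = {"far below basic": 200, "below basic": 500, "basic": 700,
          "proficient": 875, "advanced": 1000}
LEVELS = list(POINTS)

def api(counts):
    """Weighted-average API for {performance level: number of students}."""
    n = sum(counts.values())
    return sum(POINTS[lvl] * k for lvl, k in counts.items()) / n

school_a = {lvl: 200 for lvl in LEVELS}   # 20% of 1,000 students per level
base = api(school_a)                      # 655.0, as stated in the text
growth_target = 8                         # points of API growth sought

print(f"Base API: {base:.0f}")
for frm, to in zip(LEVELS[:-1], LEVELS[1:]):
    # Each student moved up one level raises the school mean by the point gap
    # divided by enrollment, so fewer moves are needed at the bottom of the scale.
    gain_per_student = (POINTS[to] - POINTS[frm]) / sum(school_a.values())
    needed = math.ceil(growth_target / gain_per_student)
    print(f"{frm} -> {to}: {gain_per_student:.3f} API points per student, "
          f"~{needed} students for +{growth_target}")
```

Running this shows the asymmetry the brief describes: roughly 27 moves from far below basic to below basic deliver 8 points, versus roughly 64 moves from proficient to advanced.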
Lessons Learned

There are several serious problems with API that limit its potential utility and effectiveness (McEachin & Polikoff, 2012). Most of these problems also apply to federal accountability measures, but the California system is the focus of this brief. In the next section, we present results drawing from our own work and the work of others that highlight what we believe are the most important lessons learned. Where possible, we illustrate the lessons using statewide school-level performance data from the last decade.

Lesson 1: API Scores Primarily Measure Student Demographics

The API first holds schools accountable according to whether they have an API score above 800. As is well known in education research, student test scores are very highly correlated with student poverty and racial/ethnic characteristics (Linn, 2004). This is especially true of measures of achievement levels (e.g., API) rather than achievement growth (e.g., value-added). For example, there is a strong negative correlation (approximately -.7) between the percent of students in a school who qualify for the federal Free/Reduced Price Lunch (FRPL) program and the school's API level. The average school in California has about 58 percent FRPL students and an API of 780. A school with one standard deviation more FRPL students (a school with approximately 83 percent FRPL) would be expected to have an API of approximately 716. Although NCLB's AYP measure often receives more criticism, the relationship between student poverty and school performance is equally strong under both the API and AYP systems. In short, the measures of school quality used under both NCLB and PSAA are biased against schools that serve impoverished students, as are all measures based on student test score levels (Balfanz et al., 2007).

The correlation between API scores and student demographics would be less problematic if schools could also meet their API goals by demonstrating meaningful growth over time, using a measure of growth that was both stable and unrelated to student demographics. To be sure, the growth in API from one year to the next (e.g., a school's Growth API in 2011 minus its Base API in 2010) is not as strongly related to student characteristics. In fact, there is almost no relationship (correlation less than .1) between a school's growth in API over two years (henceforth, API Growth) and its share of FRPL students. However, this measure of growth comes at a significant cost, which we discuss next.
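The 780-to-716 comparison follows directly from the reported correlation once the implied spread of API scores is backed out; the short calculation below does this. The FRPL standard deviation of roughly 25 percentage points and the implied API standard deviation are inferences from the numbers quoted in the text, not figures reported by the authors, so treat this as a back-of-the-envelope check rather than the brief's own analysis.

```python
# Back-of-the-envelope check of the FRPL/API relationship described above.
# All inputs except the implied API spread come from the text; the API standard
# deviation is inferred, not reported by the authors.

corr = -0.7              # correlation between school FRPL share and API level
mean_frpl = 58.0         # average percent FRPL students
sd_frpl = 83.0 - 58.0    # one SD more FRPL corresponds to ~83 percent
mean_api = 780.0         # average school API
api_at_plus_one_sd = 716.0

# In a simple standardized regression, a one-SD increase in FRPL shifts the
# expected API by corr * sd_api, so the implied API spread is:
implied_sd_api = (mean_api - api_at_plus_one_sd) / abs(corr)   # ~91 points

def expected_api(frpl_percent):
    """Expected API for a school, given its FRPL share, under the linear fit."""
    z = (frpl_percent - mean_frpl) / sd_frpl
    return mean_api + corr * z * implied_sd_api

print(f"Implied API standard deviation: {implied_sd_api:.0f}")
print(f"Expected API at 83% FRPL: {expected_api(83):.0f}")   # ~716
print(f"Expected API at 33% FRPL: {expected_api(33):.0f}")   # ~844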
Lesson 2: API Growth is a Highly Unstable Measure

The API growth targets were created to provide schools with API scores below 800 an opportunity to demonstrate improvement over time. The growth targets can also be used to differentiate between schools making significant progress towards the goal of 800 and those with both low API scores and low growth. Although this is a desirable goal, API Growth is an unstable measure of changes in school performance. Conceptually it is a weak measure of school growth because it does not track the performance of individual students, but rather successive cohorts of students. It is straightforward to see that such a measure would be heavily affected by changes in student demographics and performance across cohorts, rather than reflecting the contributions of schools to individual students' learning (Ho, 2008; Kane & Staiger, 2002; Linn, 2004). Such a measure is also strongly affected by measurement error – much more so than measures based on achievement levels.

There are a number of ways to illustrate the less than desirable characteristics of API Growth. One way is to think about the intent of the measure – to identify schools consistently improving toward a goal of API 800. Our analysis finds that the measure is unable to consistently identify schools that are improving – or not improving. Rather, it tends to reward single-year aberrations in test scores in a school. For instance, the correlation of one year's API growth (e.g., 2009-2010) with the next year's (2010-2011) is -.29. This means that schools with large API gains in one year are likely to have among the smallest API gains in the next year. The judgment of school performance based on a "noisy" measure that fluctuates from year to year does not inspire confidence in the possibility that the measure would contribute to an effective accountability system (McEachin & Polikoff, 2012).
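The negative year-to-year correlation is exactly what one would expect when a growth measure is built from successive cohorts rather than from the same students. The simulation below is a hypothetical illustration under assumed parameters, not the authors' analysis: each school has a stable true level and a small steady trend, each year's cohort adds fresh noise, and consecutive year-to-year gains come out strongly negatively correlated.

```python
# Hypothetical simulation (not the authors' analysis) of why cohort-to-cohort
# changes in a level measure are unstable and negatively autocorrelated.
import random

random.seed(1)
n_schools, n_years = 5000, 4
noise_sd = 15          # assumed cohort-to-cohort noise in API points
trend_sd = 2           # assumed variation in schools' true annual improvement

growth_t, growth_t1 = [], []
for _ in range(n_schools):
    true_level = random.gauss(700, 80)
    trend = random.gauss(3, trend_sd)
    # Observed API each year = true level + steady trend + fresh cohort noise.
    observed = [true_level + trend * y + random.gauss(0, noise_sd)
                for y in range(n_years)]
    gains = [observed[y + 1] - observed[y] for y in range(n_years - 1)]
    growth_t.append(gains[0])      # e.g., the 2009-10 gain
    growth_t1.append(gains[1])     # e.g., the 2010-11 gain

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# With mostly-noise gains, consecutive gains correlate around -0.5, the same
# direction as the -.29 reported for actual API Growth.
print(f"Correlation of consecutive gains: {corr(growth_t, growth_t1):.2f}")
```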
Lesson 3: API and API Growth Appear Biased Against Middle and High Schools

While it is undoubtedly difficult to compare schools at different levels to one another, it is problematic that API scores are so strongly biased against middle and high schools. For instance, 58 percent of middle schools and 82 percent of high schools had API scores below 800 in 2011, as compared to just 45 percent of elementary schools. If high schools are in fact doing a much worse job than elementary schools in educating students, then these patterns are perhaps not problematic. However, it is far from clear that this is the case. There are a number of potential reasons why middle and high schools fare worse under the PSAA system. For example, it may be that the tests used in elementary grades in the API system do not account for skills and knowledge students need in order to learn the more difficult material in middle and high school. If this is so, then the current system sends the signal that a majority of elementary schools are meeting the stated goals under PSAA, while a majority of their students are in fact not ready for the more difficult material presented in later years. This would partially explain the dramatic performance differences among elementary, middle, and high schools.

Lesson 4: API Growth Appears Biased Against Small Schools

In addition to the bias against middle and high schools, it is well known that growth measures of achievement, of which API Growth is an example, are biased against small schools—those serving fewer students (Kane & Staiger, 2002). Conceptually, in a school serving just 100 students, one student's fluctuation in scores will have a greater impact on overall school performance than in a school serving 1000 students. Smaller schools are therefore also more susceptible to measurement error and to differences in cohort composition.

A straightforward way to see this bias against small schools is to examine the scatterplot in Figure 2. This scatterplot shows school size (on the horizontal axis) against the school's 2011 API Growth score (on the vertical axis). As is clear from the figure, smaller schools are much more likely to have the lowest API growth scores than are larger schools. They are also more likely to have the highest growth. Unless we really believe that smaller schools have such dramatic swings in effectiveness from year to year, these results do not benefit the long-term improvement of California's schools.

[Figure 2. Schools' Growth in API and School Enrollment. Scatterplot of school enrollment (horizontal axis, 0 to 4,000 students) against growth in API between 2010 and 2011 (vertical axis, roughly -200 to 300 points).]
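Figure 2's funnel shape, in which the most extreme growth scores belong to the smallest schools, is what sampling variability alone would produce. The sketch below is a hypothetical simulation under assumed numbers (no school truly changes; individual scores simply vary around a common mean), so the magnitudes are illustrative only.

```python
# Hypothetical illustration (not the authors' data) of why small schools show
# the most extreme API Growth values: averaging over fewer students leaves
# more sampling noise in the school-level score.
import random
import statistics

random.seed(2)
STUDENT_SD = 200   # assumed spread of individual student point values

def simulated_api_growth(enrollment):
    """Difference between two years' school means for a school with no true change."""
    year1 = [random.gauss(700, STUDENT_SD) for _ in range(enrollment)]
    year2 = [random.gauss(700, STUDENT_SD) for _ in range(enrollment)]
    return statistics.mean(year2) - statistics.mean(year1)

for enrollment in (100, 500, 1000, 4000):
    growths = [simulated_api_growth(enrollment) for _ in range(500)]
    spread = statistics.pstdev(growths)
    print(f"enrollment {enrollment:5d}: SD of simulated API Growth ~ {spread:5.1f}")
```

The spread of simulated growth scores shrinks roughly with the square root of enrollment, so a 100-student school shows swings several times larger than a 4,000-student school even when neither has changed at all.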
Lesson 5: API Creates a Disincentive to Improve Achievement for High-Achievers

Because of the API's point system for calculating the index, a design decision by the original authors of the index, the API diminishes the importance for schools of raising the achievement of high achievers (i.e., advanced students). For students at the proficient level, the bump for schools getting those students to score advanced is just 125 points, 58 percent smaller than the bump for schools to raise a far below basic student to below basic. For students at the advanced level, the school cannot possibly gain anything by raising those students' achievement (these students have already earned the maximum 1000 points), so there is no incentive to do so. Given the extensive literature on high stakes accountability and students "on the bubble," it is rational that schools would respond to these pressures by targeting their efforts on low-achieving students at the expense of higher achieving students (Booher-Jennings, 2005; Neal & Schanzenbach, 2010). While it is important to increase the achievement of schools' lowest performing students, it is also important to acknowledge the potential unintended consequences of the differential incentives in the API calculations. Recently, some scholars have begun to question whether the focus on proficiency has harmed the performance of high-achievers, with some evidence suggesting that it has (Ladd & Lauen, 2010; Neal & Schanzenbach, 2010).

Lesson 6: API Does Not Track the Achievement of Individual Students

This problem in fact underlies many of the problems we have already discussed, but it also poses its own, more conceptual problem. The API and API Growth systems measure school performance based on the test scores of their current students. Who these students are, including their backgrounds and prior test scores, has no bearing on the API calculation. Without student-level longitudinal data, however, it is impossible to conduct more sophisticated analyses of the school's impact on that portion of student achievement that is under its control.

Lesson 7: API Focuses Too Narrowly on Test Scores

As is true for NCLB's AYP, the API rating schools receive is based entirely on student test performance. SB1458 requires the state to incorporate graduation rates into the API for high school calculations, but other important goals of schooling should also be acknowledged and measured in California's accountability system, including especially students' readiness for college and careers after they leave high school. Given the well-known negative consequences associated with focusing accountability exclusively on test scores (teaching to the test, test score inflation, and others), it makes sense to broaden the kinds of measures included in the API calculation. Possible indicators might include attendance and promotion rates, expulsion and suspension rates, completion rates in specific courses or sequences of courses, results from parent and student surveys, and work and college placement outcomes. As is being done now for multiple-measure teacher evaluations, researchers must investigate the uses and effects of these various indicators of school and student performance. But the move to broaden the measures used to hold schools accountable that SB1458 requires is an important next step in the accountability movement.

Solutions

Given the variety and scope of these problems, reforming the California accountability system may seem an impossibly daunting task. However, there are several straightforward solutions to these problems that could be implemented with relatively little effort and that would have important, positive impacts on the API system. We now discuss several of these solutions, which follow directly from the weaknesses in the API identified above.

Solution 1: Track the Performance of Individual Students and Replace API Growth

Of all our proposed policy recommendations, this is the most fundamental to the success of the next generation API. Tracking individual student progress over time is essential to measure more precisely the contribution of schools to student performance. The new state assessments emerging from the SMARTER Balanced Assessment Consortium (SBAC) should provide superior assessment data to use in determining districts' and schools' effectiveness. For example, their use of computer adaptive assessment techniques will allow for greater precision in identifying high and low achievers. The California Longitudinal Pupil Data System (CALPADS) can already track individual students and includes multiple years of CST data, so following students over time for the purposes of measuring growth in achievement should not require great technological investments, nor does it require a vertical scale. Even without a vertical scale, student-level longitudinal data can be used to investigate school performance much more directly than the current cohort-based achievement data, as is the case with the Colorado Growth Model, for instance.

California does not need to reinvent the wheel on the analysis of longitudinal student-level achievement data. A number of states and districts, including North Carolina, have based their accountability systems on growth in individual student learning for many years. The state could look at the alternatives currently in use, consider the research on value-added measurement, and select a system with the ideal properties given the state's goals. The revised growth measure should replace the current API growth measure, which does not fulfill its goal of measuring growth in student proficiency.

Tracking individual students over time would also help solve the problem of limited incentives for improving high-achievers' achievement by making the growth in all students' achievement part of the accountability equation—high achievers can readily be compared with other high achievers in estimating relative growth over time. These student growth measures could easily be rolled into a school-level performance measure. In conjunction with the higher-quality computer-adaptive assessments emerging from the SBAC, such a change would be a welcome contributor toward helping ensure high achievers are not neglected in California's schools.
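The Colorado Growth Model mentioned above is built on student growth percentiles: each student's current score is ranked only against students with similar prior scores, so no vertical scale is needed and high achievers are compared with other high achievers. The sketch below is a much-simplified, hypothetical version of that idea (coarse prior-score bins and plain percentile ranks), not the actual Colorado methodology, which uses quantile regression.

```python
# Much-simplified, hypothetical sketch of a growth-percentile-style measure:
# rank each student's current score only against peers with similar prior scores.
# The actual Colorado Growth Model uses quantile regression; this is just the idea.
from collections import defaultdict

def growth_percentiles(records, bin_width=25):
    """records: list of (student_id, prior_score, current_score).
    Returns {student_id: percentile of current score among similar-prior peers}."""
    bins = defaultdict(list)
    for sid, prior, current in records:
        bins[prior // bin_width].append((sid, current))

    percentiles = {}
    for peers in bins.values():
        scores = sorted(c for _, c in peers)
        n = len(scores)
        for sid, current in peers:
            rank = sum(1 for s in scores if s <= current)
            percentiles[sid] = 100.0 * rank / n
    return percentiles

# Toy data: two low-prior students and two high-prior students.
students = [("a", 300, 340), ("b", 310, 305), ("c", 560, 600), ("d", 555, 580)]
print(growth_percentiles(students))  # "a" and "c" outgrew their respective peer groups

# A school-level growth score could then be, say, the median percentile of its
# students, replacing cohort-based API Growth.
```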
Solution 2: Improve the Stability of Growth Measures Using Multiple Years of Data

No matter what growth measure is chosen, there will be instability in the estimates with a single year of data. The literature on this topic suggests that a straightforward approach to addressing this problem is to take multiple years of data into consideration in estimating contributions to student learning. For instance, the state could create a value-added score for each school in each year and then average the results across three consecutive years. This rolling average would smooth out the year-to-year variations in achievement gains and contribute to a clearer, more stable measure. And because each student would be used as his own control in each year's calculations, it would not matter that different cohorts of students would be included in the rolling averages. Increased stability is necessary in order to send consistent messages to schools about their performance.
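As a small illustration of the smoothing described here, the sketch below averages hypothetical annual school value-added estimates over a three-year window. The numbers are made up, and the yearly estimates themselves would come from whatever student-level growth model the state adopts.

```python
# Illustration of a three-year rolling average of annual school value-added
# estimates. The yearly estimates here are made-up numbers; in practice they
# would come from the student-level growth model the state adopts.

def rolling_average(values, window=3):
    """Trailing mean over the last `window` entries (shorter at the start)."""
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical annual value-added estimates (in API-like points) for one school.
annual_va = [12.0, -8.0, 9.0, -3.0, 11.0]
print(rolling_average(annual_va))
# Once a full window is available, the averages sit within a few points of one
# another, a much more stable signal than raw estimates swinging from +12 to -8.
```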
Solution 3: Use Level and Growth in Achievement to Identify Interventions Tailored to School Performance Types

With the revised growth measure, a new accountability system should combine level and growth in achievement to identify schools in need of intervention. One approach to combining the two pieces of information would be a two-by-two matrix, such as the one in Figure 3. On the horizontal axis is the school's level of achievement, based on the API. On the vertical axis is the school's growth in achievement, based on the revised student-level growth measures. Schools can be in any of the four quadrants.

[Figure 3. Matrix of School Performance for New Accountability System. API Level (low/high) on the horizontal axis and API Growth (low/high) on the vertical axis define four quadrants: 1 High performing (high level, high growth), 2 Watch list (high level, low growth), 3 Needs intervention (low level, low growth), and 4 Improving (low level, high growth).]

In the first quadrant (upper-right) are schools that are high-achieving and high-growing. These schools are true high performers and need no intervention. Schools that demonstrate performance in this box for several years in a row might be exempted from accountability provisions for a certain number of years, reducing administrative burden and increasing flexibility.

Perhaps the next best performing schools are those in the fourth quadrant (upper-left). These are schools that are low-achieving and high-growing. These schools likely do not need intervention, but they do need to be monitored and supported to ensure their growth trajectories remain positive. They should also be examined by researchers to learn about the best practices that could be applied to low-achieving, low-performing schools.

In the second quadrant (lower-right) are schools that are high-achieving and low-growing. These will be schools with strong inputs that are not, for one reason or another, contributing to large achievement gains. These schools might need a particular set of intervention strategies, perhaps drawn from the successful schools identified in the first quadrant. These schools are unlikely to need punitive accountability sanctions unless their achievement growth drops to severely negative, but they should be monitored and encouraged to improve.

Finally, in the third quadrant (lower-left) are schools that are low-achieving and low-growing. These are schools with weak inputs that are not improving outcomes—they should be the primary targets of accountability from the state. These schools are likely to need more serious interventions, which could be chosen by the state and perhaps based on the practices in place in schools with similar inputs but stronger growth. Given their poor track record of improving performance, these schools will likely not benefit from simply providing additional resources.

A straightforward way to report results to schools would be to pare down each school's performance to two measures – an achievement level score similar to an API score and an achievement growth score, perhaps in the form of a percentile rank. Thresholds could then be established on the two measures for separating schools into the quadrants described above. These two measures could be made clear and understandable to parents and educators.
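A classification along these lines is straightforward to operationalize once the two school-level measures exist. The sketch below is a hypothetical implementation of the Figure 3 matrix; the cut points (800 for the level score, the 50th percentile for growth) are illustrative assumptions, since the actual thresholds would be a policy decision.

```python
# Hypothetical implementation of the two-by-two matrix in Figure 3.
# Thresholds are illustrative: 800 mirrors the existing API target; 50 assumes
# the growth score is reported as a percentile rank.

LEVEL_CUT = 800     # API level threshold (low vs. high achieving)
GROWTH_CUT = 50     # growth percentile threshold (low vs. high growing)

def classify(api_level, growth_percentile):
    high_level = api_level >= LEVEL_CUT
    high_growth = growth_percentile >= GROWTH_CUT
    if high_level and high_growth:
        return "Quadrant 1: high performing (no intervention)"
    if not high_level and high_growth:
        return "Quadrant 4: improving (monitor and support)"
    if high_level and not high_growth:
        return "Quadrant 2: watch list (monitor, encourage improvement)"
    return "Quadrant 3: needs intervention (primary accountability target)"

# A few hypothetical schools: (API level, growth percentile).
for name, level, growth in [("A", 845, 71), ("B", 640, 78), ("C", 812, 30), ("D", 655, 22)]:
    print(f"School {name}: {classify(level, growth)}")
```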
Solution 4: Administer Accountability Separately by School Level and Size

Even with the adjustments just proposed, it is likely that the revised API system would remain biased against middle and high schools and small schools. Even if we believe that middle and high schools are worse than elementary schools, it should be clear that there are many elementary schools in need of improvement. Furthermore, research suggests the returns on early investment and accountability in earlier grades will be higher than in later grades. Thus, it makes sense to ensure that any accountability system fairly identifies low-performing schools of all levels and sizes.

A simple approach to solving this problem is to separate schools into groups based on size and level before making determinations about effectiveness. For instance, schools could be separated into the three levels and then into quintiles based on school size. Then, the two performance measures could be calculated based on the schools in each group, and accountability provisions could be applied accordingly. California currently generates a similar schools index, which compares the relative API performance of schools with similar student characteristics. Each school receives a similar schools index score of 1 to 10, with 10 indicating that the school is performing in the top 10 percent of schools with similar characteristics and a 1 indicating that the school is performing in the bottom 10 percent of schools with similar characteristics. The school's actual API score is left unadjusted. Our approach, however, would adjust schools' API scores based on school type and size. If the three-year averages we mentioned previously were also included, it would increase the stability of these classifications as well.

Solution 5: Explore Alternative Measures of School Performance

Relying solely on test scores (and, as of next year, high school graduation rates) is likely to lead to continued narrowing of the curriculum and focus on the subjects and material that are tested. SB1458 requires the California Department of Education to initiate a process that will consider alternative measures of school performance. Some possible indicators were suggested above, but that list was by no means exhaustive. To the extent that we want schools to focus on a particular outcome – because we view it as an important outcome for our students – we should find ways to reliably measure that outcome and use it to guide improvement and accountability. The adage "What gets measured gets done" is nowhere truer than in education policy and accountability.

Conclusion

California is now at an important juncture in the history of state accountability. Several trends or policies, including the adoption of the Common Core State Standards and the development of new and better assessments, the pending reauthorization of NCLB and of PSAA, and the opportunities presented by a federal government that seems to be moving away from a one-size-fits-all model of accountability, have converged to present a unique opportunity to revise and improve our state's school accountability system. The recent passage and signing of California SB1458 also requires a rethinking of API, and our suggestions fit nicely in the framework laid out in that legislation. Finally, the public remains supportive of accountability in education, and there is bipartisan agreement that previous accountability systems were weak in a number of obvious ways. The proposals sketched here represent a potential path forward to the next generation of California accountability. Given this opportunity, California should learn from the mistakes of the past and improve the API.

Of course, improving the API will not magically solve California's educational problems. The design of accountability systems is but one piece of a much larger education policy framework that supports K-12 education in the state. Nevertheless, the identification of schools needing improvement and support is at the heart of efforts to improve education through accountability, and it is essential that California do this as fairly and accurately as possible. The improvements that we have proposed in the API will go a long way toward building a foundation for future improvement in educational performance for the state's schools.

References

Balfanz, R., Legters, N., West, T. C., & Weber, L. M. (2007). Are NCLB's measures, incentives, and improvement strategies the right ones for the nation's low-performing high schools? American Educational Research Journal, 44(3), 559-593.

Ho, A. D. (2008). The problem with "proficiency": Limitations of statistics and policy under No Child Left Behind. Educational Researcher, 37(6), 351-360.

Kane, T. J., & Staiger, D. O. (2002). The promise and pitfalls of using imprecise school accountability measures. The Journal of Economic Perspectives, 16(4), 91-114.

Ladd, H. F., & Lauen, D. L. (2010). Status versus growth: The distributional effects of school accountability policies. Journal of Policy Analysis and Management, 29(3), 426-450.

Linn, R. L. (2004). Accountability models. In S. H. Fuhrman & R. F. Elmore (Eds.), Redesigning accountability systems for education. New York, NY: Teachers College Press.

McEachin, A., & Polikoff, M. S. (2012). We are the 5 percent: Which schools would be held accountable under a proposed revision of the Elementary and Secondary Education Act? Educational Researcher, 41(7), 243-251.

Neal, D., & Schanzenbach, D. W. (2010). Left behind by design: Proficiency counts and test-based accountability. Review of Economics and Statistics, 92(2), 263-283.
