First published April 9, 2014 on K-12 News Network
"The President of the United States and his Secretary of Education are violating one of the most fundamental principles concerning test use: Tests should be used only for the purpose for which they were developed. If they are to be used for some other purpose, then careful attention must be paid to whether or not this purpose is appropriate" — Gerald Bracey, PhD
The American Statistical Association (ASA) released its ASA Statement on Using Value-Added Models for Educational Assessment today. While its spokesperson explicitly said the association neither supports nor opposes the use of so-called "Value Added" methodologies, the document itself provides strong support to those who oppose this wrongheaded use of statistics to make high-stakes decisions affecting the lives of students, educators, and our school communities. Too bad the amateur statisticians at the Los Angeles Times committed their egregious acts several years before this document was released. It's also too bad that LAUSD recently implemented one of these seriously flawed models, one that will do abject harm to students' education and further undermine the morale of our professional educators for years to come.
Some important excerpts from the document (all emphasis mine):
Estimates from VAMs should always be accompanied by measures of precision and a discussion of the assumptions and possible limitations of the model. These limitations are particularly relevant if VAMs are used for high-stakes purposes. (1)
VAMs should be viewed within the context of quality improvement, which distinguishes aspects of quality that can be attributed to the system from those that can be attributed to individual teachers, teacher preparation programs, or schools. Most VAM studies find that teachers account for about 1% to 14% of the variability in test scores, and that the majority of opportunities for quality improvement are found in the system-level conditions. Ranking teachers by their VAM scores can have unintended consequences that reduce quality. (2)
In practice, no test meets this stringent standard, and it needs to be recognized that, at best, most VAMs predict only performance on the test and not necessarily long-range learning outcomes. Other student outcomes are predicted only to the extent that they are correlated with test scores. A teacher’s efforts to encourage students’ creativity or help colleagues improve their instruction, for example, are not explicitly recognized in VAMs. (4)
Attaching too much importance to a single item of quantitative information is counter-productive—in fact, it can be detrimental to the goal of improving quality. In particular, making changes in response to aspects of quantitative information that are actually random variation can increase the overall variability of the system. (5)
The quality of education is not one event but a system of many interacting components. (6)
A decision to use VAMs for teacher evaluations might change the way the tests are viewed and lead to changes in the school environment. For example, more classroom time might be spent on test preparation and on specific content from the test at the exclusion of content that may lead to better long-term learning gains or motivation for students. (6)
Overreliance on VAM scores may foster a competitive environment, discouraging collaboration and efforts to improve the educational system as a whole. (6)
The majority of the variation in test scores is attributable to factors outside of the teacher’s control such as student and family background, poverty, curriculum, and unmeasured influences. (7)
The VAM scores themselves have large standard errors, even when calculated using several years of data. These large standard errors make rankings unstable, even under the best scenarios for modeling. (7)
A VAM score may provide teachers and administrators with information on their students’ performance and identify areas where improvement is needed, but it does not provide information on how to improve the teaching. (7)
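The two quantitative points in the excerpts above—that teachers account for roughly 1% to 14% of the variability in test scores, and that large standard errors make VAM-based rankings unstable—can be illustrated with a small simulation. The sketch below is mine, not the ASA's, and every parameter in it (100 teachers, 25 students per class, a 10% teacher share of score variance, normal noise) is an assumption chosen only for illustration.

```python
# A minimal, purely illustrative sketch (not from the ASA statement): it
# simulates test scores in which the teacher effect explains only a modest
# share of the variance, then shows how noisy class-level estimates make
# year-to-year teacher rankings churn. All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_teachers = 100
class_size = 25
teacher_share = 0.10  # assumed share of score variance due to teachers
                      # (the ASA cites studies finding roughly 1% to 14%)

# Fixed "true" teacher effects; everything else is treated as noise
# (student and family background, measurement error, unmeasured influences).
true_effect = rng.normal(0.0, np.sqrt(teacher_share), n_teachers)

def one_year_estimates() -> np.ndarray:
    """Estimate each teacher's effect as the mean score of one year's class."""
    noise = rng.normal(0.0, np.sqrt(1.0 - teacher_share),
                       size=(n_teachers, class_size))
    return (true_effect[:, None] + noise).mean(axis=1)

year1, year2 = one_year_estimates(), one_year_estimates()

# Rank teachers each year (rank 0 = highest estimated "value added").
rank1 = np.argsort(np.argsort(-year1))
rank2 = np.argsort(np.argsort(-year2))

print("Year-to-year correlation of the estimates:",
      round(float(np.corrcoef(year1, year2)[0, 1]), 2))
print("Teachers rated bottom-quintile in year 1 who stay there in year 2:",
      int(((rank1 >= 80) & (rank2 >= 80)).sum()), "of",
      int((rank1 >= 80).sum()))
```

Moving teacher_share toward the low end of the ASA's 1%-to-14% range, or shrinking the class size, makes the rankings churn even more; the point is the mechanism, not any particular number.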
All in all, the document is an academic condemnation of the VAM/AGT pseudosciences that have been ushered in by the neoliberal corporate education reform project. While the ASA is populated with actual scientists and statisticians, we can be sure the corporate reform crowd will be quick to try to refute the document. Here the tagline of a recent Salon article by Paul Rosenberg is apropos: 'Like global warming deniers, "education reformers" have nothing to lose and everything to gain by sowing confusion'.
For a copy of the ASA Statement on Using Value-Added Models for Educational Assessment see http://www.amstat.org/policy/pdfs/ASA_VAM_Statement.pdf. For additional information, please visit the ASA website at www.amstat.org.
"The majority of the variation in test scores is attributable to factors outside of the teacher’s control"—ASA #LAUSD http://t.co/Ihydoazv4K
— Robert D. Skeels (@rdsathene) April 8, 2014