Value Added Modeling (VAM) Research Prepared for New Mexico Legislative Office

Definition, Use & Misuse, and Solutions


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.


A Research Findings Report


Bryan Lindenberger

Center for Research and Outreach

College of Education, New Mexico State University


Prepared for New Mexico State Senator (REDACTED)



Policymakers have placed increasing emphasis on student and teacher evaluation over the past three decades. No Child Left Behind and Race to the Top are just two recent examples of attempts to increase the value of education through legislation. One of the more controversial attempts to measure teacher quality is called Value Added Modeling, or VAM. Rarely does something as seemingly unexciting as professional assessment through statistical analysis garner so much attention as has VAM, with countless and sometimes heated reports and commentaries delivered by mainstream outlets including The Washington Post and The New York Times. The level of emotion is understandable. As implemented, VAM has the potential to drastically influence the future of our nation’s education, as well as significantly impact the careers of our educators and in fact the way they conduct their classrooms.


This brief report, delivered by the Center for Research and Outreach at New Mexico State University, will attempt to step back from the emotion and make sense of this complex issue. Using some of the best studies and reports in recent years, we will:


Define Value Added Modeling in its historical context.

Explore the use and misuse of VAM as currently implemented in our schools.

Discuss better means of implementation as well as alternative means of promoting teacher quality.



Part One: Definition & History of VAM


The teacher accountability movement stems largely from a report titled A Nation at Risk, delivered to the U.S. Secretary of Education in 1983.[1] While some political issues grab headlines only briefly, an “education crisis” provided policymakers with a critical issue to which nearly every citizen could connect over the long term. One in a litany of proposed solutions that gained popular impetus was Value Added Modeling.


A RAND Education research brief describes VAM simply as “a collection of statistical techniques that uses multiple years of student test score data to estimate the effects of individual schools or teachers” on rates of individual student progress.[2] It is understandable that the public would respond to such a technique, as it assigns an easily understood grade, number, or rank to educators and their institutions. Yet however intuitive a “grade” for a teacher may be, there is nothing simple about VAM.


VAM came into being through the work of Bill Sanders, a statistics professor at the University of Tennessee. In response to A Nation at Risk, Sanders saw the opportunity for “a good, rich vein of longitudinal data—something that would make a meaty project for his graduate students.”[3] While others (perhaps surprisingly, in retrospect) still debated the impact of quality teachers on student outcomes, Sanders traced student scores over years and believed he had “firmly established the advantage for students of having a high-quality teacher.”[4] Though not without its critics, his statistical analysis purportedly accounts for factors beyond the teacher’s control, such as the income level of student families and under-resourced schools. Subsequent studies have likewise found a strong connection between teacher quality and student outcomes: one study attributed an 11% deviation in student test scores to teacher quality, and another found a teacher’s value-added score to be “the strongest predictor of future teacher value-added performance” in promoting a student’s academic growth.[5]


The data was in, and the stage was set for policymakers. If teachers did in fact influence student outcomes, did it not make sense to reward teachers and schools whose students made the greatest gains? And if so, did it not also make sense to penalize schools and educators whose students slipped behind? VAM had the momentum and the data to support it, and the model became integral to policy such as No Child Left Behind.




Part Two: Use and Misuse of VAM


Like it or not, we all respond to rankings. We’d all rather get an A on a quiz than a D. We’d all rather stay at a 5-star hotel than a 2-star. It’s very appealing to have a simple grade or number to respond to, but that grade is only as meaningful as the data behind it. Your idea of a great hotel might be one with a quiet, sandy beach and golf or shopping nearby. Your best friend might prefer bustling streets, parties, and an uproarious night club. More is at stake here than vacation plans. Career choices, classroom decisions, and the future of our children’s education are critical. Before leaping headfirst into using VAM as a be-all measure of teacher and school quality, as we have largely done, we have to carefully consider the complexity surrounding these scores. What data go in? What do they actually measure? What other consequences follow when teachers are ranked, and how much weight should we give these rankings?


Through complex data analysis, VAM uses past student achievement to project anticipated growth, taking into account community factors such as income. The principle is simple: if we accept that teachers impact their students, then students who exceed predicted outcomes must have had high-quality teachers. The data appear to bear this out. Even detractors of VAM rarely dispute its utility in pointing to teachers who may need assistance. The weight it is given in these determinations, and what outcomes should follow from them, draw much greater scrutiny and controversy.


The Journal of Teacher Education poses a simple question: “Will teacher education institutions…guide their students away from employment at schools that experience problems with test scores, making these schools even more difficult to staff?”[6] The answer would seem to be yes, but has anyone considered this outcome? Are we willing to damn some schools to failure? The motivation for improving teacher and school quality is there, but what about the means?


The situation gets even messier. Writing for Kappan, Jimmy Scherrer of the Learning Policy Center at the University of Pittsburgh offers the following scenario:


“Consider a 5th-grade teacher who, out of the kindness of his heart, holds an after-school math club. The teacher opens the door to all 5th-graders—and interested parents—who might benefit from additional learning. Using VAM at the teacher level, any learning that occurs during this math club will be attributed to the students’ regular teacher and not the teacher who conducts the math club.”[7]


As a society, we tend to reward those who go above and beyond the call of duty. Examples such as the one above show how VAM undermines this basic principle, and can leave the most caring teachers behind.


“Margins of error” may sound like statistical jargon, but thousands of careers are hurt or helped by what those margins contain. How much burden of fear do we want to place on the shoulders of our educators? Do we want teachers who inspire our children, or teachers afraid to deviate from the assigned curricula? Is any student truly average, or is there untapped potential that requires creative, caring teachers rather than another round of standardized tests?


One of the unintended consequences of VAM is the narrowing of the curriculum. As Scherrer continues, “Since current standardized [tests] primarily focus on basic, procedural skills and not high-level conceptual understanding…students are now immersed in impoverished learning environments of test prep and basic fact memorization that appear on the tests.”[8] We have to examine what we intend to create in our schools. Do we want the next Steve Jobs? Or are we seeking people who can pencil in a circle, perfectly and within the lines, marking the “right answer” without quite knowing why that answer is right or how it applies to their world?


Dr. Michael Morehead, Dean of the College of Education at NMSU, has in numerous presentations challenged standardized testing as currently implemented for students in the United States. He posits that, in ranking students, we test them not on what they should know, but with criteria and questions designed to produce a neat bell curve of average, above average, and below average. In other words, every 2nd grader should know that 1+1=2, so we don’t ask that on standardized tests. Too many students would get it right!


A similar phenomenon applies to VAM. “Teachers are compared to one another,” writes Scherrer, “and not some criteria.”[9] In other words, you can put the 10 best teachers by any criteria you wish into a norm-referenced assessment. Four of them will always be “below average.”


No matter what knowledge they have imbued, awards they’ve achieved, lives they’ve touched, or great inventors they have produced … four of 10 will be below average. Given the current momentum toward standardization (pandering to the average) and politicization of education, perhaps the next question for our policymakers is how to punish this bottom four without too much teacher union blowback.
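The arithmetic behind norm-referencing can be sketched in a few lines of Python (all scores invented for illustration): grade a cohort of uniformly excellent teachers only against one another, and some land “below average” by construction.

```python
# Toy illustration of norm-referenced ranking with invented scores:
# every teacher here is excellent in absolute terms, yet comparing
# them only to each other still produces a "below average" group.
excellent_cohort = [95, 97, 97, 97, 99, 99, 99, 99, 100, 100]

avg = sum(excellent_cohort) / len(excellent_cohort)
below_average = [s for s in excellent_cohort if s < avg]

print(f"Cohort average: {avg}")
print(f"'Below average': {len(below_average)} of {len(excellent_cohort)} teachers")
```

No criterion of actual quality enters the comparison; the ranking alone manufactures the bottom group.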


We’ve gone astray. There has to be a better, more reasoned approach.




Part Three: Solutions


The problem isn’t with VAM itself. Even according to its detractors, VAM provides a useful snapshot of student progress and of the influence of engaged, quality teachers on that progress. As one author writes in the American Educational Research Journal, published by the American Educational Research Association, “Researchers almost unanimously acknowledge that value-added methods represent a substantial improvement over traditional analyses based only on test score levels.”[10]


No, the problem lies in how we interpret and apply that data. We have busy lives, and it’s too easy to believe in a single number, or a single grade, to simplify a complex reality.


As the Journal of Teacher Education asks, “If we are certain that value-added assessments are an appropriate approach, then it is important to ask whether student standardized tests are truly the best way to determine the value a teacher adds to a child’s education.”[11]


VAM delivers a snapshot not of quality versus poor teachers, but of where students are struggling. Rather than an end in itself (who to reward and who to penalize), VAM offers a red flag: a signal for potential intervention where an instructor may not be performing to highest potential or meeting the modern needs of his or her students. A Darwinian, survival-of-the-fittest approach rewards teachers and schools seen as high performing while penalizing those seen as underperforming. By and large, that is the current approach, based on the idea that competition between schools and educators will yield stronger outcomes. In the process, we penalize innovation, genius, and extra work by caring teachers, and we pander to the average: not those who will inspire the next great innovation, but those who teach students to fill in the correct box on a testing company’s exam. Ironically, it is students who can think, who can solve problems and understand how their curricula relate to the world, who score highest on standardized tests. In other words, by not “teaching to the test,” educators will find their students earning higher test scores.


The value of VAM is not in inspiring competition between schools and educators, but in signaling who needs help.


What VAM fails to address is the additional resources and training schools and educators require to keep up with a standard, however arbitrary. “Promoting Quality Teaching,” published by Stanford University’s National Board Resource Center, describes effective models based not on reward and punishment, but on intervention.


Accomplished Teaching Pathway (A-PATH), for instance, offers a tier system for professional development among teachers, including “Residency, Initial Educator, Professional Educator, and Master Educator… during residency, novices teach 50% time and spend the rest of the day working with mentors and master teachers.”[12]


Closer to home, the New Mexico Public Education Department (NMPED) offers a “three-tiered career ladder plan that reflects a commitment from the state as a whole to support teacher development, leadership, and professional pay [where] National Board Certification is a proven indicator of teacher effectiveness.”


As Gerald Zahorchak, acting Secretary of Education for Pennsylvania said, “A lot of the time, it’s the teaching that’s bad, not the teachers.”[13]


VAM does not necessarily pinpoint less dedicated teachers and schools so much as signal ineffective practices. No amount of reward or punishment through tenure or pay will change practice until educators are given the direction they need to implement better practices aligned with the technological, STEM-based, and creative demands of our current workforce.





Hill, Heather C.; Laura Kapitula; Kristin Umland. “A Validity Argument Approach to Evaluating Teacher Value-Added Scores.” 2010, American Educational Research Journal.


Knight, Stephanie L, et al. “Examining the Complexity of Assessment and Accountability in Teacher Education.” Oct 17, 2012, Journal of Teacher Education.


Papay, John P. “Different Tests, Different Answers: The Stability of Teacher Value-Added Estimates Across Outcome Measures.” April 2010, American Educational Research Journal.


“The Promise and Peril of Using Value-Added Modeling to Measure Teacher Effectiveness.” 2004, RAND Corporation Research Brief.


“Promoting Quality Teaching: New Approaches to Compensation and Career Pathways.” 2012, National Board Resource Center, Stanford University.


Scherrer, Jimmy. “What’s the Value of VAM (Value-Added Modeling)?” May 2012, Kappan Magazine.


Stewart, Barbara Elizabeth. “Value-Added Modeling: The Challenge of Measuring Educational Outcomes.” 2006, Carnegie Corporation of New York.



[1] “Value Added Modeling: The Challenge of Measuring Educational Outcomes.” 2006, Carnegie Corporation of New York.

[2] “The Promise and Peril of Using Value-Added Modeling to Measure Teacher Effectiveness.” 2004, Rand Education.

[3] “Value Added Modeling: The Challenge of Measuring Educational Outcomes.” 2006, Carnegie Corporation of New York.

[4] “Examining the Complexity of Assessment and Accountability in Teacher Education.” Journal of Teacher Education, 2012.

[5] “A Validity Argument Approach to Evaluating Teacher Value-Added Scores.” 2011, American Educational Research Journal.

[6] “Examining the Complexity of Assessment and Accountability in Teacher Education.” Journal of Teacher Education, 2012.

[7] “What’s the Value of VAM (Value-Added Modeling)?” 2012, Kappan Magazine.

[8] “What’s the Value of VAM (Value-Added Modeling)?” 2012, Kappan Magazine.

[9] Ibid.

[10] “Different Tests, Different Answers: The Stability of Teacher Value Added Estimates Across Outcome Measures.” 2010, American Educational Research Journal.

[11] “Examining the Complexity of Assessment and Accountability in Teacher Education.” Journal of Teacher Education, 2012.

[12] “Promoting Quality Teaching: New Approaches to Compensation and Career Pathways.” 2012, National Board Resource Center, Stanford University.

[13] “Value Added Modeling: The Challenge of Measuring Educational Outcomes.” 2006, Carnegie Corporation of New York.



