Tennessee’s Value-Added Assessment: Why It Is Important and How It Works

Ten years prior to the federal No Child Left Behind Act, Tennessee enacted its Value-Added Assessment System (TVAAS). TVAAS was and is a major advancement in educational accountability, and it remains the most sophisticated and mature school accountability system in use today. It is TVAAS’s statistical precision that makes it possible to rank Tennessee’s schools according to their effectiveness in helping students learn.

Prior to 1980, school quality was measured mainly by inputs—indicators such as the number of books in the school library, the percentage of teachers with master’s degrees, and dollars spent per pupil. Because of the concerns raised by the 1983 report A Nation at Risk, most states today have some type of outcome-based accountability system, with some measure of student achievement reported school-by-school.

School outcome data permits users to judge school quality on the basis of the percentage of students who meet or exceed certain minimum achievement levels. The No Child Left Behind Act uses this approach, as do the National Assessment of Educational Progress and many state education agencies. They report the percentage of students whose test scores reach the following broad levels of proficiency: Below Basic, Basic, Proficient, and Advanced.

Although reports of students reaching the various proficiency levels are more informative than reports of inputs alone, neither type of accountability is a sufficient basis for identifying effective schools. A school may have excellent inputs but fail to boost student learning. The same can be true for a school with high test scores. The test scores of a given school may be above average simply because it has a high percentage of talented and advantaged students. Suburban schools are usually thought of as good schools precisely because they have these characteristics. Unfortunately, these data may hide the fact that such “good” schools are permitting their mostly talented and advantaged students to slide into mediocrity.

Surprisingly, the schools that maximize the annual achievement growth of their students are in many cases not the ones with the advantaged students, the most resources, and the highest test scores. Rather, there are quite a number of small rural schools and inner city schools that are doing an exceptional job of getting the most out of their students. They are exceptional in the sense that they add the most educational value to their students regardless of whether their entering test scores are high or low, or whether they are advantaged or not.

These are the schools where all children are both encouraged and enabled to “be all they can be.” These are the schools that can be identified only through an accountability system such as TVAAS.

How It Works

Conceptually, value-added assessment is based on the year-to-year achievement of individual students. Gains are measured by comparing each student’s annual test score increase to his or her year-to-year increases in previous years. If the gains obtained by the students of a given teacher, school, or school system meet or exceed the rate of increase exhibited by those same students in previous years, that teacher, school, or system is said to be doing a good job. If the increases are less than those of previous years, the teacher, school, or school system is said to be in need of improvement. In short, value-added assessment is a form of statistical analysis that can ascertain whether the students taught by a teacher, school, or system are obtaining a year’s worth of achievement growth per school year.
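As a rough illustration of that logic, the sketch below computes each student’s most recent gain, compares it to that same student’s average gain in earlier years, and averages the differences by school. The records, school names, and simple averaging are illustrative assumptions only; the actual TVAAS analysis uses a far more sophisticated mixed-model methodology.

from collections import defaultdict
from statistics import mean

# Hypothetical score histories (earliest to latest year); these records and
# school names are invented for illustration, not drawn from TVAAS data.
students = {
    "student_a": {"school": "Oak Elementary",   "scores": [410, 440, 468, 500]},
    "student_b": {"school": "Oak Elementary",   "scores": [520, 545, 572, 590]},
    "student_c": {"school": "Maple Elementary", "scores": [430, 455, 470, 478]},
}

school_diffs = defaultdict(list)
for record in students.values():
    scores = record["scores"]
    gains = [b - a for a, b in zip(scores, scores[1:])]  # year-to-year gains
    current_gain = gains[-1]                             # most recent year's gain
    prior_rate = mean(gains[:-1])                        # the student's own past rate
    school_diffs[record["school"]].append(current_gain - prior_rate)

for school, diffs in school_diffs.items():
    verdict = "meeting or exceeding" if mean(diffs) >= 0 else "falling below"
    print(f"{school}: {mean(diffs):+.1f} points relative to students' own "
          f"prior rates ({verdict} their established pace)")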

Sound value-added analysis requires certain tools, the most basic of which is annual testing of students with a reliable and valid test. Reliable tests provide consistent results—like the measurements made by a good bathroom scale. Valid tests are accurate—like a bathroom scale that has been properly calibrated.

There are many ways of assessing student performance, but only standardized tests permit an accurate comparison of teachers, schools, and school systems to each other and to norms or established standards. If schools are to be accountable to policymakers and the public, annual testing with standardized tests is required.

Ideally, a test used for value-added assessment is composed of fresh, non-redundant, but equivalent items and is tied to an underlying linear scale. If the same test items are used year after year, schools can teach to the test. If the test lacks an underlying linear scale, year-to-year progress cannot be accurately stated. In addition, the test must contain items that range from easy to very challenging. If a test does not contain a wide range of items, it will artificially limit the scores of very low- and very high-performing students. Sophisticated value-added accountability systems like that of Tennessee have all of these characteristics.

By using an analytic strategy called blocking, TVAAS statistically removes the effects of preexisting influences on student performance. These include, but are not limited to, socioeconomic status, ethnic differences, previous learning, family background, and other characteristics, known and unknown. As repeatedly confirmed by empirical studies, this approach permits fair comparisons among teachers, schools, and school systems that serve differing student populations. As explained in a recent paper by William Sanders, not all value-added accountability systems include the full range of statistical properties necessary to preclude all biasing influences.
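The claim that preexisting characteristics drop out when students serve as their own controls can be seen in a toy simulation. The sketch below is not the TVAAS mixed model; it uses simple gain scores and invented parameters merely to show that the estimated gain tracks the school’s contribution rather than the students’ starting advantages.

import random

random.seed(0)

def average_gain(mean_background, school_boost, n_students=500, n_years=3):
    """Simulate score = fixed student background + school_boost per year + noise,
    then return the average year-to-year gain across all simulated students."""
    gains = []
    for _ in range(n_students):
        background = random.gauss(mean_background, 50)   # fixed student-level factor
        scores = [background + school_boost * year + random.gauss(0, 5)
                  for year in range(n_years)]
        gains.extend(b - a for a, b in zip(scores, scores[1:]))
    return sum(gains) / len(gains)

# Advantaged students in a less effective school vs. disadvantaged students
# in a more effective school (all numbers are hypothetical).
print(round(average_gain(mean_background=600, school_boost=15), 1))  # close to 15
print(round(average_gain(mean_background=450, school_boost=25), 1))  # close to 25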

It should be noted, however, that neither TVAAS nor any other accountability system, including those outside of education, automatically removes current exogenous influences such as an illness or natural disaster or, conversely, improved living conditions or the introduction of tutoring. For example, in the business world, individual and organizational bottom lines are commonly shaped by events that advance or retard performance in a given year but average out over time. TVAAS minimizes such impacts by reporting rolling 3-year averages and by comparing teachers, schools, and systems to other teachers, schools, and systems that are exposed to similar events.
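A minimal sketch of the rolling three-year averaging mentioned above, using hypothetical annual school gain scores; a one-year disruption is visibly damped in the smoothed series.

def rolling_three_year(annual_gains):
    """Mean of each consecutive three-year window of gain scores."""
    return [round(sum(annual_gains[i:i + 3]) / 3, 1)
            for i in range(len(annual_gains) - 2)]

# Hypothetical gains; year 3 was disrupted by a one-time event.
print(rolling_three_year([4.1, 5.3, -1.8, 4.9, 5.0]))  # [2.5, 2.8, 2.7]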

In-depth discussions of value-added assessment are widely available. They range from briefings intended for parents and policymakers to technical assessments intended for scholarly audiences. The National School Boards Association and the American School Board Journal have published briefings for policymakers. Descriptions suited to teachers and school administrators are available from Harcourt Assessment and the Evergreen Foundation. Technical analyses intended for researchers include a Carnegie Foundation-sponsored assessment by RAND and the Spring 2004 issue of the Journal of Educational and Behavioral Statistics. Of particular importance are the above-noted paper by Dr. William Sanders comparing TVAAS with less sophisticated approaches and his recent summary of findings from 22 years of value-added research.

Limitations

One question often raised about the use of value-added assessment is whether it effectively sets low expectations for low-performing students and high expectations for higher-performing students.

It is true that value-added assessment judges a student’s present gains by comparing them to past gains, which implies a lower expected rate of progress for students who have gained little in the past. It must be kept in mind, however, that such data is intended as a means of determining whether a given teacher, school, or school system is as effective as other teachers, schools, or systems. It is not intended as the sole measure of whether a student is making adequate educational progress.

Adequate progress would be the average level of progress needed to bring a given student or group of students to a prescribed level of educational achievement by the end of a school year or a school career. Whether a student is making that kind of progress can only be answered by projecting the present rate of gain over the available time to see if the desired level of achievement will be attained.

The application of value-added assessment to the question of whether students are gaining at a rate necessary to attain minimum standards is called projection modeling. The U.S. Department of Education is requiring all states to develop this capacity, and Tennessee was the first state to have its “growth model” fully approved for use in the 2006-2007 school year.
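A simplified projection of the kind described above might look like the sketch below: given a student’s current score, observed annual rate of gain, years remaining, and a proficiency cut score, it checks whether the current trajectory reaches the standard in time. The straight-line projection and all numbers are illustrative assumptions, not the approved Tennessee growth model.

def on_track(current_score, annual_gain, years_remaining, proficiency_cut):
    """Project the current rate of gain forward and compare to the cut score."""
    projected = current_score + annual_gain * years_remaining
    return projected >= proficiency_cut, projected

ok, projected = on_track(current_score=470, annual_gain=12,
                         years_remaining=3, proficiency_cut=520)
print(projected, "on track" if ok else "accelerated schooling needed")
# 470 + 12 * 3 = 506, which falls short of 520, so a faster rate of gain is needed.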

When projection modeling indicates that a given rate of increase will fall short of some minimum standard, accelerated schooling is required. Placement in a more effective school or with an exceptionally effective teacher is one option. A recent study by the Tennessee Department of Education found that effective teachers are not distributed equally among schools; high-poverty schools have a greater share of the less effective teachers (who are mostly novices). Other options include lengthening the school day and/or school year, or more selective hiring and better training of teachers in those schools with high percentages of students in need of accelerated progress.

Whichever option is employed, it must be understood that accelerated learning—whether the result of a more intensive schooling experience or an extended one—will inevitably entail more time and effort on the part of students. Learning requires educational engagement by the student, and more learning will require more engagement. For most students, the hoped-for improvements stemming from any accelerated program are unlikely to be obtained without significant changes in student focus on schooling.

Effective teachers such as those identified in Sanders and Rivers’ classic study and the high-performing schools selected for the Education Consumers Foundation’s Value-Added Achievement Awards show that accelerated learning environments are available. With support and leadership, the practices that make these teachers and schools exceptionally effective can be exported to those that are less effective.

In summary, value-added assessment is a necessary foundation for the assessment of teacher, school, and school system effectiveness. However, by itself, it is not a sufficient basis for determining whether students will achieve an acceptable minimum level of educational proficiency. Assessment of outcome proficiency requires a unique application of value-added assessment called a “growth model.”

One other question commonly asked about TVAAS is whether the highest value-added schools are the best for all students. On average, the answer is yes; for a particular student, however, the answer may be no.

Whether a given school is the best for a given student depends on the fit between the student and the instruction provided by the school. Schools with high TVAAS scores are doing a good job of providing instruction that is suited to the students that are currently enrolled. If these students have a wide range of entering achievement test scores, the high performing school must be offering instruction suited to students who exhibit a wide range of achievement. If, however, a school has a narrower range of student achievement levels—high or low—and high TVAAS gain scores, it may or may not be effective with students who are outside of that range. For example, a student who is achieving at a low level may or may not benefit by transferring to a school that is producing high gains only with high achieving students. The same can be said of a high achieving student who transfers to a school that is producing high gains with low achieving students.

Parents seeking the best fit for their child need to consider both the value-added gains demonstrated by a given school and the school’s record of success with students at various levels of achievement. Fortunately, the Tennessee Department of Education provides some helpful data.

The trend analysis data for a given school shows how effective the school is in maintaining the average achievement of students at the 25th, 50th, and 75th percentile levels for two different cohorts of students—a group that was enrolled from 1996-1998 and a group enrolled from 2003-2005. The two cohorts may be compared to see if the school has become more effective in 2003-2005.

Parents wanting to determine the fit between their student and a particular school would be most concerned with the 2003-2005 cohort. By looking at the achievement level that most closely corresponds to the level of their child, i.e., the 25th, 50th, or 75th percentile, they can see whether students in a given school tend to increase, decrease, or maintain their level of achievement (relative to other students within the school district) over their several years of schooling.

While most schools exhibit the same pattern of year-to-year increase or decrease across the three student achievement levels, some schools are more effective with only one or two of the groups.
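The sketch below illustrates how such trend data might be read. The percentile trajectories are invented for the example; real figures come from the Tennessee Department of Education reports described above. Here the hypothetical school lifts its low achievers, holds its middle group steady, and loses ground with its high achievers, which is the kind of group-by-group difference the preceding paragraph describes.

# Relative standing (within-district percentile) at the start, middle, and end
# of the 2003-2005 cohort's window, for students near each achievement level.
cohort_2003_2005 = {
    "25th percentile": [25, 29, 33],
    "50th percentile": [50, 50, 49],
    "75th percentile": [75, 71, 68],
}

for level, standings in cohort_2003_2005.items():
    change = standings[-1] - standings[0]
    trend = "gaining" if change > 2 else "losing ground" if change < -2 else "holding steady"
    print(f"Students near the {level}: {trend} ({change:+d} percentile points)")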

One final caution should be noted. Within a given school there may be several teachers for a given grade, so the trend data is an average of the annual performances of teachers who differ in which level of student their teaching best serves. Because placement with a particular teacher in a particular grade can optimize a child’s learning experience, parents in this situation are advised to talk with the principal about their child’s needs in order to place him or her appropriately.

Conclusion

While no assessment system is without limitations, Tennessee’s value-added assessment model stands head and shoulders above others with regard to its focus on the effectiveness of schools, its ability to remove the biasing effects of social and economic influences, and its usefulness in answering critical questions about student progress and educational quality. It has been validated by independent reviewers and proven to be a useful tool for policymakers. Today, TVAAS is a model for several states (Ohio, Pennsylvania, and North Carolina are adopting similar systems) and is under consideration as a national model by the U.S. Department of Education. As the availability of value-added data increases, the Education Consumers Foundation will expand its efforts to increase the public visibility of demonstrably effective schools.

 


