Misconception: The EVAAS methodology is too complex; a simpler approach to measuring effectiveness would provide better information to educators.
Although measuring growth is conceptually simple, providing precise and reliable growth measures requires statistical rigor: several important analytical problems must be addressed when analyzing longitudinal student data, which is critically important in any reporting used for educator evaluations.
In short, a simple gain calculation does not provide a reliable estimate of growth for students linked to an educator. Value-added estimates based on simple calculations are often correlated with the type of students served by the educators. Such models often unfairly disadvantage educators serving students with a history of lower achievement and unfairly advantage educators serving students with a history of higher achievement.
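To see why, consider a minimal, purely hypothetical simulation (the numbers and the uniform 10-point growth are invented for illustration). Even when every student grows by exactly the same amount, measurement error alone makes simple gains look different for groups defined by their observed starting scores.

```python
import random

random.seed(1)

# Hypothetical setup: every student truly grows by exactly 10 scale-score
# points, but each observed test score carries random measurement error.
students = []
for _ in range(10_000):
    true_pre = true_prior = random.gauss(500, 30)   # true prior achievement
    obs_pre = true_prior + random.gauss(0, 15)      # observed = truth + error
    obs_post = true_prior + 10 + random.gauss(0, 15)
    students.append((obs_pre, obs_post - obs_pre))

low = [gain for pre, gain in students if pre < 470]
high = [gain for pre, gain in students if pre > 530]
print("true growth for every student: 10")
print(f"mean simple gain, low observed pretest:  {sum(low) / len(low):.1f}")
print(f"mean simple gain, high observed pretest: {sum(high) / len(high):.1f}")
```

The two groups differ only because of noise in the pretest scores, yet a simple gain calculation would credit or penalize their educators accordingly.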
However, it is not necessary to be a statistician to understand the educational implications of EVAAS reporting. With the EVAAS web application, educators have a wealth of reports that go beyond a single estimate of student growth and assist in identifying accelerants and impediments to student learning.
EVAAS in Theory
Any student growth or value-added model must address the following considerations in a statistically robust and reliable approach:
- How to dampen the effects of measurement error, which is inherent in all student assessments because the tests themselves are estimates of student knowledge rather than exact measurements (a brief sketch after this list illustrates the idea).
- How to accommodate students with missing test scores without introducing major biases by eliminating the data for students with missing scores, using overly simplistic imputation procedures, or using very few test scores for each student.
- How to use all the longitudinal data for each student when all of the historical data are not on the same scale.
- How to use historical data when testing regimes have changed over time, which gives educational policymakers flexibility.
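Regarding the first consideration, here is a small sketch, not the EVAAS computation, of the basic statistical fact that combining several noisy scores yields a steadier estimate of achievement than any single score: the noise shrinks roughly by the square root of the number of scores.

```python
import math
import random

random.seed(2)

TRUE_SCORE = 500.0  # a student's (unknowable) true achievement
ERROR_SD = 15.0     # assumed measurement-error standard deviation

def sd_of_mean(n_scores: int, trials: int = 20_000) -> float:
    """Empirical spread of the average of n noisy scores for one student."""
    means = []
    for _ in range(trials):
        scores = [TRUE_SCORE + random.gauss(0, ERROR_SD) for _ in range(n_scores)]
        means.append(sum(scores) / n_scores)
    mu = sum(means) / trials
    return math.sqrt(sum((m - mu) ** 2 for m in means) / trials)

for n in (1, 2, 4, 8):
    print(f"{n} score(s): empirical SD {sd_of_mean(n):5.2f}  "
          f"(theory: {ERROR_SD / math.sqrt(n):5.2f})")
```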
EVAAS modeling approaches address all of these concerns to provide reliable estimates of student growth, and more details are provided below.
- EVAAS value-added measures are based on all of a student's previous years' performance data on an assessment instrument (rather than one or two years of data in one or two subjects) to determine the teacher's, school's, or system's estimated impact on its students' academic growth. Including multiple years of data from multiple subjects for each individual student helps protect an educational entity from misclassification in the value-added analysis. More specifically, using this much data at the individual student level can dampen the effect of measurement error, which is inherent in any test score and in all value-added or growth models.
- EVAAS value-added measures are sophisticated and robust enough to include students with missing data. Since students with a history of lower achievement are more likely to miss tests than students with a history of higher achievement, the exclusion of students with missing test scores can introduce selection bias, which would disproportionately affect educators serving those students.
- EVAAS value-added measures provide estimates of whether, on average, the students fell below, met, or exceeded the established expectation for improvement in a particular grade/subject. Assessing the impact at the group level, rather than on individual students, is a more statistically reliable approach given the issues with measurement error.
- EVAAS value-added measures account for the amount of evidence (standard error) when determining whether an educational entity is decidedly above or below the growth standard as defined by the model. Any model based on assessment data relies on estimates of student learning, and it is important that any value-added measure account for the amount of evidence in the growth measure when providing estimates.
- EVAAS teacher value-added measures account for both measures of uncertainty (standard error) and measures of magnitude (effect size) when determining a teacher's level. In addition to accounting for the inherent uncertainty in estimates of student learning, this approach considers the practical significance of the degree to which a group of students met, exceeded, or fell short of expected growth (a simplified sketch follows this list).
- EVAAS value-added models are sophisticated enough to accommodate different tests or changes in testing regimes, which gives educators additional flexibility. First, the models can use more tests, even if they are on differing scales. Second, they can continue to provide reporting when the tests change, as was the case when M-STEP replaced MEAP.
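As a simplified classification sketch, the key idea behind accounting for uncertainty is that a growth estimate is judged relative to its standard error rather than taken at face value. The cutoffs and labels below are illustrative assumptions, not the actual EVAAS rules.

```python
def growth_level(estimate: float, std_error: float) -> str:
    """Classify a growth estimate relative to expected growth (0).

    Illustrative only: the cutoffs are assumptions, and the real teacher
    reporting also weighs an effect-size (magnitude) measure.
    """
    index = estimate / std_error  # standard errors away from expected growth
    if index >= 2:
        return "well above expected growth"
    if index >= 1:
        return "above expected growth"
    if index > -1:
        return "about expected growth"
    if index > -2:
        return "below expected growth"
    return "well below expected growth"

# Same estimate, different amounts of evidence: more uncertainty,
# weaker conclusion.
print(growth_level(3.0, 1.2))  # index 2.5  -> well above expected growth
print(growth_level(3.0, 4.0))  # index 0.75 -> about expected growth
```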
SAS EVAAS statistical models have been validated and vetted by a variety of value-added experts. The references below include recent studies by statisticians from the RAND Corporation, a nonprofit research organization:
- On the choice of a complex value-added model: McCaffrey, Daniel F. and J.R. Lockwood. 2008. "Value-Added Models: Analytic Issues." Prepared for the National Research Council and the National Academy of Education, Board on Testing and Accountability Workshop on Value-Added Modeling, Nov. 13-14, 2008, Washington, DC.
- On the advantages of the longitudinal, mixed model approach: Lockwood, J.R. and Daniel F. McCaffrey. 2007. "Controlling for Individual Heterogeneity in Longitudinal Models, with Applications to Student Achievement." Electronic Journal of Statistics 1: 223-52.
- On the insufficiency of simple value-added models: McCaffrey, Daniel F., B. Han, and J.R. Lockwood. 2008. "From Data to Bonuses: A Case Study of the Issues Related to Awarding Teachers Pay on the Basis of the Students' Progress." Presented at Performance Incentives: Their Growing Impact on American K-12 Education, Feb. 28-29, 2008, National Center on Performance Incentives at Vanderbilt University.
EVAAS in Practice
Although the statistical approach is robust and complex, the reports in the EVAAS web application are easy to understand. Provided by subject, grade, and year, the value-added estimates are color-coded for quick interpretation:
- Blue indicates that students in a district or school made more than the expected growth.
- Green indicates that students in a district or school made about the expected growth.
- Yellow or red indicates that students in a district or school made less than the expected growth.
Educators and administrators can identify their strengths and opportunities for improvement at a glance. The reporting is wide-ranging, so authorized users can also access Diagnostic reports for students by achievement level, individual student-level projections to achievement, and other reports. Educators have a comprehensive view of past practices as well as tools for current and future students. Thus, educators benefit from the rigor of the EVAAS models by gaining insight in an accessible, non-technical format. EVAAS Value-Added reports are customized for Michigan reporting and preferences, but the sample EVAAS district report below illustrates that EVAAS reports are user-friendly and do not require sophisticated statistical knowledge. (A simple illustrative mapping of the color scheme follows the sample report.)
SAMPLE EVAAS DISTRICT VALUE-ADDED REPORT
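For illustration only, the color scheme described above can be written as a simple mapping. The cutoffs are invented for this sketch; only the color meanings (blue, green, yellow, red) come from the description above.

```python
def report_color(growth_index: float) -> str:
    """Map a growth index (estimate / standard error) to a report color.

    Hypothetical cutoffs; only the color meanings follow the text above.
    """
    if growth_index >= 2:
        return "blue"    # more than the expected growth
    if growth_index >= -2:
        return "green"   # about the expected growth
    if growth_index >= -3:
        return "yellow"  # less than the expected growth
    return "red"         # significantly less than the expected growth

print(report_color(2.5))   # blue
print(report_color(-0.4))  # green
```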