Misconception: EVAAS methodology has not been vetted.
EVAAS is based on established statistical models that have been in use across many industries for decades and, in some instances, centuries. These models are designed to work well with large amounts of information and to accommodate common issues with student testing data, such as non-random missing data. Although the underlying program code for the models and algorithms used for Michigan is proprietary, the EVAAS methodologies and algorithms are published and have been in the open literature for over 20 years. Details about the EVAAS models are available in the references below:
- On the statistical models upon which Michigan's reporting is based:
"SAS® EVAAS® for K-12 Statistical Models" (2015) available at http://www.sas.com/content/dam/SAS/en_us/doc/whitepaper1/sas-evaas-k12-statistical-models-107411.pdf.
- On the Tennessee Value-Added Assessment System: Millman, Jason, ed. Grading Teachers, Grading Schools: Is Student Achievement a Valid Evaluation Measure? Thousand Oaks, CA: Corwin Press, 1997.
EVAAS in Theory
EVAAS reporting benefits from a robust modeling approach, and this statistical rigor is necessary to provide reliable estimates. More specifically, the EVAAS models attain their reliability by addressing critical issues related to working with student testing data, such as students with missing test scores and the inherent measurement error associated with any test score.
Moreover, the EVAAS modeling is sufficiently well documented that value-added experts and researchers have replicated the models for their own analyses. In doing so, they have validated and reaffirmed the appropriateness of the EVAAS modeling. The references below include studies by statisticians from the RAND Corporation, a non-profit research organization:
- On the choice of a complex value-added model: McCaffrey, Daniel F., and J.R. Lockwood. 2008. "Value-Added Models: Analytic Issues." Prepared for the National Research Council and the National Academy of Education, Board on Testing and Accountability Workshop on Value-Added Modeling, Nov. 13-14, 2008, Washington, DC.
- On the advantages of the longitudinal, mixed model approach: Lockwood, J.R. and Daniel F. McCaffrey. 2007. "Controlling for Individual Heterogeneity in Longitudinal Models, with Applications to Student Achievement." Electronic Journal of Statistics 1: 223-52.
- On the insufficiency of simple value-added models: McCaffrey, Daniel F., B. Han, and J.R. Lockwood. 2008. "From Data to Bonuses: A Case Study of the Issues Related to Awarding Teachers Pay on the Basis of the Students' Progress." Presented at Performance Incentives: Their Growing Impact on American K-12 Education, Feb. 28-29, 2008, National Center on Performance Incentives at Vanderbilt University.
EVAAS in Practice
EVAAS includes two main statistical models, each described briefly below.
- The gain model used in value-added analyses is a multivariate, longitudinal, linear mixed model. The gain model is typically used when there are clear "before" and "after" assessments from which to form a reliable gain estimate. In Michigan, this model is used for
- M-STEP Mathematics and English Language Arts in grades 4-7
- PSAT 8/9 Mathematics and English Language Arts in grade 8
- MAP Mathematics and Reading in grades 1-8 (Teacher reports only)
- STAR Mathematics grades 2-8 and Reading & Literacy K-8 (Teacher reports only)
- i-Ready Mathematics and ELA grades K-8 (Teacher reports only)
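The underlying idea of a gain-based growth measure can be illustrated with a greatly simplified sketch. The actual EVAAS gain model is a multivariate, longitudinal, linear mixed model fit to the full state dataset, not the toy calculation below; the function names and scores here are hypothetical and purely illustrative.

```python
# Toy illustration of a gain-based growth measure (NOT the EVAAS model):
# compare a group's average "before"-to-"after" gain against a reference
# (e.g., statewide) average gain.

def average_gain(before, after):
    """Mean score gain across paired 'before' and 'after' assessments."""
    return sum(a - b for b, a in zip(before, after)) / len(before)

def growth_vs_reference(before, after, reference_gain):
    """Group's average gain relative to a reference average gain.
    Positive values suggest above-average growth."""
    return average_gain(before, after) - reference_gain

# Hypothetical scores for one group of students
before = [420, 450, 430, 470]
after = [435, 460, 450, 480]

# Group gained 13.75 points on average; relative to a reference gain of
# 10 points, its growth measure is 3.75.
print(growth_vs_reference(before, after, reference_gain=10.0))
```

The real model additionally pools information across subjects, grades, and years, and accounts for missing scores and measurement error, which this sketch omits.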
- The predictive model used in value-added analyses is conceptually an analysis of covariance (ANCOVA) model. The predictive model is based on the difference between students' expected scores and their observed (exiting) scores. In Michigan, this model is used for
- M-STEP Social Studies in grades 5, 8, and 11
- M-STEP Science grades 5, 8, and 11
- PSAT 8/9 grade 9 Mathematics and Evidence-Based Reading and Writing
- PSAT 10 Mathematics and Evidence-Based Reading and Writing
- SAT Mathematics and Evidence-Based Reading and Writing
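The concept behind the predictive model can likewise be sketched in simplified form. The actual EVAAS predictive model is an ANCOVA-type model that uses many prior test scores as covariates; the toy version below uses a single-predictor least-squares regression, and all names and scores are hypothetical.

```python
# Toy illustration of a predictive growth measure (NOT the EVAAS model):
# estimate an expected exiting score from a prior score, then average the
# observed-minus-expected differences for a group of students.

def fit_simple_regression(x, y):
    """Least-squares slope and intercept for y ~ x (one predictor)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def mean_residual(prior, exiting, slope, intercept):
    """Average of observed minus expected exiting scores for a group."""
    expected = [slope * p + intercept for p in prior]
    return sum(o - e for o, e in zip(exiting, expected)) / len(exiting)

# Hypothetical reference data used to estimate the expectation
ref_prior = [400, 450, 500, 550]
ref_exiting = [410, 455, 505, 560]
slope, intercept = fit_simple_regression(ref_prior, ref_exiting)

# A hypothetical group whose students score above expectation on average
group_prior = [420, 480]
group_exiting = [440, 500]
print(round(mean_residual(group_prior, group_exiting, slope, intercept), 2))
```

A positive mean residual indicates that the group, on average, scored higher than expected given its entering achievement; the real model forms these expectations from the full history of prior scores rather than a single one.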