How EVAAS Measures Growth
Each year, the academic performance of students is evaluated using a variety of assessments. Districts, schools, and teachers receive results from these assessments, which provide important information about the achievement level of their students in tested grades and subjects or courses.
But because the achievement data is based on different groups of students each year, direct comparisons of data across years are often not meaningful or useful. For example, comparing the performance of last year's fifth graders to the performance of this year's fifth graders does not tell us how much academic growth either group of fifth graders made.
We offer a different set of measures. The growth of each group of students is measured as they move from one grade to the next or enter and complete a tested course. This approach yields growth measures that are fair, reliable, and useful to educators.
The process begins by generating measures of the average entering achievement level of the group of students served by each teacher, school, and district. Then a similar measure is generated for the group's average achievement level at the end of the subject and grade or course. To ensure that the measures are precise and reliable, EVAAS incorporates assessment data across years, grades, and subjects for each student.
The difference between these two achievement measures is calculated and then compared to a standard expectation of growth called expected growth. Levels are then assigned to indicate how strong the evidence is that the group of students exceeded, met, or fell short of expected growth.
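The entry/exit comparison above can be sketched in a few lines. All numbers here are hypothetical and the variable names are illustrative; the actual EVAAS models draw on each student's assessment data across years, grades, and subjects rather than two single scores.

```python
# Hypothetical sketch of the core comparison, not the actual EVAAS model.

entering = 50.0         # group's average achievement entering the grade or course
ending = 53.5           # group's average achievement at the end
expected_growth = 2.0   # hypothetical expectation on the same scale

growth_measure = ending - entering                       # difference between the two measures
relative_to_expected = growth_measure - expected_growth  # positive: above expectation
```

Here the group grew 3.5 points against an expectation of 2.0, so the measure sits 1.5 points above expected growth; the levels described next indicate how strong the evidence for that difference is.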
Simply put, the expectation is that regardless of their entering achievement level, students should not lose ground academically, relative to their peers in the same grade and subject or course statewide. This standard is reasonable and attainable regardless of the entering achievement of the students served.
With this approach, it's possible for a group of students to demonstrate high growth, even if all of them remain in the same performance level from one year to the next. Each performance level includes a range of scores, so it's possible for a group's average achievement to rise or fall within a single state academic performance level.
Each growth index is color-coded to indicate how strong the evidence is that the teacher's students exceeded, met, or fell short of expected growth.
Teacher Value-Added reports categorize teacher growth measures using a two-step process based on the growth index and the effect size.
The growth index is the growth estimate divided by its standard error, which is specific to each estimate. The effect size is the growth measure divided by the student-level standard deviation of growth. The effect size indicates the magnitude, and therefore the practical significance, of the amount by which the group of students exceeded, met, or fell short of expected growth.
- The first step uses the growth index to determine the statistical certainty that the growth measure is above or below expected growth. The thresholds are an index of +2 or greater, an index of -2 or less, or an index between -2 and +2. These thresholds are similar to the concept of a 95% confidence interval: if a 95% confidence interval around the growth measure did not contain expected growth, the index would fall outside the -2 to +2 range.
- The second step uses the effect size to determine whether the growth measure is above or below expected growth by a meaningful magnitude, using thresholds of +0.4 and -0.4. These values correspond to a "medium" effect size as referenced in John Hattie's work.*
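The two statistics behind these steps can be computed directly. The values below are hypothetical, chosen so the group clears both thresholds:

```python
# Minimal sketch of the two statistics, with hypothetical values.

growth = 1.5   # growth measure relative to expected growth
se = 0.5       # standard error specific to this estimate
sd = 3.0       # student-level standard deviation of growth

index = growth / se        # 3.0: clears the +2 index threshold
effect_size = growth / sd  # 0.5: clears the +0.4 effect size threshold

# The +/-2 index thresholds mirror a ~95% confidence interval: the expectation
# (0 on this relative scale) lies outside growth +/- 2*se exactly when the
# index magnitude exceeds 2.
ci_low, ci_high = growth - 2 * se, growth + 2 * se
print(not (ci_low <= 0.0 <= ci_high))  # True
```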
*John Hattie, Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement (London: Routledge, 2008).
Michigan Department of Education policies specify four categories for teacher growth. The table below describes these levels and how they are color coded on the Teacher Value-Added reports.
| Level | First Step: Growth Measure Compared to Expected Growth | Index | Second Step: Effect Size Compared to Threshold | Effect Size | Interpretation |
| --- | --- | --- | --- | --- | --- |
| Level 4, Exceeds | At least 2 standard errors above expected growth | 2.00 or greater | At least 0.40 standard deviations above expected growth | 0.40 or greater | Significant evidence that the teacher's students made more progress than expected growth (index is greater than or equal to 2 and effect size is greater than or equal to 0.4) |
| Level 3, Met | No more than 2 standard errors below expected growth | -2.00 or greater | Less than 0.40 standard deviations above expected growth | Less than 0.40 | Evidence that the teacher's students made progress similar to expected growth (index is greater than or equal to -2 and either the index is less than 2 or the effect size is less than 0.4) |
| Level 2, Nearly Met | More than 2 standard errors below expected growth | Less than -2.00 | No more than 0.40 standard deviations below expected growth | -0.40 or greater | Evidence that the teacher's students made less progress than expected growth (index is less than -2 and effect size is greater than or equal to -0.4) |
| Level 1, Not Met | More than 2 standard errors below expected growth | Less than -2.00 | More than 0.40 standard deviations below expected growth | Less than -0.40 | Significant evidence that the teacher's students made less progress than expected growth (index is less than -2 and effect size is less than -0.4) |
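The table's two-step logic can be expressed as a short decision function. This is a sketch of the categorization rules as stated above, not EVAAS's actual implementation; the function name is illustrative.

```python
def categorize(index: float, effect_size: float) -> str:
    """Assign one of the four Michigan teacher growth levels (sketch)."""
    if index >= 2 and effect_size >= 0.4:
        return "Level 4, Exceeds"
    if index >= -2:                  # includes index >= 2 with effect size < 0.4
        return "Level 3, Met"
    if effect_size >= -0.4:          # index below -2, but magnitude within threshold
        return "Level 2, Nearly Met"
    return "Level 1, Not Met"

print(categorize(3.0, 0.5))    # Level 4, Exceeds
print(categorize(1.0, 0.1))    # Level 3, Met
print(categorize(-2.5, -0.2))  # Level 2, Nearly Met
print(categorize(-3.0, -0.6))  # Level 1, Not Met
```

Note that a teacher with an index of 2 or greater but an effect size below 0.4 still lands in Level 3: statistical certainty alone is not enough for Level 4 without sufficient magnitude.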
The distribution of these categories can vary by year, subject, and grade. There are many reasons for this variation, but in all cases the categories reflect both the strength of the evidence that students made more or less than expected growth and the magnitude of their growth above or below that expectation.