Teacher Value-Added Summary
Understanding the Report
The growth index is a reliable measure of whether students exceeded, met, or fell short of expected growth. This value takes into account the amount of growth the students made as well as the amount of evidence above or below expected growth. Specifically, the growth index is the growth measure divided by its standard error.
Each growth index is color-coded to indicate how strong the evidence is that the teacher's students exceeded, met, or fell short of expected growth.
Teacher Value-Added reports categorize teacher growth measures using a two-step process based on the growth index and the effect size.
The growth index is the growth measure divided by its standard error, which is specific to each estimate. The effect size is the growth measure divided by the student-level standard deviation of growth; it indicates the magnitude, or practical significance, of the evidence that the group of students met, exceeded, or fell short of expected growth.
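As a hypothetical illustration, the two indicators can be computed directly from their definitions. The numbers below are invented for the example and do not come from any actual report:

```python
# Hypothetical values for illustration only; not from an actual report.
growth_measure = 3.0   # estimated average growth of the teacher's students
standard_error = 1.2   # standard error of that estimate
student_sd = 11.0      # student-level standard deviation of growth

# Growth index: strength of evidence relative to the estimate's uncertainty.
growth_index = growth_measure / standard_error

# Effect size: magnitude relative to student-level variability in growth.
effect_size = growth_measure / student_sd

print(round(growth_index, 2))  # 2.5
print(round(effect_size, 2))   # 0.27
```

In this example the growth index clears the +2 certainty threshold, but the effect size falls short of the 0.4 magnitude threshold described below.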
- The first step uses the growth index to determine thresholds for the statistical certainty that the growth measure is above or below expected growth. The thresholds are an index of +2 or greater, an index of -2 or less, or an index between -2 and +2. These thresholds are similar to the concept of a 95% confidence interval: if a 95% confidence interval around the growth measure did not contain the growth expectation, the growth index would fall outside the -2 to +2 range.
- The second step uses the effect size to determine whether the growth measure is above or below the growth expectation by a certain magnitude. The second step uses effect size thresholds of +0.4 and -0.4. These values correspond to a "medium" effect size as referenced in John Hattie's work.*
*John Hattie, Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement (London: Routledge, 2008).
Michigan Department of Education policies specify four categories for teacher growth. The table below describes these levels and how they are color coded on the Teacher Value-Added reports.
|Color|First Step: Growth Measure Compared to Expected Growth|Index|Second Step: Effect Size Compared to Threshold|Effect Size|Interpretation|
|---|---|---|---|---|---|
|Level 4, Exceeds|At least 2 standard errors above expected growth|2.00 or greater|At least 0.40 standard deviations above expected growth|0.40 or greater|Significant evidence that the teacher's students made more progress than expected growth (index is greater than or equal to 2 and effect size is greater than or equal to 0.4)|
|Level 3, Met|No more than 2 standard errors below expected growth|-2.00 or greater|Less than 0.40 standard deviations above expected growth|Less than 0.40|Evidence that the teacher's students made progress similar to expected growth (index is greater than or equal to -2 and either the index is less than 2 or the effect size is less than 0.4)|
|Level 2, Nearly Met|More than 2 standard errors below expected growth|Less than -2.00|No more than 0.40 standard deviations below expected growth|-0.40 or greater|Evidence that the teacher's students made less progress than expected growth (index is less than -2 and effect size is greater than or equal to -0.4)|
|Level 1, Not Met|More than 2 standard errors below expected growth|Less than -2.00|More than 0.40 standard deviations below expected growth|Less than -0.40|Significant evidence that the teacher's students made less progress than expected growth (index is less than -2 and effect size is less than -0.4)|
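The two-step categorization above can be sketched as a small function. This is an illustrative reading of the stated thresholds (index cutoffs of +2 and -2, effect size cutoffs of +0.4 and -0.4), not an official implementation:

```python
def growth_level(index: float, effect_size: float) -> str:
    """Categorize a growth measure using the two-step process:
    first the growth index (statistical certainty), then the
    effect size (practical magnitude)."""
    if index >= 2 and effect_size >= 0.4:
        return "Level 4, Exceeds"
    if index >= -2:
        # Index is at least -2, but either below +2 or effect size below 0.4.
        return "Level 3, Met"
    if effect_size >= -0.4:
        # Index is below -2, but the magnitude is within -0.4.
        return "Level 2, Nearly Met"
    return "Level 1, Not Met"

print(growth_level(2.5, 0.45))   # Level 4, Exceeds
print(growth_level(2.5, 0.27))   # Level 3, Met (effect size below 0.4)
print(growth_level(-2.3, -0.2))  # Level 2, Nearly Met
print(growth_level(-3.1, -0.5))  # Level 1, Not Met
```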
The distribution of these categories can vary by year, subject, and grade. There are many possible reasons for this, but overall, the categories are based on the amount of evidence that students made more or less than the expected growth and on the magnitude of their growth above or below that expectation.
Each growth measure is a conservative estimate of the academic growth a teacher's students made, on average, in a grade and subject or course. A teacher's growth measure is based on the students linked to the teacher and the percentage of instruction the teacher provided to each student. Because growth measures are estimates, consider their associated standard errors as you interpret the values.
See also: Measuring Growth.
All growth measures on the EVAAS reports are estimates. All estimates have some amount of measurement error, which is known as the standard error. This value defines a confidence band around the growth measure, which describes how strong the evidence is that the group of students exceeded, met, or fell short of expected growth.
For more information about standard errors, see Growth Measures and Standard Errors.
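As a hypothetical numeric example, a rough 95% confidence band spans about two standard errors on either side of the growth measure:

```python
# Hypothetical values; roughly 95% of a normal distribution lies within
# about two standard errors of the estimate.
growth_measure = 3.0
standard_error = 1.2

band_low = growth_measure - 2 * standard_error
band_high = growth_measure + 2 * standard_error
print(round(band_low, 1), round(band_high, 1))  # 0.6 5.4

# If expected growth (assumed here to be represented by 0 on this scale)
# lies outside the band, the evidence is strong that the students exceeded
# or fell short of expected growth.
print(band_low > 0)  # True
```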
Expected Growth represents the point at which a teacher's students' scores, on average, align with expectations.
Expected Growth signifies the minimum amount of academic growth that educators should expect a group of students to make in a subject and grade or course. In general, this signifies appropriate, expected academic growth. Simply put, the expectation is that regardless of their entering achievement level, students served by each district, school, or teacher should at least make enough progress to maintain their achievement level relative to their peers. This is a reasonable target for educators who serve all types of students. Expected Growth is represented by a vertical green line in the graph.
The effect size is the growth measure divided by the student-level standard deviation of growth. The effect size provides an indicator of magnitude and practical significance that the group of students met, exceeded, or fell short of Expected Growth.
The standard deviation describes variability in the growth made by individual students within a given year, subject, and grade. Dividing the growth measure by the standard deviation provides an indicator of magnitude and practical significance that the group of students met, exceeded, or fell short of Expected Growth. The practical significance is related to how large the growth measure is relative to the student-level standard deviation in the given subject and grade.
Effect Size Threshold
The effect size thresholds of -0.4 and 0.4, multiplied by the standard deviation, are denoted by the orange bars in the chart. For example, if the standard deviation is 11, the chart shows the thresholds (orange bars) at -4.4 and 4.4. These values represent the amount of growth needed to demonstrate an effect size of -0.4 or 0.4. If the growth measure is beyond these bars, then it has exceeded the threshold.
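The bar positions in the example above can be computed directly, assuming a student-level standard deviation of 11 as in the text:

```python
# Effect size threshold from the text; standard deviation from the example.
threshold = 0.4
student_sd = 11.0

lower_bar = -threshold * student_sd
upper_bar = threshold * student_sd
print(round(lower_bar, 1), round(upper_bar, 1))  # -4.4 4.4

# A growth measure beyond either bar exceeds the effect size threshold.
growth_measure = 5.0  # hypothetical
print(growth_measure > upper_bar or growth_measure < lower_bar)  # True
```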
Interpreting Effect Sizes
Effect sizes are sometimes classified as small, medium, or large to help interpret whether differences in student performance are meaningful. Various researchers have offered thoughts on what defines a small, medium, and large effect size.
- Cohen describes +/- 0.20 as small, +/- 0.50 as medium, and +/- 0.80 as large (Jacob Cohen, Statistical Power Analysis for the Behavioral Sciences, 2nd ed. (Mahwah, NJ: Lawrence Erlbaum, 1988)).
- Hattie describes an effect size of 0.40 as the average seen across all interventions and refers to it as the "hinge point" (John Hattie, Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement (London: Routledge, 2008)).
- Kraft suggests less than 0.05 as small, 0.05 to 0.20 as medium, and greater than 0.20 as large, based on the distributions of effect sizes and the associated changes in achievement (Matthew A. Kraft, "Interpreting Effect Sizes of Education Interventions," Educational Researcher 49, no. 4 (2020): 241-253).
All the researchers agree that it is important to interpret results within the distribution of actual results. In other words, what constitutes a small, medium, or large effect size is determined by what is observed in the actual results.
To see the list of students linked to the teacher in the data, click the Student List button above the chart.
The Used in the Analysis column indicates whether each student was used in the analysis that generated the teacher's value-added report. The most common reasons that students are excluded from the analysis are:
- They don't have assessment scores
- They aren't linked to a teacher
- They don't meet membership or attendance rules
- Their scores are outliers
- They are new to the state
- They don't have enough prior assessment data
Some of these exclusion rules might not apply to all subjects. Statistical Models and Business Rules describes various conditions that can cause a student to be excluded from the analysis.
Teachers who have a Teacher Value-Added report in the selected subject, grade, or course in the most recent year have access to this report.
To view the Composite report, select that option from the Tests/Subjects menu. A teacher's composite is a combined measure of tested subjects, grades, and courses in which the teacher received a value-added report. For more information, see Composites.