Concept of Growth
Value-added reports provide reliable measures of the academic progress a group of students has made, on average, in a tested grade and subject or course. These measures are different from measures of student achievement. Achievement measures, such as test scores or the percentage of students who tested proficient or above, indicate where students performed academically at a single point in time. Growth measures indicate how much progress the students have made, as a group, over time.
To understand what growth means in the EVAAS reports, it's helpful to imagine a child's physical growth plotted on a chart. Every year, during the child's wellness check, the pediatrician will measure the child's height. Each measurement is then plotted on a chart to show the child's growth over time.
Typically, the child's growth curve doesn't follow a smooth line. Instead, there are "dimples" and "bubbles" along the way. These variations might occur because the child had a growth spurt or because there was error in measuring the height. Perhaps the child did not stand up straight for last year's measurement. Although the growth curve isn't smooth, we can see the child's progress over time. The child's height at a single point in time would not be meaningful to the doctor, but seeing the child's growth over time gives the doctor important information about the child's health and ongoing development.
A growth chart used by a pediatrician has more information than just that one child's growth. It also has curves that show average, or typical, growth for children at all heights. The pediatrician compares the child's growth to these curves to determine whether the child is making appropriate growth. The pediatrician would not be alarmed if a child's current height is at the 10th percentile if the child has been relatively short historically. On the other hand, if a child was average in height at a younger age but has not made expected growth over the past few years, then a current height at the 10th percentile might be cause for concern.
We can think about measuring students' academic growth in a similar way. Although EVAAS does not measure growth for individual students, this analogy can be helpful when thinking about growth measures for districts, schools, and teachers. When students are tested at the end of each grade or course, we can plot the scores for the group of students served, just as a pediatrician plots a child's physical growth. Like the pediatrician's graph, the curve we get from plotting the group's average achievement level each year will likely show a pattern of dimples and bubbles.
If we see a dimple in fourth-grade math in a school, that dimple is evidence that the instructional program for fourth-grade math might need to be examined. Likewise, if we see a dimple for the group of students for whom a teacher had instructional responsibility, that dimple is evidence that the teacher might need to adjust their instructional practices to better meet students' academic needs. Comparing the growth of this teacher's students to a standard expectation of growth helps determine whether the students' progress has been sufficient.
Let's consider another analogy: measuring the progress, or growth, of two track relay teams, Team A and Team B. Each team includes runners whose individual times contribute to the overall speed of the team. The two teams have performed very differently in past races.
- Team A has been lower performing, typically only beating about a third of the teams in the league.
- Team B has been higher performing, typically beating three-fourths of the teams in the league.
Given his team's historically lower performance, Team A's coach wants his team to grow and improve. Team B's coach wants to see growth for her team, too. Even though her team has been high performing, she still wants each runner to continue making strong progress. Both coaches need a meaningful way to measure their team's improvement, or growth.
To start, each coach needs a solid measure of the team's performance at the beginning of the season. The coach can then compare the team's performance at the beginning of the season to its performance at the end of the season to see how much the team has grown.
A data-savvy coach would avoid relying on a single race to determine the team's overall performance level at the beginning of the season. Imagine how inaccurate that might be. If the team had an unusually good day and each runner ran a personal best, the team would appear to be higher achieving than it actually is. Likewise, if one runner stumbled and cost the relay team a lot of time, the team would appear to underperform that day. Instead of using a single race time, the coach would consider data from multiple races and practices to get a good sense of how the team is performing at the start of the season. That assessment would provide a solid starting point for measuring the team's growth and improvement.
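To make the idea of a multi-race baseline concrete, here is a minimal sketch with hypothetical race times (illustrative only, not EVAAS's statistical methodology), showing that an average over several races is pulled far less by one unusual result than a baseline taken from a single race.

```python
# Illustrative sketch only: hypothetical relay times (in seconds) for
# Team A across several early-season races and practices; lower is faster.
team_a_times = [212.4, 209.8, 215.1, 210.6, 231.9]  # last entry: a runner stumbled

single_race_baseline = team_a_times[-1]                      # judging by one race
multi_race_baseline = sum(team_a_times) / len(team_a_times)  # averaging all races

print(f"Single-race baseline: {single_race_baseline:.1f} s")  # 231.9 s
print(f"Multi-race baseline:  {multi_race_baseline:.1f} s")   # 216.0 s
# The averaged baseline is affected far less by the one bad race than a
# baseline taken from that race alone.
```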
Both coaches would reasonably expect their teams to run faster after a year of practice and training. But faster race times aren't necessarily enough to move a team up in the standings. To demonstrate that kind of growth, each team needs to improve relative to the other teams. If Team A improves its race times but goes from beating a third of the teams in the league to beating only a fifth of them, it has not improved as much as the other teams have. Likewise, if Team B improves its already fast race times but goes from beating three-fourths of the teams to beating only two-thirds of them, then the coach should be concerned that her team did not show enough growth and improvement that year.
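The following minimal sketch, again with hypothetical numbers rather than EVAAS's actual models, shows how a team can run faster in absolute terms yet slip in relative standing because the rest of the league improved even more.

```python
# Illustrative sketch only: hypothetical times (in seconds) for the ten
# other teams in the league, last season and this season; lower is faster.
league_last_year = [200, 204, 208, 212, 216, 220, 224, 228, 232, 236]
league_this_year = [195, 199, 203, 207, 211, 215, 219, 223, 227, 231]

def share_beaten(team_time, league_times):
    """Fraction of league teams this team finishes ahead of (runs faster than)."""
    return sum(t > team_time for t in league_times) / len(league_times)

# Team A gets faster (227 s -> 224 s), yet its relative standing drops.
print(share_beaten(227, league_last_year))  # 0.3 -> beat about a third of teams
print(share_beaten(224, league_this_year))  # 0.2 -> beat only a fifth of teams
# Team A improved in absolute terms but grew less than the rest of the league.
```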
Despite their teams' very different levels of performance, the key question in both coaches' minds is whether their team's performance in the league improved, dropped, or stayed about the same. Both coaches have the same expectation and the same goal. They expect their team to at least maintain their standing in the league, and they both have a goal of helping the team improve and perform at a higher level compared to their peers.
The approach these coaches use to measure their teams' growth is similar to the way EVAAS determines the academic growth of students served by districts, schools, and teachers. EVAAS growth measures provide a reliable comparison of the achievement level of each group of students from one year to the next. Just as the two track teams in our analogy had different levels of performance, the students in different districts, schools, and classrooms across the state are at different academic achievement levels.
Despite these differences, all educators want to help their students grow and improve. To determine whether students are making enough growth, we need reliable growth measures based on as much data as possible.