The Predictive model can be used with any assessment that has sufficient prior testing. This prior testing, or set of predictors, can include assessments in the same subject or in different subjects.
This model generates an expected score for each student. Entering achievement reflects students' achievement before the current school year, or when they entered a grade and subject or course.
An expected score is the score the student would make on the selected assessment if the student made average, or typical, growth. To generate each student's expected score, we build a robust statistical model of all students who took the selected assessment in the most recent year. The model includes the scores of all students in the state, along with their testing histories across years, grades, and subjects.
By considering how all other students performed on the assessment in relation to their testing histories, the model calculates an expected score for each student based on their individual testing history.
To ensure precision in the expected scores, a student must have at least three prior assessment scores. This does not mean three years of scores or three scores in the same subject, but simply three prior scores across grades and subjects.
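As a simplified illustration of the idea, the sketch below regresses students' scores on the selected assessment against a single composite predictor (the mean of each student's prior scores) and returns each student's predicted, or expected, score. This is an assumption for illustration only; the actual model uses each student's full testing history across grades and subjects and is far more robust.

```python
from statistics import mean

def expected_scores(prior_histories, actual_scores, min_priors=3):
    """Illustrative sketch: predict each student's expected score from a
    simple regression on a composite of their prior scores. Assumption:
    the real model uses full testing histories, not a single composite."""
    # Require at least three prior scores per student, as described above.
    for history in prior_histories:
        if len(history) < min_priors:
            raise ValueError("each student needs at least three prior scores")
    x = [mean(h) for h in prior_histories]  # composite of prior testing
    y = actual_scores
    x_bar, y_bar = mean(x), mean(y)
    # Ordinary least-squares slope and intercept.
    slope = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
             / sum((xi - x_bar) ** 2 for xi in x))
    intercept = y_bar - slope * x_bar
    # Each student's expected score, given their own testing history.
    return [intercept + slope * xi for xi in x]
```

A student with a stronger testing history gets a higher expected score, because the regression captures how all students' prior testing relates to their performance on the selected assessment.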
Let's consider an example. Zachary is a high-achieving student who has scored well on state assessments for the past few years, especially in math. To predict Zachary's score on the assessment, we:
- Determine the relationships between the testing histories of all students and their exiting achievement on this assessment in the same year.
- Use these relationships to determine what the expected score would be for Zachary, given his own personal testing history.
Based on Zachary's testing history, a score at the 83rd percentile would be a reasonable expectation for him.
In contrast, Adam is a low-achieving student who has struggled in math. His prior scores on state assessments are low. Just as with Zachary, we use the relationships between the testing histories of all students and their exiting achievement on the assessment statewide to determine an expected score for Adam. Based on Adam's own personal testing history, a score at the 26th percentile would be a reasonable expectation for him.
Once an expected score has been generated for each student in the group, the expected scores are averaged. Because this average expected score is based on the students' prior test scores, it represents the entering achievement in this subject for the group of students.
Next, we compare the students' exiting achievement on the assessment to their entering achievement. If a group of students scores what they were expected to score, on average, we can say that the group made average, or typical, growth. In other words, their growth was similar to the growth of students at the same achievement level across the state. This is the definition of meeting expected growth in the predictive model.
If a group of students scores significantly higher than expected, we can conclude that the group made more growth than their peers across the state. If a group scores significantly lower than expected, the group did not grow as much as their peers.
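The comparison described above can be sketched as follows. Note that the actual analysis applies a statistical test of significance; the fixed `threshold` parameter here is a stand-in assumed purely for illustration.

```python
from statistics import mean

def growth_verdict(expected, actual, threshold=5.0):
    """Illustrative sketch: compare a group's exiting achievement (actual
    scores) to its entering achievement (expected scores). Assumption:
    `threshold` stands in for the real significance test."""
    # Average exiting achievement minus average entering achievement.
    diff = mean(actual) - mean(expected)
    if diff > threshold:
        return "more growth than expected"
    if diff < -threshold:
        return "less growth than expected"
    return "met expected growth"
```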
The growth measure is a function of the difference between the students' entering achievement and their exiting achievement. This value is expressed in scale score points and indicates how much higher or lower the group scored, on average, compared to what they were expected to score given their individual testing histories. For example, a growth measure of 9.3 indicates that, on average, this group of students scored 9.3 scale score points higher than expected. When generating growth measures for Teacher Value-Added reports, students are weighted for each teacher based on the proportion of instructional responsibility claimed in the data submitted for analysis.
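A weighted version of that difference can be sketched as below. The weighting scheme shown is an illustrative assumption, not the actual reporting formula; it simply shows how a student who is claimed for only part of the instructional time would contribute proportionally less to a teacher's growth measure.

```python
def weighted_growth_measure(expected, actual, weights):
    """Illustrative sketch: weighted mean of (actual - expected), in scale
    score points. Assumption: `weights` represent each teacher's claimed
    proportion of instructional responsibility for each student."""
    total_weight = sum(weights)
    return sum(w * (a - e)
               for e, a, w in zip(expected, actual, weights)) / total_weight
```

For example, a student claimed at 50% instructional responsibility contributes half as much to the measure as a student claimed at 100%.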