FAQ for Analytics

The following are answers to frequently asked questions about Administrative Analytics.

Changes in Predictions

Why did last week’s persistence prediction of 90% jump to 95% today?
It’s typical to see persistence predictions shift during the open registration period, because “next-term enrollment” is an important variable in modeling persistence. As students register (or don’t), that variable changes and the prediction moves with it.
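
For intuition only, and not a description of the actual model, here is a logistic-style sketch of how flipping one strong variable, such as a next-term registration flag, can move a probability from 90% to 95%. The score and weight values below are made up for the example.

```python
import math

def sigmoid(z: float) -> float:
    """Map a model score to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical values chosen to reproduce the 90% -> 95% jump in the question.
base_score = 2.197           # sigmoid(2.197) is roughly 0.90
registration_weight = 0.75   # made-up weight on a "registered next term" flag

print(f"Before registering: {sigmoid(base_score):.2f}")                        # 0.90
print(f"After registering:  {sigmoid(base_score + registration_weight):.2f}")  # 0.95
```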

We had no students with a “very low” persistence prediction, and now we have 60!
Predictions update at critical time thresholds. For example, when the term starts, students who have not registered change to an Inactive status, which can shift many predictions at once. Note that Powerful Predictors are not the same set of data as the model variables; they are often a subset.

Algorithm and Modeling

How many prior years of data does the algorithm use? Will there be generational drift?
The persistence model uses the most recent two years of historical data: enough volume for modeling, while still accurately representing current students. Each time we retrain the model, we apply the same rolling two-year window, so the training data keeps up with your ever-changing student population and guards against generational drift.
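
As a minimal sketch of that rolling window, assuming a hypothetical "records" DataFrame with a "term_start" column (not the product’s actual schema):

```python
import pandas as pd

def training_window(records: pd.DataFrame, retrain_date: pd.Timestamp,
                    years: int = 2) -> pd.DataFrame:
    """Keep only the most recent `years` of records for retraining."""
    cutoff = retrain_date - pd.DateOffset(years=years)
    return records[records["term_start"] >= cutoff]

# Each retraining re-applies the same window, so older cohorts roll off and
# the training set tracks the current student population.
```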

How can a student be predicted to have low persistence yet high completion?
This unintuitive pairing might point to an event or aberration in the student’s situation: their long-term indicators point toward academic success, but something triggered the model to predict a short-term problem. These flags are meant to help advisors know when to reach out, where they might uncover a family, financial, or medical situation needing support.

How can a student be predicted to have high persistence yet low completion?
Long-term indicators, such as cumulative GPA and credits earned, have a significant impact on completion predictions. These indicators are often what separate the High/High group from the High/Low group. Some behaviors, such as re-enrolling after an absence, attending part-time, and attempting too few credits, may have little impact on immediate continuation (persistence) but bode poorly for completing a credential on time.

Since LMS engagement is heavily weighted, can the algorithm deal fairly with courses that don’t use the LMS?
Most of the LMS model variables focus on relative comparisons, such as looking at a student’s LMS activity only relative to peers in the same section or course. That helps the model work around uneven use of the LMS across specific faculty, courses, and time periods.
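
A minimal sketch of that peer-relative idea, using hypothetical column names ("section_id", "lms_logins"): each student is standardized against classmates in the same section, so a lightly used LMS course doesn’t penalize its students.

```python
import pandas as pd

def peer_relative_logins(df: pd.DataFrame) -> pd.Series:
    """Z-score each student's LMS logins within their own course section."""
    by_section = df.groupby("section_id")["lms_logins"]
    # Comparing only within a section means a course where everyone logs in
    # rarely is scaled the same way as a heavy-LMS course.
    return (df["lms_logins"] - by_section.transform("mean")) / by_section.transform("std")
```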
