Updated Statistical Test

Product Update - Released On - 02/08/19

Updated statistical test to provide more clarity

Based on user feedback, we have updated the p-value and confidence interval calculations in Impact to be more consistent with one another and more accurately reflect the results of your Impact analyses.

Because of this, if you plan to reassess an initiative that was analyzed previously, we recommend re-submitting all prior term data as part of the new analysis so that every term is evaluated with the same, updated calculations.


Interested in the details behind this enhanced clarity? Keep reading ...

Impact previously calculated p-values with a one-tailed test, which assumes that an initiative's impact will typically be positive. Given the variety of initiatives and signals analyzed in Impact each day, it became clear that Impact should use a more conservative statistical test, so the p-value calculation is now based on a two-tailed test. In addition, the confidence interval calculation is now derived from the same mean values across bootstrap samples that the p-value calculation uses.

Testing of this update showed very little change in impact results for initiatives with large sample sizes and persistence lift calculations. The only anticipated difference in historical impact results is that initiatives with smaller sample sizes and persistence lift calculations may now show more results that are not statistically significant.
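
For readers who want a concrete picture of what a two-tailed bootstrap test looks like, below is a minimal sketch in Python. It is an illustration only, not the exact implementation used in Impact: the function name, the resampling scheme, the number of bootstrap samples, and the 95% significance level are all assumptions made for the example. The key point it demonstrates is that the two-tailed p-value and the percentile confidence interval are both computed from the same set of bootstrap means.

    import numpy as np

    def bootstrap_lift_test(treated, control, n_boot=10000, alpha=0.05, seed=0):
        # Illustrative sketch (not Impact's implementation): two-tailed
        # bootstrap test of a persistence lift, where the p-value and the
        # confidence interval come from the same bootstrap means.
        # treated, control: arrays of 0/1 persistence outcomes.
        rng = np.random.default_rng(seed)
        treated = np.asarray(treated)
        control = np.asarray(control)
        observed_lift = treated.mean() - control.mean()

        # Resample each group with replacement and record the lift in means.
        boot_lifts = np.empty(n_boot)
        for i in range(n_boot):
            t = rng.choice(treated, size=treated.size, replace=True)
            c = rng.choice(control, size=control.size, replace=True)
            boot_lifts[i] = t.mean() - c.mean()

        # Two-tailed p-value: how often the bootstrap distribution, centered
        # at zero, produces a lift at least as extreme (in either direction)
        # as the observed lift.
        centered = boot_lifts - boot_lifts.mean()
        p_value = np.mean(np.abs(centered) >= abs(observed_lift))

        # Percentile confidence interval from the same bootstrap means.
        ci_low, ci_high = np.percentile(
            boot_lifts, [100 * alpha / 2, 100 * (1 - alpha / 2)]
        )
        return observed_lift, p_value, (ci_low, ci_high)

Because the p-value considers extreme lifts in both directions, this kind of test is more conservative than a one-tailed test, which is why smaller initiatives may see more results that fall short of statistical significance.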

