
Measuring Campaigns

Following are guidelines and recommendations for how to measure the impact of your outreach (nudge campaigns) that are run through Administrative Analytics.

Takeaway: When planning your campaign and selecting your strategy for measuring outcomes, work with your Civitas team to determine the best methods and metrics to gauge impact across multiple measures, including key student success metrics, persistence, and completion.

Here are guidelines and recommendations for measuring the outcomes of your nudge campaigns, in five areas:

Running a Campaign — A nudge campaign is a communication strategy, built upon a data-inspired opportunity, to nudge a targeted group of students to achieve a specific, measurable outcome. Conducting a nudge campaign is a simple but effective way to improve outcomes at your institution. Each nudge campaign involves these steps:

  1. Identify a data-inspired opportunity to improve outcomes

  2. Select a target student group

  3. Determine the owner and sender(s) for the nudges

  4. Set a timeline for the campaign

  5. Choose a content strategy grounded in behavioral psychology and designed to influence mindset

  6. Craft your nudges with engaging subject lines, personalized and encouraging messaging, and relevant and timely calls-to-action

Design it to be measured. When planning a campaign, decide how you want to measure the campaign and design it to support that. Knowing the impact of a campaign is important because you can use the results from different nudge campaigns to find what approaches and content work and which do not, as well as which nudges work best for which student groups. With each nudge campaign, you can iterate on these learnings to increase your campaign efficacy.

1 - Campaign Performance Metrics

Using Administrative Analytics, you can create, send, and track your nudge campaigns within the application. Under the Outreach tab, you will see key details for the campaign. For each nudge campaign, the following information is available:

  • Name of the nudge campaign

  • Nudge campaign context

  • Subject line of the message

  • Who sent the message

  • Date the message was sent

  • Content of the message

  • Filters used to create the Student List

Other email campaign tools - You may use an external email system or communication tool for your campaigns, but be sure to document the key information listed above for each campaign, so that you can measure the outcomes.
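If you do use an external tool, a lightweight record like the following can capture the key details listed above. This is only a sketch; the class and field names are illustrative, not a Civitas schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class NudgeCampaignRecord:
    """One record per campaign; field names are illustrative."""
    name: str
    context: str
    subject_line: str
    sender: str
    sent_on: date
    message_body: str
    student_list_filters: list = field(default_factory=list)

record = NudgeCampaignRecord(
    name="Spring re-enrollment nudge",
    context="Encourage early registration",
    subject_line="Your spring classes are waiting",
    sender="advising@example.edu",
    sent_on=date(2020, 3, 2),
    message_body="Hi {first_name}, registration opens Monday...",
    student_list_filters=["FTIC", "Full-time"],
)
```

Keeping one such record per campaign makes it straightforward to line up outcomes against the filters and content used.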

After the nudge has been sent, Administrative Analytics reports real-time analytics so you can monitor campaign success. These analytics appear on each campaign and are included in the report (CSV) data download:

  • Open Rate

  • Click-Through Rate

These metrics provide important campaign performance information about the success of your campaign and offer insights for how to improve the efficacy of future campaigns.
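If your campaign runs through an external tool, the same two metrics can be computed from a per-recipient export. This is a minimal sketch; the column names are assumptions, not the Administrative Analytics schema:

```python
import csv
import io

# Hypothetical per-recipient export with open/click flags per student.
export = io.StringIO("""\
student_id,opened,clicked
s1,1,1
s2,1,0
s3,0,0
s4,1,0
""")

rows = list(csv.DictReader(export))
sent = len(rows)
open_rate = sum(int(r["opened"]) for r in rows) / sent            # opens / sends
click_through_rate = sum(int(r["clicked"]) for r in rows) / sent  # clicks / sends
```

With real data, the same calculation can be repeated per subject line or per sender to support the comparisons discussed below.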

 Questions to Explore

  • Which nudges had the highest open rates? The lowest? Are there any notable differences among these nudges?

  • If you differentiated outreach by prediction score or another filter group, what are the differences in open and click-through rates?

  • How would you compare the subject lines between nudges with high open rates and those with low open rates? Based on your findings, what improvements could you make to subject lines overall?

  • If you experimented with different senders, what effect did this have on open rates? Do you notice a difference between those nudges sent from a generic account versus those from an individual familiar to the students receiving the nudge?

  • Which nudges had the highest click-through rates? The lowest? Are there any notable differences between the content of these nudges and the call-to-action?

Reports show that open rates for higher education are on the high end of cross-industry ranges and click-through rates fall in the middle. Partners who have run nudge campaigns have shown incredible engagement, with open rates averaging in the 40-60% range, and some partners seeing over 70% engagement with nudges:

                      Across Industries    Higher Education    Nudge Campaigns
Open rates            10% - 25%            19.93%              40% - 60+%
Click-through rates   4% - 14%             8.33%

These rates show us that it is possible to beat benchmarks by being thoughtful about choosing recipients and testing subject lines, sender addresses, and nudge content that might compel those students the most.

In addition, we have seen a strong correlation between open rates and prediction scores, with open rates rising alongside persistence likelihood. Average open rates:

  • Very high and high persistence scores: 72%

  • Moderate persistence scores: 60%

  • Very low and low persistence scores: 40%

Once you have explored these campaign metrics, identify which nudges had the highest response rates that you would want to implement again, and flag those nudges that need improvement. Consider testing different subject lines and senders in your next campaign. Monitoring open rates and click-through rates and learning from what worked can help improve your campaign efficacy and lead to better student outcomes.

2 - Qualitative Data

For each nudge campaign, collect qualitative data and anecdotal evidence as part of your campaign measurement. Qualitative analysis deals with data that cannot be measured in numbers but can be important in gaining an understanding of underlying reasons, opinions, and motivations.

These indicators of student engagement, response, and activity can be helpful yardsticks in determining the success of your nudging efforts on eventual student persistence and graduation.

Qualitative data can include:

  • Student response.​ Collect student replies to the nudge. What part of the message resonated most with them? How did they respond? What voice or emotion do you notice in these student responses?

  • Anecdotes and quotes from advisors and faculty. ​What conversations are students having with advisors, faculty, and other student service resources as a result of the nudge campaign? What are their observations about the effects of the nudge campaign?

  • Contact with student services.​ Is there an observed increase in visits or interactions with student support and academic services that correlates to the timing of the nudges? Do students mention the nudge when seeking out these services?

    Encourage advisors, faculty, and other student service staff and resources to collect and share their stories. Learn about student reactions to these nudges. Anecdotes and observations can provide important insight into student mindset and behavior and can help inform your next nudge campaign. Use qualitative data as one part of your overall campaign measurement plan.

Sample Student Responses from Partner Institutions:

“I wanted to say thank you. I have been having a rough couple of weeks with everything in my life and this small email really helped me. So, thank you, and I am excited to be able to meet with you next week to plan for my future.”

“I was wondering if it would be smart for me to drop my philosophy 103 class. I bombed my first exam and I’m still not understanding what we’re learning. Would it be better to have the W than to get an F? Because I don’t see myself doing any better. Thanks!”

3 - Leading Indicators

Based on the call-to-action you have embedded in your nudge campaign, you can also monitor leading indicators that show students are engaging in recommended behaviors correlated to student success.

When designing your nudge campaign, determine the call-to-action for each nudge and identify if there is data you could collect to determine how many and which students engaged in the suggested activity or behavior connected to improved student success. Before you launch your campaign, determine what data you want to collect as part of your overall campaign measurement plan. Select your methods for gathering and reporting this data.

For example, if a nudge encourages students to make an advising appointment to complete their degree plan, create a system to track which students who got the nudge made an advising appointment and which did not. You could also track the appointment type and topic for deeper analysis.

Examples of these indicators include:

  • Advising interactions.​ Did the nudge have an effect on the number or types of advising visits and interactions?

  • Tutoring center visits.​ If students were nudged to visit the tutoring center or other academic support resource, was there an increase in the number or length of these visits?

  • Resource usage. ​If students were directed to a specific online or campus resource, did students engage with these resources and how?

  • Course enrollments.​ Did the nudge encourage students to enroll early, take one more course to get closer to degree completion, or enroll in a specific support course? Were those goals achieved?

    Exploring your data in multiple ways provides key insights into your student population and learnings about which behaviors and activities are most strongly connected to improved student outcomes. Continue to nudge these activities in future campaigns.
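The advising-appointment example above (tracking which nudged students followed the call-to-action) can be sketched in a few lines. The IDs and record shapes here are hypothetical:

```python
# Hypothetical data: the nudge recipient list and advising appointments
# logged after the send date.
nudged = {"a1", "a2", "a3", "a4"}
appointments = [
    {"student_id": "a1", "type": "degree planning"},
    {"student_id": "a3", "type": "registration"},
    {"student_id": "z9", "type": "degree planning"},  # not in the nudge group
]

# Students who received the nudge and made an appointment...
booked = {appt["student_id"] for appt in appointments} & nudged
# ...and those who did not, as candidates for a reminder nudge.
no_response = nudged - booked
engagement_rate = len(booked) / len(nudged)
```

Tracking the appointment type alongside the student ID, as above, supports the deeper analysis mentioned earlier.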

4 - Student Success Metrics

The next level of analysis in looking at campaign measurement is to track the change in the outcome for the target group of students. Usually, this means looking at the same population before and after the intervention. If your nudge was designed to:

  • Impact course success rates or persistence, what was the average course success rate or persistence rate for the same target population in the same term the prior year in comparison to the nudge year?

  • Move students across the finish line, what was the graduation application rate vs. the prior year? What was the graduation rate?

  • Influence students to take an additional course, how many of the students who received the nudge actually signed up for an additional class? (This is considered a success metric because it brings the student closer to their degree and is typically incremental revenue for an institution.)

    For example, if an institution sent a series of “belonging” nudges to all incoming transfer students in their first term, what was the actual persistence rate for this group of students for the term where the nudge was sent? Then, compare it to the same group’s persistence rate in the same term for the prior year as well as the trend over the last several years.

    Does this tell you impact? Not necessarily. Because it does not control for potential differences in the student population, you cannot know for sure whether there was impact.

    However, ask yourself the following questions when looking at success metrics over time:

    • What was the trend for this population over the past several years? Was the trend flat for several years and then did it change in the nudge year? If it changed, in which direction? If the trend is not flat, what change in persistence do you observe? For example, if persistence has moved up by 1% per year for several years, but in the term where you ran the nudge campaign the increase is larger, say 3%, you might discount the difference by the earlier trend and consider it a 2% improvement.

    • Are there any known differences between the nudge year population and the prior year population? Ex. Were there changes in admissions requirements?

    • Were there significant changes in policy or practice at the institution or in the general environment that would explain the differences in persistence?

    • If all other things are held equal between the nudge population and the prior year population, and the only significant difference was the nudge, it may appear that the nudge made the difference. This would provide evidence of a correlation between the nudge and the outcomes. While this will not absolutely confirm impact, and is not as rigorous as running a randomized controlled trial or using prediction-based propensity score matching to control for student differences, it does give an indication of whether the nudge had an effect on student success.

      Sometimes, when nudge populations are small (less than 1,000) or difficult to match to a comparison population, this may be the only means at your disposal to look at results. When doing so, be careful to note that while the nudge may well be the cause of the outcomes, you cannot ascribe causality through this approach. There may be unknown factors influencing the student populations and therefore the outcomes.
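The trend adjustment described in the first question above is simple arithmetic; the values here come from the 1%-per-year example:

```python
# Values from the example above: a pre-existing upward trend of roughly
# 1 percentage point per year, and a 3-point change in the nudge year.
historical_trend = 0.01
observed_change = 0.03

# Discount the observed change by the pre-existing trend.
adjusted_improvement = observed_change - historical_trend  # ~0.02, i.e. 2 points
```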

To get much closer to causality you need to use Prediction-based Propensity Score Matching to control for differences between students and to rigorously measure the impact of the campaign.

  • Guidelines for Selecting Student Success Metrics:

  1. Determine the student success goal for your campaign. Goals can include successful course completion, enrolling in one more course, or improved persistence. (See #5 below for guidelines on how to measure persistence impact.)

  2. When crafting your nudge, include a ​specific call-to-action​ that encourages students to demonstrate a behavior or participate in an activity that will increase their chances of reaching the campaign goal.

  3. Establish and implement a ​data collection strategy​ for a descriptive analysis of these student success metrics.

Some of the student success metrics that you can evaluate include:

  • Successful course completion.​ For the select student group, what was the rate of successful course completion?

  • GPA.​ If the nudge encouraged students to finish strong and improve their grades, was there an effect on GPA?

  • Persistence rates. ​For the target student group, what was the impact on persistence?

  • Graduation. If the nudge encouraged near completers to finish strong and graduate that term, how many students in the target group graduated at the end of the term?

Example: An institution conducted a nudge campaign during the Fall 2017 term to encourage re-enrollment via a series of three email nudges targeting First Time In College (FTIC), full-time students. There were no other targeted student success initiatives aimed at this exact student group, and their experience and path were very similar to those of previous students at the institution. After the census date of the Spring 2018 term, the institution analyzed the student list from the campaign to see how many students persisted and calculated an actual persistence rate (how many of the students who received outreach persisted versus did not persist). They then compared the Fall 2017 actual persistence rates with the persistence rates of Fall 2016, 2015, and 2014 FTIC, full-time students to see if the outreach had a directional impact on the overall persistence of the group that received the nudges.

By collecting data on these different student success metrics, you will be able to conduct a descriptive analysis to provide an indication of the success of the campaign. You can look at differences for the entire nudged population, those students who opened the nudge, and students who participated in the call-to-action. Consider the time duration between the nudged call-to-action and the outcome you are measuring. The closer the action to the outcome, the more likely the action contributed to that outcome.
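The before/after comparison in the example above can be sketched as a simple rate-by-term calculation. The records below are hypothetical stand-ins for the institution's student list:

```python
from collections import defaultdict

# Hypothetical flat records for the target group:
# (term, student_id, persisted_to_next_term)
records = [
    ("Fall 2016", "s1", True), ("Fall 2016", "s2", False),
    ("Fall 2017", "s3", True), ("Fall 2017", "s4", True),
]

def persistence_by_term(rows):
    """Actual persistence rate per term: persisted / total."""
    counts = defaultdict(lambda: [0, 0])  # term -> [persisted, total]
    for term, _, persisted in rows:
        counts[term][0] += int(persisted)
        counts[term][1] += 1
    return {term: p / n for term, (p, n) in counts.items()}

rates = persistence_by_term(records)
```

Comparing the nudge-year rate to the prior-year rates in `rates` gives the directional view described above, subject to the caveats about causality.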

5 - Persistence Impact Analysis

You also may want to measure the impact of your large nudge campaigns on overall student persistence through a statistically rigorous approach. (Persistence is defined as a student enrolling in a specified future term and staying enrolled past the add/drop date.) For a nudge campaign to have a measurable impact on persistence, it must be designed to nudge students to behave in a specific way that will increase their likelihood to persist. Impact on persistence may be measured using prediction-based propensity score matching (PPSM).

  • Prediction-based Propensity Score Matching (PPSM)

    To provide a rigorous standard of analysis for quasi-experimental initiative design that meets What Works Clearinghouse guidelines, Civitas Learning developed proprietary software to measure impact using Prediction-based Propensity Score Matching (PPSM).

    To enable our partners to understand the impact of an initiative or intervention, such as a nudge campaign, we use PPSM to match participant students who received a nudge with similar comparison students who did not receive the nudge, in order to control for selection bias often seen through other measurement approaches and render more precise apples-to-apples comparisons. PPSM matches students based on highly similar persistence predictions (those featured in Administrative Analytics and Inspire for Advisors) and propensity scores, which represent students' likelihoods to participate in the initiative. By accounting for students’ likelihoods both to persist and to participate in the initiative, PPSM ensures that the students who received the nudge are matched to very similar students who did not before calculating the measurement of impact. The results then reveal whether the campaign had a statistically significant impact on persistence.
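To make the matching idea concrete, here is a toy nearest-neighbor match on two scores per student: a persistence prediction and a propensity score. Civitas's PPSM is proprietary and considerably more sophisticated; this sketch only illustrates the concept, and all IDs and scores are made up:

```python
# (student_id, persistence_prediction, propensity_score)
participants = [("p1", 0.82, 0.40), ("p2", 0.55, 0.65)]
comparisons = [("c1", 0.80, 0.42), ("c2", 0.57, 0.60), ("c3", 0.30, 0.10)]

def match(participants, comparisons):
    """Greedy nearest-neighbor matching without replacement on
    (persistence prediction, propensity score)."""
    pool = list(comparisons)
    pairs = []
    for pid, pred, prop in participants:
        # Closest comparison student by squared distance in score space.
        best = min(pool, key=lambda c: (c[1] - pred) ** 2 + (c[2] - prop) ** 2)
        pairs.append((pid, best[0]))
        pool.remove(best)  # each comparison student is used at most once
    return pairs

pairs = match(participants, comparisons)
```

After matching, the persistence rates of the two matched groups are compared to estimate impact.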

Not all campaigns can be measured through this approach. That does not mean you shouldn't run them or that they don't have value. You can still learn from the other measurement approaches listed above, and a campaign can still have an impact even if it cannot be measured at a more granular level. PPSM should be used when you have carefully designed the campaign, answering the questions and meeting the requirements below.

✓ Is the nudge campaign meant to affect persistence?

Persistence is the nudge campaign outcome that is currently measured using PPSM as the persistence model is necessary for matching. Persistence is defined as a student enrolling in a specified future term and staying enrolled past the add/drop date. Nudge campaigns should be designed to nudge students to behave in a way that will increase their likelihood to persist to facilitate the appropriate analysis.

For example, if your institution uses a Fall - Fall persistence model, the campaign should be designed to encourage students to behave (see an advisor, register, utilize tutoring services, etc.) in a way that will likely increase their chances of persisting to the following Fall term.

✓ Will your nudge campaign reach enough students?

More students and more nudges mean a greater likelihood of statistically significant results. Nudge campaigns targeting larger groups of students are more easily measured and more likely to reach statistical significance. The smaller the target student population, the larger the lift in persistence must be between the participants and the comparison group to reach statistical significance. Therefore, the fewer students used for analysis, the lower the likelihood of achieving statistically significant results. Drill-down results (i.e., impact by sub-population) will also be affected, as the sample size for a specific student group within the overall participant group could be very small.

Also, the number of eligible comparison students should be at least as large as the number of participants to ensure as many participants as possible can be matched to comparable students for analysis. Civitas typically sees between a 1 and 3 percentage point lift in overall persistence that is statistically significant for nudge campaigns targeting approximately 2,000 students. The table below shows the size of the target student population for the nudge campaign and the corresponding lift typically needed to detect statistical significance.

Target Student Population Size    Percentage Point Lift Typically Needed for Statistical Significance
1,000                             3.1%
2,000                             2.2%
3,000                             1.4%

Assumptions: Statistical Significance (alpha) = 0.05; Power (1 – beta) = 0.8; Assumed Match Rate = 90%
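A rough version of this power calculation can be done with the standard two-proportion formula, solved for the smallest detectable lift. This is only a sketch: the 70% baseline persistence rate is an illustrative assumption (the table above does not state one), so the numbers will not exactly reproduce the table:

```python
from math import sqrt
from statistics import NormalDist

def minimum_detectable_lift(n_targeted, baseline=0.70, alpha=0.05,
                            power=0.80, match_rate=0.90):
    """Approximate the smallest persistence lift (as a fraction) detectable
    for a campaign of a given size, assuming equal-sized participant and
    comparison groups after matching. The baseline rate is an assumption."""
    n = n_targeted * match_rate  # students remaining after matching
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return z * sqrt(2 * baseline * (1 - baseline) / n)

for size in (1000, 2000, 3000):
    print(f"{size:>5} students -> {minimum_detectable_lift(size):.1%} lift needed")
```

The qualitative pattern matches the table: the detectable lift shrinks as the target population grows.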

✓ How do you want to measure the campaign?

There are currently two recommended options for Civitas measurement of campaigns run through Administrative Analytics. Each option has pros and cons, so selecting the appropriate approach is an institution-specific choice. We encourage you to consult with your Partner Success Consultant on which option will work best for your team.

Option 1: Hold out a comparison (or control) group of a similar or larger size within the target student population. This could be done through randomization, selecting half of the nudge population to receive the nudge and half not to receive it. However, this is usually not a desired approach for partners, for logistical and ethical reasons. The approach more often used is to identify a few programs, departments, or campuses to receive the nudge and use the others as the comparison group. PPSM then controls for student-level differences within the comparison population. A benefit of this approach is that sample sizes are typically larger than in Option 2. The downside is that the comparison group must be documented and captured for measurement, leading to a somewhat slower measurement process than Option 2 and a greater likelihood of data integrity issues in capturing the correct comparison group for analysis.

Option 2: Use the students who did not open the nudge as the comparison (or control) group. Nudge emails are sent to students, but we typically see approximately a 60% open rate. If the nudge population is large enough, we can use PPSM to match students who did not open the nudge to students who did and measure the difference. Benefits of this approach are that all of the data needed for measurement is captured in Administrative Analytics, data integrity is high, and once the census date passes, measurement is faster than other approaches. However, this approach decreases the sample size and may reduce the chances of reaching statistical significance. Furthermore, when response rates increase with higher prediction scores, impact numbers generated from opened-vs.-unopened PPSM analyses are likely to be understated. The reason is that the matching process will discount students with high prediction scores in proportion to the degree of imbalance between opened and unopened cases across prediction scores. In contrast, regular PPSM analyses will match high-prediction-score students who open their emails with comparison students with similarly high prediction scores.

Both approaches measure campaigns by comparing student populations from within the same term, which is highly recommended. The dynamic nature of data in higher education makes using comparison students from a previous term or year problematic, so that approach is not recommended.

✓ Who, exactly, is in the target student group for the nudge campaign?

You must clearly define and document the group of students that will be targeted for the nudge campaign. Who, from the eligible population, will receive the outreach and who will not? Are there any exceptions? Or, will recipients be selected based on specific, non-random criteria? Which Administrative Analytics filters can be applied to get a list of the target student group? Administrative Analytics can be used as the source of documentation. Data on the nudge group is captured at the time the nudge is sent.

However, if you choose to use Option 1 for measurement (hold out a comparison group) you will need to document how the nudge group was selected and which student populations (campus, department, program, etc) will be the comparison group.

✓ Are there potential confounding factors?

There are ​confounding factors,​ or other circumstances, that could affect the nudge campaign participants or comparison group and make it difficult to determine what exactly influenced outcomes.

Consider the following common confounding factors during nudge campaign design prior to impact analysis:

  • Either the participating group or comparison group contains a single study unit: e.g., one of the groups is representative of a single advisor/faculty member/course/etc. In this case, it would be difficult to tease out whether the difference in outcomes was due to the nudge campaign or the advisor/faculty member/course.

  • The participating group and the comparison group are systematically different in a way that may be directly related to persistence outcomes, e.g., the participants have a high GPA and the comparison group has a low GPA. Since the analysis measures impact on persistence outcomes, if the participation criteria are chosen based upon something that may correlate to persistence, it will be challenging to find enough comparison students to match with the participants.

  • Another initiative is offered to the same group at the same time, e.g., first-time, full-time students are required to attend a Student Success Course and are participants in a nudge campaign during their first term. This is a problem because it is difficult to isolate the effects of one initiative. You can measure the impact, but you will not be able to determine which of the two initiatives led to the outcome.

  • The participating group and the comparison group are from different time periods or terms and persistence outcomes were measured at different points in time.

Even when using matching techniques, confounding factors cannot simply be eliminated or ignored from analysis. If confounding factors are a possibility, any impact analysis should include appropriate caveats and results should be interpreted with such confounding characteristics in mind.

Communicating Your Results

Often, after measuring the results of a campaign, our partners want to translate the lift in persistence into additional students retained or dollars of retained revenue.

To calculate additional students retained:

  • Multiply the lift in persistence (ex. 2.3%) by the number of students in the nudge population (ex. 1,436) to identify the number of additional persisting students from within that population (ex. .023 x 1,436 = 33 additional persisting students)

To calculate retained revenue:

  1. Identify the average revenue per credit hour. Often revenue includes more than just tuition. What is the overall funding that your institution receives per enrolled credit hour per student? (ex. $156)

  2. Identify the average credit hour enrollment per student. (ex. 9 hours)

  3. Multiply the revenue per credit hour by the average credit hours by the count of additional persisting students.

     ($156 x 9 x 33 = $46,332)
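The two calculations above can be captured in a couple of small helper functions, using the example values from the steps ($156 per credit hour, 9 credit hours):

```python
def additional_students_retained(lift, nudge_population):
    """lift is a fraction, e.g. 0.023 for a 2.3 percentage-point lift."""
    return int(lift * nudge_population)

def retained_revenue(lift, nudge_population, revenue_per_credit, avg_credit_hours):
    """Retained revenue = additional students x revenue/credit x credits."""
    students = additional_students_retained(lift, nudge_population)
    return students * revenue_per_credit * avg_credit_hours

students = additional_students_retained(0.023, 1436)  # 33 additional students
revenue = retained_revenue(0.023, 1436, 156, 9)       # $46,332 retained
```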
