Prepare for impact measurement
Do you wonder if you're ready to measure program outcomes? Before you dive in, ask yourself these questions:
- Is the program aligned with the organization's mission and strategy? An effective program begins with a clear mission and tight strategy.
- Are you confident that the program's design and implementation will lead to the desired outcomes? For example, a two-hour workshop isn't adequate to help most first-generation prospective college students prepare to apply to college. More support is needed around navigating high school success and the college application process.
- Are you actually observing the outcomes you want to measure? Be careful to avoid wishful thinking, or measuring outcomes that you're not actually influencing yet. This is like getting on the scale before starting a diet and exercise regimen — as though the act of weighing yourself will help you lose weight.
If you answer "yes" to these three questions, you're ready! Consider these 10 steps to make your outcomes measurable.
1. Select the most important outcomes sequence to measure
- Don't pick the outcomes that look easiest to measure. Remember, any outcome can be made measurable with the steps outlined below. You want to measure the outcome sequence that most closely represents the heart or purpose of your program, or what's most important to your participants' success.
- Don't measure everything. You're a service provider, not a researcher. Measure only what's necessary to learn about your program's effectiveness and improve results for your participants.
2. Identify indicators for your outcomes
Indicators are measurable data that reveal whether participants have achieved success on a priority outcome. They describe what it looks like when an outcome occurs and how you'll know it has happened.
In general, there are three types of indicators:
- Yes/no (such as graduation rates)
- Count (such as school attendance)
- Scales or ladders with defined steps (such as an improvement in skills or knowledge or the achievement of initial outcomes)
To measure initial outcomes, you may select or design questions to measure participants' knowledge or skills. If you're just starting to measure outcomes, consider this simple three-step scale:
- Red: outcome not observed
- Yellow: outcome partially observed
- Green: outcome observed
The goal is to define exactly what it looks like for your participants to demonstrate the outcome, and what progress looks like along the way. Indicators are specific to your participants' unique demographics, culture, language and situation, as well as the unique expectations of your program. Be sure you're selecting indicators that represent the outcome you seek to measure, rather than a related outcome on your logic model.
By setting indicators, you're defining success for your program and its participants. Be careful not to set the bar so high that no one can attain it, nor so low that everyone attains it. Your goal is to improve outcomes attainment overall.
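As a sketch of how the three-step scale above translates into data, here's a small Python example that tallies hypothetical red/yellow/green ratings and computes an overall attainment rate. The ratings and the choice to count only "green" as full attainment are illustrative assumptions, not a prescribed method.

```python
from collections import Counter

# Hypothetical ratings, one per participant, on the three-step scale
# described above: red (not observed), yellow (partial), green (observed).
ratings = ["green", "yellow", "green", "red", "green", "yellow"]

counts = Counter(ratings)
total = len(ratings)

# Attainment rate here means the share of participants rated "green";
# your program may define attainment differently.
attainment_rate = counts["green"] / total

for level in ("red", "yellow", "green"):
    print(f"{level}: {counts[level]} of {total} ({counts[level] / total:.0%})")
print(f"Overall attainment rate: {attainment_rate:.0%}")
```

Even a simple tally like this lets you track whether attainment improves from one program cycle to the next.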
3. Determine who will provide the most accurate data on the indicator
This might be a participant, a staff member or a third party (such as a parent, teacher or partner). This person is the respondent in the measurement process.
4. Design or select a measurement tool or process that fits your indicators and your respondent
Examples of written or electronic sources include surveys, interview protocols, pick-list scales and measurement logs. Also consider these online resources for outcomes evaluation. When assessing a tool's relevance, ask these questions:
- Does the tool address the outcome to be measured?
- Does the tool address the indicators used by the program to determine outcomes success?
- Are the questions phrased appropriately — both culturally and developmentally — so that participants can understand what's being asked and provide a truthful response?
- What method of administration — such as face-to-face or in writing — will lead to the best quality and number of responses from the target population?
- Is the tool the appropriate length? Does it contain the necessary questions for the given indicators?
- Do I know how to compile the results from this tool?
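On that last question, compiling results can be as simple as averaging responses per question. The sketch below summarizes hypothetical pre- and post-program survey scores on a 1–5 scale; the questions and numbers are invented for illustration.

```python
# Hypothetical survey responses on a 1-5 scale for two questions,
# collected before ("pre") and after ("post") the program.
responses = {
    "I know the steps to apply to college": {"pre": [2, 3, 2, 1], "post": [4, 4, 3, 5]},
    "I can identify financial aid options":  {"pre": [1, 2, 2, 3], "post": [3, 4, 4, 4]},
}

summary = {}
for question, scores in responses.items():
    pre_avg = sum(scores["pre"]) / len(scores["pre"])
    post_avg = sum(scores["post"]) / len(scores["post"])
    summary[question] = (pre_avg, post_avg)
    print(f"{question}: pre {pre_avg:.2f} -> post {post_avg:.2f} "
          f"(change {post_avg - pre_avg:+.2f})")
```

If you can't picture a summary like this for a tool you're considering, that's a sign the tool may not fit your indicators.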
5. Consider what internal and external factors will influence your participants' change process
You may decide to measure a couple of these to enrich your understanding of how to promote success.
6. Decide how often you'll collect data, who'll do it and how
This is your work plan for data collection. If you're not specific about who, what, where, when and how, data collection won't happen. Here are some tips:
- Be sure the staff or volunteers you ask to help with measurement tasks have the time and training to take on these tasks.
- Make a plan to protect the confidentiality of individual participant data, if you haven't done so already.
- Determine how you'll notify participants about the data collection process.
- Select the participants from whom you'll collect outcomes data (as in, those who've had sufficient dosage and duration to reach the expected level of improvement at each phase of the program).
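The dosage-and-duration screen in the last tip can be made explicit in your work plan. The sketch below filters hypothetical participant records against assumed thresholds (8 sessions and 90 days enrolled); the names, fields and cutoffs are placeholders your program would define for itself.

```python
from datetime import date

# Hypothetical participant records: sessions attended and enrollment date.
participants = [
    {"name": "A", "sessions": 12, "enrolled": date(2024, 1, 15)},
    {"name": "B", "sessions": 3,  "enrolled": date(2024, 5, 1)},
    {"name": "C", "sessions": 10, "enrolled": date(2024, 2, 1)},
]

# Assumed thresholds: sufficient dosage (sessions attended) and
# duration (days enrolled) before outcomes data is collected.
MIN_SESSIONS = 8
MIN_DAYS = 90
as_of = date(2024, 6, 1)

eligible = [
    p for p in participants
    if p["sessions"] >= MIN_SESSIONS
    and (as_of - p["enrolled"]).days >= MIN_DAYS
]
print([p["name"] for p in eligible])
```

Writing the rule down this concretely keeps data collectors from surveying participants who haven't yet had enough of the program to show change.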
7. Consider where you'll store the data
There's some great customer relationship management (CRM) software available to nonprofits. Consider this guide to CRM options.
8. Try out your measurement plan before you bake it into your program and database
Odds are, something will need to be tweaked. During the trial run, remember that your measurement system should enhance your relationship with your participants — not disrupt it.
9. Ask the right questions after your trial run
Consider these questions from United Way of America:
- Outcome findings. Did you get all the data you need? Did you actually measure what you intended to measure? Do you still think what you measured is important? Does it appear that your findings will be useful?
- Measurement system features. Do you have adequate data collection instruments? Data collector training? Data entry procedures? Monitoring procedures for the measurement system? Are the time and cost of collecting and analyzing data reasonable?
10. Remember that you're the expert
Be comfortable with your own knowledge. You know your participants and what success should look like for them. That said, do rely on guidance from any evaluators involved in designing your system.