
Time-tested options for tracking progress toward an outcome or goal

Ready to dive into outcomes data collection methods? Begin by identifying key indicators that will help measure progress, then choose effective ways to track them.

Key indicators

Indicators are the specific, measurable signs that track progress toward an outcome or goal. For example: tracking the number of children who attend a local school readiness program will help "indicate" to volunteers, taxpayers and donors that many children are benefiting from their work.

Indicators aren't goals in themselves, so achievement terms such as "improved" or "decreased" don't belong in them. Instead, indicators measure inputs, outputs, processes and outcomes to "indicate" signs of progress.

A well-chosen indicator should be precise and include a plan for collecting data from your target population. It should also take into account the type of information you need, the validity of your method and available resources — including time, money and human capital.
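
To make this concrete, here's a minimal sketch in Python of how an indicator might be recorded alongside its collection plan. The `Indicator` class, its field names and the attendance figures are all hypothetical, invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """A measurable sign of progress -- not a goal in itself."""
    name: str          # what is being counted or measured
    population: str    # who the data comes from
    method: str        # how the data will be collected
    measurements: list = field(default_factory=list)

    def record(self, period: str, value: float) -> None:
        """Log one measurement tied to a specific time period."""
        self.measurements.append((period, value))

# Hypothetical example: attendance at a school readiness program
attendance = Indicator(
    name="children attending the school readiness program",
    population="enrolled families in the district",
    method="monthly sign-in sheets",
)
attendance.record("2024-Q1", 112)
attendance.record("2024-Q2", 134)
```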

Timing

Next, consider when data will be collected and how often. Although time span and frequency will vary depending on your objectives, a link to a specific time period is critical. For example, participants may be given surveys before and after services, or you may evaluate classroom behavior changes every quarter. Without a link to a specific time period, there's no way to prove that change occurred.
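
As a small illustration of the before-and-after pattern, the sketch below compares hypothetical pre- and post-service survey scores for the same participants. The scores and the 1-to-5 scale are invented for illustration:

```python
# Hypothetical pre/post survey scores (1-5 scale) for the same five
# participants, collected before and after services -- the link to a
# specific time period that makes change demonstrable.
pre_scores = [2.1, 3.0, 2.5, 1.8, 2.9]
post_scores = [3.4, 3.8, 3.1, 2.6, 3.5]

# Average change per participant over the service period
changes = [post - pre for pre, post in zip(pre_scores, post_scores)]
mean_change = sum(changes) / len(changes)
print(f"Mean change from baseline: {mean_change:+.2f} points")
```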

Data collection methods

There are many ways to track progress toward an outcome or goal. Consider these time-tested options:

  • Surveys. The primary advantage of surveys is that they allow you to collect information from a huge audience for a relatively low cost. The main disadvantage, especially for surveys sent by mail, is that the response rate can be unpredictable, which might jeopardize the integrity of the results (a quick response-rate check appears after this list).
  • Interviews. One-on-one evaluations provide a holistic picture of your programs and allow you to explore or ask for clarification on responses as they come. For this reason, you can delve deeply into complex issues, getting answers that wouldn't be possible with a survey. The main deterrent to interviews is their high cost. Even when conducted efficiently, interviews require a substantial time investment.
  • Focus and other small groups. When participants are encouraged to debate observations and opinions and provide real-time feedback, you'll get a snapshot of how the population at large might react to your work. In this setting, small groups can generate a staggering amount of information in a short amount of time. The main issue with this method is that findings can be hard to generalize to the population at large. Still, focus groups are often used successfully as trial balloons before sending out surveys or launching expensive interview campaigns.
  • Observations. Observing participants is about a lot more than sticking them in a room and watching what happens. Observation as a collection method is based on detailed guidelines about what to observe, when and for how long. When done well, observations are considered reliable, valid data points. The main drawbacks are the time investment and the need for observers to be well-trained, which can be costly up front.
  • Records and documents. Studying previous initiatives or pulling out old measurement rubrics can provide a valuable touch point to evaluate how your organization has changed. If you have a backlog of data to study, this can be an economical and efficient method. If not, a few changes to your current efforts may help you start to build a trove of information to track progress down the road.
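
As mentioned under surveys, a low response rate can undermine your results, so it's worth computing the rate explicitly before drawing conclusions. A minimal sketch, with invented numbers:

```python
def response_rate(responses: int, surveys_sent: int) -> float:
    """Fraction of distributed surveys that came back."""
    return responses / surveys_sent

# Hypothetical mail survey: 1,000 sent, 180 returned
rate = response_rate(180, 1000)
print(f"Response rate: {rate:.0%}")  # 18% -- low enough to question the results
```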

Checks and balances

Before you begin collecting data, put some time into making sure your methods will produce both valid and reliable results. These two properties are key to evaluating the effectiveness of your measurement systems.

Validity refers to how accurately a test measures what it's designed to measure. For example: Jane steps on a bathroom scale every morning at the same time. Assuming all factors are equal, the scale should consistently flash her current weight of 140 pounds. If this is correct and Jane actually weighs 140 pounds, then the scale has validity as a measurement method because it's displaying an accurate result.

Reliability refers to the test's ability to produce stable and consistent results. Can Jane count on the scale to produce the same results day after day? In another context: do questions on your survey repeatedly get the same results across respondents and demographic groups?

Assuming Jane's not actively trying to gain or lose weight, the scale should accurately capture her weight (validity) every single time it's used (reliability). However, it's also possible for a scale to be reliable but not valid. If the scale consistently tells Jane that she weighs only 65 pounds, it's technically a reliable method (flashing the same number again and again), though not a valid one, because it doesn't reflect her actual weight.
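
The scale example translates into two simple checks: reliability as the consistency of repeated readings, and validity as the closeness of their average to the true value. Here's a minimal Python sketch; the readings are invented:

```python
from statistics import mean, stdev

TRUE_WEIGHT = 140.0  # Jane's actual weight in pounds

def assess(readings: list[float]) -> None:
    bias = mean(readings) - TRUE_WEIGHT  # validity: is the average accurate?
    spread = stdev(readings)             # reliability: are readings consistent?
    print(f"mean={mean(readings):.1f}  bias={bias:+.1f}  spread={spread:.2f}")

assess([140.1, 139.9, 140.0, 140.2, 139.8])  # reliable and valid
assess([65.0, 65.1, 64.9, 65.0, 65.1])       # reliable but not valid
```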

Instrument development and pretesting

Ideally, you'll pretest collection methods with a focus group or small group of typical participants to catch troublesome issues before launch and determine what works best for your target audience.

For instance, a poster aimed at encouraging parents to vaccinate their children will have a very different message than a spotlight advertisement on a podcast exploring holistic medicine. Though both are pro-health initiatives, they speak to different audiences.

Pretesting helps ensure you're conveying a clear and consistent message. Pretesting also helps program leaders:

  • Choose message style and content
  • Polish key words and images
  • Stimulate creative work
  • Strategize improvement initiatives


Disclaimer

MissionBox editorial content is offered as guidance only, and is not meant, nor should it be construed as, a replacement for certified, professional expertise.


