Take simple steps to improve data quality and accuracy
Measuring outcomes is a critical part of telling your organization's story. Once you know what data you're collecting, you can take care of the prep work within — and beyond — your organization.
Before you begin collecting data, specify who'll be responsible for managing the process. For example:
- Identify a leader. Whether you're a team of one or 50, someone needs to be accountable for leading the effort. There's nothing worse than realizing your 500 brochures weren't sent because you didn't identify who was supposed to put them in the mail.
- Create the team. Who'll be on the front line tabulating raw numbers? If you're a shoestring nonprofit where the executive director or chief executive wears most (or all) of the hats, it can be helpful to think of your desired team in broader terms than just "more people." What resources or support do you need?
- Train and educate. Will your team need special training before they can get started? Are you asking them to pass out surveys on the street? Or are you building an elite team to manage a complex observational study? Specify ahead of time if there are prerequisite skills required or if you'll be training on the job. If you'll be revealing sensitive information, such as staff salaries, make sure everyone at the table can be trusted with that information.
Participants are more likely to speak openly and honestly if they know their opinions will be kept confidential. You can ensure they'll feel safe to participate by developing strong privacy practices and adopting confidentiality clauses. For example:
- Protect confidentiality. Assure participants that the measures you're taking to protect identities will keep their results confidential. If needed, provide literature or other resources for participants to read at home. For long-term projects, consider attaching ID numbers in place of names. This will allow you to keep a master list in a secure file at another location, thus adding a second level of safety.
- Practice consent. Many people are happy to participate in a nonprofit research project, but it's best if they're informed ahead of time. Let them know what data you're tracking and why. Do you need to have them sign a release or is verbal consent okay? How you plan to use their feedback in the future will help determine what steps to take.
- Evaluate your measurement campaign from the participant's perspective. For example, if you're planning to interview new mothers, consider that it'll be hard for them to find time for a four-hour one-on-one session. Likewise, gathering information from minors will require parental approval that needs to be secured ahead of time. Reasonable planning beforehand can make all the difference for the actual collection process.
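The ID-number approach described above can be sketched in a few lines. This is a minimal illustration with hypothetical names and responses; the point is that the working dataset carries only IDs, while the master list linking IDs back to names lives in a separate, secured file.

```python
# Hypothetical participants; in practice these come from your intake records.
participants = ["Alice Rivera", "Ben Okafor", "Carmen Diaz"]

# Assign each person an ID. Only the ID travels with survey responses;
# the master_list mapping is stored securely at another location.
master_list = {f"P{i:03d}": name for i, name in enumerate(participants, start=1)}

# The working dataset is keyed by ID only, so no names appear in it.
responses = {
    "P001": {"q1": 4, "q2": 5},
    "P002": {"q1": 3, "q2": 4},
}

# Re-identifying someone requires access to the separately stored master list.
print(master_list["P001"])
```

Keeping the mapping physically and logically separate from the responses is what adds the second level of safety the text mentions.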
Stepping up quality and accuracy
With a few simple strategies, you can dramatically improve both the quality and accuracy of the data you'll be gathering. Double-entry systems, spot checking and database formatting may not sound like particularly exciting work, but they'll help you catch discrepancies before they become a major issue.
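A double-entry check can be as simple as comparing two independently keyed copies of the same records. The sketch below uses hypothetical record IDs and values: two people enter the same survey results, and any field where the copies disagree is flagged for review.

```python
# Two independent entries of the same hypothetical survey records.
entry_a = {"R001": 4, "R002": 5, "R003": 3}
entry_b = {"R001": 4, "R002": 2, "R003": 3}  # R002 was mistyped by one enterer

# Flag any record where the two entries disagree (or one is missing).
discrepancies = [rid for rid in entry_a if entry_a[rid] != entry_b.get(rid)]

print(discrepancies)
```

Each flagged record gets checked against the original paper form, so a typo is corrected before it skews your totals.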
To improve quality and boost response rates:
- Keep in touch with potential participants. Host events, send newsletters or find other ways to keep your cause top of mind. Be careful not to be overzealous, though. No one likes an inbox full of unwanted emails.
- Tie data collection to intended impacts. Frame measurement as part of a larger goal (which it is) — let people know how their responses will contribute to the next stage of the project. For instance, perhaps you need to achieve a particular result before a donor will release additional funding.
- Offer multiple ways to communicate information. You can improve response rates by giving participants options for providing feedback. Some people may want to have an exit interview over the phone while others would be more comfortable responding to an anonymous online survey. To survey populations that don't speak your native language, you might provide a translator or offer bilingual information.
Ultimately, the goal is to make it as easy as possible for people to respond. A low response rate won't provide an accurate picture of how your projects are progressing and can mean you miss out on opinions and other valuable feedback. In contrast, a high response rate will improve both the validity and reliability of your results.
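If it helps to put a number on this, a response rate is just returned responses divided by the number solicited. The figures and the 60% cutoff below are illustrative assumptions, not a standard; pick a threshold that fits your project.

```python
# Hypothetical counts from a survey mailing.
surveys_sent = 250
surveys_returned = 180

response_rate = surveys_returned / surveys_sent
print(f"Response rate: {response_rate:.0%}")

# Illustrative threshold: below it, treat the results with extra caution.
if response_rate < 0.60:
    print("Low response rate: results may not represent all participants.")
```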
Understanding validity and reliability
Validity refers to how accurately a test measures what it's designed to measure. For example: Jane steps on a bathroom scale every morning at the same time. Assuming all factors are equal, the scale should consistently flash her current weight of 140 pounds. If this is correct and Jane actually weighs 140 pounds, then the scale has validity as a measurement method because it's displaying an accurate result.
Reliability refers to the test's ability to produce stable and consistent results. Can Jane count on the scale to produce the same results day after day? In another context: do questions on your survey repeatedly get the same results from respondents, whether men or women, and across other demographic groups?
Assuming Jane isn't actively trying to gain or lose weight, the scale should accurately capture her weight (validity) every single time it's used (reliability). However, a scale can also be reliable without being valid. If it consistently tells Jane that she weighs only 65 pounds, it's technically a reliable method (flashing the same number again and again), though not a valid one, because it doesn't reflect her actual weight.
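The scale example can be made concrete with two hypothetical sets of readings: closeness of the average reading to the true weight stands in for validity, while the spread across repeated readings stands in for reliability.

```python
from statistics import mean, pstdev

true_weight = 140.0  # Jane's actual weight, per the example

good_scale = [140.1, 139.9, 140.0, 140.0]    # accurate and consistent
broken_scale = [65.0, 65.0, 65.1, 64.9]      # consistent but wrong

for label, readings in [("good scale", good_scale),
                        ("broken scale", broken_scale)]:
    bias = mean(readings) - true_weight   # closeness to truth -> validity
    spread = pstdev(readings)             # consistency -> reliability
    print(f"{label}: bias={bias:+.1f} lb, spread={spread:.2f} lb")
```

The broken scale shows a tiny spread (reliable) but a large bias (not valid), exactly the situation the paragraph above describes.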