
Outcomes data collection: rely on creative and critical thinking

Your organization's impact — and its very existence — depends on outcomes. Underlying everything you do are questions such as:

  • Do our programs make a real difference in the lives of the people or the causes we serve?
  • Do we have evidence of impact to bring to our funders?
  • Given our limited resources, which of our programs deserve the most time and money?

Answering these questions calls for data. Unfortunately, there's no magic software that will automatically point you to the most useful data and collect it for you. Evaluating outcomes with data is a process that calls for creative and critical thinking. Get started by answering the following questions.

What outcomes do you want to measure?

Before collecting data, think clearly about what you intend to measure. Many nonprofits start by monitoring their activity. For example, you might want to measure:

  • How many people you serve
  • How many volunteers you recruit
  • The key demographics of your program participants
  • The specific programs that those people attended
  • How many people visit your website and what devices they use to do so

Once you have a clear picture of your activities, think about ways to measure their effects. There are perceived effects — for instance, what participants say when asked to evaluate your workforce development program. In addition, there are observed effects — such as how many participants are getting jobs and staying employed.

With specific outcomes in mind, what data will you collect?

Activity is generally the most straightforward kind of outcome to measure. Much of it boils down to raw numbers and text. Examples include program schedules, event attendance, and volunteer names and addresses.
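To make the "raw numbers" idea concrete, here is a minimal sketch of how activity data reduces to simple counts. The file layout, event names, and participant names are invented for illustration; your own attendance export will look different.

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical attendance export: one row per person per event
attendance_csv = """event,participant
Job Fair,Ana Diaz
Job Fair,Ben Okafor
Resume Workshop,Ana Diaz
"""

rows = list(csv.DictReader(StringIO(attendance_csv)))

# Raw activity numbers: total attendances, unique people served,
# and a per-event breakdown
total_attendances = len(rows)
unique_people = len({r["participant"] for r in rows})
per_event = Counter(r["event"] for r in rows)

print(total_attendances)      # 3
print(unique_people)          # 2
print(per_event["Job Fair"])  # 2
```

Even this small example shows why consistent formats matter: the counts are only trustworthy if every event and participant is recorded the same way each time.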

Measuring perceived effects calls for different data. You'll need to ask staff members, program participants and volunteers questions such as:

  • What specific knowledge and skills are program participants gaining?
  • What's their attitude toward our programs?
  • How can our programs be improved?

Measuring observed effects is even more complex. You might need to review case notes and data from third parties, such as employers who hire the graduates of your workforce development program.

The key is choosing data points that matter. It's possible to collect data that looks good in a report, but has no direct relationship to your desired outcome. When it comes to data, quality counts more than quantity.

How will you collect existing data?

You probably have a fair amount of data about your activities already on hand. However, it might "live" in various applications and handwritten notes filed away by individual staff members.

The first step in data collection is to find where all that disparate data is hiding. You might find:

  • Analytics information automatically collected by your website
  • Event information stored in calendar and scheduling software
  • Information about donors and volunteers stored in contact applications and paper-based lists

Import as much of this as possible into one central hub — ideally, an application that can analyze the data and generate reports. The application you choose for this purpose will depend on the nature of your organization's activities. For instance:

  • Case management systems store information about clients who use your services and their progress toward individual goals
  • Customer (or constituent) relationship management systems (or CRMs) store information about donors, volunteers, clients, customers and others who interact with your organization
  • Membership management systems track attendance, participation and membership renewals
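Whatever hub you choose, the core import task is the same: pull records out of each source and normalize them into one shared format. The following sketch assumes two hypothetical exports with different column names; the field names and data are invented.

```python
import csv
from io import StringIO

# Hypothetical exports from two separate tools, with different column names
donors_csv = "Name,Email\nAna Diaz,ana@example.org\n"
volunteers_csv = "full_name,email_address\nBen Okafor,ben@example.org\n"

def load(text, name_col, email_col, role):
    # Normalize each source into one shared record format for the hub
    return [
        {"name": r[name_col], "email": r[email_col].lower(), "role": role}
        for r in csv.DictReader(StringIO(text))
    ]

hub = load(donors_csv, "Name", "Email", "donor") + \
      load(volunteers_csv, "full_name", "email_address", "volunteer")

for record in hub:
    print(record["name"], record["role"])
```

The design choice worth noting is the single shared record format: once every source is mapped into it, reporting and analysis only need to understand one structure.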

How will you collect new data?

Collecting existing information is often called passive data collection. In contrast, active data collection means seeking out information that you don't already have. To measure the effects of your activities, for example, you could conduct focus groups or ask clients to fill out a survey.

Here's where you might supplement your hub application with more specialized software. Narrative analysis software can filter interview transcripts and focus group notes for key points.
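The simplest version of that filtering is keyword matching, which a short sketch can illustrate. The transcript lines and keywords below are invented; real narrative analysis tools go well beyond this.

```python
# Minimal illustration of keyword filtering, the kind of task narrative
# analysis software automates at scale. Transcript text is invented.
transcript = [
    "I learned how to write a resume in the workshop.",
    "The bus ride to the center takes too long.",
    "Mock interviews gave me real confidence.",
]

keywords = {"resume", "interview", "confidence"}

def key_points(lines, keywords):
    # Keep lines that mention any keyword (case-insensitive substring match)
    return [
        line for line in lines
        if any(k in line.lower() for k in keywords)
    ]

for line in key_points(transcript, keywords):
    print(line)
```

Here the first and third lines surface as key points, while the unrelated comment about the bus ride is filtered out.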

How will you keep the data clean?

Clean data is accurate and usable. In contrast, so-called dirty data — such as donor records with missing email addresses or phone numbers — is inconsistent and incomplete. One advantage of using a hub application is that it forces you to collect and store data in consistent formats.
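A basic cleanliness check can be automated. The sketch below flags records with missing or malformed contact fields; the record layout, names, and the simple email pattern are assumptions for illustration, not a substitute for your hub application's own validation.

```python
import re

# Hypothetical donor records, including two "dirty" ones
donors = [
    {"name": "Ana Diaz", "email": "ana@example.org", "phone": "555-0100"},
    {"name": "Ben Okafor", "email": "", "phone": "555-0101"},
    {"name": "Cal Reyes", "email": "cal@example", "phone": ""},
]

# Deliberately loose email check: something@something.something
EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def problems(record):
    # Return a list of data-quality issues found in one record
    issues = []
    if not EMAIL.match(record["email"]):
        issues.append("bad or missing email")
    if not record["phone"]:
        issues.append("missing phone")
    return issues

for d in donors:
    for issue in problems(d):
        print(d["name"], "->", issue)
```

Running a check like this on a regular schedule, rather than once, is what keeps a growing dataset usable.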

Train your staff members and volunteers to properly use your software for data collection. In addition, lift their eyes to the horizon and convince them that collecting data is worthwhile. It's all about measuring the outcomes that matter.



MissionBox editorial content is offered as guidance only, and is not meant, nor should it be construed as, a replacement for certified, professional expertise.





