
Assess Evidence: Tracking, Pre-Post Tests, Surveys and Focus Groups

How do I assess evidence?

Administrative offices and grant-funded programs come in many forms, and will therefore be exploring many different types of evidence to see if their goals are being met. It is possible, however, to break down different types of evidence, and their associated assessment tools, by the type of question you are asking. If you have a logic model, look at your outputs and impacts to determine your question.

Match each type of question to the appropriate tools:

  • How many/how much of something (time, people, services): track totals, averages, and percentages
  • Whether a population has learned something (knowledge, skills): conduct pre-post tests
  • How a population feels about or experiences something: conduct surveys, interviews, and focus groups

What is data tracking?

Data tracking is a way to keep an eye on how your program is running by recording measures at set points in time and comparing those measures over time.

For example, a tutoring program might want to track the number of students coming in each day (output), the number of times an individual student comes in (output), and any changes in that student's GPA (impact). At the end of the academic year, the program can count the total number of students who came in, the average number of times any one student came in, and the percentage of students whose GPA increased over the academic year.

To do this kind of assessment, it is best to use some sort of software that allows you to easily calculate these types of summary measures. Most often, people use a spreadsheet program (e.g. Microsoft Excel), but you can also use more sophisticated database software that lets you pull customizable reports based on the information you are interested in (e.g. Microsoft Access).
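If your tracking data lives in a spreadsheet export, the same summary measures can also be computed with a short script. The sketch below uses a hypothetical visit log for the tutoring example above; the field names (student_id, gpa_start, gpa_end) and values are illustrative, not part of any real system.

```python
# Hypothetical tutoring-program log: one row per visit (illustrative data).
visits = [
    {"student_id": "S1", "gpa_start": 2.8, "gpa_end": 3.1},
    {"student_id": "S1", "gpa_start": 2.8, "gpa_end": 3.1},
    {"student_id": "S2", "gpa_start": 3.5, "gpa_end": 3.4},
    {"student_id": "S3", "gpa_start": 2.0, "gpa_end": 2.6},
]

total_visits = len(visits)                    # output: total visits logged
students = {v["student_id"] for v in visits}  # distinct students served
avg_visits = total_visits / len(students)     # average visits per student

# Impact: percentage of students whose GPA increased over the year.
by_student = {v["student_id"]: v for v in visits}
improved = sum(1 for v in by_student.values() if v["gpa_end"] > v["gpa_start"])
pct_improved = 100 * improved / len(by_student)

print(total_visits, round(avg_visits, 2), round(pct_improved, 1))
```

A spreadsheet can do the same with COUNT, AVERAGE, and a percentage formula; the point is simply that each measure maps to one of the outputs or impacts in your logic model.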

What are pre-post tests?

Pre-post tests are a way to measure whether your population knows more about something, or is better able to do something, as a result of participating in your program. In many ways, this type of assessment is similar to the student learning assessment found on the academic side of things, but instead of measuring the effectiveness of an academic course or curriculum, you are measuring the effectiveness of a training program.

Pre-post tests can take many forms, depending on what you are looking to measure. However, there are two critical elements to remember:

  • the questions on the pre-test should match exactly the questions on the post-test, and
  • each test should be labeled with an identifier unique to each participant

So, for example, if you are assessing a staff training program and you ask three questions about compliance regulations when participants begin the training, you should ask the same three questions when the training ends. And if you collect participant identifiers, you can not only confirm that the same set of people took both tests, but also calculate more rigorous measures of programmatic impact, such as a paired t-test.
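The identifier-matching and comparison step can be sketched in a few lines. This is a minimal illustration using made-up scores and participant IDs; the paired t statistic here is the mean score gain divided by its standard error, computed only for participants who took both tests.

```python
import math
import statistics

# Hypothetical pre/post scores keyed by participant identifier (illustrative).
pre  = {"P01": 4, "P02": 5, "P03": 3, "P04": 6, "P05": 4}
post = {"P01": 7, "P02": 6, "P03": 6, "P04": 8, "P05": 7}

# Match on identifiers: compare only people who took both tests.
diffs = [post[p] - pre[p] for p in pre if p in post]
n = len(diffs)
mean_gain = statistics.mean(diffs)

# Paired t statistic: mean difference over its standard error.
t = mean_gain / (statistics.stdev(diffs) / math.sqrt(n))

print(n, mean_gain, round(t, 2))
```

In practice you would hand matched score pairs to statistical software (or a library such as SciPy) rather than compute the statistic by hand, but the matching logic is the part that depends on having unique participant identifiers.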

What are surveys, interviews and focus groups?

Surveys, interviews and focus groups are used to gather information that is harder to quantify, such as feelings, experiences, and perceptions. All three are built on asking people questions about themselves. Though it is not a hard-and-fast rule, surveys generally occur on paper or online and tend to have more closed-ended questions (pick among choices), while interviews generally occur face-to-face or on the phone and tend to have more open-ended questions (no defined choices). Focus groups are like group interviews, in which people can respond to the facilitator or to each other.

The more open-ended your questions are, the richer your data will be, offering new perspectives that might not have surfaced if you had given people a pre-defined set of choices. However, open-ended data will require additional analysis, such as qualitative coding, to say anything definitive about your question. Most often, people use survey software to create a survey (e.g. Qualtrics, available for free to the Hunter community).
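Once open-ended responses have been qualitatively coded, summarizing them is straightforward: tally how often each theme code appears. The sketch below uses hypothetical theme codes; in a real project the codes would come from your own codebook.

```python
from collections import Counter

# Hypothetical coded responses: each open-ended answer has been assigned
# one or more theme codes during qualitative coding (illustrative data).
coded_responses = [
    ["staff_helpful", "hours_too_short"],
    ["staff_helpful"],
    ["hours_too_short", "wants_online_option"],
    ["staff_helpful", "wants_online_option"],
]

# Count how many responses mention each theme, most frequent first.
theme_counts = Counter(code for resp in coded_responses for code in resp)
for theme, count in theme_counts.most_common():
    print(theme, count)
```

Closed-ended survey questions skip the coding step entirely, which is why they are easier to analyze, at the cost of the richer detail open-ended answers provide.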

Remember!

  • Matching your tool to your question is very important! If you do not make this match, you will spend a lot of time gathering evidence that does not answer your question.
  • Hunter's Technology Resource Center is a great place to get started with any of the software described above.
  • Feel free to submit draft surveys, interview protocols and focus group questions to the Office of Assessment for review and feedback.

Continue to Close the Loop
