Monitoring, also referred to as process evaluation, is the routine (day-to-day) tracking of activities and deliverables to ensure that the campaign is proceeding as planned.

Monitoring can:

  • Uncover problems or deviations from the campaign plan
  • Provide information for improved decision-making
  • Measure attitude, perception or behavior changes

Regular monitoring allows adjustments to messages, materials, or activities to be made in a timely manner.

Whenever planning data collection activities, be mindful of the ethics of ensuring the privacy and security of information regarding program participants.

Key Monitoring Action Steps

1. Prepare an operational plan: Describe the information that will be collected, from which source(s), by whom, by what dates, and at what cost.
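Keeping the operational plan as structured records makes gaps easy to spot before fieldwork begins. A minimal sketch; the activities, dates, and costs below are illustrative placeholders, not recommendations:

```python
# Sketch: an operational plan as structured records, with a completeness check.
# Entries and values are illustrative placeholders.
plan = [
    {"information": "Caregiver message recall", "source": "Weekly brief survey",
     "responsible": "Field monitoring team", "due": "2024-06-15", "cost_usd": 500},
    {"information": "Radio spot airings", "source": "Broadcast logs",
     "responsible": "Media officer", "due": None, "cost_usd": 0},  # due date not yet set
]

required = ("information", "source", "responsible", "due", "cost_usd")
for entry in plan:
    missing = [field for field in required if entry.get(field) is None]
    if missing:
        print(f"Incomplete entry '{entry['information']}': missing {', '.join(missing)}")
```

Running the check flags the second entry as missing its due date, prompting the team to complete the plan before data collection starts.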

2. Develop indicators: Indicators should reflect variables that are included in, or affect, the caregiver's choice to vaccinate their child. Using the framework of the Global Strategy, it may be helpful to consider the stages of the caregiver's journey. For example:

  • Awareness
    • Awareness of polio
    • Awareness of the vaccine
    • Awareness of where and how to get vaccinated
    • Awareness of the campaign
      • Brand recall
      • Message recall
    • TV impressions
    • Radio impressions
  • Resonance
    • Perception of polio as likely and serious
    • Understanding importance of polio vaccination
    • Perception of OPV as safe and effective
    • Understanding of herd immunity
    • Communal perceptions of polio vaccine
  • Consideration
    • Intent to vaccinate
  • Health Worker Contact
    • Perception of health worker as part of community
    • Perception of health worker as honest and moral
    • Perception of health worker as competent
  • Vaccination
    • Number of successful vaccinations
  • Repeat Vaccination
    • Intent to vaccinate again
    • Vaccination coverage
    • Contact efficiency
    • Repeat vaccination success
  • Social Mobilization and Advocacy
    • Peer to peer communication
    • Peer advocacy
    • Net promoter score (vaccinated and vaccinators)

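One of the indicators above, the net promoter score, can be computed directly from 0-10 "would you recommend" ratings collected from vaccinated caregivers or vaccinators. A minimal sketch; the sample ratings are illustrative:

```python
def net_promoter_score(ratings):
    """NPS = % promoters (ratings 9-10) minus % detractors (ratings 0-6)."""
    if not ratings:
        raise ValueError("no ratings provided")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Illustrative ratings from ten surveyed caregivers
caregiver_ratings = [10, 9, 8, 7, 10, 6, 9, 3, 10, 8]
print(net_promoter_score(caregiver_ratings))  # 5 promoters, 2 detractors -> 30.0
```

Tracking this score for both the vaccinated and the vaccinators, as the indicator list suggests, gives a quick read on whether each group would recommend the experience to peers.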
Many of these indicators can be collected through the Harvard KAP Polling questionnaires. See the standard questionnaires.

3. Develop monitoring data collection templates: Create the tools that program staff will use to conduct monitoring activities. For example:

  • Observation checklists
  • Weekly viewer discussion groups
  • Weekly brief survey questionnaires
  • Quarterly rounds of Rapid Audience Assessment surveys
  • Quarterly focus group discussions
  • Knowledge, Attitudes and Practices studies (KAPS)

4. Develop a monitoring data analysis plan: Describe what information will be analyzed, how, by whom, and by what dates. It is helpful to create dummy tables for the data analysis.
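A dummy table can be prototyped in code before real data arrive, so the analysis layout is agreed in advance. A minimal sketch for one indicator (vaccination coverage by district); the district names and counts are illustrative placeholders:

```python
# Dummy analysis table: vaccination coverage by district.
# All names and figures are placeholders to fix the table layout, not real data.
rows = [
    {"district": "District A", "targeted": 1200, "vaccinated": 1080},
    {"district": "District B", "targeted": 950,  "vaccinated": 722},
]

print(f"{'District':<12}{'Targeted':>10}{'Vaccinated':>12}{'Coverage':>10}")
for r in rows:
    coverage = 100.0 * r["vaccinated"] / r["targeted"]
    print(f"{r['district']:<12}{r['targeted']:>10}{r['vaccinated']:>12}{coverage:>9.1f}%")
```

Agreeing on dummy tables like this before data collection clarifies who analyzes what, and ensures the data collected can actually fill the planned tables.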

5. Develop monitoring reporting templates: Create easy-to-use reporting forms that are mindful of the time they take to complete and read. The format should be concise so that the information can be readily interpreted and acted upon. See the dashboard templates.

6. Develop a mechanism for using monitoring data to adjust communication and program activities: Create a process for reviewing monitoring reports, discussing them with staff, partners, and stakeholders as necessary, and creating action points to address any issues that are detected through the monitoring activities.

7. Disseminate Results: Share and discuss evaluation results with relevant partners, donors, stakeholders, communities, and program/study participants as appropriate. Program staff should seek out opportunities to convey evaluation results via briefings, websites, e-mail, bulletins, listservs, press releases, journal articles, conference presentations, and other appropriate forums. For the findings to be most useful, make sure they are communicated in formats that fit the needs of the recipients.

Data Collection Methods

There are many methods for collecting quantitative and qualitative data. The method(s) selected for an evaluation will depend on (1) the purpose of the evaluation, (2) the users of the evaluation, (3) the resources available to conduct the evaluation, (4) the accessibility of study participants, (5) the type of information (e.g., generalizable or descriptive), and (6) the relative advantages or disadvantages of the method(s). All evaluations should aim to use mixed methods, that is, a combination of quantitative and qualitative methods in order to capture multiple facets of the program outcomes/impacts, and to be able to triangulate the findings.

Quantitative examples

  • Rapid appraisal survey or rapid audience assessment

  • Pre-post surveys for trainings

  • Audits (e.g., of medical records)

  • Tracking logs

  • Content analysis (e.g., of media coverage)

Qualitative examples

  • Focus group discussions

  • Community group interviews

  • Key informant interviews

  • In-depth interviews

  • Direct observation (field visits)

  • Mystery client

Learn More

Explore the other two learning modules in this 3-step tutorial to design evidence-driven communication strategies to help vaccinate every child. 

Define your target audience and barriers to change, then develop messages and choose channels to reach your audience.

You cannot do everything, and your ability to prioritize your interventions and target behaviors is paramount. One simple way to do this is to evaluate the importance of each behavior and its changeability.
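The importance-by-changeability screen can be sketched as a simple scoring exercise: rate each candidate behavior on both dimensions and rank by the product. A minimal sketch; the behaviors and 1-5 scores below are illustrative, not recommendations:

```python
# Rank candidate target behaviors by importance x changeability (1-5 scales).
# Behaviors and scores are illustrative placeholders.
behaviors = [
    ("Complete all OPV doses", 5, 3),
    ("Attend campaign vaccination site", 4, 4),
    ("Share vaccination messages with peers", 3, 5),
]

ranked = sorted(behaviors, key=lambda b: b[1] * b[2], reverse=True)
for name, importance, changeability in ranked:
    print(f"{name}: priority {importance * changeability}")
```

A behavior that is both highly important and readily changeable rises to the top; in practice the scores would come from formative research and stakeholder consensus rather than a single analyst's judgment.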