Evaluation
An evaluation uses monitoring data to measure whether your disaster risk communication has met its overall objectives and goal, usually at the end of your initiative.
An evaluation can also inform learning and improvement by identifying your project's strengths, weaknesses and best practices, ensuring transparency and accountability to stakeholders, and supporting continuous improvement in your future disaster risk communication and that of others.
Did the project meet its objectives?
The following research questions can be adapted for different communication projects.
- Reach: How many people/what proportion of the target audience did the content reach? Who was not reached and why?
- Understand: Did people understand the content in the way it was intended?
- Relevance: Was the content relevant, timely and useful?
- Engagement: Did people want to watch/read/listen to the content? Did they tune in/follow regularly? Why or why not?
- Trust: Did people trust the content? Why or why not?
- Effectiveness and impact: Was the content effective? This will depend on your objectives, but might include:
  - Did the content help people feel better informed? Did it help people understand key risk information, including weather forecasts?
  - Did it increase their knowledge? What did people learn about how to prepare for a hazard?
  - Did it strengthen people’s perception that there is a real risk?
  - Did it lead to discussion about the risk? Did people share information from the content with others?
  - Did people take preparatory action as a result of the content?
  - Did the content have any unintended impacts at the individual or population group level?
Evaluations can use various quantitative and/or qualitative research methods. Choose a research method that can best answer your research question and measure progress against your indicators. Also consider the time you have to conduct and analyse the research, your budget, and the complexity of the research, including who has the expertise and skills to conduct it (a research agency, an academic or a researcher).
It is useful to look at the existing literature in your country and context to understand what evidence already exists on disaster risk communication. Seeing how other researchers have framed research questions and survey questions, and their findings and learnings might help shape your evaluation. In turn, it is good practice to share your evaluation results and analysis to help guide others.
Quantitative research
Randomised controlled trials (RCTs) are studies that measure the effectiveness of a new intervention or treatment. Participants are randomly assigned to either a control group (no treatment) or a treatment group (exposed to the media content). Both groups can then be interviewed to measure differences in outcomes.
While no single study can definitively prove causality, randomization reduces bias and offers a rigorous method for examining cause-and-effect relationships between an intervention and its outcome.
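The core logic of an RCT can be sketched in a few lines of Python. This is a hypothetical simulation with made-up outcome scores, not real survey data: participants are randomly split into two groups, and the estimated effect is the difference in mean outcomes between them.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical participant pool (IDs only; a real study would use survey records)
participants = list(range(200))
random.shuffle(participants)

# Random assignment: half to control, half to treatment
control = participants[:100]
treatment = participants[100:]

# Simulated outcome (e.g. a 0-10 preparedness-knowledge score).
# Purely for illustration, we assume exposure adds about 1 point on average.
def outcome(person, exposed):
    base = random.gauss(5.0, 1.5)
    return base + (1.0 if exposed else 0.0)

control_scores = [outcome(p, exposed=False) for p in control]
treatment_scores = [outcome(p, exposed=True) for p in treatment]

# The estimated effect is the difference in group means
effect = statistics.mean(treatment_scores) - statistics.mean(control_scores)
print(f"Estimated effect of exposure: {effect:.2f}")
```

Because assignment is random, the two groups are similar on average in everything except exposure, which is why the difference in means can be read as the effect of the content.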
Quantitative surveys are the best way to learn what proportion of your target audience was reached by your content. A survey of a representative sample population also allows you to compare the reactions of people who were reached by the content (exposed) and those who were not (unexposed). Statistical techniques such as multiple regression can control for other influences of (measured) third variables such as demographic characteristics, and support more robust association claims between exposure and change in outcomes. You can also disaggregate data from a large sample by sex, age, ethnicity or any other relevant category. You might discover that different groups have very different understandings of the information and prefer different media.
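The exposed/unexposed comparison and disaggregation described above can be sketched as follows. The survey records are hypothetical (made-up respondents and a single invented "understood" indicator), but the pattern of grouping and computing rates per subgroup is the same with real data.

```python
from collections import defaultdict

# Hypothetical survey records (invented data for illustration only)
records = [
    {"exposed": True,  "age_group": "18-35", "understood": True},
    {"exposed": True,  "age_group": "36+",   "understood": True},
    {"exposed": True,  "age_group": "36+",   "understood": False},
    {"exposed": False, "age_group": "18-35", "understood": False},
    {"exposed": False, "age_group": "36+",   "understood": True},
    {"exposed": False, "age_group": "18-35", "understood": False},
]

def understanding_rate(rows):
    """Share of respondents in these rows who understood the content."""
    return sum(r["understood"] for r in rows) / len(rows)

# Compare people reached by the content (exposed) with those who were not
exposed = [r for r in records if r["exposed"]]
unexposed = [r for r in records if not r["exposed"]]
print(f"Exposed: {understanding_rate(exposed):.0%}, "
      f"unexposed: {understanding_rate(unexposed):.0%}")

# Disaggregate the same indicator by age group
by_age = defaultdict(list)
for r in records:
    by_age[r["age_group"]].append(r)
for group, rows in sorted(by_age.items()):
    print(f"{group}: {understanding_rate(rows):.0%} understood")
```

The same grouping step works for any category in your data (sex, ethnicity, location), which is how you would discover that different groups understand the information differently.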
Baseline, midline and/or endline surveys: These quantitative methods can help you to compare your results (e.g. audience understanding) over time.
Longitudinal designs: These involve repeated measurement of outcomes with the same individuals to assess changes over time.
Qualitative research
In-depth interviews and focus group discussions are useful for gathering detailed perspectives from your target audiences. They can provide rich data on what audiences think of the content, what engaged them, what they recalled and learned, and what encouraged action, complementing the quantitative findings.
Key informant interviews draw on expertise in a particular field or topic of interest and can deepen your understanding of how effective your media and communication content has been. They can also offer an in-depth understanding of the community and the issues people face, giving useful insight into the effectiveness of the content.
There are different qualitative impact evaluation methodologies that can be considered, such as process tracing, contribution analysis and the Qualitative Impact Protocol (QuIP).
The above research methods can be done in partnership with an academic institution, research agency, or researchers with experience in these areas.
Ethical considerations when conducting research
- Do no harm: Prioritise the safety of both researchers and participants. Do not go ahead with research if anyone risks being harmed for participating.
- Collaborate with others: To avoid over-surveying crisis-affected populations, consider partnering with other organisations or adding your questions to existing research studies.
- Be prepared: Think about what is available in the affected area and what you will need to bring with you to avoid being a drain on limited resources. If possible, bring a list of services that you can tell participants about if they are in need.