
Audience research - common mistakes (Part 2)

This is part 2 of our blog on avoiding common mistakes in audience research. Throughout 2021, BehaviourWorks Australia (BWA) is publishing a book on the BWA ‘Method’ for designing and delivering more effective behaviour change programs. Chapter 7 (published 1 July 2021) focuses on how to research audiences.

Authored by Peter Slattery and Denise Goodwin (our health lead)

In this post we continue with some tips on avoiding common mistakes with audience research.

Don’t trust your audience too much

  • Don’t assume that your audience understands themselves completely. People are usually not careful or complete observers of their beliefs, desires, and opportunities.
  • For example, research has shown that research participants significantly underestimate how much social norms influence their behaviour. Similarly, a whole body of research examines how revealed and stated preferences differ.
  • Watch for courtesy bias, where your audience tells you what you want to hear, or what is socially acceptable, rather than their true beliefs.
  • An audience might also act differently just because they know they are being studied.

Don’t trust your results too much

  • Don’t update too strongly based on your results; don’t become certain on the back of a single successful study.
  • Your data is evidence, a model, a set of results, but it isn’t the actual reality that it measures – there will always be mistakes and measurement errors.
  • Your data is from a specific point in time, and what was relevant when you did the study may no longer be relevant.
  • Be careful when generalising your findings to different situations or populations. The results will be ‘contextually bound’. What works for your sample may not apply at scale.
  • Assume that there are systemic effects and feedback loops that also should be considered. Read more about these in our related chapter and post.

Don’t reinvent the wheel

  • Don’t develop a new survey if you can copy from or rework a previous one. Building on what has worked previously saves time and reduces the risk that something will be done badly.
  • Similarly, try to find and reuse tried and tested approaches for asking research questions. For example, if you want to measure your audience’s personality or mental health, use one of the many accepted scales for doing this rather than making your own.

Know what you are looking for and why

  • Ensure that you clearly understand your research’s aims, end-users, and intended use, before you design it.
  • Consider making if/then plans and predictions for the results so you know what was expected and what to do if you get, or don’t get, that result. For instance, you might determine that if more than x% of the sample strongly agree that something would help them, you will roll out an intervention rather than hold off. One way to do this is with assumption testing (see our scale-up toolkit); a minimal sketch of such a rule follows below.
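To make the idea concrete, here is a minimal sketch of a pre-registered if/then decision rule applied to survey data. The 60% threshold, the column name, and the responses are hypothetical placeholders (the post itself leaves the “>x%” cut-off unspecified):

```python
import pandas as pd

# Hypothetical survey responses; in practice these would come from your real data.
survey = pd.DataFrame({
    "would_this_help": ["Strongly agree", "Agree", "Strongly agree", "Neutral"],
})

THRESHOLD = 0.60  # hypothetical pre-registered cut-off (the post leaves ">x%" unspecified)

# Share of respondents who strongly agreed that the intervention would help them.
share_strongly_agree = (survey["would_this_help"] == "Strongly agree").mean()

if share_strongly_agree > THRESHOLD:
    print("Decision rule met: roll out the intervention.")
else:
    print("Decision rule not met: hold off and revisit the design.")
```

Writing the rule down before you see the data makes it harder to rationalise an unexpected result after the fact.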

Consider trends

  • Trend data is often more interesting and useful to an organisation than static data alone. Collecting small amounts of data frequently can therefore be more helpful than collecting a lot of data infrequently or as a one-off.  Our SCRUB survey was driven in part by this logic.

Don’t force answers/outcomes

  • Don’t ask questions that will intentionally or inadvertently produce the answer you want. QuestionPro has some good advice on leading questions and how to avoid them.
  • Reflect on and recognise your personal biases. Try to ensure that you: i) make them explicit to future researchers and readers, and ii) minimise their impact on your process and results.

How can you avoid mistakes like these?

So how can you reduce your risk of making the mistakes outlined in Part 1 and above? There is no complete solution but we recommend considering the following safeguards.

Use templates

  • You can avoid a lot of risk and effort if you look for and reuse similar work (e.g., a prior survey on your research question) and build on it. You can see an approach for this in our scale-up toolkit.

Engage research experts

  • Discuss your research plans with researchers who have experience in the area so that you can learn from their mistakes. This is one of the reasons why BWA uses practice interviews when planning research.

Use research co-design

  • An effective research co-design process can significantly reduce risks such as poor research fit and bad questions.

Pretest materials

  • Pretest your materials whenever possible, ideally with your audience and expert researchers.

Start with the end in mind

  • Where possible, identify and pilot test your key outcomes and deliverables (e.g., completion rates or graphical outputs) in advance. For instance, you might test a survey with a small sample of audience members to explore completion rates, and then produce a quick output from that data to check with the research end-users.

Do a ‘pre-mortem’

  • Think through the possible implementation problems/risks with your research, and identify ‘Plan Bs’ for those that are relevant. Here is an example of such an approach.

Use mixed methods

  • You can often increase your certainty and reduce risk by mixing methods so that the strengths of one will compensate for the weaknesses of another. For instance, you might pair a survey of attitudes with observations of actual behavioural data to explore if changes in attitudes correlate with changes in behaviour.
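As a rough sketch of what that pairing could look like in practice, assuming you have wave-on-wave survey scores and an observed behavioural measure for the same participants (the data and column names below are hypothetical):

```python
import pandas as pd

# Hypothetical paired data: self-reported attitudes (survey) and observed
# behaviour (e.g., a weekly count) for the same participants at two time points.
df = pd.DataFrame({
    "attitude_wave1":  [3, 4, 2, 5, 3],
    "attitude_wave2":  [4, 4, 3, 5, 2],
    "behaviour_wave1": [1, 2, 0, 3, 1],
    "behaviour_wave2": [2, 2, 1, 3, 0],
})

# Change scores between the two waves.
df["attitude_change"] = df["attitude_wave2"] - df["attitude_wave1"]
df["behaviour_change"] = df["behaviour_wave2"] - df["behaviour_wave1"]

# Do changes in stated attitudes track changes in observed behaviour?
print(df["attitude_change"].corr(df["behaviour_change"]))
```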

Use indirect questioning

  • A range of ‘indirect questioning’ approaches can help to reduce courtesy/social desirability bias.
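One well-known indirect questioning approach (not specifically named in the post) is the randomised response technique. Each respondent privately flips a coin: heads means answer the sensitive question truthfully, tails means always answer “yes”. Because the researcher never knows which rule applied, individual answers are non-identifying, yet the true prevalence can still be estimated in aggregate. A minimal simulated sketch:

```python
import random

def estimate_prevalence(true_prevalence: float, n: int, seed: int = 0) -> float:
    """Simulate randomised-response answers and recover the true prevalence."""
    random.seed(seed)
    yes_count = 0
    for _ in range(n):
        truly_yes = random.random() < true_prevalence
        heads = random.random() < 0.5
        # Heads: answer truthfully. Tails: forced "yes".
        answer_yes = truly_yes if heads else True
        yes_count += answer_yes
    observed_yes_rate = yes_count / n
    # With P(heads) = 0.5: observed = 0.5 * true + 0.5, so true = 2 * observed - 1.
    return 2 * observed_yes_rate - 1

print(estimate_prevalence(true_prevalence=0.3, n=10_000))  # roughly 0.3
```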

Use attention checks

  • Attention checks can help confirm that your audience is paying attention and responding genuinely. For example, this might involve asking respondents to select a specific answer to a question, as in the sketch below.
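For instance, a minimal sketch (with hypothetical column names and responses) of screening out respondents who failed an instructed-response item could look like:

```python
import pandas as pd

# Hypothetical responses; "attention_check" is an item that instructed
# respondents to select "Agree".
responses = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "attention_check": ["Agree", "Strongly agree", "Agree", "Disagree"],
    "q1": [5, 3, 4, 2],
})

# Keep only respondents who followed the instruction.
passed = responses[responses["attention_check"] == "Agree"]
excluded = 1 - len(passed) / len(responses)
print(f"Kept {len(passed)} of {len(responses)} respondents "
      f"({excluded:.0%} failed the attention check).")
```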

Get training, support or collaborators

  • Research services can do research for you or deliver training to help you to do it effectively. In some cases, university researchers may be happy to collaborate on research if they can publish the results.

Conclusion

In this post, we continued our look at common audience research mistakes and explored ways to overcome them. Please check out part one if you haven’t already.

Feel free to offer feedback if you think that there’s anything you’d like us to do differently in future posts. You can also find links to our chapters and previous LinkedIn posts.
