News & Insights
Guest Article: Consultation questionnaires: measuring their performance – and understanding how people respond to them
Survey questionnaires are the workhorses of public consultation, used by consultation practitioners everywhere. But how often do we measure their performance? Apart, that is, from the response rate, which is of limited use anyway.
(The story goes that sociologist Karl Marx once sent out 2,000 questionnaires – and got none back. My own record response rate is 110%, because the target group were determined to be heard, not because of the quality of my pilot questionnaire. I don’t think either of us would want to be judged by response rates alone.)
I have spent the last few years researching how to make precise, quantified, and standardised measurements of three aspects of questionnaire performance. These are the amount of information a questionnaire conveys, the amount of participation it attracts (its ‘reach’), and the overall level of engagement it achieves with respondents.
Because my new measures are standardised, they are universal: they can be applied to any public consultation questionnaire, whatever the topic. This allows performance comparisons between questionnaires.
Drawing on the disciplines of community ecology and information theory, the new measures start from ‘semantic abstraction’. In plain English, that means setting aside the words used in the questions and responses and asking instead whether a particular response is ‘valid’: does it convey any information about the survey topic, or not? A simple Yes or No, 1 or 0.
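To make the idea concrete, here is a minimal sketch in Python. The validity rule it uses (a response counts as valid unless it is blank or a stock non-answer) is deliberately simplistic and purely my illustration; real coding decisions depend on the survey in hand.

```python
# Semantic abstraction: ignore what a response says and record only
# whether it conveys any information about the survey topic (1) or not (0).
# The validity rule below is a simplistic placeholder for illustration.

NON_RESPONSES = {"", "n/a", "don't know", "no comment", "-"}

def is_valid(response: str) -> int:
    """Return 1 if the response conveys information about the topic, else 0."""
    return 0 if response.strip().lower() in NON_RESPONSES else 1

# One row per respondent, one column per question (invented example data).
responses = [
    ["Yes", "More benches please", ""],
    ["No",  "",                    "don't know"],
    ["Yes", "Keep the pavilion",   "Open later"],
]

# Abundance of each question: the number of valid responses it received.
abundances = [sum(is_valid(row[q]) for row in responses)
              for q in range(len(responses[0]))]
print(abundances)  # [3, 2, 1]
```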
Of course this doesn’t mean jettisoning the actual responses that people make. The primary focus remains what the responses tell you about the topic, and the conclusions you draw from them.
But what if you have a professional interest in how well your questionnaire worked as a consultation tool? How effective was it at extracting information from respondents, and at making them feel engaged? This is where semantic abstraction comes in. If you have two questionnaires, one about local playing fields in Bedfordshire, say, and another about Kenya’s Open Data Initiative, the issues raised will be very different and the two very difficult to compare in terms of performance.
Treating the responses as valid or invalid, 1 or 0, gets round this problem. It means that you can calculate the amount of information conveyed, the ‘reach’ of the questionnaire and the overall level of engagement, and then make direct comparisons.
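There is no room here for the real formulas (more on that below), so treat the following as a hedged sketch only: it assumes, for illustration, that information is measured as the Shannon entropy of the distribution of valid responses across questions, and reach as the share of all possible responses that were valid. These stand-in definitions are mine, not the measures used in the research.

```python
import math

def information_bits(abundances: list[int]) -> float:
    """Shannon entropy (in bits) of the distribution of valid responses
    across questions - an illustrative stand-in for the real measure."""
    total = sum(abundances)
    probs = (a / total for a in abundances if a > 0)
    return -sum(p * math.log2(p) for p in probs)

def reach(abundances: list[int], n_respondents: int) -> float:
    """Share of all possible responses that were valid - again, only
    a placeholder definition for the purposes of this sketch."""
    return sum(abundances) / (len(abundances) * n_respondents)

abundances = [3, 2, 1]  # per-question valid-response counts from above
print(round(information_bits(abundances), 3))        # 1.459 bits
print(round(reach(abundances, n_respondents=3), 2))  # 0.67
```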
But that isn’t all. When I applied the new techniques to nearly 200 publicly available consultation surveys from around the world, I discovered that most of the time (85%) the valid responses followed the same distinctive pattern. Here is an example, presented in what ecologists call a Rank Abundance Diagram (Figure 1). The abundances are the numbers of valid responses received by each question; ranking them from highest to lowest produces a pattern that sits pretty close to the continuous line in Figure 1.
Figure 1: A Rank Abundance Diagram
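Constructing such a diagram is straightforward: sort the per-question abundances from highest to lowest and plot abundance against rank (ecologists conventionally log the abundance axis). A minimal sketch, using invented counts:

```python
import matplotlib.pyplot as plt

# Valid-response counts per question (made-up numbers for illustration).
abundances = sorted([310, 260, 180, 120, 95, 75, 48, 40, 22, 15],
                    reverse=True)
ranks = range(1, len(abundances) + 1)

plt.plot(ranks, abundances, "o")
plt.yscale("log")  # rank abundance diagrams usually use a log axis
plt.xlabel("Rank (questions, most to least answered)")
plt.ylabel("Abundance (valid responses)")
plt.title("Rank Abundance Diagram")
plt.show()
```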
It turns out that something like five times out of six, when people respond to a consultation questionnaire they do so in the same general way. They follow the pattern shown by the continuous line, which comes from a theoretical model that explains this outcome.
I don’t have room here for the details of the performance calculations, which are quite straightforward, or for the explanation of why consultation respondents generally behave in the same way irrespective of the topic. But I have written up my findings; more details on request.
About the author
Dr John May PhD
Community Engagement and Social Research Consultant
Now retired, John’s career included a wide variety of consultation and engagement roles in the public, private and voluntary sectors which led him to a North Sea oil platform and the Bank of England (not on the same day). An initial curiosity about one set of engagement findings turned into a retirement project to understand how questionnaire surveys work.