
Data analysis during lockdown

Analysing your data at the end of a consultation can sometimes be a bit of a challenge. Perhaps you have more data than you expected – or, as is more often the case, less.

This period of lockdown, though, where areas of face-to-face work are not possible and some time may be liberated, is a good time to sit down with any consultation data and have a really good look at it. The activity lends itself to undisturbed consideration and the space to think about the data logically, to consider what you want out of it, and to spend time interrogating it. 

Analysing quantitative data is often relatively easy in terms of human resourcing: numbers are numbers, and quantity is rarely a problem. Analysis software doesn’t really care whether you have 100 values or 10,000; working out percentages of different answer categories takes the same time either way. We can, though, take a little time and thought to look at existing datasets to see if there is anything more we can learn from them. Now might be a good time to run some cross-tabulations on the data. If you have collected demographic information from respondents, you can use these values to compare and contrast how different groups responded. Are elderly people more in favour of a proposal than younger ones? Are residents in some postcodes more opposed than others? Now is a good time to look at these factors and see if you can draw any more conclusions from the quantitative data you have from a consultation.
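For those comfortable with a little scripting, a cross-tabulation needs only a few lines. The sketch below uses Python’s standard library and entirely invented responses (the age bands, answers and counts are illustrative, not drawn from any real consultation):

```python
from collections import Counter

# Hypothetical consultation responses: each record pairs a demographic
# attribute (an age band) with the respondent's answer to a closed question.
responses = [
    {"age_band": "18-34", "answer": "Support"},
    {"age_band": "18-34", "answer": "Oppose"},
    {"age_band": "18-34", "answer": "Support"},
    {"age_band": "65+",   "answer": "Oppose"},
    {"age_band": "65+",   "answer": "Oppose"},
    {"age_band": "65+",   "answer": "Support"},
]

# Cross-tabulate: count each (age band, answer) combination.
crosstab = Counter((r["age_band"], r["answer"]) for r in responses)

# Express each count as a percentage within its age band, so the
# groups can be compared even when they are different sizes.
totals = Counter(r["age_band"] for r in responses)
for (band, answer), n in sorted(crosstab.items()):
    print(f"{band} {answer}: {100 * n / totals[band]:.0f}%")
```

Dedicated analysis packages will do the same job with less typing; the point is simply that comparing groups means counting answers within each group, then converting to percentages so unequal group sizes don’t mislead.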

Remember, one of the main purposes of consultation information is to provide decision-makers with information about what was said, who said it, and a rough idea of the strength of opinion. Looking at the data in this way unpicks a lot of this, and helps decision-makers to see exactly which groups feel most strongly about issues, and how they might be differently affected by what is decided. This is particularly pertinent for looking at equalities issues – if your demographic data allows you to look at how different Protected Characteristics feel about the subject under consideration, this may help you in demonstrating due regard under the requirements of the Equality Act. 

There are a couple of caveats here, though. 

Firstly: data protection legislation. Remember that any data you collect that can be traced back to named living individuals may only be used for the purposes for which you stated it was being collected. Check this with your data protection officer.

Secondly: beware of error when dealing with small amounts of data, particularly when comparing statistics. When you are looking at different groups in your dataset, be very wary of how many people the data is coming from. Low numbers in any group make for wide margins of error, and it is entirely possible that any statistical differences between groups are down to error rather than real difference. Undertaking some significance testing here is key. 
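A simple way to check this is a chi-square test of independence. The sketch below works through one by hand in Python on an invented 2×2 table (the counts are hypothetical; 3.841 is the standard 5% critical value for one degree of freedom):

```python
# Hypothetical 2x2 table: support/oppose counts for two small groups
# of 20 respondents each.
observed = [[8, 12],   # group A: 8 support, 12 oppose
            [14, 6]]   # group B: 14 support, 6 oppose

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Chi-square statistic: sum over cells of (observed - expected)^2 / expected,
# where the expected count assumes group and answer are independent.
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (obs - expected) ** 2 / expected

# Critical value for 1 degree of freedom at the 5% level.
significant = chi2 > 3.841
print(f"chi-square = {chi2:.2f}, significant at 5%: {significant}")
```

Note what happens here: group A splits 40% in favour and group B 70%, which looks like a striking difference, yet with only 20 respondents per group the statistic falls just short of the 5% threshold. That is exactly the trap described above.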

One big area of data analysis that often takes more time than imagined is making sense of qualitative (i.e. free-text) data; particularly when there are more responses than anticipated, or the text arising from discussions is lengthier than expected.  

Data from sources where lots of discussion has taken place (e.g. focus groups or workshops) is bound to be lengthy and complex. Take time to assemble this kind of information into an easily interrogated format. Transcribe recorded information so that it is all text, and check the recordings so that everything said can be attached to a participant. 

A badly planned questionnaire is a common source of large amounts of unexpected qualitative data. Remember, open questions are easy to write, but require a large amount of time afterwards to analyse. It is always better to spend time in advance, using information from pre-consultation activities, to close questions down a little, or to frame them, so that answers can be more easily analysed.

tCI recommends that the best way of analysing qualitative data is by coding it. A rigorous and objective process such as coding helps to counteract some of the inevitable bias that creeps into the analysis of qualitative data. Just reading through material and noting down ‘important points’ is not really enough here. What is an important point? Is it important because you believe it to be? Because the person who says it believes it to be? ‘Important points’ generally arise from the data – they are important not because we believe them to be so, but because either they have been made by many people (and you won’t know this until you start approaching the data with rigour), or they have been made by someone who is an expert in the field (for example, a technical challenge raised by an engineer).

Coding is a long, slow and iterative process. It cannot be undertaken by a computer, and requires human cognition – only a human being can compare two statements, written using completely different words, and decide whether they mean the same thing or not. With time on our hands, now may be a good time to undertake a coding exercise.
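Software cannot do the coding itself, but once human analysts have assigned codes, tallying them is mechanical. A minimal sketch in Python, using invented comments and invented code labels:

```python
from collections import Counter

# Hypothetical output of a human coding exercise: each free-text comment
# has been read by an analyst and tagged with one or more thematic codes.
coded_comments = [
    {"id": 1, "codes": ["traffic", "safety"]},
    {"id": 2, "codes": ["traffic"]},
    {"id": 3, "codes": ["green_space"]},
    {"id": 4, "codes": ["traffic", "green_space"]},
]

# Tally how many comments each code was applied to.
tally = Counter(code for comment in coded_comments
                     for code in comment["codes"])

# Report codes from most to least frequent.
for code, n in tally.most_common():
    print(f"{code}: {n} comment(s)")
```

The judgement – deciding that two differently worded statements deserve the same code – stays with the human; the machine only counts, which is how a point “made by many people” becomes visible.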

tCI can help with all of these processes. We can run online mentoring sessions and we are planning some focused training in these areas, but a good place to start is with our online training modules on data analysis. These can be found on our website.

About the Author

Barry has been a Consultation Institute Associate for over ten years, and is now a Fellow of the Institute, providing consultation, evaluation and research services to many organisations. He delivers courses for the Consultation Institute on Better Focus Groups, Better Surveys and Questionnaires, and Better Data Analysis for Public Consultations, and has published three books with them: Effective Public Meetings, Effective Focus Groups, and Effective Surveys and Questionnaires.

