
Coding Frames for Consultations: From Messy Text to Decisions You Can Stand Behind

Consultation analysis gets challenged when you can’t prove you listened fairly. The common failure mode: hundreds of comments reduced to a sentiment percentage (“72% support”) with no clarity on why people objected, what would mitigate the impacts, or how protected groups are affected. Decision-makers can’t act on that. Legal teams flag predetermination risk. And boards defer.

In one hospital service consolidation, comments about impacts on disabled carers were lumped into a generic “access” code. When the decision paper went to board, the Equality Impact Assessment couldn’t show conscientious consideration of protected characteristics. Legal flagged the risk. The board deferred and asked for a recode with proper equality sub-codes. Three months lost, trust damaged, and the team had to explain why they’d analysed the data badly the first time.

A structured coding frame prevents this. It’s the bridge between raw consultation text and the evidence your decision-makers need to act confidently and defensibly. Done well, it shows who said what, how often, why it matters, and what you’ll do about it—all traceable back to the comments people actually submitted.

Below is a compact, field-tested method that aligns with the Institute’s standards for fairness, transparency and traceability (the same principles that underpin tCI’s Consultation Charter). Use it to produce analysis that’s proportionate, auditable, and shareable.

What breaks when coding is weak

Before we get to method, understand the risks:

  • Predetermination challenges: If you can only produce cherry-picked quotes or vague summaries, it looks like you decided first and consulted second. Judicial review applicants will argue you didn’t give the responses “conscientious consideration”.
  • Unusable equalities evidence: Your EqIA has to show how protected groups are affected and what reasonable adjustments you considered. If your coding doesn’t tag these impacts explicitly, you have no audit trail.
  • Mitigations lost in the noise: People tell you what would make a proposal acceptable. If those suggestions aren’t coded as a distinct category, they don’t make it into the decision report and you miss chances to build consent.
  • Sentiment without substance: Reporting “63% oppose” tells leaders nothing. They need to know why people oppose, what sub-groups are most affected, and what changes would shift opinion. Weak coding can’t answer those questions.

What “good” looks like

A defensible coding frame should be:

  • Explicit: Categories are named and defined so another analyst could replicate your work.
  • Grounded: Codes come from what people actually said (inductive), from your consultation objectives and questions (deductive), or—best—both.
  • Proportionate: Detailed where decisions hinge; lighter where they don’t.
  • Traceable: Every finding you publish can be traced back to verbatim comments and counts.
  • Equality-aware: Able to surface impacts on protected groups and seldom-heard voices, with enough granularity to feed your EqIA and mitigations register.

A five-step method

1. Define the unit of meaning and the frame’s purpose

Decide what you’re coding—full comments, sentences, or clauses—and why. Are you comparing support across options? Extracting mitigations? Testing equalities hypotheses? Write that purpose at the top of your codebook.

This anchors every decision later and keeps analysis proportionate to the questions you actually asked. If you don’t know why you’re coding, you’ll either over-code (hundreds of unused sub-codes) or under-code (no way to answer the questions decision-makers will ask).

Example purpose statement:
“To identify reasons for support/opposition to each service option, extract suggested mitigations, and surface impacts on protected characteristics for the EqIA.”
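
If you settle on sentence-level units, segmentation can be automated before coding begins. A minimal sketch in Python, assuming a simple comment store; the respondent ID and field names are hypothetical:

import re

# Hedged sketch: split free-text comments into sentence-level coding units,
# keeping the respondent ID so every unit stays traceable back to its source.
# The field names (respondent_id, unit_no, text, codes) are hypothetical.
def segment(respondent_id, comment):
    sentences = re.split(r"(?<=[.!?])\s+", comment.strip())
    return [
        {"respondent_id": respondent_id, "unit_no": i, "text": s, "codes": []}
        for i, s in enumerate(sentences, start=1)
        if s  # drop empty fragments
    ]

for unit in segment("R-0042", "I support option B. But the night bus is unreliable. Fix that first!"):
    print(unit["unit_no"], unit["text"])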

2. Build a draft codebook (deductive + inductive)

Start with the scaffolding you know you’ll need:

  • Policy scaffolds: Options, themes from your consultation document, statutory duties, equalities prompts, and any “known issues” from pre-consultation engagement.
  • Outcome scaffolds: Support / oppose / conditional; suggested mitigations; implementation risks; alternative ideas.
  • Audience scaffolds: Stakeholder group, location, or any segmentation you’ll need for reporting (e.g., service users vs. staff, urban vs. rural).

Then sample 10–15% of responses for quick open coding. Add recurring ideas as emergent sub-codes under the scaffolds. Keep code names short, active, and non-judgmental: “Bus reliability—peak crowding” not “Unacceptable chaos”.

Critical for equalities: Add a “barriers & adjustments” branch under each major theme (e.g., “Ticketing—screen-reader compatibility”). This ensures impacts on protected groups don’t get buried in generic codes.
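
To make this concrete, here is one way a single codebook entry could be held in code. This is a hedged sketch, not a prescribed schema; every label, definition, and example in it is invented:

# Hedged sketch of one codebook entry; all labels and examples are invented.
CODEBOOK = {
    "transport.night_service": {
        "label": "Night service reliability",
        "definition": "Comments about frequency, cancellations or missed "
                      "connections on evening and night services.",
        "in_scope_example": "The last bus never turns up on Fridays.",
        "out_of_scope_example": "Daytime crowding (use transport.peak_crowding).",
        "sub_codes": ["transport.night_service.missed_connections"],
        # The equalities branch that sits under every major theme (step 2):
        "barriers_adjustments": ["transport.night_service.step_free_access"],
    },
}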

3. Test for reliability early

Have a second coder independently code the same 50–100 comments. Compare results, discuss where you diverged, and tighten the definitions with in-scope/out-of-scope examples. Repeat until agreement is high and any remaining disagreements are about interpretation, not ambiguity.

Document this in a short methods note. That’s your evidence of fair interpretation if someone later challenges your findings. It’s also how you prove to legal and audit that the analysis wasn’t done by one person with unconscious bias.
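
If you want a quick agreement statistic to sit alongside that discussion, Cohen’s kappa is a common choice. A minimal sketch with invented code labels:

from collections import Counter

# Hedged sketch of Cohen's kappa on the top-level theme two analysts
# applied to the same comments. The code labels are invented.
def cohens_kappa(coder_a, coder_b):
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(
        (freq_a[label] / n) * (freq_b[label] / n)
        for label in set(freq_a) | set(freq_b)
    )
    return (observed - expected) / (1 - expected)

a = ["access", "cost", "access", "quality", "cost", "access"]
b = ["access", "cost", "quality", "quality", "cost", "access"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.75: one disagreement out of six

Many teams treat kappa above roughly 0.7 as acceptable for thematic coding, but the conversation about the divergent cases matters more than the number.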

4. Code the corpus with light QA cycles

Code in passes to maintain speed and quality:

  • Pass A—Breadth: Apply top-level themes to everything quickly.
  • Pass B—Depth: Add sub-codes and outcomes where relevant.
  • Pass C—Equality focus: Tag comments that speak to protected characteristics, barriers, or reasonable adjustments. Link these directly to your EqIA.

Work in short bursts. Keep an issues log for “do we need a new code?” and only add one when it will change a conclusion or a number in the report. Resist the urge to create a code for every tiny variation—merge ruthlessly.

Design choice that lowers risk: Separate sentiment from substance. It’s fine to tag both “oppose” and “night service reliability—evidence of missed connections”. Counting sentiment alone is weak. Pairing it with the reasons is strong and gives decision-makers something to act on.
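
A minimal sketch of that design choice, with one row per coded idea and stance and reason held in separate fields; all IDs and code labels are invented:

from collections import Counter

# Hedged sketch of the sentiment-vs-substance split: each coded idea carries
# a stance tag AND a reason code in separate fields. All values are invented.
rows = [
    ("R-0042", "oppose", "night_service.missed_connections"),
    ("R-0042", "oppose", "ticketing.screen_reader"),
    ("R-0107", "oppose", "night_service.missed_connections"),
    ("R-0311", "support", "town_centre.regeneration"),
]

# "3 oppose" alone is unreportable; stance paired with reason is actionable.
print(Counter((stance, reason) for _, stance, reason in rows))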

5. Turn codes into findings decision-makers can use

For each key theme, structure your findings this way:

  • What we heard: Top sub-codes, counts/percentages, and a short neutral summary.
  • Why it matters: Link to options, constraints, statutory duties, or standards (e.g., accessibility, safeguarding).
  • What we’ll do: Proposed mitigations, design changes, or reasons no change is proposed—each cross-referenced to the coded evidence.

Publish a plain-English one-pager for sponsors and stakeholders: consultation purpose, what was reviewed, top 3–5 findings, immediate actions, and what happens next. This mirrors the Institute’s emphasis on transparency and publication, and it’s the document that gets scrutinised if the decision is challenged.
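
For the “what we heard” counts, a hedged sketch using pandas; the data and column names are invented and assume the one-row-per-idea layout from step 4:

import pandas as pd

# Hedged sketch of the "what we heard" tally: respondent counts and
# percentages per sub-code within a theme. All data is invented.
ideas = pd.DataFrame({
    "respondent_id": ["R-01", "R-02", "R-02", "R-03", "R-04"],
    "theme": ["night_service"] * 4 + ["ticketing"],
    "sub_code": [
        "missed_connections", "missed_connections",
        "driver_shortage", "missed_connections", "screen_reader",
    ],
})

total = ideas["respondent_id"].nunique()
summary = (
    ideas.groupby(["theme", "sub_code"])["respondent_id"]
    .nunique()                 # count respondents, not ideas, so one
    .rename("respondents")     # prolific voice doesn't inflate the figure
    .reset_index()
)
summary["pct_of_respondents"] = (100 * summary["respondents"] / total).round(1)
print(summary)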

Design choices that raise (or lower) your risk

Do separate sentiment from substance. Tag “oppose” and “night service reliability—evidence of missed connections” as distinct codes. Sentiment percentages alone won’t survive scrutiny.

Don’t treat multi-issue comments as single topics. Split by idea; otherwise one loud paragraph skews your counts and you lose the ability to trace specific claims back to specific respondents.

Do keep an “unclassifiable / other (for triage)” bucket during Pass A, then empty it deliberately. If a code stays vague, it’s a sign the definition needs tightening or the comment doesn’t belong in your frame.

Don’t bury seldom-heard voices. Use tags to pull a cut of the data for protected groups or localised impacts. Feed those specific insights into your EqIA and mitigations register. Generic summaries hide the very evidence you need to show “conscientious consideration”.
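
One way to pull that cut, assuming an optional equality_tag field populated during Pass C; the tag values and codes are illustrative:

import pandas as pd

# Hedged sketch: pull the EqIA cut from coded ideas. Assumes an optional
# equality_tag column populated during Pass C; tags and codes are invented.
ideas = pd.DataFrame([
    {"respondent_id": "R-07", "code": "ticketing.screen_reader",
     "equality_tag": "disability"},
    {"respondent_id": "R-11", "code": "appointments.carer_availability",
     "equality_tag": "carers"},
    {"respondent_id": "R-13", "code": "night_service.missed_connections",
     "equality_tag": None},
])

eqia_cut = ideas[ideas["equality_tag"].notna()]
print(eqia_cut.groupby("equality_tag").size())  # evidence counts per group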

If you work with architects/ARB: ARB’s own Code of practice for consultations stresses openness, clarity and accessibility; early timing (including pre-consultation); proportionate durations (typically ~3 months); and publishing a summary of responses within 12 weeks of close. Build those into your codebook’s methods note: state how you ensured clarity (plain-English labels), accessibility (alternative formats), timing (sample dates), and traceability (how published responses link to coded themes). This aligns your evidence trail with ARB’s expectations and reduces challenge risk.

Quick win: frame before you code

How you frame the purpose of your engagement influences what people tell you—and therefore what you’ll need to code. The Institute’s guidance on framing your engagement stresses getting the intent and questions right upfront so analysis is tractable later.

If you frame only for benefits, you’ll under-collect risks and mitigations. If you ask vague “share your thoughts” questions, you’ll get unfocused responses that are hard to code consistently. Balanced questions yield analysable trade-offs and clearer evidence for decision reports.

Tools: you don’t need much

You can code robustly with:

  • Spreadsheet: One row per idea (comment-chunk), columns for respondent metadata, codes, and analyst notes.
  • Qualitative tools: NVivo, ATLAS.ti, or MAXQDA help with big datasets, inter-coder reliability checks, and structured exports.
  • Audit trail: A living codebook (definitions + examples), an issues log, and a short methods appendix for the decision report.

Whatever you use, the outputs should make fairness and traceability visible—consistent with tCI’s assurance approach (e.g., RAG-rated process notes and shareable one-pagers).

Equality integration in practice

Bake equalities into the frame from the start, not as an afterthought:

  • Add a barriers & adjustments branch under each major theme. Example: “Ticketing—screen-reader compatibility” or “Appointment times—carer availability”.
  • Tag direct impacts by protected characteristic where the text supports it. Avoid inference without evidence. If someone writes “I’m a wheelchair user and the ramp gradient is too steep”, that’s a direct impact you can code and count. If they just say “access is bad”, you can’t infer which characteristic applies.
  • Maintain a mitigations register sourced from coded comments (a sketch of one entry follows this list). Feed this into your EqIA and decision report so “conscientious consideration” is documented with a clear audit trail. This is what legal and scrutiny committees will check.
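
A sketch of what one register entry could hold; the field names and the EqIA cross-reference format are assumptions, not a prescribed schema:

from dataclasses import dataclass

# Hedged sketch of one mitigations-register entry. Field names and the
# EqIA cross-reference format are assumptions, not a prescribed schema.
@dataclass
class Mitigation:
    theme_code: str            # e.g. "night_service.mitigation.extra_late_bus"
    suggested_by: list         # respondent IDs, for traceability
    verbatim_example: str      # a quote evidencing the suggestion
    proposed_response: str     # accept / adapt / reject, with reasons
    eqia_ref: str = ""         # cross-reference into the EqIA, if relevant

register = [
    Mitigation(
        theme_code="night_service.mitigation.extra_late_bus",
        suggested_by=["R-0042", "R-0107"],
        verbatim_example="Keep one late bus on Fridays and I could live with it.",
        proposed_response="Adapt: trial a Friday late service for six months.",
        eqia_ref="EqIA section 4.2 (shift workers, carers)",
    ),
]
print(f"{len(register)} mitigation(s) sourced from coded comments")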



How tCI Can Help

Quality Assurance: Independent review at critical stages, from evidence protocol design through to final reporting, ensuring your approach to qualitative data meets legal and good practice standards. Our seven-stage QA process includes assessment of analysis methods, interpretation fairness, and compliance with Gunning, PSED and ICO requirements.

Early Assurance: A snapshot review during planning to sense-check your evidence framework, codebook design, and proportionality rationale before fieldwork begins.

Charter Workshops: Half-day sessions helping your team understand good practice standards for handling qualitative consultation data, including rigorous analysis and defensible interpretation.

Whether you’re preparing for a high-stakes service change or need confidence that your evidence approach will stand up to scrutiny, we can help. Contact tCI for Quality Assurance at hello@consultationinstitute.org

