Gunning – When 500 Identical Emails Land: The Campaign Response Problem That Triggers Judicial Review
The challenge: 500 identical campaign emails arrive overnight. Most teams either delete them (triggering a Gunning Principle 4 breach) or count each as independent evidence (distorting the analysis). Both approaches create grounds for judicial review.
The cost: Wasted consultation effort. Loss of public trust. Legal challenge that exposes your process as arbitrary.
What works: Count every submission. Aggregate the content. Document the method before anyone asks.
Why This Matters
Digital consultations generate mass responses overnight. A single campaign delivers 1,000 identical submissions. Most teams have no protocol for handling them. Result: inconsistent treatment that looks like bias under scrutiny.

Courts enforce Gunning Principle 4: “conscientious consideration” of all responses. Deleting campaign letters breaches this. But treating 1,000 copies as independent viewpoints distorts your evidence base. The EU Ombudsman reprimanded the European Commission for not reporting campaign submissions, warning this “risks discouraging organisations from launching campaigns.”

Your risk: judicial review and reputational damage. The solution: acknowledge every submission, aggregate the content, and document the method.
Three Digital Failures That Breach Gunning
1. Deleting campaign responses without disclosure
- What happens: 800 people submit the same form letter. Your team removes “duplicates” to focus on “genuine” responses.
- Why it fails: Even identical submissions represent real people who chose to participate. Removing them without acknowledgement breaches conscientious consideration.
- Cost: No audit trail showing responses were read. Judicial review risk escalates.
2. Counting every duplicate as independent evidence
- What happens: 600 identical responses. Your analysis reports “600 respondents raised concern X” without noting they used the same text.
- Why it fails: Your evidence base suggests 600 people independently arrived at identical wording. When cross-checked, your analysis looks manipulative.
- Cost: Decision-makers rely on distorted evidence. Post-consultation challenge becomes easier.
3. No documented protocol
- What happens: Different analysts handle similar submissions inconsistently. No methodology note explains the approach.
- Why it fails: Without transparency, any treatment of duplicates looks arbitrary. You can’t prove conscientious consideration if you can’t show your working.
- Cost: Vulnerability in judicial review. Your process lacks rigour.
The Defensible Protocol: Four Steps
Step 1: Flag campaign responses on receipt
- Action: As responses arrive, identify submissions using identical text (usually >80% match across 50+ words). Tag as “campaign” in your tracking system (a minimal matching sketch follows below).
- Why: Early identification prevents miscount. Creates audit trail from day one.
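The matching rule above can be automated. The sketch below is illustrative only, not tCI tooling or a prescribed method: it uses Python’s standard-library difflib to flag long, near-identical responses, and the 80% similarity and 50-word thresholds are assumptions to tune against your own data.

```python
# Illustrative sketch only: flag near-identical submissions as "campaign".
# Thresholds (>80% match, 50+ words) mirror the rule of thumb above and are assumptions.
from difflib import SequenceMatcher

def is_campaign_match(text_a: str, text_b: str,
                      min_words: int = 50, threshold: float = 0.8) -> bool:
    """Return True when both responses are long enough and their text is >threshold similar."""
    if min(len(text_a.split()), len(text_b.split())) < min_words:
        return False
    return SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio() > threshold

def tag_campaign_responses(responses: list[dict]) -> list[dict]:
    """Tag each response as 'campaign' or 'individual' by comparing it with earlier arrivals."""
    tagged: list[dict] = []
    for response in responses:
        entry = {**response, "tag": "individual"}
        for earlier in tagged:
            if is_campaign_match(entry["text"], earlier["text"]):
                entry["tag"] = "campaign"
                earlier["tag"] = "campaign"  # the first copy of a template belongs to the campaign too
                break
        tagged.append(entry)
    return tagged
```

In practice you would run this over exported responses as they arrive and carry the resulting tags in your tracking system, so the audit trail starts at receipt rather than at reporting.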
Step 2: Include all submissions in headline totals
- Action: Count every campaign response in your total. If 500 people submitted the NDCS letter, all 500 appear in your headline total.
- Why: Respects every participant. Satisfies Gunning Principle 4. Prevents accusations of silencing voices.
Step 3: Aggregate content, not voices
- Action: In analysis, treat campaign letter content collectively. Report the argument once, with volume clearly noted.
- Example: “500 respondents used the NDCS campaign template to raise three concerns: [list]. Full template at Annex A.”
- Not this: Listing the same point 500 times.
- Why: Avoids inflation while preserving evidence quality. Shows you read the content but didn’t let repetition distort analysis (a sketch combining Steps 2 and 3 follows below).
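As a rough illustration of Steps 2 and 3 together, the sketch below assumes responses already carry the “campaign”/“individual” tag from Step 1; the field names are assumptions, not a standard. Every submission stays in the headline total, while identical campaign text is summarised once with its volume noted.

```python
# Illustrative sketch only: count everyone, but report each campaign template once with volume.
from collections import Counter

def summarise(responses: list[dict]) -> dict:
    campaign = [r for r in responses if r["tag"] == "campaign"]
    individual = [r for r in responses if r["tag"] == "individual"]

    # Group campaign responses by their (identical) text so each argument is reported once.
    campaign_volumes = Counter(r["text"] for r in campaign)
    campaign_summary = [
        {"template_text": text, "volume": volume}
        for text, volume in campaign_volumes.most_common()
    ]

    return {
        "total_responses": len(responses),        # Step 2: every submission is counted
        "individual_responses": len(individual),
        "campaign_summaries": campaign_summary,   # Step 3: argument reported once, volume noted
    }
```

The output maps directly onto the reporting pattern above: one entry per template, with its volume alongside, rather than the same point listed hundreds of times.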
Step 4: Document methodology in the report
- Action: Include a methodology section explaining how campaign responses were identified and handled.
- Template language:
“We received 1,200 consultation responses. Of these, 500 used identical wording from an organised campaign. Each submission is counted in the total. Campaign responses were grouped for analysis to avoid inflating repeated points. Their key arguments are summarised once, with volume noted. This approach ensures conscientious consideration of all feedback while maintaining analytical rigour.”
- Why: Transparency reinforces fairness, provides an audit trail, and pre-empts accusations of bias (a short sketch for filling this template from your totals follows below).
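If it helps, the template wording can be filled straight from your counts, so the published figures and the methodology note cannot drift apart. A minimal, hypothetical sketch:

```python
# Illustrative sketch only: fill the methodology template above from the headline counts.
def methodology_note(total: int, campaign_total: int) -> str:
    return (
        f"We received {total} consultation responses. Of these, {campaign_total} used "
        "identical wording from an organised campaign. Each submission is counted in the "
        "total. Campaign responses were grouped for analysis to avoid inflating repeated "
        "points. Their key arguments are summarised once, with volume noted."
    )

print(methodology_note(total=1200, campaign_total=500))
```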
Worked Example: The NDCS Campaign
- Scenario: Department for Education consulted on SEND provision. National Deaf Children’s Society mobilised 609 parents who submitted identical template letters.
- What DfE did: Noted that “identical (collective) responses were removed and analysed separately.” They counted all 609 in the total, then reported template content once with volume indicated.
- Why it worked:
- Every submission counted (Gunning 4 satisfied)
- Content aggregated to prevent distortion
- Methodology disclosed
- Campaign organisers could verify acknowledgement
- Decision-makers got accurate evidence base
- Better phrasing: “609 identical campaign responses were identified and analysed collectively” (the word “removed” created ambiguity).
The Three-Question Defensibility Test
1. Audit trail check
If we received 500 identical letters, can we document when we identified them, how many we received, what they said, and how we incorporated them into analysis?
If no: Vulnerable in judicial review.
2. Principle 4 check
Could a campaign organiser read our report and verify their 500 submissions were acknowledged and considered?
If no: Failed conscientious consideration. Legal and reputational risk.
3. Transparency check
If a journalist requests our methodology, would it withstand scrutiny?
If no: Process will appear arbitrary under examination.
The Bottom Line
Campaign responses don’t break consultations. Secret processes break consultations.
Conscientious consideration doesn’t mean treating 1,000 identical emails as 1,000 independent viewpoints. It means acknowledging every submission, reading the content, reporting volume transparently, and creating an audit trail.
If you can show your working, you’re defensible. If you can’t, you’re vulnerable.
How tCI Can Help
Quality Assurance: Independent review at critical stages, from evidence protocol design through to final reporting, ensuring your approach to qualitative data meets legal and good practice standards. Our seven-stage QA process includes assessment of analysis methods, interpretation fairness, and compliance with Gunning, PSED and ICO requirements.
Early Assurance: A snapshot review during planning to sense-check your evidence framework, codebook design, and proportionality rationale before fieldwork begins.
Charter Workshops: Half-day sessions helping your team understand good practice standards for handling qualitative consultation data, including rigorous analysis and defensible interpretation.
Whether you’re preparing for a high-stakes service change or need confidence that your evidence approach will stand up to scrutiny, we can help. Contact tCI for Quality Assurance at hello@consultationinstitute.org