AI in Consultation: A Framework for Responsible Adoption
The Promise and the Caution
AI is arriving in government consultation. Cambridge City Council’s planning department and central government are piloting systems to process responses faster, potentially saving hundreds of thousands of working days a year. But citizens demand rigorous oversight. The public holds AI to higher standards than humans and expects transparency. Any hint of concealed automation will backfire.
Consultations often feed legally challengeable decisions. AI use must meet fairness standards and follow the Algorithmic Transparency Recording Standard. Professional guidance emphasises “human-in-command” at every stage.
A Framework for Responsible Adoption
Before deploying AI, answer five questions. If you cannot do so with confidence, the tool is not ready.
1. What human task does this replace, and what judgement does it bypass?
Be specific. When AI groups “traffic concerns” with “road safety”, it makes an interpretive decision a human analyst might have handled differently. That choice shapes your final analysis.
Document every judgement AI will make: how it handles ambiguous comments, weights longer responses, or treats short submissions. Make these decisions visible so reviewers know where algorithmic interpretation occurs. This creates the audit trail you need if challenged and forces you to think through what you are genuinely automating.
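One way to keep those judgements visible is a written register that travels with the project. The sketch below is purely illustrative, in Python: the field names, example behaviours and sign-off roles are assumptions, not a prescribed standard.

# Hypothetical register of the interpretive judgements an AI tool will make.
# Each entry names the judgement, the configured behaviour, and who approved it.
ai_judgement_register = [
    {
        "judgement": "Ambiguous comments",
        "behaviour": "Flagged for human review rather than auto-categorised",
        "approved_by": "Lead analyst",
    },
    {
        "judgement": "Long responses",
        "behaviour": "Summarised, but not weighted more heavily than short ones",
        "approved_by": "Lead analyst",
    },
    {
        "judgement": "Very short submissions",
        "behaviour": "Retained and themed, never discarded as low-content",
        "approved_by": "Lead analyst",
    },
]

def print_register(register):
    """Print the register so it can be appended to the project audit file."""
    for entry in register:
        print(f"{entry['judgement']}: {entry['behaviour']} (approved by {entry['approved_by']})")

Kept in this form, the register doubles as the first page of the audit trail described under question 2.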
2. How will we evidence the chain of reasoning from raw data to final decision?
If your report states “78% prioritise green space”, trace that claim to specific submissions. Record which AI model you used (version and parameters), save raw outputs alongside final summaries, and build reference systems linking each finding to AI themes, submission clusters, respondent IDs and the staff member who verified it.
This allows external validation. When auditors question conclusions, show them submission 1247, prove it fed into theme G7, demonstrate that the theme appeared in 78% of responses, and confirm planning officer Sarah Chen validated it on 15 March. If your report influences Cabinet decisions months later, you can still substantiate every claim.
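A minimal sketch of such a reference record, in Python, using the example above: the structure, model details and field names are illustrative assumptions, not a mandated format.

from dataclasses import dataclass

@dataclass
class Finding:
    """One traceable claim in the final report, linked back to its evidence."""
    claim: str                 # the statement made in the report
    theme_id: str              # AI-generated theme the claim rests on
    model: str                 # model name/version and key parameters used
    submission_ids: list       # raw submissions that fed the theme
    share_of_responses: float  # proportion of responses containing the theme
    verified_by: str           # named officer who checked it
    verified_on: str           # date of verification

finding = Finding(
    claim="78% prioritise green space",
    theme_id="G7",
    model="example-model v1.2, temperature=0",  # hypothetical model details
    submission_ids=[1247],                      # plus every other linked submission
    share_of_responses=0.78,
    verified_by="Sarah Chen, planning officer",
    verified_on="15 March",
)

One record per claim, saved alongside the raw AI outputs, is enough to reconstruct the chain of reasoning when a decision is challenged.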
3. What biases exist in the model, and how do they interact with our population?
AI models typically perform best on formal, grammatically standard text. They struggle with colloquialisms, dialects, non-native English and text-message brevity. This creates risk in a process that must give every voice equal consideration.
Test before deployment. Sample submissions from non-native speakers and compare AI summaries against bilingual human reviewers. Check across age groups: does AI handle both “yeah this ain’t it” and “I must express profound reservations” equally well? Examine whether submission length affects weighting, potentially excluding people with literacy barriers.
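A minimal sketch of that comparison, assuming you already hold AI labels and human-reviewer labels for a stratified sample; the group names and record layout below are illustrative assumptions.

from collections import defaultdict

def agreement_by_group(sample):
    """Compare AI labels against human-reviewer labels, per demographic group.

    `sample` is a list of records such as:
    {"group": "non-native English", "ai_label": "traffic", "human_label": "road safety"}
    """
    matches = defaultdict(int)
    totals = defaultdict(int)
    for item in sample:
        totals[item["group"]] += 1
        matches[item["group"]] += int(item["ai_label"] == item["human_label"])
    return {group: matches[group] / totals[group] for group in totals}

# If agreement is noticeably lower for one group, treat that as a bias signal:
# route that group's submissions to additional human review before relying on AI output.

The same comparison, re-run on length-stratified samples, shows whether short submissions are being themed less reliably than long ones.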
Build corrective measures. Track responses from underrepresented groups separately. Review stratified samples across demographics. Where bias appears, adjust your process: certain groups may need additional human attention, or you may need multiple AI tools to triangulate results.
4. How will we explain AI use to the public? Would disclosure weaken or strengthen trust?
Never answer “we won’t tell them”. Citizens demand transparency. The question is how to disclose AI use effectively.
For reports, include methodology notes: “We used software to process 3,000 submissions. Every theme was verified by qualified planners against original comments.” For responses: “This was drafted with AI assistance and fully checked by a human.” The Department for Transport tested this phrasing successfully.
For meetings, prepare clear explanations: “AI helped us process feedback faster, ensuring we considered everyone’s views. All conclusions were checked by the team and we can show how we reached them.”
Test disclosure messages with community groups first. If you cannot craft a message that strengthens trust, your process has too much AI and insufficient human oversight. Add verification steps and increase sample checking.
5. What manual checks will we apply before relying on outputs?
Specify what gets checked, by whom and to what standard. Define quality control in writing:
Random sampling: Verify 10% of AI categorisations against source material. Have analysts record whether each was accurate, marginally acceptable or wrong.
Targeted checking: Review controversial issues, low-confidence cases and surprising findings. If AI reports 90% support where you expected opposition, verify exhaustively.
Thematic validation: Have experts review AI theme definitions. Are categories too broad? Are distinct issues conflated?
Statistical checks: Do numbers add up? If AI reports 500 comments on issue X but your report shows 480 quotes, explain the gap.
Assign clear accountability. One named person must sign off that outputs are fit for purpose using a formal checklist: themes reviewed, samples checked, bias testing conducted, disclosure prepared, audit trail documented.
Build quality gates. AI outputs should not reach decision-makers until checks complete. If verification reveals problems, pause and reconfigure until outputs are reliable.
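To make the random-sampling check above concrete, here is a minimal sketch in Python. It assumes categorisations are held as simple records and that analysts record one of the three verdicts named in the checklist; the function names and the fixed seed are illustrative choices, not a required tool.

import random

def draw_verification_sample(categorisations, rate=0.10, seed=42):
    """Draw a reproducible sample of AI categorisations for human checking."""
    rng = random.Random(seed)  # fixed seed so the same sample can be re-drawn for audit
    k = min(len(categorisations), max(1, round(len(categorisations) * rate)))
    return rng.sample(categorisations, k)

def summarise_verdicts(verdicts):
    """Tally analyst verdicts: 'accurate', 'marginal' or 'wrong'."""
    counts = {"accurate": 0, "marginal": 0, "wrong": 0}
    for verdict in verdicts:
        counts[verdict] += 1
    return counts

# If the 'wrong' count is material, pause the pipeline and reconfigure
# before any AI output reaches decision-makers (the quality gate above).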
Conclusion
These five questions provide a gate. Tools that cannot pass through are not ready. Start with small pilots, train staff to interrogate AI tools rather than simply use them, and maintain alternative engagement routes for those without digital access.
AI can accelerate analysis, but technology serves consultation, not the reverse. Core values remain: fairness, transparency, accessibility and accountability. Get the questions right, demand rigorous answers, and the tools will follow.
How tCI Can Help
Skills Review & Planning Support
We help you shape a tailored “learning plan” for your organisation, deciding who needs training, what level, and when, so that your consultation capacity grows in line with your strategic needs. This ensures consistent, recognised standards across teams through the tCI Learning Hub.
Applied Consultation Workshops
Half-day or full-day, sector-tailored training sessions for teams working in health, local government, planning or other change sectors. These practical, scenario-based workshops are grounded in real-world application, covering rigorous analysis, defensible interpretation, and good practice standards for handling qualitative consultation data.
Consultation Risk Assessment
Independent review to identify and manage legal, political and reputational risks early. We examine your consultation scope, governance, timelines and materials against recognised standards including the Gunning Principles. Receive a structured assessment highlighting risks, rating impact, and setting out mitigation options. Essential for politically sensitive, high-profile or challenge-prone decisions.
Whether you’re preparing for a high-stakes service change, building long-term consultation capability, or need confidence that your evidence approach will stand up to scrutiny, we can help.
Contact tCI: hello@consultationinstitute.org