AI in Public Consultation: Wider Reach, New Risks
At the tCI, we are witnessing a subtle shift in who holds authority during public consultations. Large language models have not mechanised participation; rather, they have shortened the time it takes for a motivated resident to gain technical understanding. This single change is already affecting the tone of submissions, the expectations placed on decision makers, and the standard of evidence that the public can demand.
The core idea is simple: expertise was once constrained by time and access. Reading a full case, analysing its assumptions, and crafting a detailed response took hours that many people do not have. Large language models now compress those hours into minutes. They do not replace human judgment, but they lower the barrier between curiosity and understanding, so more people can read carefully, ask precise questions, and sharpen the discussion. For example, if a proposal claims savings based on a single study, citizens can identify that reliance and question whether it generalises. If estimated travel times vary widely around an average, citizens can ask for analysis of the distribution. When protections are promised for a group, citizens can probe how harm would be detected and whether fallback plans are workable. The result is participation that is better informed and clearer, not merely louder.
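To see why the distribution matters in that travel-time example, here is a minimal sketch in Python using entirely made-up journey times; it is our illustration, not part of any consultation's actual data.

```python
import statistics

# Hypothetical journey times in minutes; illustrative only.
times = [12, 14, 15, 15, 16, 18, 19, 22, 45, 68]

print(f"mean:            {statistics.mean(times):.1f} min")
print(f"median:          {statistics.median(times):.1f} min")
# The 90th percentile exposes the long tail the average smooths over.
print(f"90th percentile: {statistics.quantiles(times, n=10)[-1]:.1f} min")
```

A resident armed with this framing can ask not just "what is the average?" but "how many journeys fall far above it?"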
This change affects the notion of legitimacy. Conscientious consideration means public bodies must engage with the actual content of what people say. As AI-assisted submissions grow more sophisticated, officials will need to explain their processes more transparently. Summaries that previously focused on broad themes will require audit trails linking those themes back to respondents' exact words. We expect debates to shift from volume to interpretation: whether the analysis preserved minority perspectives or marginalised them, and whether a plain-language revision altered the legal force of a commitment. These are not criticisms of technology but questions of methodology, and they will become more common as the public's ability to ask them improves.
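As an illustration of what such an audit trail could look like, here is a minimal sketch in Python; the `ThemeRecord` structure and the sample quotations are our own hypothetical example, not a prescribed tCI format.

```python
from dataclasses import dataclass, field

@dataclass
class Excerpt:
    response_id: str  # which submission the words came from
    quote: str        # the respondent's exact words, verbatim

@dataclass
class ThemeRecord:
    theme: str
    excerpts: list[Excerpt] = field(default_factory=list)

    def add(self, response_id: str, quote: str) -> None:
        self.excerpts.append(Excerpt(response_id, quote))

    def audit_trail(self) -> str:
        # Every reported theme can be traced back to verbatim text.
        lines = [f"Theme: {self.theme}"]
        for e in self.excerpts:
            lines.append(f'  [{e.response_id}] "{e.quote}"')
        return "\n".join(lines)

# Illustrative only: one summarised theme with its supporting quotations.
record = ThemeRecord("Concern about travel-time variability")
record.add("R-0412", "the average hides journeys of over an hour")
record.add("R-0977", "please publish the spread, not just the mean")
print(record.audit_trail())
```

The point of the structure is that no theme appears in a report without the verbatim words that justify it, which is exactly what a challenge over interpretation would test.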
Emerging AI Risks in Public Consultation
Some risks must be acknowledged. Models can flatten nuance and present weak inferences with confidence. Inexperienced users may be swayed by fluent language and accept assertions as facts. Authorities may be tempted to accelerate processes without adequate quality control. There is also a growing danger of synthetic campaigns, in which coordinated, machine-generated submissions masquerade as grassroots support. The answer is not to abandon these tools but to insist on transparency and verifiable links between claims, sources, and conclusions. Where AI assists with analysing responses or drafting report sections, that use should be stated clearly, alongside an explanation of how human review works and how minority signals are preserved.
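One common screening technique for synthetic campaigns is near-duplicate detection. Here is a minimal sketch, assuming Jaccard similarity over word shingles with an illustrative threshold; the submissions are invented, and any real deployment would need calibration.

```python
def shingles(text: str, k: int = 3) -> set[tuple[str, ...]]:
    """Overlapping k-word windows; crude but order-sensitive."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Overlap between two shingle sets, 0.0 (disjoint) to 1.0 (identical)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Invented submissions for illustration only.
submissions = {
    "R-001": "I strongly oppose the road closure because local trade will suffer",
    "R-002": "I strongly oppose the road closure because local trade will suffer badly",
    "R-003": "The library hours proposal seems reasonable to me overall",
}

THRESHOLD = 0.7  # illustrative; real screening needs calibration
ids = list(submissions)
for i, x in enumerate(ids):
    for y in ids[i + 1:]:
        sim = jaccard(shingles(submissions[x]), shingles(submissions[y]))
        if sim >= THRESHOLD:
            print(f"flag for human review: {x} vs {y} (similarity {sim:.2f})")
```

A flag here is a prompt for human review, never grounds for discarding a submission: genuine, well-organised campaigns also produce similar texts.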
Equity matters as much as efficiency. The advantages of AI will initially flow to those with access, confidence, and time, which is unacceptable if the aim is inclusive decision-making. A responsible approach includes assisted access points in libraries and community centres, easy-read materials as standard, and support for community organisations that help first-time users navigate these tools safely. This democratisation is meaningful only if it widens the range of voices rather than merely amplifying those already engaged.
We are often asked whether AI threatens the standing of traditional expertise. We believe these tools will change how expertise is practised in public, not eliminate the need for it. Specialists will spend less time gatekeeping and more time explaining trade-offs, defending assumptions, and refining proposals in response to well-formed critiques. That is a healthier dynamic, and it demands more from everyone. Citizens must take responsibility for verifying quotations, distinguishing opinion from evidence, and acknowledging uncertainty where it exists. Officials should welcome challenge and respond with substantive analysis rather than rhetoric. tCI will continue to advocate for these standards on both sides.
What Happens Next?
What does good look like in this new normal? It means consultation documents that are genuinely readable yet legally precise, response analyses that reflect the full range of opinion, and reports that trace clearly how conclusions were reached. It means surfacing and addressing minority viewpoints rather than burying them in an appendix. It means transparent statements about where AI was used and how its outputs were verified. Above all, it means decision statements that explain, in practical terms, how public input shaped the final outcome, or why it did not.
Several developments seem likely over the coming years. Citizen assistants will routinely be available at engagement points, answering questions about proposals with citations back to the original texts. Authorities will publish AI usage statements as a matter of course and offer opt-outs where practicable. Consultation reports will carry short methods notes covering both the human and machine elements of the analysis. And we will see more public-generated alternatives, with residents using the same tools to produce credible options worth testing rather than simply objecting. This shift will be uncomfortable for some organisations, but it will ultimately be constructive.
We are asking hard questions and encourage others to do the same. How do we ensure that low-frequency but high-impact issues are not lost in automated summaries? Where should thresholds sit for flagging potential astroturfing without chilling genuine mobilisation? How do we simplify language without diluting the legal meaning of duties or commitments? What audit standards will satisfy a court that conscientious consideration took place when AI was involved? And how will we measure success beyond speed and cost, for instance by tracking the diversity of participation, readability, error rates found in quality assurance, and the prominence of minority themes in final reports?
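To ground that last question, here is a minimal sketch of two such metrics in Python: a classic Flesch reading-ease score (with a rough vowel-group syllable heuristic) and a simple minority-theme prominence share. The functions and sample data are hypothetical illustrations, not an endorsed tCI methodology.

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Standard Flesch formula; syllables estimated by counting vowel groups."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(max(1, len(re.findall(r"[aeiouyAEIOUY]+", w))) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

def theme_prominence(report_themes: list[str], minority_themes: set[str]) -> float:
    """Share of themes in the final report that carry a minority-flagged perspective."""
    if not report_themes:
        return 0.0
    return sum(t in minority_themes for t in report_themes) / len(report_themes)

# Illustrative data only.
summary = "The scheme reduces average journey times. Some residents report much longer trips."
print(f"readability score:   {flesch_reading_ease(summary):.1f}")
print(f"minority prominence: {theme_prominence(['travel time', 'access', 'cost'], {'access'}):.0%}")
```

Neither number is meaningful on its own; the value lies in tracking them across consultations so that gains in speed are not bought with losses in inclusion.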
Our position is consistent: large language models are neither a threat to consultation nor a shortcut to legitimacy; they are an accelerant. If authorities commit to transparency and sound methods, and if citizens engage with the tools thoughtfully, the quality of dialogue can improve. tCI will keep offering practical guidance and training while using our independent voice to call out weak practice, whether performative consultation that conceals its methods or AI hype that overpromises. The public now has more capacity to contribute meaningfully, and it falls to institutions to build processes worthy of that contribution.
If you’re ready to put this insight into action, we’re here to assist. tCI provides authorities and community groups with evidence-based consultation design, AI-ready methodologies, training, and independent assurance. Visit https://www.consultationinstitute.org/services/ to see how we work and select the support that best suits your next consultation.