Consultation Management: From Keeping Records to Building Knowledge
Rethinking Consultation and Engagement Data
Most organisations capture consultation and engagement data. Few treat it as something worth building. This article explores why engagement data is still managed as a by-product of activity rather than a strategic asset, what it takes to change that, and why the most compelling case for doing so is often found not in what the data shows, but in what it can’t.
Communities don’t experience projects the way organisations manage them. They don’t separate the early engagement from the formal consultation from the follow-up. They experience it as one continuous relationship, and they notice when an organisation clearly doesn’t.
I’ve spent over a decade working with stakeholder relationship management systems, mainly in Canada, where that continuity shapes how consultation is practised. Coming back to Ireland, what I observed was different. Most organisations here still treat each consultation as its own event. Information gets captured, a report gets written, and the data ends up wherever the team keeps records. It’s filed. It just doesn’t go anywhere useful after that.
The rich picture that engagement information could build, across projects, programmes, and communities, is routinely lost as a result.
The missed opportunity
This pattern reflects a deeper issue in how engagement data is framed inside organisations. When the primary purpose of recording consultation information is reporting on that particular moment, the data is naturally shaped around it. That supports transparency and defensibility, but it makes comparison and analysis across time difficult.
When you look at how organisations treat other types of data, the contrast is clear. Nobody collects financial data just to prove transactions happened, and operational data isn’t captured purely to document activity. Both exist to guide decisions, highlight risks, and signal when something’s going off track. They’re how organisations steer themselves.
Engagement data doesn’t tend to occupy the same role, even though it captures information about public concerns, stakeholder relationships, and the social context around decisions.
When engagement information does connect across an organisation’s work, it starts to answer questions that currently require significant manual effort. Which issues keep resurfacing? Is trust with a particular community improving or getting worse? How did, or should, outcomes from a previous activity inform strategy for the next?
None of these are complicated questions. They just require data that was structured with them in mind.
A profession still finding its footing
When I mention “engagement data as an organisational asset”, I tend to get looks that say, very clearly: what? That reaction isn’t a failure of imagination. It reflects how engagement has historically been framed, more as a process to be delivered than as a function that generates lasting organisational value.
Consultation and engagement is still consolidating its place alongside disciplines like engineering, project management, and finance. Standards and accreditation are developing, and the practice is increasingly being recognised as strategic, but the infrastructure beneath that recognition tends to lag. The result is something of a wild west when it comes to how underlying data is managed. It tends to be treated as a by-product of activity or a defensive record of what happened, rather than a source of ongoing learning that can inform strategic direction, improve how efficiently teams and projects operate, and build better relationships with stakeholders. Which is, after all, the whole point.
There’s a version of this argument that says it will sort itself out over time. As the profession matures and engagement becomes more widely recognised as strategic, the investment in the systems and data beneath it will follow naturally. It’s an appealing idea, but it doesn’t quite hold up. Recognition of the practice and investment in the data that underpins it don’t automatically move together. You can have a highly regarded engagement function that still runs its SRM off the side of someone’s desk.
Why this has to start at the top
Coming from a systems and data background rather than a practitioner background has given me a particular vantage point, looking at practices across countries, sectors, teams, and project types. What I’ve learned is that good engagement data practice almost always needs to be built from the top down.
The reason is straightforward. When a director decides that engagement data is not a priority, the work stops. All the good intentions, well designed workflows, and carefully maintained records that a practitioner has fought to establish can be undone in a single conversation. I’ve seen it happen more times than I’d like.
So, the question isn’t just how to build better data practice; it’s how to make the case to the people with the authority to protect and resource it. And I believe the answer is insight. Not activity reports, not records of who was consulted and when, but evidence of what the data actually reveals about risk, relationships, and decisions. That’s what moves it from a compliance function to a strategic one in the eyes of the people who control budgets and priorities.
So why doesn’t this happen?
Making that case upward is harder than it sounds, and the barriers are real.
Engagement data just doesn’t feel urgent compared to other organisational priorities, the perceived risk seems manageable, and capturing enough to support reporting feels sufficient. Resources are limited, and attention naturally goes to delivering consultations well and meeting expectations around transparency and process.
When consultation is regulated, the emphasis falls on demonstrating that requirements have been met. That’s legitimate. But the way data gets recorded tends to reflect those reporting needs rather than longer-term learning. Some information may not be strictly required for the record, yet it’s essential for understanding how issues evolve across projects, places, or initiatives. When only the minimum is logged, the dataset can demonstrate activity but not learning and continuity.
I’ve seen a clear split between organisations using SRM systems primarily for reputation management and those using them for regulatory compliance. The difference shows up quickly in the data. Where engagement information is treated as something worth building, there’s more discipline, clearer accountability, and stronger thinking about what the data needs to support. Where it’s treated as compliance, the data tends to be thinner, more fragmented, and only revisited when required.
There’s also a less comfortable possibility. Keeping consultations separate can feel lower risk. If connections across projects are never made, the broader picture never has to be acknowledged, and recurring issues never have to be explained. Each consultation stands alone, which can feel simpler than answering for what a programme-wide view would show.
The problem with that logic is that it only holds until someone starts asking questions. And when they do, the absence of a connected record doesn’t read as simplicity. It reads as an organisation that wasn’t paying attention.
When it suddenly matters
Engagement data tends to matter most when things get contested. A Freedom of Information request makes whatever you did capture visible, including the gaps, the inconsistencies, and the contradictions across projects. A Judicial Review raises the stakes considerably. A planning appeal presents the same exposure, where objectors or an inspector question whether concerns raised in earlier stages were genuinely considered in later decisions.
And it isn’t always an external trigger. A change of minister, CEO, or board is often the moment when someone asks for a coherent picture of how the organisation has engaged with key communities and stakeholders over time, and there isn’t one to give.
It also surfaces internally. Leaders question why they’re paying for an SRM platform nobody seems to be getting value from. Programme managers ask why they can’t get a simple picture of what’s been heard across related projects. Teams realise they’ve been solving the same problems in parallel for months without knowing it.
I once worked with an organisation that had run multiple consultations on related projects over several years. When questions started being asked about whether issues had been addressed coherently across the programme, nobody could easily answer. The information existed, technically, but it was scattered across different systems, managed by different teams using different approaches.
Reconstructing the narrative took weeks and required effort from people who should have been doing other work. Even then, the final answer was less confident than anyone wanted, because the underlying data had never been structured to support a cross-programme view.
Those who’ve lived through that experience often become the strongest advocates for better data practices. By then, though, the work is reactive rather than planned.
Technology won’t fix this
When I was working with SRM systems in Canada, I was convinced the main problem was adoption. Get the training right, embed the workflows, and everything else would follow. It took a few years and a number of audits to understand how wrong that was.
The system is rarely the problem. These platforms are essential infrastructure for capturing and managing engagement information, but their value depends entirely on how clearly the data they collect is intended to be used. If the purpose is unclear, teams record inconsistently. If standards aren’t agreed, people categorise issues differently. And if nobody owns quality, gaps and contradictions accumulate quietly until the data is too compromised to be useful.
Ultimately, technology amplifies whatever process already exists. If that process is unclear, technology scales the problem. And when AI enters the mix, the scale of that amplification increases dramatically. What comes out is only as good as what went in, and when the foundation is fragmented, AI makes that problem bigger, not smaller.
Getting the foundations right before you procure a system is critical. Roll out an SRM as the magic fix if you want a quick way to surface every inefficiency and gap in your process, but don’t be surprised when you find yourself trying to steer a ship back to shore.
What engagement data actually needs
So, what is the fix?
It starts with culture, and that’s the hardest part. In most organisations, engagement data is managed the way it is because that’s how it’s always been managed. Teams work in silos, using their own systems and their own conventions, solving problems that other teams have already solved without knowing it. Spreadsheets multiply, email threads become records, and parallel systems spring up because nobody has agreed on a single source of truth. None of this is deliberate, it’s just what happens when nobody has decided that the data is worth treating consistently. And that rarely changes without a clear signal from leadership that it matters.
Before any process or system work begins, there needs to be a strategic conversation about intent. What is the data meant to support, not just now but one, five, or ten years from now? What should it help the organisation learn? How should it connect to decisions? Who contributes to it, and who depends on it?
From that strategic foundation, the practical mechanics follow. What gets captured and how. Who is accountable when it isn’t done well, and what the process looks like for quality assurance and feedback. Who is responsible for maintaining and managing insights, flagging when things are going off track, and making sure input gets traced through to decisions rather than disappearing into a report. These are small decisions, made early, that determine what the organisation can say about its work two years later.
There’s also a capability question that’s easy to overlook. Managing engagement data well requires skills in data structure, quality management, and analysis that aren’t always part of an engagement professional’s background. In other functions, those skills are recognised and resourced accordingly. In engagement teams, they’re often expected to appear alongside everything else, without additional support or training. Addressing that is part of getting the foundations right, and it matters before any system or tool is introduced.
Once these structures are in place, technology becomes genuinely useful. Platforms and SRM systems are powerful tools for capturing, connecting, and surfacing engagement information at scale, but they depend entirely on the culture, strategy, and governance around them. Get those right first, and the technology does what it’s supposed to do. And when the data is consistent, you can start to see things that individual reports never show: patterns across consultations, trends in how issues evolve, a real picture of how relationships are developing over time.
Does this require time and resources? Yes. But think about what it actually buys. Teams stop doing the same work twice without knowing it. The engagement function starts to be taken more seriously internally, because it can point to evidence rather than just asserting its own value. And practitioners who can demonstrate impact through data tend to find themselves in different conversations, the ones where budgets and priorities get decided. That’s a reasonable return on getting the basics right.
The case for change
The most powerful case you can make, whether you’re a practitioner or a leader, isn’t built from what the data shows. It’s built from what it can’t show. “We can’t tell you whether trust with this community has improved over the last three years because we never tracked it consistently.” “We can’t tell you whether this issue was raised on the last two projects without weeks of manual work across systems that were never designed to talk to each other.” Those absences are the argument. They show the cost of the status quo in terms that are difficult to ignore.
Moving from records to insight doesn’t start with a system. It starts with a strategic decision that the data is worth more than a record of what happened.
How tCI Can Help
Organisation-Wide Learning Hub Access
Equip your entire team with professional consultation skills through one platform. Self-paced courses, live virtual classrooms, practical toolkits and expert resources that build a shared baseline of competence across your organisation. Trusted by councils, NHS bodies and regulators nationwide.
Bespoke Training Workshops
Training that works with your real projects, not hypothetical scenarios. Sector-tailored sessions help teams apply good practice to live challenges: sharpening consultation documents, building defensible codebooks, strengthening equality analyses. Half-day or full-day workshops for health, local government, planning and public service teams.
Coaching for Complex or High Risk Consultations
Expert guidance when the stakes are highest. One-to-one and small-group coaching for senior officers navigating legally exposed or politically contentious decisions. Strengthen your judgement on proportionality, evidence standards and challenge management. Essential for organisations that may face judicial review risk or major service changes.
Whether you’re preparing for a high-stakes service change, building long-term consultation capability, or need confidence that your evidence approach will stand up to scrutiny, we can help.
Contact tCI: hello@consultationinstitute.org
Luiseach Flynn