“How do we effectively manage large volumes of responses to consultations with limited resources, especially when these volumes are likely to increase as a result of more accessible online engagement?”
It’s a great question. The growth of the internet as a means for spreading our ‘engagement net’ ever wider brings with it the potential for a much greater degree of community participation.
Unfortunately, this is a double-edged sword.
Greater engagement leads (theoretically) to better outcomes (be they policy, planning or community outcomes). However, an increased volume of consultation responses has traditionally meant more work for already overstretched engagement staff.
So how do we manage large volumes of responses to consultations with limited resources?
Currently, the most common form of response an engagement officer receives is a large, unstructured block of text, whether in the form of an email, a hard-copy letter or the output from a web form (“Enter your comments here: “). In some cases, those responses might arrive in an Excel spreadsheet, offering slightly more structure.
The process they then go through is extremely resource-intensive and varies per consultation, but usually consists of the following steps:
1. Identify Relevance
Is the comment relevant to the current engagement activity? For example, the engagement might be about a new waterfront precinct but the member of the public decides to take the opportunity to express their views on the state of the local roads or levels of graffiti.
2. Extract Themes
Often responses are highly specific in nature, and yet part of the job of an engagement officer is to identify broader themes – both to report on and to give an indication of consensus. For example, a response might describe a specific usage scenario while the overall theme is a concern about the environment. Grouping responses enables officers to report that ‘85% of responses were concerned about environmental impact’.
3. Relate Feedback to Specific Topics
The traditional mechanism for online consultation relating to large documents (like community, transport or environmental plans; or policy) is to provide the document as a PDF (or as a series of HTML pages) and request responses either by email or by filling in a web form.
A huge amount of effort goes into extracting the comments and tying them to the specific sections of the original document. For example, sentences 3 and 7 of my email response might relate to section 3.4.3 of the transport plan.
All of this takes time, and the effort grows rapidly with both the number of responses and the variety of channels through which they might be received.
So are there any solutions? How can the processing of responses be achieved without having to ‘throttle’ the number of responses that are sought?
Based on my experience of a number of large-scale consultation activities in the UK and the US, I believe the answer might lie in changing the mechanism for consultation, combining quantitative and qualitative questioning, and automating many aspects of the process.
In terms of the mechanism, more and more organisations are turning to online interactive documents as their primary online consultation mechanism. Interactive documents are documents that have been specially marked up during the production process to highlight specific topics of consultation – document sections, options, recommendations or proposed policy on which the engagement officer wants input from the community.
This achieves a couple of aims. Firstly, it addresses step one above by focusing the attention of the consultee on the issue at hand (in a way that the traditional ‘please provide feedback on this 200-page document’ simply can’t). I am more likely to respond ‘on topic’ if I am directed towards specific discrete questions or allowed to freely comment on a given defined recommendation.
Secondly, it addresses the third step in the process by automatically tying the response to the relevant section (recommendation, action, policy etc) of the document. No more manually relating the comments in an email to the sections of the original document.
Now, what if we could combine quantitative, or structured, feedback with the unstructured comments on the topic? What if we could get the consultees to provide us with the insight as to what the broad theme of their response is? Perhaps tying a discrete question (with a series of pre-defined answers) to the more unstructured feedback will help immensely.
“Is your concern relating to a) the Environmental impact, b) the Economic impact for the town, c) the Cost to implement? Please elaborate …”
This gives us both the theme (quantitative) and the feedback (qualitative) tied directly to the section (option, recommendation, policy etc), addressing step 2.
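As a rough sketch of what this structured capture might look like in data terms (the field names and themes here are hypothetical illustrations, not taken from any particular product), each response could be stored as a record that ties the consultee's chosen theme and their free-text comment directly to the document section it was made against:

```python
from dataclasses import dataclass

# Hypothetical pre-defined themes for the discrete question.
THEMES = {"environment", "economy", "cost"}

@dataclass
class Response:
    """One consultee response, captured against a specific document section."""
    section_id: str   # e.g. "3.4.3" -- the part of the plan being commented on
    theme: str        # the consultee's own answer to the discrete question
    comment: str      # the unstructured, qualitative elaboration

    def __post_init__(self):
        # Responses arrive pre-classified, so an unknown theme is rejected
        # at capture time rather than triaged manually later.
        if self.theme not in THEMES:
            raise ValueError(f"unknown theme: {self.theme}")

# Each response is tied to a section and a theme at the moment of capture --
# no manual relating of email sentences to plan sections afterwards.
responses = [
    Response("3.4.3", "environment", "The new road will cut through wetland."),
    Response("3.4.3", "cost", "Can the council really afford this?"),
    Response("5.1", "environment", "More trees along the waterfront, please."),
]
```

The point of the sketch is simply that the classification work moves from the engagement officer to the consultee, at the moment the response is written.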
So combining interactive documents, inline comments and questions, and a scalable consultation database has the potential to dramatically improve the efficiency of the consultation process by removing much of the manual effort involved, and thus facilitate greater levels of community input leading to better outcomes for the community.
However, collecting all this information in a more structured way is only part of the solution. Once the data is in the database we then need effective reporting tools to be able to ‘slice and dice’ data to enable us to effectively analyse the results of the engagement.
What percentage of responses to Option 1 are positive? How many of the responses focus on environmental issues? Which demographic is most passionate about the changes to the proposed land usage?
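To illustrate, here is a minimal sketch (plain Python over invented sample data, not any real reporting tool) of how questions like these become simple aggregations once responses are captured in a structured form:

```python
from collections import Counter

# Hypothetical structured responses: (section_id, theme, sentiment).
# In practice these would come from the consultation database.
responses = [
    ("option-1", "environment", "positive"),
    ("option-1", "cost", "negative"),
    ("option-1", "environment", "positive"),
    ("option-2", "economy", "positive"),
]

# What percentage of responses to Option 1 are positive?
option_1 = [r for r in responses if r[0] == "option-1"]
positive_pct = 100 * sum(r[2] == "positive" for r in option_1) / len(option_1)

# How many responses focus on environmental issues?
theme_counts = Counter(theme for _, theme, _ in responses)

print(f"{positive_pct:.0f}% of Option 1 responses are positive")
print(f"{theme_counts['environment']} responses raise environmental issues")
```

With unstructured email responses, each of these answers would require a person to read and classify every message; with structured capture they are one-line queries.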
There are a number of tools that can provide various pieces of the puzzle. My experience, of course, is with Objective’s uEngage product. Our aim was to address all of the above in a single, simple to use, web-based system.
(Disclaimer: I work for Objective but even if I didn’t I’d still think it was pretty cool!).
But, as always, technology isn’t the complete answer. Much of the benefit of these solutions lies in effectively marking up the consultation ‘document’ with appropriate and meaningful questions. Beyond that, it’s about encouraging consultees to assist in ensuring an efficient process.
As an example of this latter point, one of our large central government customers in the UK experimented with allowing consultees to choose between providing their responses through the interactive document mechanism or, if they desired, using the ‘old’ model of a PDF document with comments by email.
Most still commented by email (perhaps because this was what they were used to). Of course, this led to the processing overheads described above.
So they decided to turn off the ability to comment by email and only provided the interactive document mechanism. To their surprise, the response rates they got to the ‘interactive document only’ engagement were similar to when they offered a choice of engagement mechanisms, but of course the overheads of processing were reduced dramatically.
So I believe there is a way to efficiently manage large-scale engagement activities using interactive documents, in-place commenting, directed questioning and comprehensive reporting.
I’ll leave the final comment, however, to one of the attendees of our recent Government 2.0 in Queensland event in response to the suggestion that the cost of managing online consultations might become too high without such tools …
“The cost of managing online consultations is nothing compared to the cost of ignoring the community”.
Hear, hear. And, thankfully, there are tools that help us to minimise the cost of the former.