Back in December, I wrote about the concept of administrative burden and how all three levels of government in Canada are overdue in addressing the learning, compliance, and psychological costs that citizens experience when they use public services.
The post kicked off some great conversations with provincial and federal colleagues (thank you all for the generosity of your time and thoughtfulness!). We discussed how to assess burden to improve service delivery and achieve policy outcomes.
In the spirit of collaboration and working in public, I’m sharing some of the key findings from those conversations. I hope you find them useful in your work and, if you’re keen, do leave a comment or reach out and I’d be happy to chat further.
“Oh yeah, I think I’ve heard of that…”
Public servants I spoke with recognized administrative burden when it was described, but its origins and formal study (e.g., research by Moynihan and Herd in public administration) were not as widely known. Some cited the learning, compliance, and psychological costs framework, while others defined the term as “the burdensome nature of administering a service” from the perspective of a public service employee.
Competing terms like “time tax” and the popular expression “red tape” were also discussed. Both terms had slightly different meanings or resonance for public servants.
In particular, descriptions of red tape centred on the experience of businesses interacting with government and the compliance costs related to dense and complicated regulations. This could result in businesses adopting a “cost of doing business” attitude when using public services, leading some to hire third parties to deal with the service on their behalf. Entire industries exist to help businesses apply for things, file things, and mediate the business/government relationship, after all.
So while businesses may be able to afford to outsource their burden for a fee, individuals with limited resources cannot avoid these compliance costs. We acknowledged the disparity in how burden impacts different populations.
Evaluation? Audit? Review? Assessment?
When it came to the question of how we might better understand burdens and what kind of work needs to be done to create evidence about them, we arrived at the conceptually fraught territory of audits, evaluations, reviews, and assessments. It’s always reassuring to find 30-year-old whitepapers and 45-year-old reports pointing to the different intellectual roots of audits and evaluations... and not surprising to hear that in common usage, bureaucrats can at times use the words interchangeably.
Admin burden feels to me better located in an evaluative context. How do we collect and analyze evidence about a program and its service delivery? How do we know what’s in the way of people achieving success through that service? What’s impeding the realization of our policy outcomes? These questions are as important as ever. Calls to review and evaluate programs to understand their relevance, effectiveness, and efficiency are front and centre, appearing in key directives such as BC’s recent mandate letters and service plans.
Recognizing admin burden when we see it
So what kind of evidence do we need to gather when it comes to proving that admin burden exists when people experience public services?
Moynihan, Herd, and others have suggested methods like giving service users a simple 4- to 7-question survey to gauge burden. But the public servants I spoke with were seeking more information about the attributes of their service.
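Before getting to those attributes, it helps to see what the short-survey approach could look like in practice. Here’s a minimal sketch in Python, with entirely hypothetical item wording and scoring; Moynihan and Herd’s published instruments differ in their details. It simply rolls Likert-style items up into learning, compliance, and psychological cost scores.

```python
from statistics import mean

# Hypothetical item wording, grouped by the cost each item is meant to capture.
# Respondents score each item from 1 (strongly disagree) to 5 (strongly agree).
ITEMS = {
    "learning": [
        "It was easy to find out whether I was eligible.",
        "The instructions were easy to understand.",
    ],
    "compliance": [
        "The paperwork took a reasonable amount of time.",
        "I did not have to provide the same information more than once.",
    ],
    "psychological": [
        "Dealing with this service was not stressful.",
    ],
}

def burden_scores(answers: dict) -> dict:
    """Average the 1-5 answers within each cost category; lower averages suggest more burden."""
    return {
        cost: round(mean(answers[q] for q in questions if q in answers), 2)
        for cost, questions in ITEMS.items()
    }

# One respondent's answers, keyed by item text (invented for illustration).
respondent = {
    "It was easy to find out whether I was eligible.": 2,
    "The instructions were easy to understand.": 3,
    "The paperwork took a reasonable amount of time.": 1,
    "I did not have to provide the same information more than once.": 2,
    "Dealing with this service was not stressful.": 4,
}
print(burden_scores(respondent))  # -> learning 2.5, compliance 1.5, psychological 4
```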
Sample service data mentioned in our conversations included:
- Time and effort required: How long does it take to complete a service? How many steps are involved?
- Service timeline: Does the service take days, weeks, or months?
- Repetitive steps: Are users required to provide the same information multiple times?
- Complexity indicators: How many pages of forms or instructions must users navigate?
- Customer support data: What types of complaints are recorded in call centres?
- Program uptake: Is the service successfully reaching its intended audience, or is there significant undersubscription?
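As a rough illustration of how a team might record and screen those attributes, here’s a sketch in Python. The field names, thresholds, and figures are all invented for illustration; real cut-offs would come from a program’s own baselines and context.

```python
from dataclasses import dataclass

@dataclass
class ServiceAttributes:
    """Hypothetical per-service record of the burden-related attributes listed above."""
    median_completion_minutes: float   # time and effort required
    steps_to_complete: int             # steps in the happy path
    elapsed_days_to_decision: float    # service timeline
    repeated_fields: int               # information users must re-enter
    pages_of_forms_and_guides: int     # complexity indicator
    complaints_per_1000_users: float   # customer support data
    uptake_rate: float                 # share of the intended audience reached (0-1)

def screening_flags(s: ServiceAttributes) -> list:
    """Crude screening rules; real thresholds would come from a program's own baselines."""
    flags = []
    if s.repeated_fields > 0:
        flags.append("users re-enter information the program already holds")
    if s.pages_of_forms_and_guides > 20:
        flags.append("forms and guidance run long")
    if s.uptake_rate < 0.5:
        flags.append("less than half of the intended audience is being reached")
    return flags

# Figures invented purely for illustration.
benefit_program = ServiceAttributes(45, 12, 30, 3, 28, 14.2, 0.38)
print(screening_flags(benefit_program))
```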
I attributed this quest for “harder” or more “objective” data about the service to a simple reason: people don’t just want to know that burden exists, they want to know what’s causing it. And more specifically, what quality or attribute of the service (the object) is causing the experience of burden for the citizen (the subject)? Why is it happening?
Sometimes the answers to these questions might reveal themselves quite easily. For example: a burdensome compliance step requires a citizen to prove who they are through an awkward digital identity mechanism. This results in service performance data that shows significant demand drop-off right at the “digital front door” of the service. Program enrollment and uptake are never realized.
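One way to surface that kind of drop-off from existing analytics is a simple funnel report. The step names and event counts below are hypothetical, but the arithmetic is the same for any real service:

```python
# Hypothetical counts of users reaching each step of a service funnel.
funnel = [
    ("landed on service page", 10_000),
    ("started application", 6_200),
    ("passed identity verification", 2_100),  # the "digital front door"
    ("submitted application", 1_900),
    ("enrolled in program", 1_850),
]

# Report the share of users lost at each transition and flag the largest drop.
worst_step, worst_drop = None, 0.0
for (prev_step, prev_n), (step, n) in zip(funnel, funnel[1:]):
    drop = 1 - n / prev_n
    print(f"{prev_step} -> {step}: {drop:.0%} drop-off")
    if drop > worst_drop:
        worst_step, worst_drop = step, drop

print(f"Largest drop-off: {worst_step} ({worst_drop:.0%})")
```

In this invented example, the largest drop lands at the identity verification step, which is exactly the kind of signal that points an evaluation toward a specific compliance cost.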
And yet other times, I’d argue that a subjective marker of burden may be harder to explain through any given attribute of the “service-as-thing.” I learned that lesson when trying to gather another common and related bit of experience data: user satisfaction.
A quick story about a toll bridge
In 2011, OXD worked on the design and build of TReO, the digital-first customer experience for tolling the Highway 1 Port Mann Bridge, which crosses the Fraser River between Surrey and Coquitlam. During that work, we usability tested different prototypes of the registration and account management system. The digital service would eventually register 550,000 citizens in the six weeks prior to the bridge opening and then serve over 1.4 million registered users before tolls were removed in 2017.
As we usability tested the service, we captured key task completion rates. We then ended our testing sessions with a questionnaire that asked the user about different dimensions of their experience. The final question we asked was about their overall satisfaction with the service.
As progressive rounds of prototyping yielded improved task conversion rates and rising post-test evaluation scores, a curious thing happened with our final satisfaction question: its scores went down.
When we asked users to explain why they would give us 0/5 for satisfaction after scoring many of the other dimensions 4/5 or 5/5, they would apologetically say things like: “Look, this works great and all, but I gave you a zero because I’m opposed to tolling. It’s a bad idea and I want someone to know that. So don’t take it personally, this website is really easy to use, but I really don’t think the bridge should be tolled. It’s not fair.”
Citizens were expressing their political frustration through our form. They were not dissatisfied with the service experience, narrowly defined, but with the government’s decisions and policies to toll the bridge in the first place. They saw our survey instrument as an opportunity to communicate that sentiment back to the powers that be.
Citizens’ subjective experience, in this instance their unhappiness with the policy, couldn’t be traced to some attribute or quality of the service or revealed through a functional evaluation of its usability and utility. The service worked as promised, its conversion rates were flawless, it was the embodiment of government efficiency.
Rather, the service was policy made material. People were unhappy with the whole arrangement, not simply with how the service worked. The focus of their dissatisfaction lay elsewhere. And it was only through conversation with those people that we discovered that somewhat obvious fact.
Next steps: evaluating services and integrating admin burden
Evaluation is hard. Gathering data and making knowledge claims based on that data may not be as straightforward as we hope. While boiling admin burden down to a handful of survey questions is in line with the idea of reducing yet another form of burden for the citizen (e.g., “please take this 23-question survey on your way out the door...”), it may not give you, as the service owner, program manager, or service designer, enough to know what to do next. Clearly, it’s the beginning of a deeper inquiry and a commitment to understanding and improving any given service.
If we are to integrate the concept of administrative burden into service evaluations, then we need to examine what types of data government teams are already collecting and analyzing.
Are agencies focused on objective measures like processing times and completion rates? Are they gathering subjective feedback from users, like satisfaction or their frustrations? Are they applying design research methodologies to uncover hidden pain points?
To gain a clearer picture of the current landscape, I’m launching a short questionnaire (just 10 main questions) and would love your input. If you participate and share it with your colleagues, we can build a more comprehensive understanding of how service evaluations are conducted and how we might collectively improve them.
If you’re interested, please take a few minutes to fill out the questionnaire and join the conversation.
Your insights will help shape future efforts to reduce admin burden and create better public services for all.