In the social sector, many are hailing “user-centered design” as a revolutionary advance. It is the first of the Principles for Digital Development (defined in consultation with nearly every major global development institution) and high-profile leaders like Melinda Gates are lauding the methodology. Amid all this buzz, commercial design firms are increasingly winning international development contracts ...
... and development practitioners are increasingly disappointed with the results.
There is a backlash on the way, and for good reason. I have too often seen people use design principles about discarding assumptions as an excuse for ignorance of historical context. I have seen designers championing “creativity” as if it compensates for their lack of experience in developing countries. User-centered design was born out of the private sector, and many in my field are starting to wonder if the methodology just isn’t right for the complex global challenges staring us down.
But: I know that user-centered design can revolutionize development work. The problem is in how we apply it. To make sure design is useful for thorny problems, we must better understand its limitations—not just its strengths—and adapt it from its original uses designing commercial products.
If user-centered design is going to contribute to a revolution, we need to start with a redefinition.
“User-Centered” is a Misnomer
The first step to making design useful in development practice is doing away with its myopic focus on “the user.”
A few perceptive critics and designers have pointed out the limitations of user-centered design in recent years. My colleague Lauren Weinstein and Cinnamon Janzer, co-authors of “Social Design and Neocolonialism,” have put forth a strong argument for why focusing on the “user” is a vestige of commercial applications, inappropriate for public sector work. In their article, they argue it is an “object-centric” practice, created to design products. In contrast, development challenges usually require us to design interventions—a much more complicated category that requires the involvement of government policymakers and bureaucrats, nonprofit and development workers, business owners, service administrators, communities, ordinary citizens, and others. To encompass this broader milieu, Weinstein and Janzer call for a “situation-centered” design practice that is grounded in an understanding of the “user” as more than just one homogeneous demographic.
Consider: If you’re designing a computer mouse, your “user” is a person with two hands. But if you’re trying to design better public health services for the poor in even a single small village, your “user” includes not only “poor people,” but also the local health care providers, community leaders, the state and national governments, businesses, families, and—the most oft-forgotten—the implementing organization that will take your designs into the world.
A Less-Catchy Title, but a Better Approach
To account for this heterogeneous “end-user” in our work at Reboot, we use the following working definition for “user-centered design”:
A multi-stage problem-solving process that optimizes solutions based on users’ needs, behaviors, constraints, and operating contexts. Solutions are repeatedly tested and refined throughout the design and development process before implementation.
This definition, which may differ from others you’ve seen, tries to get at several misconceptions about user-centered design and the mistaken ways in which we have seen it applied in development. To break down those key words:
Multi-stage problem-solving: Development work often follows a linear process, from research to design to implementation, and practitioners recognize that this approach doesn’t work. After all, once you intervene in a system to address a problem, the system may change (and the problem may too—ideally decreasing in severity, but not always). Design provides a structured process and set of tools for continually reassessing the relevance and efficacy of the design concept at each stage; the challenge is to ensure that design follows this process even when bureaucratic hurdles arise.
Users: As discussed above, the “users” in development projects aren’t as straightforward as in commercial ones. While “citizens” or “beneficiaries” may spring to mind, it’s important to understand the diverse perspectives of all who have influence over the target issue.
Constraints: Many laud user-centered design for its consideration of user “constraints” but forget institutional constraints. This leads to “innovative” solutions that look great in a PowerPoint but that organizations simply cannot implement. A solution has to be feasible—and, better yet, easy and attractive—for the people who have to put it in place: the frontline health workers who will deliver improved health care services or the civil servants who will administer the program supporting them.
Contexts: User-centered design helps researchers zoom in on the individual. But it’s more powerful in combination with other approaches—such as macroeconomic or political economy analysis—that zoom out to understand broader systems, trends, and power dynamics.
Repeatedly tested and refined: Getting value from user-centered design requires tight feedback loops that inform regular iterations on the intervention. Our field has a lot to learn in this area: Often, we determine development “solutions” without sufficient research into the problem and context, at which point contract structures and byzantine procurement processes prevent organizations from changing course, even when it becomes clear that the solution was based on false assumptions. By contrast, design calls for research to constantly inform an evolving understanding of the problem and its possible solutions. Learning happens throughout the project cycle, while it can still affect outcomes, not just at the end of the project.
The Tools for a Revolution
There is another common criticism of user-centered design as a tool for development: that it simply isn’t new. Development practitioners are accustomed to donor- and government-led trends that promise to change everything; the parade of buzzwords begins to wear thin.
But design really does carry the potential for revolutionary change—not because of its principles and lofty goals, which development practitioners have long embraced. Rather, design contributes a set of useful tools and a structured rigor that can translate these theoretical tenets into action.
Indeed, the ethos of user-centered design has much in common with the participatory/bottom-up/community-led development movements that began in the 1970s. More recently, leading practitioners have articulated some of these same ideas through Problem-Driven Iterative Adaptation, Thinking and Working Politically, and the Doing Development Differently community. The critical thought undergirding these movements is strong, yet they have struggled to articulate how development practitioners can operationalize their concepts. Practitioners often know what needs to be done, but end up not doing it. Design can teach us how to work across this gap.
We know how to agree on a set of principles. Design, judiciously applied, can help us learn how to put them into practice.