Hoping to Help: The Promises and Pitfalls of Global Health Volunteering

Judith Lasker

262 pages, Cornell University Press, 2016

The debate over the value of short-term medical volunteer trips to poor countries is heating up. Some believe these trips are a wonderful opportunity for the privileged to learn while providing life-saving service to the poor; others see them as a colonial venture that can harm both parties. I embarked on the research for this book with twin goals: to gather the largely absent empirical evidence about this growing enterprise, and to amplify the voices and preferences of host communities in the debate. This research—surveys, interviews, and participant observation in several countries—led me to delineate nine principles for maximizing the benefits of short-term volunteering, principles that apply to fields beyond health and to domestic experiences as well. The following excerpts focus on two of those principles: the centrality of mutuality and of evaluation. —Judith Lasker

The theme of mutual learning stood out in interviews with host community staff: volunteers are very welcome and appreciated when the interaction is mutual, when each party can learn from and teach the other. People who have lived their lives and practiced their professions in the host country know they have much to teach the visitors. Volunteers who arrive thinking they have all the answers, that they have nothing to learn, and that others should do things the way they are done in the United States are not appreciated. Fortunately, such volunteers, according to host-country staff members, are in the minority.

Staff members in the host countries often realize that they offer others a model of how to address medical needs creatively when resources are scarce.1 As a hospital employee in Ghana said, “Volunteers can help. The word is help, not do. It is up to us to do but they can help. So when you collaborate, health outcomes will be improved in Ghana.”

When mutuality does not exist, the results are less satisfactory. As a Haitian physician noted,

…if we’re trying to do things the way you’re doing in the States, it will not fit here in Haiti. You can come with ideas. We are glad to hear what you think and from what you’re telling us, we will decide what can fit in our clinic.…[P]lease follow what we’re telling you to do which is very important. Because we know better how things go here.

A lack of mutuality can be seen not only in some volunteers’ attitudes toward how things are done but also in the ways organizations may relate to the host-country staff. Some staff members in both Haiti and Ecuador expressed concern that they were excluded by the organization from having a more integrated role with the volunteer programs. In Haiti, for example, one staff person told about initially being promised “access” but then never being invited to meetings at the volunteer center and never finding out what the “access” amounted to.

In Ecuador, the local community health workers and other staff members who worked in the clinics did not seem to be treated as an integral part of the health care team. There was no introduction to the volunteers as a whole. At day’s end, an American staff member asked one of the Ecuadoreans to take a photo of “the team,” but the photo included no Ecuadoreans—a very clear, if unspoken, statement that they were not considered team members.

Host-community members want more than helpful visitors with skills and resources, although these are valuable and greatly appreciated. They want to be involved in the work programs undertaken by volunteer organizations, and they want to be respected. They want a relationship of equality in which each partner learns from and benefits from the other.

Mutuality is difficult, even in a program that has it as an explicit goal. Brandon Blache-Cohen, executive director of international volunteering organization Amizade, recounted to me what a Jamaican community leader had said: “I have to be honest with you; this is an amazing partnership for our community. Hundreds of thousands of dollars have been injected into our community in the last ten years, but we don’t have any professional development opportunities out of this. We know that your students are going back, putting on their résumés that they worked in a community in Jamaica, but what are we going to put on our résumés? ‘Hung out with white people for three or four weeks?’ It’s not going to get us a job. It’s not going to help us move forward.”

Mutuality, perhaps paradoxically, means that nonaltruistic motivations—desire for adventure, résumé building, gaining experience, feeling good about oneself—are not necessarily bad things in the context of volunteer trips. Host-community members who seek mutuality seem happy for volunteers to gain from their experiences; that means hosts are offering something of great value. A crucial point here, though, is often missed: mutuality means that volunteers recognize and honor the gifts they are receiving and respect the givers, just as they hope the gifts they bring will be valued. It means an ongoing relationship of respect, collaboration, and exchange, if not with individual volunteers, at least with the representatives of the organizations.

The idea of mutuality directly challenges the hierarchical standard in foreign aid, including volunteer trips, that presupposes the superiority of aid “providers” over “recipients” or “beneficiaries.” Eliminating this type of language from volunteer programs is strongly recommended, especially as we know that volunteers do not always provide something useful and hosts do not always benefit. Naming each party as a partner, as volunteers and hosts, promotes a different way of thinking about the relationship that can enhance mutuality.

Volunteer programs, to succeed—indeed, to begin to achieve mutuality—require a partnership between the organization sending volunteers and a local host community or organization. Effective partnerships depend on three main components: responsible partners on both ends, basic agreement on the goals of the volunteer trips, and good coordination.

However, almost half of organizers do not always have a local partner.2 It would be especially difficult for an organization to know that it is doing something valuable for a host community when there is no local partner to help define the best use of resources and to provide feedback after a trip.

Organizations that depend on an in-country partnership to define and carry out their missions work hard to develop relationships that make their presence more productive. The challenges, and the importance, are well described by a medical mission organizer:

[W]e’ve spent the past 18 months building our relationships before we developed this plan. And I think that those 18 months are really what’s going to make us successful over the next four years. As large organizations from the U.S., we can go in and push an agenda and throw down some money on the table, and any organization is going to jump to collaborate. But I think that a sign of a good relationship is when someone says, “Wait a second. That’s not exactly what we’re trying to do.” We’ve allowed space for that pushback so that we can have some real fruitful conversations about what is realistic.3

Evaluating Programs

Great claims are made about lifelong changes in volunteers’ attitudes and behavior and about the benefits of volunteer trips for host communities. When I set out on this work, I strongly suspected that genuine, measurable assessment of short-term health-related volunteer trips would be infrequent. I am fully aware of how difficult it is to evaluate social and educational programs, including those in my own lifelong profession of college teaching.

What I did not expect was how often the evaluation question seemed to take people by surprise. When I asked, “How do you know if your program is benefiting the host community?” I was struck by how often that was met by a noticeable pause in the conversation, followed by “That is a really good question.” The assumption of benefit is so strong that even for many people deeply committed to doing this work, the idea that there should be some kind of formal accounting seemed surprising.

Explanations for the dearth of evaluation are easy to come by. Mark Rosenberg, CEO of the Task Force for Global Health, told me, “People hate to be evaluated. When you talk about small programs, they don’t want to assess their impact because they want to believe that they’re doing God’s work and they’re making the world a better place and not be pushed to specifics. On the other hand, sometimes the effects and the benefits are going to be very delayed and won’t be necessarily for that same community, so you have much more diffused benefits.”

Explicit faith-based rationales for the lack of evaluation are even more direct. As Bruce Steffes and Michelle Steffes wrote in their Handbook for Short-Term Medical Missionaries, published by the Association of Baptists for World Evangelism, “You are not in a competition to see X number of patients or do X number of cases. … The success of your trip will not be judged by numbers; it will be judged by God.”4

There are other reasons. Perhaps most daunting to those who want to evaluate their programs is the difficulty of doing it well. Ideally, a good evaluation would demonstrate improved health of community members after a volunteer trip. Communities that receive volunteers should have better health than those that do not. Documenting such effects convincingly requires an enormous investment of expertise, time, and funding and must necessarily take into account myriad other influences.

Finally, organizations have had little motivation to devote resources to evaluation. After all, most people believe the work is valuable. Anecdotal reports are so inspiring: volunteers return with stories of great experiences, and most donors are more interested in seeing large numbers of people involved (both as volunteers and as patients) than in documented outcomes. This has long been true of social services in general, which is why it is only recently that the United Way and others have begun to require outcome measures from their grantees. Nevertheless, the question stands.

I asked sending organizations whether they evaluate their programs and, if they did, to describe the focus of those evaluations. Two-thirds of the organizations said they do a follow-up survey with volunteers after their trips, while only 41 percent evaluate the benefits to communities visited.5

The focus on volunteers rather than the communities visited is consistent with the view that providing volunteers with a good experience is a very high priority. But it is also the case that volunteers are easier to survey and that the benefits to the communities may appear to be obvious.

When sponsors do look at the impact on host communities, the main methods used are feedback from partner-organization staff members (interviews, surveys, and anecdotes) and informal feedback from the host community. One in five organizations indicated that they use the very indirect approach of gathering feedback from volunteers to assess impact on the community, and some referred to religious testimony as their method of evaluation.

More than 20 percent of organizational representatives indicated they do no evaluations, although some said it was something they “should do.” These data support the impression that there is very little systematic evaluation going on to determine whether short-term volunteering has the positive benefits for the health of poor communities that most sponsors intend.

Evaluation Challenges

I recognize it is very difficult to do a good evaluation. To do it well in any environment requires a great deal of time, money, and skill. To undertake an evaluation in another country, where cultural and language differences may make even the most well-intentioned evaluation effort insufficiently useful and valid, is even more challenging. In addition, evaluation needs to be done over time, not just concurrent with or right after a program, if lasting changes are to be made from the results. This is almost impossible when an organization does not return to the same location and when patients are widely dispersed.

Margaret Perko began working in Uganda while a medical student at the University of Minnesota. The project she is part of provides medications to orphanages in Uganda and trains staff on their use. It includes research on whether the intervention reduces the number of clinic visits for illness.

Perko explained some of the challenges with carrying out the research dimension. “The problem with that, one, [the questionnaire is] in English, but it’s to be administered orally through either an English-speaking person or through a person who speaks Luganda, and a lot of the words in medicine are not actually the same. I work with a Ugandan physician who goes with us on all these trips, and even she can’t find the words in Luganda to translate to ask the people some of the questions, so that’s kind of been scary.”

A volunteer with a degree and experience in public health explained her approach to evaluation, which takes into account that people who receive services may be reluctant to criticize and consider it rude to express dissatisfaction. In evaluating a program training community health workers, she was able to use her own personal history as someone from a poor country in Latin America to gain greater understanding.

I did an evaluation on the structure and the format of the training itself, strengths and weaknesses. In this culture, which is very similar to my culture, it’s a culture that values relationships and I knew it would have been really difficult for them to tell us, “We didn’t like blah blah blah.” So we emphasized and emphasized and emphasized that by telling us where we fall short, you’re helping us. This has nothing to do with our relationship with you; we will not be hurt. We want to come back and we want to do a better job. I even said, “Promise that you will be truthful to us, you will not hurt our feelings, we need this!” And we just took the time to explain to them and emphasize why it was important that they tell us where we fell short so that the next time they’ll have a better training. We conveyed the point that this was for their benefit and it will not hurt us, it will not impact their relationship with us. I knew that would have been a barrier to them telling the truth ahead of time.

Despite these many challenges, some organizations and individuals are taking on evaluation, which they see as necessary to improve their programs. For example, Timmy Global Health in Ecuador has been working to develop a set of clearly stated goals and objectives, with metrics for tracking how close the group is to achieving them. The first step, identifying goals, is not always as easy as it might appear.

“I looked around,” explained Matt MacGregor, “and I said, if you asked a person in Timmy’s network, even on our board or in our staff to define what we do and why it’s important, everyone would answer slightly differently, so we said, why don’t we actually map this out and in a real, clear form say here’s what we do, and here’s why.” The result of these conversations was the development of a “logical framework (log frame)” that identifies the major goals and indicators for each. The categories of goals selected for focus in 2012–13 were referral systems, quality of short-term medical clinics, student engagement, and chronic care management. Matt’s explanation is worth quoting at length.

We picked a bunch of very formal indicators that we’re going to be tracking over the course of the next year that are all about patient satisfaction, referral system, proper communication with translators, things that are going to give us a much bigger sense of impact. A good proxy I think would be percentage of patients who consistently return. Most of the time, people are returning to something consistently if they like the product that they’re getting. On our referral system data, we know more or less the percentage of patients that have an identified problem in our clinic that show up at our referral partners. We’re going to start to track that systematically, because we really believe that one of the best parts of our programming is this referral system. Most of these patients face barriers to health access that would prevent them from getting those types of services or make it hard for them. If we’re at 67 percent, we should shoot for a target of 75 percent, and that should be the way that we measure our success internally as an organization. What we’re trying to do now is make it more systematic. So as we do more research on these individual pieces, we’re hoping to pull out one or two of the strongest indicators that would apply to each aspect so that we can create almost a dashboard of, look, 65 percent of patients said they were satisfied with our wait time. Our target this year is to improve it to 75 percent. What do we do? My argument is that evaluation has nothing to do with 100 percent. It simply has to do with what percentage you’re at, and what target you set, and whether you meet that target.

Organizations dedicated to improving the health of poor communities around the world need to take into account community members’ reports on what they see to be the benefits and costs in both the short and long term. They need to examine the actual outcomes and ask whether host communities are better off as a result of volunteer projects. And what types of programs produce the best outcomes? As Matt MacGregor says about Timmy Global Health’s efforts to measure the impact of its programs, “Is this perfect? Of course not. But it’s much better than simply referring a patient and hoping for the best.”

Matt is correct: “hoping for the best” is never the ideal strategy when volunteer organizations spend their time and money in poor communities. Documenting the actual improvements in host communities, the presumed beneficiaries of the trips, is difficult but also essential for being able to claim that these activities are truly valuable.

Modified excerpt from Hoping to Help by Judith N. Lasker, published by Cornell University Press. Copyright © 2016. All rights reserved.