
Private and public entities around the world, particularly in the health care and governance sectors, are developing and deploying a range of artificial intelligence (AI) systems in emergency response to COVID-19. Some of these systems work to track and predict its spread; others support medical response or help maintain social control. Indeed, AI systems can reduce strain on overwhelmed health care systems; help save lives by quickly diagnosing patients and assessing health declines or progress; and limit the virus’s spread.

But there’s a problem: The algorithms driving these systems are human creations, and as such, they are subject to biases that can deepen societal inequities and pose risks to businesses and society more broadly. In this article, we look at data on the pandemic, share two recent applications of AI, and suggest a number of ways nonprofit and business leaders can help ensure that they develop, manage, and use transformative AI equitably and responsibly.

Rethinking Social Change in the Face of Coronavirus
    In this series, SSIR will present insight from social change leaders around the globe to help organizations face the systemic, operational, and strategic challenges related to COVID-19 that will test the limits of their capabilities.

    The Problem With COVID-19 Data

    Using technical frameworks such as machine learning, AI systems rely on algorithms to make inferences from data about people, including their demographic attributes, preferences, and likely future behaviors. To effectively serve a range of populations, AI systems must learn to make associations based on massive amounts of data that accurately reflect information across identities. However, the data they rely on is often rife with social and cultural biases. Data may not exist at all for certain populations, may be of poor quality for certain groups, or may simply reflect existing inequities in society. As a result, algorithms can make inaccurate predictions and perpetuate social stereotypes and biases.
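    To make the mechanism concrete, here is a minimal, hypothetical Python sketch (not drawn from any real COVID-19 system or dataset) of how a model trained mostly on one group’s records can fail for an under-represented group; every number and relationship in it is invented for illustration.

        # Hypothetical sketch: a model trained on data dominated by one group
        # learns that group's patterns and mispredicts for an under-represented
        # group whose outcomes follow a different pattern. All data is synthetic.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(0)

        def make_group(n, weights):
            # Features plus a binary outcome whose relationship to the features
            # is governed by `weights` (different weights = different group).
            X = rng.normal(size=(n, 3))
            y = (X @ weights + rng.normal(scale=0.5, size=n) > 0).astype(int)
            return X, y

        w_majority = np.array([1.0, 0.5, 0.0])    # well-represented group
        w_minority = np.array([-0.5, 0.0, 1.0])   # under-represented group

        X_maj, y_maj = make_group(5000, w_majority)  # plentiful records
        X_min, y_min = make_group(100, w_minority)   # scarce records

        model = LogisticRegression().fit(
            np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min])
        )

        # Fresh samples from each group reveal the accuracy gap.
        for name, w in [("majority", w_majority), ("minority", w_minority)]:
            X_test, y_test = make_group(2000, w)
            print(name, accuracy_score(y_test, model.predict(X_test)))

    The specific numbers don’t matter; the point is the mechanism. When one group dominates the training data, the model optimizes for that group and quietly underperforms for everyone else.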

    Unfortunately, much of the data about COVID-19 that the US Centers for Disease Control and Prevention (CDC) and others are collecting and tracking is incomplete and biased. COVID-19 infection rates, for example, have been subject to a “vast undercount,” by a factor of 50 or more. Medical data reflects only a subset of the population: in many cases, affluent, white communities with ready access to limited tests and expensive medical procedures. But there are other important data gaps too:

    • Data on risk and mortality is not sufficiently disaggregated by sex, race, or ethnicity. The CDC didn’t release a race and sex breakdown of COVID-19 cases and deaths until early April, and even then, it drew only from parts of 14 states. Today, many states report cases and mortality by race, but data by gender remains limited, and sex-disaggregated mortality data is not available.
    • Data for racial and ethnic groups is incomplete, and terms and labels are inconsistent. Infection and mortality data released by the CDC, while still infrequent and incomplete, paints a bleak picture of how COVID-19 disproportionately kills certain racial and ethnic groups. Alarmingly high rates among black Americans are rooted in longstanding economic and health care inequalities, and the ambiguous racial/ethnic categorization of existing data further obscures these disparities.
    • COVID-19 data tracking systems aren’t capturing data on immigrants and other marginalized populations. Immigrants are one of many communities of color hard-hit by COVID-19. Many fill service positions in essential businesses that require them to interact with large numbers of people daily, and many are already at increased risk of complications or death from COVID-19 due to high rates of underlying chronic illnesses. Despite this, many are not getting tested for fear of deportation. Sufficient data for transgender and non-binary individuals does not exist either; most state health officials are not collecting data on whether patients identify as LGBTQ, even though transgender individuals are at greater risk given their economic and social vulnerabilities. This lack of data on the most vulnerable isn’t just a problem in the United States; it is often even greater in poorer countries.

    AI for COVID-19 Medical Response

    Some of the AI systems created to support COVID-19 medical response help diagnose and detect COVID-19 through basic online screening or analysis of chest images. Others, such as the forthcoming version of eCART, can help predict COVID-specific outcomes and inform clinical decisions. This is particularly useful for medical volunteers without pulmonary training, who must assess patients’ conditions and decide who needs help first. AI may also prove helpful in the search for a COVID-19 vaccine and other therapies.

    However, the data gaps we mentioned earlier have major implications for medical AI systems and AI-enhanced vaccine trials. People react differently to viruses, vaccines, and treatments, as previous outbreaks like SARS and Ebola have illustrated. Data available on COVID-19 outside the United States, for example, shows that men and women face different fatality rates, and a recent research paper found that women patients admitted to the Wuhan Union Hospital had higher levels of COVID-19 antibodies than men. Given systemic inequities that worsen health outcomes for certain racial and ethnic groups, it’s equally important to understand COVID-19 health outcomes for different identities, as well as the intersectional implications.

    Algorithms that don’t account for existing inequities risk making inaccurate predictions, or worse. In 2019, a study found that the widely used Optum algorithm, which used health-care spending as a proxy for need, was biased against black Americans: it didn’t account for discrimination or lack of access, both of which lead to lower spending on health care by black Americans. Amid the COVID-19 crisis, AI systems that inform the allocation of limited resources (such as who to put on a ventilator) must be careful not to inadvertently prioritize certain identities over others. While developers aim to make algorithms race-blind by excluding race as a variable, this can ignore or hide discrimination rather than prevent it. For example, algorithms that inform clinical decisions may use proxies such as preexisting conditions. Diabetes is a preexisting condition linked to worse COVID-19 outcomes, and it has a higher incidence among black Americans. If an algorithm counts preexisting conditions against patients (for instance, as a marker of lower expected benefit) but is blind to race, it can end up disproportionately prioritizing white Americans over black Americans.
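    The proxy problem is easy to simulate. The hypothetical Python sketch below (all numbers invented, not taken from any real triage system) scores patients without ever seeing race, yet penalizes a preexisting condition that is more common in one group; the result is that a smaller share of that group receives a scarce resource even though underlying clinical need is identical across groups.

        # Hypothetical illustration of proxy bias: a "race-blind" priority score
        # that penalizes a preexisting condition ends up deprioritizing the group
        # in which that condition is more common. All numbers are invented.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000

        # Synthetic population: group 1 has a higher rate of the condition
        # (mirroring, e.g., higher diabetes incidence driven by structural
        # inequities), while underlying clinical need is identical in both groups.
        group = rng.integers(0, 2, size=n)
        condition = rng.random(n) < np.where(group == 1, 0.25, 0.10)
        need = rng.normal(size=n)

        # The scoring rule never sees `group`, but docks points for the condition.
        score = need - 0.8 * condition

        # Allocate a scarce resource to the top 10 percent of scores.
        selected = score >= np.quantile(score, 0.9)

        for g in (0, 1):
            print(f"group {g}: {selected[group == g].mean():.1%} receive the resource")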

    While some firms adhere to rigorous testing (conducting large validation studies before releasing products, for example), not all firms are thorough. Further, the decision-making processes of most AI algorithms are not transparent. This opens the door to inaccurate or discriminatory predictions for certain demographics, and thus poses immense risks both to the individuals being assessed and to the practitioners who rely on these systems.

    AI for COVID-19 Social Control

    Another recent application of AI is contact tracing: tracking people who may have been exposed to the virus in order to help contain it. By tracking user information such as health status and location, and by using AI-powered facial recognition, these tools can enforce social distancing and inform citizens of contact with positive cases. In China, users are assigned a coronavirus score, which affects their access to public transportation, work, and school. And US government officials have begun raising the possibility of mass surveillance, collecting “anonymized, aggregate” user location data from tech giants like Facebook and Google to map the spread of COVID-19.

    But surveillance tools have ethical implications, again particularly for marginalized populations. Using AI to decide who leaves their home could lead to a form of COVID-19 redlining, subjecting certain communities to heightened enforcement. This calls to mind another AI application that results in greater surveillance of poor communities of color: predictive policing. In the United States, risk-assessment algorithms draw on criminal history but do not account for deep-rooted racial bias in policing, namely that black Americans are arrested more often for minor offenses and that neighborhoods with high concentrations of black Americans are more heavily patrolled. As a result, black Americans end up overrepresented in the data, which in turn feeds racially biased policing outcomes. Similarly, the communities most affected by proposed COVID-19 surveillance systems would likely be poorer communities of color, already hit harder by the virus for a variety of reasons linked to historical inequities and discrimination.

    It is not clear how, or for how long, government agencies and other entities will use these types of AI tools. In China, tracking could persist after the crisis, allowing Beijing authorities to monitor religious minorities, political dissidents, and other marginalized communities with a history of being over-surveilled. And although data collection in the United States would initially be anonymized and aggregated, there is potential for misuse and de-anonymization in the future.

    Five Things Nonprofit and Business Leaders Can Do

    Various AI systems are proving incredibly valuable in tackling the pandemic, and others hold immense promise. But leaders must take care to develop, manage, and use this technology responsibly and equitably; the risks of discrimination and deepening inequality are simply unacceptable. Here are five actions to take now:

    1. Demand transparency and explanations of AI systems. First and foremost, leaders need to hold themselves accountable. Particularly with AI systems targeting medical response, it’s important that decision makers understand which groups are represented in the datasets and what the quality of that data is across different groups. Tools such as Datasheets for Datasets are useful for tracking information on dataset creators; the composition, sampling, and labeling process; and intended uses. (A simplified sketch of this kind of dataset audit appears after this list.) Leaders whose organizations develop AI systems should also ask questions like: Whose opinions, priorities, and expertise are included in development, and whose are left out?

    2. Join and promote multidisciplinary ethics working groups or councils to inform response to COVID-19. This is already happening in Germany and can provide useful insights into how to respond to COVID-19, including how to use AI. Working groups are a way to bring together social scientists, philosophers, community leaders, and technical teams to discuss potential bias concerns and fairness tradeoffs, as well as solutions.

    3. Build partnerships to fill health-data gaps in ways that protect and empower local communities. Nonprofits and universities are especially well-positioned to work with disenfranchised communities and form community research partnerships. In San Francisco, for example, a coalition of citywide Latinx organizations partnered with UCSF to form a COVID-19 task force. The coalition launched a project that tested nearly 3,000 residents in predominantly Latinx neighborhoods to better understand how the virus spreads. The task force and its local volunteers integrated concerns of community members and provided extensive support services to people who tested positive.

    4. Advance research and innovation while emphasizing diversity and inclusion. Only a handful of tech companies and elite university labs develop most large-scale AI systems, and developers tend to be white, affluent, technically oriented, and male. Given that AI isn’t neutral and that technologies are a product of the context in which they are created, these systems often fail to meet the needs of different communities. Research initiatives like the recently launched Digital Transformation Institute, a collaborative effort to bring together tech companies and US research universities to fight COVID-19, must emphasize inclusion and justice (alongside innovation and efficiency), and prioritize multidisciplinary and diverse teams. They can and should take advantage of tools like an AI Fairness Checklist in designing solutions.

    5. Resist the urge to prioritize efficiency at the cost of justice and equity. Leaders should rise to the challenge of not compromising justice and equity. In some cases, the question is not how best to develop or deploy an AI system, but whether the AI system should be built or used at all.
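    As a companion to the first recommendation above, here is a deliberately simplified Python sketch of the kind of dataset audit leaders can ask for: how many records exist for each group, and how complete a key field is within each group. The tiny inline table and its column names (race, sex, outcome) are hypothetical placeholders, not a real COVID-19 dataset.

        # Simplified dataset audit: record counts per group and the share of a
        # key field that is missing within each group. The table is a toy
        # placeholder; a real audit would run on the actual case data.
        import pandas as pd

        cases = pd.DataFrame({
            "race":    ["white", "white", "black", "black", "asian", None],
            "sex":     ["F", "M", "F", "M", "F", "M"],
            "outcome": ["recovered", "recovered", None, "died", None, "recovered"],
        })

        audit = (
            cases.groupby(["race", "sex"], dropna=False)
                 .agg(records=("outcome", "size"),
                      outcome_missing=("outcome", lambda s: s.isna().mean()))
                 .reset_index()
        )
        print(audit)

    Even a table this small makes the governance question obvious: which groups have so few, or such incomplete, records that any model trained on them should not be trusted for those groups?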

    As the pandemic continues to severely impact individuals, communities, and economies, nonprofit and business leaders must respond quickly—but not at the cost of heightening discrimination and inequality in the communities hardest hit by the pandemic. AI can help us improve medical response and minimize the spread of COVID-19, but using it wisely requires equity-fluent leadership and a long-term view. As Prashant Warier, CEO and co-founder of the AI company Qure.ai, put it, “Once people start using our algorithms, they never stop.”

