Three high-profile mass murders committed in recent years by white supremacists had at least one thing in common: a relatively fringe social media platform called 8kun (formerly 8chan) that has become a haven for white nationalists. The alleged El Paso, Texas, shooter who killed 23 people in 2019 posted an anti-immigrant manifesto to the platform prior to the attack. In it, he expressed support for the accused shooter in Christchurch, New Zealand, who killed 51 people in two mosques and also used 8kun. And before the 2019 synagogue shooting in Poway, California, the alleged gunman posted a link to his manifesto on 8kun, referencing the shooters in New Zealand and in the 2018 massacre of 11 Jewish worshipers at the Tree of Life synagogue in Pittsburgh.

The perpetrators of these killing sprees all used a social media platform to spread hate. And unfortunately, although most social media users don’t frequent 8kun, hate-fueled violence isn’t limited to the darkest corners of the Internet. Many extremists use mainstream platforms rather than fringe services to communicate their message and recruit adherents. And because extremists exploit popular consumer products to push prejudice, we must confront hate on the largest platforms, those that reach billions of people every day.

Social media has been at the center of the storm for more than a decade, and its toxic potential reached new heights during the last presidential term. Whether you consider it the catalyst or just a conduit, the fact is that social media drives radicalization. It’s a font of conspiracy theories, a slow-burning acid weakening our foundations post after post, tweet after tweet, like after like. And the hate festering on social media inevitably targets the most vulnerable—particularly marginalized groups like religious, ethnic, and racial minorities, and members of the LGBTQ community.

We recently reached an ignominious inflection point: 2019 was the sixth-deadliest year for extremist-related violence in the past 50 years, and it’s clear that social media is playing a central role in the radicalization of domestic extremists. Platforms like Facebook, whose algorithms are designed to promote engagement and thus end up amplifying the most corrosive content, serve up a firehose of material that glorifies hate and violence.

Harassment based on protected identity characteristics—such as actual or perceived religion, race, ethnicity, national origin, gender identity, and sexual orientation—is also on the rise. A 2020 survey conducted by the Anti-Defamation League (ADL) just prior to the outbreak of the COVID-19 pandemic found that 44 percent of Americans had experienced online harassment. And perhaps unsurprisingly, 77 percent of those people reported that at least some of their harassment occurred on Facebook—far and away the largest and most profitable social media platform.

We cannot allow this to continue, yet neither the Trump Administration nor Congress has adequately dealt with the deadly intersection of hate and social media platforms. Some government leaders have invoked long-needed reforms, such as amending the Communications Decency Act, to drive political narratives rather than to create real change. And some good, well-intentioned legislation has unfortunately gone nowhere. It is imperative that the next administration and Congress change course and work together to proactively address this issue.

A New Mandate: Improve or Remove

If we want to significantly reduce the spread of hate and clamp down on cyber-harassment, civil rights and advocacy groups everywhere, along with the new Biden Administration, must push social media platforms to take drastic steps: improve or remove products that foster and amplify hate, and stop permitting the vast amplification of hateful speech on their platforms. In particular, social media companies need to:

1. Fix what’s broken. It’s a basic rule of business: Pull defective products from shelves. We see this in every other industry; when a consumer product sickens customers, regulators take it out of distribution and require the company to fix it. We need to apply the same rules to applications like Facebook Groups, which have amplified antisemitism, scaled racism, and launched destructive movements like QAnon and the Proud Boys. And if Mark Zuckerberg and his engineers can’t improve Facebook Groups, we need to put it out to pasture permanently.

2. Hold hatemongers accountable and proactively moderate content. While the First Amendment protects private speech, it doesn’t require private companies to provide an unfettered platform for bigots to spread hate. Hatemongers take advantage of platforms, scaling and spreading their messages like wildfire. Some social media platforms already have clear terms of service banning hate speech, but others, including 8kun, see no need to moderate posts at all. This is where government can step in and require better oversight, accountability, and transparency so that social media companies are motivated to curb the overwhelming harassment, intimidation, and hate speech on their platforms.

To be clear: Everyone has a right to expression free from government regulation. But press outlets and social media platforms aren’t public places; they’re private businesses. Both have an ethical obligation to society and a fiduciary responsibility to their shareholders. No newspaper is required to publish a particular article; most newspapers have editors who safeguard against libel and hate. Moreover, access to social media platforms is not a legal right; it is a privilege. Abuse it, and you should lose it.

Mounting Pressure Through Civil Society

Civil society and the public at large have a role to play in pressuring social media companies to take these steps. Earlier this year, for example, ADL and a coalition of like-minded civil rights groups and advocacy organizations announced a new campaign called Stop Hate for Profit, which called on companies and advertisers to pause their advertising on Facebook for the month of July. The goal was simple: Send Facebook a message to stop putting profits ahead of fighting racism, antisemitism, and all forms of hate. For years, ADL worked behind the scenes raising our concerns with Facebook, often with partners who were doing the same—including Color of Change, Common Sense, Free Press, LULAC, Mozilla, NAACP, National Hispanic Media Center, and Sleeping Giants. These groups joined the campaign, and we quickly rallied the support of more than 1,200 companies and nonprofits, including American companies like Ben and Jerry’s, Best Buy, Levi’s, Patagonia, REI, Starbucks, and Verizon; global brands like Bayer, Honda, Unilever, and Volkswagen; and an array of small businesses and mom-and-pop retail enterprises.

The effort ultimately resulted in a series of real concessions from Facebook—the kind of substantive changes it had failed to make in its first 15 years. Facebook created a new senior executive role focused on civil rights (though it has yet to fill the position); expressed a newfound willingness to participate in an audit of hateful content on the service; and took long-overdue action to remove violent white supremacist groups, armed militias, and hateful content including Holocaust denial. The company also recently began quietly reengineering its algorithm to address the systemic bias that has plagued the experience of users from marginalized communities.

Other public-led campaigns are having an impact as well. Following the Unite the Right rally in Charlottesville in 2017, for example, the Change the Terms coalition gathered insights from experts on terrorism, human rights, and technology to better understand how hate operates online and how to stop it. From that work, it generated a set of recommended corporate policies and terms of service for social media platforms and other Internet-based services that can help them avoid becoming places where extremism can take root. Meanwhile, The Real Facebook Oversight Board—which launched in September 2020 and includes academics, researchers, journalists, and civil rights leaders—drew attention to Facebook’s failure to launch its own oversight board and served as a public watchdog in the runup to the 2020 election. Within days of its announcement, Facebook finally began to roll out an actual oversight board.

Filling the Legislative Gaps

President-elect Joe Biden, Senate leaders Mitch McConnell and Chuck Schumer, House leaders Nancy Pelosi and Kevin McCarthy, and other officials also have an important part to play. These leaders need to put aside partisan differences and work diligently and expeditiously to address online hate and extremism. This includes closing the gaps and loopholes in state and federal cybercrime, harassment, stalking, and hate crimes laws to address the severe misconduct pervading online spaces. In particular, they must:

1. Hold perpetrators accountable. Many people think that because abusive online conduct happens behind a screen, it does not cause real harm. In fact, it causes very real harm to victims, and contrary to what many believe, threatening someone on social media, stalking them online, or posting their information with the intent to commit a crime against them is not protected by the First Amendment. Despite this, legal recourse for victims and targets of these crimes is limited. One of the most well-known and groundbreaking cases, in which neo-Nazi website founder Andrew Anglin doxed and severely harassed Tanya Gersh because she was Jewish, highlights how antiquated current laws on online harassment are. Lawmakers must fill gaps and eliminate loopholes in our legal system, including adding federal and state protections against doxing (posting someone’s private information as a form of punishment or revenge) and swatting (filing a false report of a crime in order to elicit a response from law enforcement).

2. Prioritize the fight against extremism. Congress must also work with independent extremism experts to protect vulnerable targets from becoming either victims of abuse or radicalized perpetrators of violence. Legislation like the National Commission on Online Platforms and Homeland Security Act, for example, would, if passed, establish a commission to address online content that implicates national security concerns. Other relevant legislation includes the Online Safety Modernization Act of 2017, introduced in the 115th Congress, which would increase federal protections against cybercrimes such as doxing, swatting, and other acts of digitally enabled abuse; and the Raising the Bar Act, which would attempt to reduce the amount of terrorism-related content on social media platforms.

3. Reform Section 230 of the Communications Decency Act. Congress should increase oversight, accountability, and transparency for tech companies, including social media platforms and online gaming platforms. Among other things, it should amend Section 230 of the Communications Decency Act to make tech companies legally accountable in certain circumstances. This could include enacting measures such as the Protecting Americans from Dangerous Algorithms Act, which would prevent the use of algorithms to amplify hateful content or to aid and abet terrorism.

4. Include online gaming platforms in the conversation. ADL’s research shows that more than 80 percent of US online gamers have experienced harassment while gaming online. Given that online gaming platforms are the next frontier in digital social spaces (essentially becoming social media platforms in their own right), we need to better understand the influence of gaming on youth and adult players and create more platform oversight. Some gaming platforms are beginning to step up, and ADL is currently working with the Fair Play Alliance to create common definitions of hate and harassment in games that will allow game companies to build more respectful and inclusive online gaming communities. However, we need to learn more about the extent of gaming platforms’ impact on hate and extremism, and government must include online gaming in conversations about tech reform.

Social media has shaped our society and fed deep divisions that appear poised to persist long after President Trump leaves the White House. Reforms to Facebook and Twitter alone will not stop the spread of hate, just as simply banning bigotry will not end it. Remember, the white supremacists who were radicalized and built a following on 8kun ultimately resorted to violence because they had an unfettered platform on which to freely imbibe their brand of hate and spread it globally to an audience of like-minded followers.

We must come together as a country, working from the bottom up and the top down, to say firmly that hatred, harassment, and bigotry are not acceptable. Civil society and advocacy groups need to continue to pressure Silicon Valley to step up, and the new Biden Administration and Congress must recognize that self-regulation has failed as a strategy to govern tech. Now is the time for collaboration and smart policy that curbs the excesses and ensures that big tech is accountable for its behavior. Only then will we see real progress.

