
Can ‘Deplatforming’ Effectively Tackle Hate Speech and Extremism?

While bans have proven useful in detoxifying online fora, their impact on radicalisation itself may be limited.

January 28, 2021
SOURCE: FORBES

The violent insurrection at the United States (US) Capitol on January 6 left both American democracy and (now former) President Donald Trump’s treasured internet presence in disarray. Just days after the incident, Facebook, Twitter, and others banned Trump from their platforms; subsequently, Apple and Google removed the right-wing Twitter alternative Parler from their app stores, and Amazon Web Services (AWS) stopped hosting it, effectively shutting the platform down indefinitely.

The ‘deplatforming’—which refers to the permanent revocation of users’ access to social media platforms and other websites—of the former leader of the free world and his extremist followers has been branded “censorship” and decried by many as an assault on free speech. This vitriolic reaction to the measures taken by tech and social media companies has prompted a debate over whether such actions will merely deepen the divisions in an already fractured country. It also raises the question: can deplatforming really curb online hate speech, extremism, and the spread of mis- and disinformation?

The simple answer is yes, it can. Most of the world now communicates via social media (nearly a third of the global population is active on Facebook alone), and experts have noted the dangerous effects of this massive shift to virtual platforms. Everyone now has a microphone to express their opinions and beliefs, and given its expansive reach, the same technology that allows democratic activists to mobilise against political oppression can also be used by hate groups to organise and recruit far beyond their usual core audiences. In such a scenario, muting the voices of those who spew hatred and intolerance can most definitely have a positive effect.

According to research by the analytics firm Zignal Labs, online misinformation about election fraud plummeted by 73% in the week following Trump’s suspension from Twitter, with mentions of the topic dropping from 2.5 million to approximately 688,000. Hashtags and terms associated with the Capitol riot—including #FightforTrump and #HoldTheLine—fell by 95% or more. The findings highlighted the central role played by Trump, his allies, and influential followers in propagating lies about the election results on social media, and their removal from such sites certainly limited their reach. And this is not the only time the method has worked.

Radical extremists have also been subject to such measures in the past, making recruitment more difficult. A 2015 report by the Brookings Institution, for instance, found that ISIS “influencers” lost followers and clout after being deplatformed, as they were forced to bounce from platform to platform. After the systematic suspension of accounts began in September 2014, hashtags like #IslamicState and #ISIS dropped from about 40,000 tweets a day to fewer than 5,000 within about five months.

While bans have proven useful in detoxifying online fora, it is worth noting that their impact on radicalisation itself may be limited. After Reddit banned several particularly virulent subreddits in 2015 for violating the platform’s anti-harassment policy, a study found that “more accounts than expected discontinued using the site; those that stayed drastically decreased their hate speech usage by at least 80 percent,” showing that the ban had succeeded in diminishing hateful behaviour on the site. However, the report also noted that such measures may have “relocated” harmful extremist behaviours and messaging to other parts of the site or to other platforms.

While the element of public shaming involved in removing people from sites may incentivise some to behave better, for die-hard, alt-right users it can amplify the feelings of isolation, outrage, and in-group solidarity that led to radicalisation in the first place, says Jesse Morton, a former Al Qaeda recruiter who now works as an anti-extremist activist. Although their follower numbers might fall, “what you see is, you see those feelings of camaraderie, those feelings of community, those feelings of meaning and significance in the movement, as if you’re having an effect,” Morton told NBC News, adding, “And so, you feel emboldened. You see, you feel powerful.” Morton further noted that in such an environment, the idea that violence is the only recourse for people unable to express themselves could easily take hold.

Similar trends were noted on Reddit when the company shut down toxic message boards such as r/The_Donald and r/Incels: although the users’ reach was significantly reduced, their migration to newer, more niche, and less moderated fora made those communities “much more toxic”.

The good news, however, is that the more obscure an extremist’s social media hub, the less likely a mainstream internet user is to stumble upon it. Platforms like Facebook and Twitter generally serve as gateways for regular users, who can then move on to smaller, more niche platforms where those with less mainstream views tend to congregate. When such people and groups are banned from the major platforms, casual users find it far harder to reach these smaller, hard-to-access fora.

Of course, this is not to say that deplatforming will single-handedly end hate speech and extremism across the world wide web. But banning high-profile figures who spew hateful rhetoric is an important first step. The practice can deliver a serious shock to a thriving network, and the resulting disorientation can definitely slow a movement in the short run.

However, driving already radicalised people further into the shadows carries risks of its own: it can delegitimise widely trusted sources of information in the eyes of those affected and push them even further toward the extremes. According to Robert Gehl, an associate professor of communication and media studies at Louisiana Tech University, while deplatforming should be a feature built into all social media platforms so they can boot racist, fascist, misogynist, or transphobic speakers, the decision to trigger that mechanism against someone should be transparent and as democratic as possible—as is the case with Mastodon—so that individuals do not feel they are being baselessly and viciously targeted by so-called “self-righteous” platforms that simply disagree with them.

This means that, moving forward, simply taking away extremists’ megaphones will not be enough. In forming long-term policy to address the issue, the focus will have to be on justice, harm reduction, and rehabilitation, so that entire groups are not demonised or ostracised from society, which would only continue the vicious cycle of hate and extremism.

Author

Janhavi Apte

Former Senior Editor

Janhavi holds a B.A. in International Studies from FLAME and an M.A. in International Affairs from The George Washington University.