
After Twitter flagged two of his tweets as misinformation, US President Donald Trump used his executive powers to spearhead a political campaign to change the laws regulating social media platforms. The order directs federal policymakers to enact new regulations that would strip social media companies of the immunity from liability provided under the Communications Decency Act. The executive order could lead to platforms such as Twitter being penalised for failing to moderate controversial content or to take down content that violates their internal policies. Hence, the order endangers the freedom of speech and expression of social media users across the world. Amid this executive order, a crucial question regarding the role and responsibilities of social media platforms has emerged: with social media platforms targeting disinformation and obstructing access to content that violates their terms and conditions, does their unfettered immunity over content moderation need to be reassessed?

 

Social media platforms are a type of Internet intermediary, in that they do not modify or create content, but merely facilitate the transmission of user-generated content. Accordingly, most democracies do not hold the platforms themselves liable for illegal posts–only their users. The limited role of social media platforms in the content they host is therefore crucial to the continuation of such immunity. That said, the role of intermediaries in society has evolved. Social media platforms now play a crucial role in information dissemination and public knowledge. Additionally, more and more platforms are actively taking down content that violates their policies or terms and conditions.

Over the years, owing to the increasing influence of social media platforms on users’ access to information, the nature of their immunity is being reassessed by policymakers. Following several incidents of illegal content–such as child pornography and prostitution–being hosted on such platforms, the need to mandate a certain degree of moderation became imperative. Consequently, the immunity of intermediaries was eventually questioned by experts and government authorities, who pressurised social media platforms to regulate certain content on their websites. 

In accordance with these demands for greater accountability, several countries imposed varying degrees of liability upon social media platforms. For example, China and Thailand hold intermediaries liable for failing to moderate posts by users that violate the local laws. In both countries, there is no immunity granted to intermediaries for the content hosted on their platforms. Other countries adopted the less restrictive ‘notice and takedown’ method, wherein the intermediaries are required to take action following “government notifications, court orders or notices issued by private parties themselves.” An example of this is India, where, unless an intermediary defies a court order asking them to take down certain content, the platforms continue to enjoy immunity from any legal implications.

On the other hand, the United States, through Section 230–the very law that Trump is aiming to amend–recognises such platforms as hosts that merely transmit content that users create, and thereby provides a “safe haven” to social media companies. Thus, it allows them to moderate harmful content without being subject to litigation. Additionally, under this model, failure to moderate content on their platforms also does not attract any liability.

Amid a wave of “fake news” on social media, we have arrived at yet another seminal moment for laws on intermediary liability. Recognising the dangers of the misuse of their platforms, internet intermediaries, including Facebook and Twitter, have employed independent fact-checkers to flag content that is considered misinformation. The aim is to warn users of the questionable credibility of posts that fact-checkers believe to be misinformation.

Disinformation campaigns have real-world impacts–they can incite violence and even influence elections. Thus, social media platforms now play a crucial role in limiting access to, and editing and moderating, content on their platforms. They no longer act as mere intermediaries or hosts of information. While such moderation was originally intended to prevent the dissemination of illegal material such as child pornography, it has emerged as a crucial tool to shape political discourse. Amid this evolution in the role and influence of social media platforms, there is a dire need to re-evaluate the extent of the immunity extended to them.




In accordance with this new reality, in his executive order, Trump highlighted the growing capabilities of social media platforms to influence the “interpretation of public events, to censor, delete or disappear information, and to control what people see”. Given his penchant for spewing loosely corroborated information, the justifiability of “Trump’s Tantrum” can be debated. However, a more pertinent point to address is the danger of intermediaries misusing their moderating powers to manipulate political discourse. Under the garb of flagging content pursuant to vaguely worded policies, social media platforms could arbitrarily use their ability to moderate and restrict information to stifle certain political ideologies.

For instance, to support his claim that Twitter selectively censored content on its website, Trump cited the example of Democrat Adam Schiff’s tweets on what the order describes as “the long-disproved Russian Collusion hoax”. The order said that posts containing such misinformation remained unchecked by the platform. Regardless of the veracity of Trump’s protestations against colluding with Russia, even in the face of piles of evidence, he inadvertently raised the issue of how unbridled power in the hands of social media platforms could lead to a skewed representation of events and selective censorship of a particular political ideology. For example, last year, Twitter India saw demonstrations outside its office in New Delhi. The protestors accused the platform of targeting right-wing ideology while remaining indifferent to disinformation generated by left-leaning users, particularly members of the Indian National Congress.

When private entities are given a free hand to moderate content, such controversies are bound to arise. While most countries detail guidelines that the platforms are required to follow, the execution is mostly discretionary. Given the growing reliance of political leaders on social media platforms to spread their message, and of voters on these platforms to gain access to information, it is vital to strike a balance between the freedom and protection of political speech and discourse on the one hand, and the restriction of false, harmful information on the other.

This balance can be achieved through the implementation of two solutions. The first is to disallow any censorship or moderation of content posted by political leaders, thereby avoiding any imbalances in political information on social media platforms. For example, Facebook’s fact-checking policy provides immunity to original content published by political leaders. According to Facebook’s policy, political leaders include “candidates running for office, current officeholders and cabinet appointees, and political parties and their leaders”.




However, given the influence of political leaders and their growing contribution to the dissemination of disinformation, would such immunity really be beneficial for the public at large? For example, during the ongoing coronavirus pandemic, Brazilian President Jair Bolsonaro has advocated unproven treatments for COVID-19 on social media. In response, Facebook, Google, YouTube and Twitter took down the posts, citing the interest of public health as a justification. Further, in India, false information about special trains for migrant labourers, spread by Vinay Dubey, a local leader of the National Congress Party, led to swarms of individuals assembling at a station in Mumbai, defying the lockdown and putting the health of thousands at risk. Hence, exempting disinformation spread by political leaders through original posts could defeat the entire purpose of fact-checking on social media.

The second option is Trump’s solution of imposing liability on social media platforms that distort public discourse by engaging in selective moderation to promote or stifle a political ideology. If implemented, this solution could cause social media platforms to censor speech excessively, erring on the side of caution to avoid unnecessary litigation. This would change the very nature of social media, as intermediaries would proactively filter out content that might prospectively attract liability. Consequently, it would significantly hinder users’ freedom of speech and expression–the very right that Trump’s executive order seeks to protect. In fact, his solution brings intermediary liability in the United States closer to the Chinese and Thai model, which has proven to significantly obstruct the freedom of speech of local users.

At the same time, however, this second solution also opens the door to social media platforms abandoning censorship of political content altogether, potentially exposing the public to content that can incite violence or even shape the result of elections.

While most liberal media houses and commentators vehemently argue against Trump’s executive order and highlight the harms of imposing excessive liability on intermediaries, it is essential to acknowledge that, with the increasing involvement of social media platforms in moderating content, the nature of this involvement has also evolved. Social media platforms appear to be stranded in the crossfire between political speech, moderation of misinformation, and freedom of speech and expression. Presented with a catch-22, they are forced to choose between protecting public health and safety by targeting disinformation and defending users’ freedom of speech and expression. Re-evaluating the unregulated immunity that intermediaries currently enjoy, and striking a careful balance between the two, is essential to curbing this new threat to crucial democratic principles across the world.

Image Source: Los Angeles Times

Author

Erica Sharma

Executive Editor