By: Rachel May
Donald Trump’s presidency upended historical norms in too many ways to count. One of the most important was his use of social media for frequent, direct communication with American citizens, international counterparts, and members of his own administration. In the final stages of the Trump presidency, the focus on social media intensified as Twitter banned the President’s accounts @realDonaldTrump and @POTUS from the platform, followed by similar bans from Facebook and other social media sites. The companies said the bans were grounded in their own policies, including policies against incitement of violence, raising many questions about big tech’s proper rights and responsibilities in society today. President Trump and his allies have long complained of a pro-Democratic bias among big tech companies, and following the bans those complaints grew louder. The inflammatory rhetoric surrounding these developments highlights the importance of social media in contemporary culture and clouds public understanding of how these tools are, and should be, regulated.
The initial reaction to the social media bans was, in significant part, couched in constitutional language. Trump’s allies decried the bans as illegal restrictions on the President’s First Amendment right to free speech, guaranteed in the U.S. Constitution. There is no doubt that denying President Trump access to Twitter frustrated his preferred means of expression. Were that the test of the constitutional right, there would be a convincing free speech argument. But it is not. The rights enshrined in the U.S. Constitution are similar in nature and effect to those in the Canadian Charter of Rights and Freedoms. Specifically, those rights guarantee the freedom to express opinions without censorship, interference, or restraint by the government. The Constitution does not, however, guarantee the right to make use of private (non-governmental) platforms. In the wake of the President’s claims of election fraud and his encouragement of efforts to reverse the election results (with terrible consequences, including the storming of the Capitol), Twitter took action based on its own policies. His tweets were determined to have violated Twitter’s Glorification of Violence policy, with the company expressing concern that its platform would otherwise continue to be used to incite violence. Facebook cited parallel concerns in imposing its own restrictions.
A second, much-discussed aspect of the role of social media companies concerns their very nature, particularly from a regulatory perspective. Big tech firms have long insisted they are not media or communications enterprises. This is not merely a question of branding. There are powerful regulatory incentives for them to maintain their characterization as technology firms, or more particularly to avoid categorization as media or communications companies. Regulators, such as the Federal Communications Commission in the U.S. and the Canadian Radio-television and Telecommunications Commission, have much broader authority over, and impose stricter rules on, media and communications businesses. Examples of such regulatory prescriptions include requirements for certain types of content (such as quotas for locally produced broadcasting), requirements for access, and price regulation, given the deemed societal importance of such services. As long as Twitter, Facebook, and their peers maintain that they are technology companies, their ability to manage content on their sites, including deciding who can post and what content is allowed, will remain relatively free of oversight.
This feeds directly into the “Section 230 issue” that has been a focus of recent public discourse around social media, largely because President Trump himself repeatedly drew attention to it. Section 230 is a provision of the U.S. Communications Decency Act that shields interactive computer services, such as social media sites, from liability for content posted by their users. The law, enacted in the early days of the internet in the 1990s, has a clear benefit for tech firms: it permits them to provide a platform for the public without incurring liability for the statements of participants on those platforms. Had President Trump succeeded in removing or limiting that protection, he likely would have harmed his own communication tactics, because those sites would presumably have become much more restrictive toward his posts. Nevertheless, he clearly saw this as an appealing opportunity to retaliate against firms he perceived as biased against him and his supporters.
The increasing importance of social media, the politicization of these issues in a polarized environment, the phenomenon of fake news, and the depth of confusion on these subjects all suggest that questions concerning the regulation of these platforms will not be quickly or easily resolved. What is equally clear is that these questions will continue to be the subject of intense debate, ironically, in many cases, on social media platforms themselves.
Rachel May is a Master of Public Policy candidate at the University of Toronto’s Munk School of Global Affairs & Public Policy. She is interested in social policy, particularly gender equity and examining structural and societal biases. Rachel holds a Bachelor of Arts in Psychology from the University of Western Ontario.