This thesis seeks to bring attention to the ways in which the effects of hate speech, specifically racialized hate speech, transcend digital platforms. It begins by connecting the phenomenon of racialized hate speech on Facebook to specific psychological tendencies that the company consciously amplifies for its own financial benefit. The first chapter interrogates the common narrative that violent rhetoric indicates a flaw in the platform’s design, arguing instead that the proliferation of such content is encouraged by Facebook’s algorithm. From there, the second chapter examines what happens when a technology giant leverages human psychology for corporate greed. A true worst-case scenario, the Rohingya genocide in Myanmar elucidates Facebook’s negligence and illustrates the consequences of failing to proactively mitigate hate speech. Finally, the third chapter discusses existing and proposed efforts to regulate Facebook and similar platforms. As an issue that encompasses ethical dilemmas, policy predicaments, and business implications, reducing the prevalence of racialized hate speech on Facebook poses challenges for all regulatory actors, from the United Nations to sovereign states to the corporation itself. In the end, the most effective means of protecting human rights on digital networks may rest not with the United Nations, individual nations, or private corporations, but with social media users themselves.
Herrick, Katherine, "Breaking Things: Origins and Consequences of Racialized Hate Speech on Facebook" (2022). International Studies Honors Projects. 39.
© Copyright is owned by the author of this document.