Billions of people worldwide use social media sites like Facebook to connect with each other. In the United States, approximately seven out of ten people use social media. These websites serve as important forums for everything from news to political discussion to connecting with friends. However, online anonymity, combined with social media companies' failure to enforce their own rules on speech, can lead to dangerous real-life consequences, such as the organization of violent events and acts of terrorism.
The question of how much social media companies should censor their users is complicated and controversial. Limiting what people can say on social media carries risks and, at first glance, might appear to violate the First Amendment. For one thing, it can be hard to draw the line between dangerous opinions and opinions that merely express dissent. Another concern is how companies would enforce such regulations; many rely on systems in which users can “report” one another for inappropriate content, a mechanism that can also result in the suppression of minority opinions.
However, it is becoming increasingly clear that social media companies need to censor hate speech and threats of violence more consistently and effectively. The violent events in Charlottesville in August and their aftermath reveal how white supremacy is gaining traction on social media sites. Moreover, because the government has failed to respond effectively to violent white supremacy, the moral responsibility falls to social media companies to prevent threats and plans for violent demonstrations from circulating on their platforms.
Currently, many websites handle hate speech differently than they handle communications from terrorist groups like ISIS. YouTube, Facebook, and Twitter all have rigorous policies for removing material related to terrorism; Facebook, for example, makes extensive use of artificial intelligence to identify posts containing terror-related content. In contrast, threatening material from white supremacists often slips through the cracks. Tech companies should commit themselves to improving the way they censor and remove threatening material from white supremacist groups.
This is not to say that social media companies should ban all material related to white supremacy and alt-right stances. Instead, they should devote more resources to identifying threats and hate speech by white supremacists. For example, white supremacists often disguise their messages by using code words and “dog whistles.” With greater investment in combating hate speech and threats on their sites, companies could more effectively decode these euphemisms and prevent the spread of violent and hateful ideas.
It is also important to note that social media companies are privately owned businesses, not platforms owned by the government; because the First Amendment restricts only government censorship, these companies are legally free to set and enforce their own rules. Although some people fear that such companies could gain totalitarian control over the Internet, tightening restrictions would simply mean upholding existing regulations more consistently in the name of safety. This stronger enforcement would ultimately help social media companies fulfill their mission statements and goals: to help people come together and participate in open but safe discourse.
As always with free speech and censorship, there is a delicate balance to strike between the unfair suppression of ideas and the cultivation of a safe environment for everyone. But by devoting more resources to identifying and removing violent messages, and not just those of white supremacists, social media companies will take a step in the right direction.