Should social media companies moderate their content, and if so, how can they best limit the spread of misinformation? On October 15, Twitter and Facebook limited the spread of a New York Post article which claimed that Hunter Biden had introduced his father, Joe Biden, to an executive at a Ukrainian energy company. The article further asserted that Joe Biden later pressed the Ukrainian government to fire a prosecutor who was investigating that company.
Facebook reduced the article's distribution, but Twitter went further: it prevented users from posting links to the article or images of the alleged emails the New York Post had published (restrictions that were lifted shortly afterward). Twitter justified these actions by citing the private information contained in the article and the possibility that the emails had been hacked. Both companies, however, were met with anger from conservatives, who accused them of suppressing conservative voices. These accusations, the accuracy of the New York Post article aside, brought renewed attention to a problem that has been quietly wreaking havoc: misinformation on social media.
Before examining the spread of misinformation on social media, one must recognize that Twitter and the other social media giants are not, as many have claimed, true public forums. In a true public forum, users can say whatever they want, however they choose to say it, without being censored. On Twitter, by contrast, users must abide by the Terms of Service (ToS), which state plainly: “We may also remove or refuse to distribute any Content on the Services, limit distribution or visibility of any Content on the service, suspend or terminate users, and reclaim usernames without liability to you.” Twitter thus reserves the right to control the distribution of content on its platform, and it was within its ToS when it restricted the New York Post article, as it would be for any other content.
Social media giants are regularly called before Congress over their censorship and moderation practices. Many lawmakers, however, fail to appreciate how difficult it is for these companies to fight misinformation. Twitter, Facebook, and their peers have spent years building sophisticated ranking algorithms with a single purpose: to spread engaging content as quickly as possible. Those algorithms cannot tell accurate posts from inaccurate ones, so when misinformation is posted, they amplify it just as they would anything else. By the time Twitter acts on a flagged tweet, it must undo the work of its own amplification machinery, and much of the damage has already been done.
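To make that dynamic concrete, the toy simulation below models a post whose audience doubles every hour until moderators remove it after a fixed delay. It is a sketch under assumed numbers, not any platform's real system: the reproduction rate, moderation delay, and time horizon are all illustrative assumptions.

```python
# Toy model of viral spread vs. delayed moderation. All parameters are
# illustrative assumptions, not measurements from any real platform.

REPRODUCTION_RATE = 2.0  # assumed: hourly amplification factor
MODERATION_DELAY = 6     # assumed: hours until the post is flagged and removed
HOURS = 12               # length of the simulation

new_views = 1.0          # the original post reaches one viewer
total_views = new_views

for hour in range(1, HOURS + 1):
    if hour < MODERATION_DELAY:
        # The ranking algorithm keeps amplifying the post; it cannot
        # distinguish accurate content from misinformation.
        new_views *= REPRODUCTION_RATE
    else:
        # The post has been removed: no new amplification, but the
        # exposure accumulated before removal cannot be undone.
        new_views = 0.0
    total_views += new_views
    print(f"hour {hour:2d}: new views {new_views:6.0f}, cumulative {total_views:6.0f}")
```

Under these assumed numbers, every additional hour of moderation delay roughly doubles the total exposure, which is why after-the-fact takedowns recover so little.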
Misinformation is one of the most prevalent problems of the digital age, and it is essential that companies take steps to stop people from using their products in harmful ways. A key obstacle to progress is the lack of cooperation between the government and social media companies. The government has no access to the algorithms and data the platforms hold, while the platforms face partisan backlash from lawmakers whether they increase moderation or decrease it. The first step toward a solution is simple: more cooperation between government officials and social media companies.
Recommendation engines also compound the spread of misinformation. When a user logs into Twitter or Facebook, an algorithm decides which content best matches their interests and shows it to them. This is problematic because these algorithms have no ethics filter and can lead people astray: Twitter's recommendations, for example, have been reported to steer viewers of pseudoscience posts toward political conspiracies such as QAnon, even when those users have shown no interest in politics. Such algorithms should be deployed with filters that prevent this kind of faulty guidance and that weigh the authenticity of a post, judged by factors such as the poster's past history and the keywords in the post. If a user has a history of spreading conspiracies, the algorithm should not immediately feed that user's content to others, even those who have shown interest in conspiracies.
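What might such a filter look like? The sketch below is a minimal, hypothetical example of the approach described above: it discounts a post's ranking score based on the author's history of flagged posts and on matches against a keyword watchlist. The field names, weights, and watchlist entries are all assumptions made for illustration, not any platform's actual system.

```python
from dataclasses import dataclass

# Assumed watchlist of conspiracy-related keywords (illustrative only).
SUSPECT_KEYWORDS = {"qanon", "plandemic", "hoax"}

@dataclass
class Post:
    text: str
    author_flag_count: int   # prior flagged posts by this author (assumed field)
    engagement_score: float  # what a raw recommender would rank on

def authenticity_score(post: Post) -> float:
    """Multiplier in (0, 1]: lower means 'amplify less'."""
    score = 1.0
    # Downweight authors with a history of flagged posts.
    score /= 1.0 + post.author_flag_count
    # Downweight posts containing watchlisted keywords.
    if set(post.text.lower().split()) & SUSPECT_KEYWORDS:
        score *= 0.2
    return score

def rank(posts: list[Post]) -> list[Post]:
    # Rank by engagement discounted by authenticity, so content from a
    # repeat offender is not fed straight to interested users.
    return sorted(
        posts,
        key=lambda p: p.engagement_score * authenticity_score(p),
        reverse=True,
    )

# Example: the conspiratorial post ranks below the benign one despite
# its higher raw engagement score.
feed = rank([
    Post("qanon proves everything", author_flag_count=4, engagement_score=0.9),
    Post("local bakery opens downtown", author_flag_count=0, engagement_score=0.5),
])
print([p.text for p in feed])
```

The design choice here is to keep the recommender itself untouched and multiply its output by an authenticity discount, so the filter can be tuned or audited independently of the engagement model.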
In conclusion, misinformation on social media is a serious problem, made all the harder to solve by highly advanced algorithms tailored to spread information as fast as possible. As these algorithms only grow faster and more capable, social media companies must either make them filter misinformation more strictly or build counterpart systems that can detect misinformation at the same rate it spreads. With cooperation between the government and social media companies, careful oversight of recommendation algorithms, and swift action against problematic posts, the problem of misinformation on social media can be solved.