Disinformation Attacks
Julia Manetta
Social media has become a fixture of modern life: it allows for connection, communication, and the near-instantaneous consumption of information. However, it also has pernicious side effects: sites such as Twitter and Facebook have become breeding grounds for disinformation, false content that intentionally aims to mislead and manipulate an audience (Buchanan 2020). Social media companies have done little to address this problem; in fact, their platforms encourage the creation and spread of disinformation by design.
Social media platforms are engineered in ways that make them ideal environments for disinformation and its dissemination. Social media companies rely on algorithmic and artificial intelligence (AI) infrastructures to drive both user engagement and revenue on their platforms. These algorithms tailor users' feeds with high-engagement content: posts that have garnered high levels of attention and interaction are surfaced on other users' timelines. Radical, sensationalized, and shocking content (such as fake news and clickbait) has notably high rates of engagement, regardless of its veracity or the reliability of its source. As a result, disinformation has high engagement potential and spreads easily through social media channels with the help of these algorithms (Heldt 2019). Algorithms also take into account users' activity, pre-existing beliefs, and group identifications to provide personalized feeds that align with their viewpoints. Confirmation bias, echo chambers, and polarization are prevalent on these platforms as a result (Katyal 2019). Social media companies thus face a dilemma: their algorithmic and AI infrastructures are necessary to generate revenue, but they are also key facilitators of the spread of disinformation. Ultimately, profit and the well-being of users are often at odds.
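To make this dynamic concrete, consider the minimal sketch below. It is a hypothetical, deliberately simplified model in Python; real platform ranking systems are proprietary and far more complex, and the Post structure, engagement weights, and topic matching here are invented for illustration. Posts are scored purely on engagement and on topical overlap with a user's interests, so a sensational false post outranks an accurate correction.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    topics: set[str]  # crude stand-in for the post's subject matter

def engagement_score(post: Post) -> float:
    # Shares are weighted most heavily: resharing is what propagates
    # a post onto new timelines, whether it is true or not.
    return post.likes + 3 * post.shares + 2 * post.comments

def rank_feed(posts: list[Post], user_interests: set[str]) -> list[Post]:
    # Personalization: boost posts that overlap with topics the user
    # already engages with, reinforcing existing viewpoints.
    def score(post: Post) -> float:
        overlap = len(post.topics & user_interests)
        return engagement_score(post) * (1 + overlap)
    return sorted(posts, key=score, reverse=True)

# A sensational false post with high engagement outranks a careful,
# low-engagement correction -- veracity never enters the score.
feed = rank_feed(
    [
        Post("Shocking conspiracy claim!", likes=900, shares=400,
             comments=250, topics={"politics"}),
        Post("Careful fact-check of the claim", likes=40, shares=5,
             comments=10, topics={"politics"}),
    ],
    user_interests={"politics"},
)
for post in feed:
    print(post.text)
```

The point of the sketch is that nothing in the scoring function consults truthfulness; a post's veracity is simply invisible to the ranking.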
It may seem an exaggeration to say that disinformation puts users' well-being at stake. However, disinformation has significant, real-world consequences that should not be dismissed. Disinformation can sow radicalization and incite terrorist behavior in users (Unver 2017). This can be seen clearly in Pizzagate, a baseless conspiracy theory that went viral in alt-right circles on 4chan and Twitter. Hundreds of believers sent threats to a pizzeria, and one man even fired an assault rifle inside the establishment (Unver 2017). The Pizzagate incident shows how disinformation can radicalize individuals, inspire terrorist behavior, and ultimately pose tangible threats to society. Beyond individual social media users, disinformation can affect the public at large. Disinformation can profoundly alter the political landscape of countries, as seen during the 2016 United States presidential election, when Russian influence campaigns used bots to spread disinformation through social media channels, discredit mainstream news content, and sow political polarization among the American public (Downes 2018). Russian influence in the 2016 election shows how foreign actors can exacerbate political polarization, manipulate public opinion, and corrupt democratic processes through disinformation attacks (Frederick 2019).
Clearly, disinformation has detrimental, real-world consequences that should not be left unchecked. Yet social media companies have done little to address disinformation on their platforms, because their revenue depends on an attention economy that encourages disinformation in the first place. These companies' inaction on disinformation and their single-minded pursuit of profit are unethical and reprehensible; they leave millions of users susceptible to deceit and manipulation and put society at large at risk. The public, especially frequent users of social media, must be aware of the subversive and often subliminal threat of disinformation and call out social media companies for their inaction. We must demand change, such as reforms to algorithmic ranking infrastructures and improvements to disinformation detection systems, to preserve our freedom of thought and safeguard ourselves against manipulation. Now more than ever, users must put pressure on these companies and demand solutions to today's digital disinformation crisis.
Sources
Buchanan, Tom. 2020. “Why do people spread false information online? The effects of message and viewer characteristics on self-reported likelihood of sharing social media disinformation.” PLoS ONE 15(10): 1-33. Retrieved October 25, 2020 (https://doi.org/10.1371/journal.pone.0239666).
Downes, Cathy. 2018. “Strategic Blind-Spots on Cyber Threats, Vectors and Campaigns.” The Cyber Defense Review 3(1): 79-104. Retrieved October 19, 2020 (https://www.jstor.org/stable/26427378).
Frederick, Kara. 2019. The New War of Ideas: Counterterrorism Lessons for the Digital Disinformation Fight. Center for a New American Security. Retrieved October 16, 2020 (https://www.jstor.org/stable/resrep20399).
Heldt, Amélie. 2019. “Let's Meet Halfway: Sharing New Responsibilities in a Digital Age.” Journal of Information Policy 9: 336-369. Retrieved October 16, 2020 (https://www.jstor.org/stable/10.5325/jinfopoli.9.2019.0336).
Katyal, Sonia. 2019. “Artificial Intelligence, Advertising, and Disinformation.” Advertising and Society Quarterly 20(4). Retrieved October 25, 2020 (https://doi.org/10.1353/asr.2019.0026).
Unver, H. 2017. “Digital Challenges to Democracy: Politics of Automation, Attention, and Engagement.” Journal of International Affairs 71(1): 127-146. Retrieved October 16, 2020 (https://www.jstor.org/stable/26494368).