4.1 Misinformation Today: The Internet

Sarah Gibbs

Broadcast Capability

“[F]ake news [and disinformation] ha[ve] been with us for a long time. And yet something has changed—gradually over the past decade, and then suddenly during the lead-up to the 2016 UK Brexit vote and US election.” —Cailin O’Connor & James Owen Weatherall; The Misinformation Age (2019)

What changed? Well, where once people who wanted to mislead the public had to shout to be heard over a mass of other voices, social media has now given online bad actors a megaphone. Communication is instant, international, and unlimited.


Consider the contrast with the end of the nineteenth century, when New York City newspapers were agitating for war with Spain:

In 1898, when the New York World and New York Journal began agitating for war, they had large circulations. […] But their audience consisted almost exclusively of New Yorkers—and not even all New Yorkers, as the better-respected Times, Herald Tribune, and Sun also had wide readerships. Regional newspapers outside New York generally did not pick up the World and Journal articles calling for war with Spain. Although the stories surely influenced public opinion and likely contributed to the march toward war, their impact was limited by Gilded Age media technology. (O’Connor & Weatherall, 2019, p. 154)


No such limitations exist today, and there are few—if any—checks on the authority or credentials of people sharing information on social media. If I want to convince the world that the members of the Canadian Supreme Court have been replaced by cheese-eating space aliens from Neptune, all I have to do is start a Twitter account. My warnings about Gouda-scented extraterrestrial domination can circle the globe in seconds.

Economics

“The first fifty years of Silicon Valley, the industry made products: hardware, software sold to customers. Nice simple business. For the last ten years, the biggest companies of Silicon Valley have been in the business of selling their users.” —Roger McNamee, early Facebook investor; The Social Dilemma, 12:50

Social media companies make money off our attention. How? When we engage with content online, we also engage with advertising. Marketing companies pay Facebook, Twitter, and Instagram to feature their ads. Aza Raskin, formerly of Firefox and Mozilla Labs and the inventor of the “infinite scroll,” states, “Because we don’t pay for the products we use, [because] advertisers pay for the products we use, advertisers are the customers. We’re the thing being sold” (The Social Dilemma, 13:07).

For social media enterprises, the most important thing is that users see and respond to ads. Melodramatic, strange, or politically inflammatory content often gets the most attention and therefore generates the most ad revenue. Essentially, the more extreme the news story, the better. YouTube has stated that videos surfaced by its recommendation algorithm account for over 70% of viewing time on the platform (Starr, 2020). Sensational videos get more “clicks” and are therefore recommended more heavily, which earns them still more views; the cycle is self-reinforcing, and extremist material circulates widely. W. Lance Bennett and Steven Livingston note that “social media’s propensity to algorithmically push extremist content and to draw likeminded persons together with accounts unburdened by facts” (2020, p. viii) has contributed significantly to the increased consumption of disinformation and fake news.

According to Paul Starr (2020), until recently social media companies “had no incentive to invest resources to identify disinformation, much less to block it” (p. 80). Profits outweighed ethics; disinformation paid well. Changes are in the works, however. As of May 2021, Facebook and Twitter have enacted policies to limit the reach of influential users (i.e., high-profile persons and/or those with large numbers of followers) who repeatedly circulate mis- or disinformation (Ovide, 2021a). Such users’ posts will feature less prominently in news feeds, and their accounts may be suspended for ongoing violations.

“Virality favors false and emotional messages.” —Paul Starr; “The Flooded Zone: How We Became More Vulnerable to Disinformation in the Digital Age” (2020)


Supplemental Video: YouTube Algorithms: How to Avoid the Rabbit Hole (https://www.pbslearningmedia.org/resource/youtube-algorithms-above-the-noise/youtube-algorithms-above-the-noise/). PBS.


Fringe Belief Reinforcement / Validation

So, I love Pacific Rim (2013), director Guillermo del Toro’s mash-up of Godzilla and Transformers. Is it a good movie? No. Not at all. Talking to regular people in the real world has assured me that it’s pretty terrible. If I happened, however, to find a website, Twitter feed, or Facebook group in which everyone (all ten members) believed that the film is a masterpiece, I might begin to think that all the Pacific Rim haters are deluded or perhaps even conspiring against me…

While online communities offer users considerable benefits, one of their downsides is that people with “fringe” beliefs can create spaces where their arguments go unchallenged by facts or evidence. Online communities are self-organizing and self-selecting, so the diversity of views and perspectives that characterizes society “in real life” is rarely represented, and potentially anti-social or dangerous beliefs can take deeper root. Feeling that Pacific Rim is underappreciated is fairly harmless* (*film critics may disagree), but what about online communities whose beliefs center on hatred of particular political parties, countries, or minorities, or who advocate violence? Online “fringe” groups are major sources of misinformation, disinformation, and fake news.

Activity