A REPORT PREPARED BY THE UNIVERSITY OF SOUTH FLORIDA FOR CYBER FLORIDA
Bots are Bad, Humans are Worse
JULY 2021
Report Prepared by:
Loni Hagen, PhD*
Stephen Neely, PhD*
Christina Eldredge, MD, PhD*
Humayra Binte Qadir*
*University of South Florida
Prepared For:
Cyber Florida
4202 E. Fowler Ave., ISA 7020
Tampa, FL 33620
2 | Bots are Bad, Humans are Worse
Introduction

This report examines the prevalence and spread of misinformation via social media in the early stages of the COVID-19 pandemic. Over the past 15 years, social media has grown into a primary source of news and information for many Americans (Greenwood et al. 2016; Kim et al. 2014; Mitchell et al. 2012)1, and its influence has been amplified in times of crisis, when information-seeking behaviors increase (American Red Cross 2012; Lachlan et al. 2016; Rene 2016). Social media’s importance as a crisis communications platform has been increasingly evident in recent public emergencies, including natural disasters such as hurricanes and earthquakes (Guskin and Hitlin 2012; Stewart and Wilson 2016), as well as public health crises, such as the recent Ebola, Zika, and H1N1 outbreaks (Fung et al. 2014; Hagen et al. 2018; Merchant 2011). These trends have reached their zenith amidst the COVID-19 pandemic, which the World Health Organization (WHO) and others note “… is the first pandemic in history in which technology and social media are being used on a massive scale to keep people safe, informed, productive, and connected” (WHO et al. 2020). The purported benefits of social media as a crisis communications tool are fairly intuitive. Government agencies and public health officials increasingly utilize social media to convey emergency
preparedness, mitigation, response, and recovery information to the public in large part because platforms such as Facebook and Twitter facilitate the rapid dissemination and retrieval of information, while also allowing for on-demand access to emergency updates as well as the multi-directional exchange of information between public leaders and members of affected communities (Hughes and Palen 2012; Merchant et al. 2011; Neely and Collins 2018). However, the same features that make social media a powerful vehicle for crisis communications also make it a potential source of chaos and disruption during public emergencies. As social media has become more prominent in crisis communications over recent years, many have cautioned against its potential to serve as a vehicle for the spread of inaccurate information, unverified rumors, and even malicious disinformation campaigns (i.e. Conrado et al 2016; Hughes and Palen 2012).
This “dark side” of social media – as some have termed it – has been evident in cases such as the 2013 Boston Marathon bombing, during which reported suspects were misidentified on social media, leading to both wrongful accusations and inefficient resource deployments (Henn 2013). In many cases, if not most, the dissemination of misinformation on social media is unintentional. For example, when Japan suffered a tsunami in 2011, platforms like Twitter were widely employed by victims to request help from first responders. However, in many cases these messages were later “retweeted”, making it difficult for first responders to distinguish between current and outdated information (Acar and Muraki 2011). In
1. A complete list of references is available in Appendix F.
cyberflorida.org | 3
the case of public health emergencies, studies have shown Twitter in particular to be a potential source of misinformation during epidemics such as the Zika and Ebola outbreaks (Krittanawong et al. 2020; Miller et al. 2017; Oyeyemi et al. 2014). Early in the COVID-19 pandemic, commentators and scholars alike raised concerns over both the volume and the at times dangerous nature of misinformation associated with the origins and treatment of the novel coronavirus (i.e. Kouzy et al. 2020; Suciu 2020). Some preliminary analysis has suggested that as much as 25% of the Twitter content referencing COVID-19 in February of 2020 may have included various forms of misinformation (Kouzy et al. 2020). The World Health Organization (WHO) labeled this phenomenon an “infodemic”, while others warned against “a pandemic of misinformation” infecting social networks and spreading faster than the virus itself (Moran 2020). The WHO has noted that in the case of COVID-19, misinformation can “lead to poor observance of public health measures, thus reducing their effectiveness and endangering countries’ ability to stop the pandemic” (WHO et al. 2020). With these concerns in mind, this study was supported by Cyber Florida to examine the prevalence and spread of misinformation in the initial phase of the Crisis and Emergency Risk Communications (CERC) cycle (CDC 2014). The research effort included three separate case-study analyses of discussion networks that emerged around specific COVID-related topics during March of 2020. Each case study included both a network analysis and a manual content analysis, and each examined a uniform set of metrics and characteristics around network structure, polarization, influence, and content (i.e. mis/disinformation). Particular attention is paid in this analysis to how early politicization of the COVID-19 pandemic may have affected the spread of misinformation.
A summary of the research methods and findings is presented in this report, and a full accounting of each case-study is available in Appendices A – C, respectively.
Data and Methods

For the purposes of this analysis, we examined Twitter data posted between March 1st – March 31st, 2020. While the first confirmed case of COVID-19 in the United States was recorded in late January of 2020, the outbreak was not fully recognized as a crisis in the United States until March of 2020. On March 11th, the World Health Organization (WHO) labeled COVID-19 as a “pandemic”, and two days later President Donald Trump declared COVID-19 a “National Emergency”, opening the door for federal funding to be used in an effort to slow the spread of the virus. That same day, the Trump administration issued a travel ban on non-U.S. citizens traveling to the United States from Europe (AJMC 2020). Thus, March 1st – 31st coincides with the “Initial Phase” of crisis communications, as outlined in the CDC’s Crisis and Emergency Risk Communication plan (CDC 2014). According to the CDC (2014), “decisions in the initial phase have critical implications. There are few second chances to get communications right during this phase of a crisis” (CDC 2014, p. 4). For this reason, “When communicating in the initial phase of an emergency, it is important to present information that is simple, credible, accurate, consistent, and delivered on time” (CDC 2014, p. 4). The CDC’s guidance highlights the importance of these earliest communications, as they lay the foundation for the information environment in which subsequent stages of the crisis communications process will take place. While the CERC framework is primarily tailored to emergency management professionals and organizations, its basic tenets can easily be applied to evaluate the broader information environment found on social media platforms such as Twitter during a public health crisis.
Data Collection

Using data initially provided by the University of Southern California2, we collected a total of 34,405,963 tweets from 39,622,354 unique Twitter accounts during March of 2020. From this data set we extracted a random sample of 1 million tweets for analysis (ME = 0.13 at 99% CI). This sample was then queried using keywords to create unique data files for each of the case-studies analyzed below.
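The reported sampling margin of error can be reproduced with the standard worst-case (p = 0.5) formula for a proportion at 99% confidence. The sketch below is our reconstruction of that calculation, not the authors' code; the use of a finite-population correction is an assumption (the result rounds to 0.13 percentage points either way):

```python
import math

def margin_of_error(n: int, population: int, z: float = 2.576) -> float:
    """Worst-case (p = 0.5) margin of error for a proportion, in percentage
    points, with a finite-population correction (z = 2.576 for 99% CI)."""
    p = 0.5
    se = math.sqrt(p * (1 - p) / n)                      # standard error
    fpc = math.sqrt((population - n) / (population - 1))  # finite-population correction
    return z * se * fpc * 100

# 1,000,000 tweets sampled from the 34,405,963 collected in March 2020
print(round(margin_of_error(1_000_000, 34_405_963), 2))  # ≈ 0.13
```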
Case-Studies

Using the initial sample of 1 million tweets, we identified keyword queries for several prominent topics of discussion during the initial phase of the COVID-19 pandemic. From these, we focused our analysis on three specific topics/case-studies:

1. The Efficacy of Masks and Face-Coverings
2. Potential Treatments for COVID-19
3. Conspiracy Theories Regarding the Origin of COVID-19
2. As part of a larger data collection effort, we collected tweets between January 21st and July 14th of 2020 using Twitter IDs made public by researchers at the University of Southern California. For more information about this data collection effort, please visit: https://github.com/echen102/COVID-19-TweetIDs. USC researchers initially created the data set using their own queries, which are available at https://github.com/echen102/COVID-19-TweetIDs/blob/master/keywords.txt
3. Twitter data were collected for a variety of conspiracy theories, including popularized theories involving Bill Gates, the influence of 5G networks on COVID spread, and the intentional creation of the COVID-19 virus as a bioweapon. However, the keyword queries and data quality did not allow for a reliable analysis of these data in most instances.
These topics were intentionally chosen to include two discussions about medical guidance/content and one non-medical/political discussion. The specific topics were chosen based on the volume of tweets returned by the keyword queries, as well as the resulting data quality and its suitability for analysis.3 A more complete summary of these case-studies – including the keyword queries – is provided in Appendices A – C, respectively.
Data Analysis

For each case-study, we (1) mapped the network structure, (2) identified the most influential actors in the discussion network, and (3) conducted a manual content analysis of tweets. Additionally, in the case of masks and face-coverings, we also conducted a “bot analysis” to identify potential social bot activity and malicious disinformation efforts.

Network Structure: Like other social media applications, discussion networks on Twitter form spontaneously as individuals interact with one another through behaviors such as “following”, “replying/retweeting”, “mentioning” and “liking”. Discussion networks tend to divide naturally into clusters (or “communities”) based on existing social connections as well as the development of hubs around influential actors and thought leaders (based on the topic being discussed). These clusters can be identified and visually depicted by observing patterns of interactions with the aid of modularity algorithms. These tools help to identify densely connected clusters of nodes (i.e. actors) and distinguish them from more sparsely connected nodes outside of the cluster. For this analysis, we created network maps for each of the case-studies in order to better understand the structure of the emerging discussion networks. This – coupled with manual content analysis – allowed us to identify factors such as the degree of political polarization in the discussion networks, the extent to which networks relied on subject-matter-experts/thought leaders, and the amount of information sharing that occurred across diverse communities (Smith et al. 2014).

Influential Actors: Along with mapping network structures, it is important to identify and understand the most influential actors within a discussion network and its subordinate community clusters.
Influential actors play a critical role in shaping the nature, content, and tone of communications within a discussion network, and they convey valuable information about the authenticity and reliability of a discussion network. For this analysis we used a PageRank algorithm to automatically identify the most influential actors in each discussion network. Prior research has identified PageRank as a reliable indicator of the trust that a user has among other members of the network (Caverlee et al. 2008; Gimenez-Garcia et al. 2016). Content Analysis: After mapping the network structures and identifying the most influential actors in each network, we drew a random sample of 300 tweets from each case-study for manual content analysis. Content analysis was conducted by a team that included medical experts and information scholars to ensure technical accuracy. Through the content analysis, an emergent coding scheme was used to identify the major themes/topics discussed in each
network, as well as the type and frequency of mis/disinformation circulated. Content analysis was also used to classify the influential actors in each case-study so that we could examine how content and dis/misinformation varied across different categories of actors.

Bot Detection and Analysis: In order to detect potential bot activity within the discussion network, we utilized the Botometer Application Programming Interface (API). Botometer is one of the most popular bot detection algorithms in use today. It employs over 1,100 Twitter features (including user-, friends-, network-, temporal-, content-, and language-based attributes) to estimate the probability of a Twitter account being a social bot (Varol et al. 2017). Botometer suggests that an account with a Botometer score of 0.5 or higher is likely to be a social bot (OSoMe & Indiana University 2020). The accuracy of Botometer was reported at approximately 86% by Varol et al. (2017) and Wojcik et al. (2018). A more detailed and technical description of the network analysis techniques utilized in the study is provided in Appendix D of this report.
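As an illustration of the analysis pipeline described above (community detection via a modularity algorithm, PageRank to rank influential actors, and Botometer's suggested 0.5 cutoff for flagging likely bots), the following sketch applies each step to a toy network with networkx. The edge list and bot scores are hypothetical placeholders, not data from this study:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical retweet/mention edges: (source account, target account)
edges = [("a", "hub1"), ("b", "hub1"), ("c", "hub1"),
         ("d", "hub2"), ("e", "hub2"), ("hub1", "hub2")]
G = nx.DiGraph(edges)

# (1) Modularity-based community detection (on the undirected projection)
communities = greedy_modularity_communities(G.to_undirected())

# (2) PageRank to identify the most influential actors
ranks = nx.pagerank(G)
top = sorted(ranks, key=ranks.get, reverse=True)

# (3) Flag likely social bots using Botometer's suggested 0.5 threshold
bot_scores = {"hub1": 0.12, "hub2": 0.08, "e": 0.91}  # placeholder scores
likely_bots = {acct for acct, score in bot_scores.items() if score >= 0.5}

print(len(communities), top[0], likely_bots)
```

In a real analysis the edge list would come from the Twitter API and the scores from the Botometer API; the pipeline shape, however, is the same.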
Summary of Findings

A concise summary of the major findings from each case-study is provided in this section. A complete accounting of each case-study analysis (including full charts and tables) is available in Appendices A – C of this report.
Case-Study 1: Masks and Face-Coverings

• In the case of masks and face-coverings, there was no evidence of malicious disinformation efforts or widespread bot activity.4 The most influential accounts in this discussion network were identifiable and, in most cases, prominent individuals/organizations, suggesting that bot activity did not significantly impact the discussion of masks and face-coverings during the early stages of the pandemic (Table 1). Notably, in discussion networks that are heavily influenced by bot activity, it is not uncommon for nearly half (or more) of the most influential accounts to be social bots, due to their manipulation of the metrics that determine PageRank (Hagen et al. 2020).
• In the initial phase of the crisis communications cycle, a considerable amount of messaging from public health organizations, healthcare providers, and media outlets appears to have been rushed, premature, and underdeveloped. Much of this early messaging was later changed or retracted as guidance evolved, but this led to a confusing and contradictory information environment over the long term.
4. It is important to note that when discussing bot activity in the context of this analysis, our focus was on the influence of social bots, not merely their presence in the discussion network. Social bots are known to be present throughout discussion networks in many cases, though our analysis showed that in this case their presence did not heavily influence the network’s structure or the metrics used to identify influential actors.
Table 1. Most Influential Actors by PageRank

ID                 Category                     Location        PageRank
@tedlieu           Political (U.S. Congress)    USA             0.024043
@realDonaldTrump   Political (U.S. President)   USA             0.018735
@spectatorindex    Media Org/Journalist         International   0.012699
@OH_mes2           Individual (Verifiable)      International   0.011515
@DrDenaGrayson     Medical Doctor (Political)   USA             0.011094
@BNODesk           Media Org/Journalist         International   0.010469
@RealJamesWoods    Celebrity (Political)        USA             0.008612
@charliekirk11     Political (Commentator)      USA             0.006729
@mitchellvii       Individual (Unverifiable)    USA             0.005995
@NorbertElekes     Entrepreneur                 International   0.005851

Source: Twitter API
• The discussion network surrounding masks and face-coverings became highly politicized early on, even prior to the Trump administration taking a more controversial stance on masks in April of 2020. Figure 1 shows that neither political cluster (Democrats = purple; Republicans = green) was operating in the same network area as the most prominent medical or domestic media sources (blue). While the Democratic network cluster (heavily influenced by Congressman Ted Lieu of California) was more connected to mainstream media and expert medical sources, both political clusters were holding their own discussions, largely ignoring one another and only tangentially engaging with non-partisan sources of information and expertise.
• The U.S. discussion network was considerably more politicized and less reliant on subject-matter-expertise than the broader, international discussion network. In the case of the United States, 60% of the most influential accounts belonged to political actors (i.e. public office holders, political activists, etc.), while an additional 20% were attached to actors/medical experts who had taken a decidedly political stance. Only one U.S.-based account among the 20 most influential actors belonged to a media outlet (@CNN), compared to 50% of the most influential international accounts (see Table 2).

Table 2. Most Influential Accounts by Category and Location (N=20)

Category                          U.S. Based Accounts   International Accounts
Media Organizations/Journalists   1                     5
Medical Doctors/Organizations     1                     2
Political Actors/Activists        6                     1
Entrepreneurs                     0                     1
Celebrities                       1                     0
Suspended Accounts/Unverifiable   1                     0

Source: Twitter API
Figure 1 Network Map, COVID-19 Masks and Face-Coverings
Case-Study 2: Potential Treatments

• While the pace and immediacy of the modern information environment is believed to reinforce the “rush to publish” mentality among many organizations (Bauchner 2017; Redden 2020), this case-study highlights the importance of proper information vetting in the initial phase of the crisis communications cycle.
  ◦ Although hydroxychloroquine has no proven efficacy in treating COVID-19 (Kuperschmidt 2020; Penn Medicine 2020), it was widely touted as an effective treatment in some network clusters based on misinterpretations and/or over-exuberant interpretations of preliminary, “small-N” research studies.
  ◦ The content analysis also revealed widespread dissemination of misinformation related to the anti-inflammatory medication ibuprofen. This misinformation often originated with healthcare providers and public health organizations and was promptly retracted, but only after being widely recirculated throughout the network (Nunneley et al. 2020).
• While the overall network structure was in some regards consistent with expectations for a global news story, the network map did reveal significant politicization around the topic of hydroxychloroquine.
  ◦ Consistent with this network structure, we found that the spread of misinformation related to hydroxychloroquine was largely contained within isolated (and often politicized) network clusters. Consistent with the “echo-chamber” hypothesis (i.e. Sunstein 2017), this demonstrates how misinformation can spread unchallenged within a polarized network.

Figure 2 Network Map, COVID-19 Treatment and Cures
Figure 3 Amplified Network Map: Originators of misinformation (blue dots), Retweeters of misinformation (red dots); (R) Zoomed out network map for reference
Table 3. Most Influential Actors by PageRank

Twitter ID        PageRank   Category
@MichaelCoudrey   0.023979   Businessman/Investor
@philbak1         0.014876   Businessman/Investor
@EdselSalvana     0.012301   Medical Doctor
@CNN              0.01198    News Outlet
@rsamaan          0.010269   Medical Doctor
@AdamMilstein     0.009978   Businessman/Investor
@paulsperry_      0.009728   Journalist
@Aco98rain        0.007618   Individual Account
@IngrahamAngle    0.007482   Journalist
@JamesTodaroMD    0.007434   Medical Doctor

Note: A high PageRank indicates a high level of trust in, and influence of, the account within the network.
Source: Twitter API
Case-Study 3: Conspiracy Theories

The third and final case-study examined the early spread of conspiracy theories surrounding the origins of the virus. While a number of conspiracy theories became popularized in the initial phase of the pandemic, we queried keywords aimed specifically at those theories which accused U.S. Army and CIA personnel of carrying the virus to China in late 2019.5 The most prominent and specific of these stories claimed that Maatje Benassi, a U.S. Army reservist, was “patient zero” who carried the virus to China while competing in the Military World Games (hosted in Wuhan, China, during October of 2019). According to media reports, the claim was initially promoted by conspiracy theorist George Webb to his 100,000-plus YouTube subscribers (O’Sullivan 2020). While the claim did not gain widespread attention in the mainstream American media, it was picked up by Chinese state-run media in an attempt to insinuate that U.S. personnel may have been responsible for the initial outbreak of the virus in Wuhan (Patterson 2020). In this case-study we examined the extent to which this and similar conspiracy theories were circulated in the initial phase of the pandemic, the
nature of the dis/misinformation observed, and the types of actors and Twitter accounts through which it was disseminated.

• Despite the often-insidious nature of this messaging, there was no evidence that social bots were influential in the spread of these early conspiracy theories regarding COVID-19 (at least in the context of Twitter during March of 2020; in some cases these theories may have been more widely circulated on other platforms, such as YouTube).
• Several conspiracy theories were circulated in the initial stage of the crisis communication cycle, but these did not gain widespread attention, and they appear to have been primarily contained to smaller, fringe community clusters.
5. Twitter data were collected for a variety of conspiracy theories, including popularized theories involving Bill Gates, the influence of 5G networks on COVID spread, and the intentional creation of the COVID-19 virus as a bioweapon. However, the keyword queries and data quality did not allow for a reliable analysis of these data in most instances.
Figure 4 Network Map, COVID-19 U.S. Army Conspiracy Theory
  ◦ The largest community clusters formed around journalists and media personalities, such as @kylegriffin1 (MSNBC) and @carlquintanilla (CNBC). These community clusters were focused on legitimate news stories that were captured by the keyword queries, such as a recently uncovered CIA hacking operation and the U.S. military’s response to the coronavirus. The discussion of conspiracy theories was primarily isolated to some of the smaller clusters and isolates, with relatively little connection to other actors in the network.
• Aside from comments by U.S. Congressman Lee Zeldin of New York, there does not appear to have been an “official” attempt or organized strategy to counter this misinformation on the part of public officials or the media.
  ◦ Most messages contradicting these conspiratorial claims took the form of politicized commentary from individual users, and in many cases these were parlayed into additional or counter conspiracy theories.
Conclusions

1. The findings highlight several areas of opportunity for public health organizations and officials to improve upon their use of social media in the initial stages of a public health crisis. In the case of COVID-19, the early information environment was marked by (1) inadequate information vetting, (2) contradictory information from medical experts and media outlets, and (3) the noticeable lack of an official, organized campaign to counter and correct misinformation. While this is consistent with the common informational challenges of a nascent public emergency (CDC 2014), understanding these challenges and learning from these shortcomings can help public agencies and officials respond better in future crises.
• Public health organizations and other officials should always remember the crucial role they play in informing the public and take care to ensure that they are delivering accurate, consistent information on social media.
• Remember that once information is shared on social media, it cannot easily be retracted. When sharing information about the COVID-19 pandemic or other critical topics, public organizations and officials should clearly communicate to the public that the situation is evolving and that guidelines may change as new information becomes available.
2. The findings also highlight the potential value of individual actors as thought-leaders and informational hubs during public health crises. Community clusters were much more likely to form spontaneously around individual medical experts and/or media personalities than around public health organizations or media outlets. This is consistent with the conclusions of prior research, which has found individuals to be more trusted and influential information brokers in public health discussion networks (i.e. Hagen et al. 2018; Whelan et al. 2011). We speculate that this may be due in part to the flexibility and responsiveness afforded to individual actors/account holders, relative to organizations that are subject to a bureaucratic hierarchy when engaging with the public.
• Regardless of whether information on social media comes from an individual or a public organization, consider your source before trusting and/or resharing the content. Evaluate whether the source (and the sources they used) has a history of providing honest, credible information. If not, find a more reliable source to confirm the information.
• Double-check and analyze your sources before resharing their content. Search for their other social media accounts to learn more about them: do they have a political or religious point of view that might give them a biased reason for sharing that particular information?
• Search for supporting information to confirm the credibility of the information that you’re receiving from an individual or organization on social media. If you can’t find another reliable source to confirm the information, or the original source isn’t credible, refrain from sharing it.
3. While the early information environment surrounding COVID-19 suffered from a wide-range of misinformation, we did not find evidence of influential bot activity or widely effective disinformation campaigns (at least in the context of Twitter during March of 2020). This is consistent with prior research, which has suggested that public health networks are generally not subject to the same level of bot influence as is often seen in more political networks (i.e. Hagen et al. 2018; Hagen et al. 2020). The global nature of the COVID-19 pandemic may also have played a role in limiting well-orchestrated malicious activities. • When getting important information from social media, look at the source’s account activity and history for bot-like behavior. Genuine accounts generally have several interests and post content from a variety of sources. • With urgent topics like COVID-19, always take the time to research a claim before resharing. Even if the information appears to come from a trustworthy source, it’s best to confirm before spreading potential misinformation or disinformation. When in doubt, navigate directly to the source to confirm the claim.
4. Even in the case of medical topics, the discussion networks surrounding COVID-19 became highly politicized during the initial phase of the pandemic. This may be due in part to the Trump administration’s initial reactions to the pandemic, and it may also reflect broader trends toward homophily and polarization in the United States (Bishop 2009; Pew Research Center 2019). In either case, the findings suggest that from the outset of the pandemic, partisan groups were operating in decidedly different information environments, which is likely to further exacerbate polarization and political deadlock. The data appear to support the “echo-chamber” hypothesis, which suggests that polarized discussion networks can hasten and facilitate the unchecked spread of misinformation, potentially leading to radicalization (Sunstein 2017).
• Claims made on social media can often be taken out of context and redesigned to support a specific cause or belief. When searching for information on social media, take the time to verify the information against a variety of trusted news sources to ensure that you are getting unbiased, accurate information.
• Even if the source is a public figure or trusted organization, it’s always best to double-check before resharing to prevent the spread of potentially biased or politically/religiously influenced claims.
Appendix A: Case-Study 1, Masks and Face-Coverings

Between March 1st and 31st, the topic of masks and face-coverings was widely discussed, particularly as initial concerns were being raised over the supply and availability of personal protective equipment (PPE) for first responders and medical service providers (Jacobs et al. 2020; WHO 2020). While these data predate a more contentious political debate that began in April of 2020 (Reuters 2020), they do include initial discussions and deliberations over the efficacy of masks and face-coverings in preventing the spread of COVID-19, providing some insights into how the initial information environment (vis-à-vis social media) may have shaped and influenced subsequent political and medical discussions. In this case-study we examined early discussions surrounding the use of masks and face-coverings, including a comparison of content and influential actors in the United States vs. internationally. In particular, we focused on the early spread of information which would later be found inconsistent with prevailing medical guidance on the use of masks and face-coverings. Keywords used to build the initial data set for this case-study included the following queries:

[‘mask’, ‘face mask’, ‘n95’, ‘cloth’, ‘fabrik’, ‘fabrick’, ‘fabric’, ‘medical mask’, ‘medical-mask’, ‘face-mask’, ‘musk’, ‘face coverings’, ‘face-covering’, ‘face covering’, ‘cloth-covering’, ‘face-cover’, ‘face cover’, ‘bandana’, ‘t-shirt face mask’, ‘t-shirt mask’, ‘t shirt mask’, ‘tshirt mask’, ‘tshirt-mask’, ‘reusable face mask’, ‘surgical-mask’, ‘surgical mask’, ‘n90’, ‘respirator mask’, ‘respiratory-mask’, ‘respirator-mask’, ‘gas mask’, ‘n99’, ‘ffp1’, ‘ffp2’, ‘ffp3’, ‘3m’]

From our initial sample of 1 million tweets, a total of 26,523 tweets were identified for analysis in this case-study using the queries above.
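To illustrate how keyword queries like the list above can be applied to raw tweet text, the sketch below uses case-insensitive, whole-word matching on an abbreviated subset of the queries. The report does not specify the exact matching rules the research team used, so this is an assumption:

```python
import re

# Abbreviated, hypothetical subset of the mask/face-covering queries above
KEYWORDS = ["mask", "face mask", "n95", "face covering", "bandana", "surgical mask"]
PATTERN = re.compile(
    r"\b(" + "|".join(re.escape(k) for k in KEYWORDS) + r")\b", re.IGNORECASE
)

def matches_query(tweet_text: str) -> bool:
    """True if the tweet contains any keyword as a whole word/phrase."""
    return PATTERN.search(tweet_text) is not None

tweets = [
    "CDC now recommends a cloth face covering in public",
    "Wear an N95 if you can find one",
    "Unmasking the real story",   # 'unmasking' should NOT match 'mask'
]
print([matches_query(t) for t in tweets])  # → [True, True, False]
```

Word-boundary matching avoids false positives such as "unmasking", though broad queries like ‘musk’ in the full list would still capture unrelated content, which is presumably one reason the report screens each topic's data quality before analysis.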
Network Structure

In the case of COVID-19, interpreting network structures can be challenging given the global nature of the conversation, which obscures the shape that purely domestic discussion networks might take (i.e. Hagen et al. 2020). However, taken together, Figures A1 and A2 below provide several key insights into the spontaneous network that formed around the discussion of masks and face-coverings in March of 2020. The network reveals four major community clusters (or modularity classes). These are distinguished in Figure A1 by the colors purple, blue, black, and green. The purple cluster represents a domestic (i.e. U.S.) left-wing political community, densely connected and populated by influential actors such as Congressman Ted Lieu of California, Dr. Dena Grayson (a medical expert and former Democratic House candidate), and Scott Dworkin (@funder), co-founder of the Democratic Coalition Against Trump. In contrast, the green cluster represents a domestic right-wing political community, centering primarily around President Donald Trump but also featuring other influential actors such as James Woods (a conservative, pro-Trump actor) and Charlie Kirk (a well-known conservative political activist). The blue and black clusters represent discussion communities that have emerged around various media outlets, with the blue being more domestic (i.e. CNN, Reuters, etc.) and the black being more international (i.e. BNO News, Netherlands).
Drawing from the Pew Research Center’s “conversational archetypes” (Smith et al. 2014), this network is best described as a combination of a “Broadcast Network” and “Polarized Crowd” discussion network. “Broadcast networks” typically take on a “hub and spoke” pattern, with media outlets and personalities taking a central role in the network and broadcasting content to distinct network communities. We clearly see elements of this network structure in Figure A1. However, the presence of two distinct political clusters with limited connectivity also suggests elements of a “Polarized Crowd”, which is a network featuring two large, densely connected groups with little communication between them (Smith et al. 2014). This is made obvious by the disconnected nature of the purple (left-wing) and green (right-wing) community clusters. This suggests that the discussion over masks and face coverings became politicized early on, even prior to the more controversial stance espoused by the Trump administration in April of 2020. This may stem in part from early disagreements on the issue of masks, though to a degree it may also simply reflect the already polarized nature of the American political landscape (Bishop 2009; Pew Research Center 2019). In either case, the implications of this network structure are significant. Smith et al. (2014) note that in the case of “Polarized Crowds”, “…there is little conversation between these groups despite the fact that they are focused on the same topic” (p. 4). They go on to note that “Polarized Crowds on Twitter are not arguing. They are ignoring one another while pointing to different web resources and using different hashtags” (p. 4). In other words, the network depicts media organizations broadcasting content to more independent audiences, while Democrats and Republicans, liberals and conservatives, peel off into more isolated network clusters. This finding is consistent with the “echo-chamber” hypothesis (i.e. 
Sunstein 2001; 2017), which argues that the sorting and selection mechanisms provided by social media platforms result in the formation of homophilous network clusters that echo and reinforce existing beliefs while isolating themselves from counter-points or diverse opinions. In this instance, it should be noted that the left-wing (or Democratic) network cluster is more densely connected to the media broadcast portion of the network than the right-wing (or Republican) cluster, though the amplification of the network provided in Figure A2 shows that it too is a distinct network cluster.
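The community clusters (modularity classes) discussed above are typically recovered with modularity-based community detection. The sketch below is a minimal, hypothetical illustration using networkx on a toy two-community graph; the report's own networks were far larger, and the node names here are invented.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy undirected interaction graph with two hypothetical, loosely connected
# political communities, mimicking the "Polarized Crowd" structure.
G = nx.Graph()
left = [("L1", "L2"), ("L1", "L3"), ("L2", "L3"), ("L3", "L4")]
right = [("R1", "R2"), ("R1", "R3"), ("R2", "R3"), ("R3", "R4")]
bridge = [("L4", "R4")]  # the sparse link between the two crowds
G.add_edges_from(left + right + bridge)

# Greedy modularity maximization recovers the clusters ("modularity
# classes"), analogous to the colored communities in Figure A1.
communities = greedy_modularity_communities(G)
for i, members in enumerate(communities):
    print(i, sorted(members))
```

The sparse bridge between the two groups is what the "Polarized Crowd" archetype predicts: the algorithm splits the graph exactly where communication is thinnest.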
Figure A1 Network Map, COVID-19 Masks and Face-Coverings
Figure A2 Network Map (Magnified), COVID-19 Masks and Face-Coverings
Influential Actors Using the PageRank metric discussed above, we identified the 20 most “influential” actors in the discussion network. Table A1 provides a summary of the Top 10. The results suggest a mix of political and media actors, as well as some individual accounts/users. The findings on influential actors are consistent with the network structure observed above, though noticeably absent from the top 10 list are medical organizations and healthcare providers. In total, only 3 of the 20 most influential actors were classified as medical organizations/experts. (A complete list of the top 20 accounts by PageRank is available in Appendix B). Among the 20 most influential actors, 10 were identified as U.S. based accounts, while 10 were identified as being located outside of the U.S. Table A2 provides a comparison of the top 20 accounts by PageRank based on geographic locale. The results show a sharp contrast between the most influential U.S. and international actors. In the case of the United States, 60% of the most influential accounts were attached to political actors (i.e. public office holders, political activists, etc.), while an additional 20% were attached to actors/medical experts who had taken a decidedly politicized stance. Only one U.S. based account in the top 20 belonged to a media outlet (@CNN, #14). In contrast, half of the most influential international accounts were attached to media organizations and/or journalists, while only one account belonged to a political actor (Narendra Modi, Prime Minister of India). In both cases, medical organizations/experts were not particularly influential in the early discussion network surrounding masks and face-coverings. These findings are again consistent with the observed polarization of the discussion network above, and they suggest that in the U.S. context the discussion over masks and face-coverings became highly political at the outset, with media outlets and medical experts often being unable to effectively facilitate the flow of objective information given the partisan nature and structure of the discussion network.

Table A1. Most Influential Actors by PageRank
ID                 Category                     Location        PageRank
@tedlieu           Political (U.S. Congress)    USA             0.024043
@realDonaldTrump   Political (U.S. President)   USA             0.018735
@spectatorindex    Media Org/Journalist         International   0.012699
@OH_mes2           Individual (Verifiable)      International   0.011515
@DrDenaGrayson     Medical Doctor (Political)   USA             0.011094
@BNODesk           Media Org/Journalist         International   0.010469
@RealJamesWoods    Celebrity (Political)        USA             0.008612
@charliekirk11     Political (Commentator)      USA             0.006729
@mitchellvii       Individual (Unverifiable)    USA             0.005995
@NorbertElekes     Entrepreneur                 International   0.005851
Source: Twitter API

Table A2. Highest PageRank Accounts by Category and Location
Category                          U.S. Based Accounts   International Accounts
Media Organizations/Journalists   1                     5
Medical Organizations/Experts     1                     2
Political Actors/Activists        6                     1
Entrepreneur                      0                     1
Actor/Celebrity                   1                     0
Suspended Accounts/Unverifiable   1                     0
Source: Twitter API

While one account in the Top 20 was suspended, the majority of accounts were identifiable and verifiable users. This suggests that there was not highly influential bot activity in the discussion network (at least in the initial stage of the pandemic).
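The PageRank calculation used to rank accounts can be illustrated with networkx. This is a hedged sketch on a toy retweet graph, not the study's data: the handles echo Table A1, but the edges and scores are invented for illustration.

```python
import networkx as nx

# Hypothetical toy retweet edges: (retweeter, original_author).
# In the report's data these would come from the Twitter API.
retweets = [
    ("user_a", "tedlieu"), ("user_b", "tedlieu"), ("user_c", "tedlieu"),
    ("user_a", "realDonaldTrump"), ("user_d", "realDonaldTrump"),
    ("user_d", "spectatorindex"),
]

# A directed edge from retweeter to author acts like a "vote" for the
# author, so widely retweeted accounts accumulate PageRank.
G = nx.DiGraph()
G.add_edges_from(retweets)

scores = nx.pagerank(G, alpha=0.85)
top = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
for account, score in top[:3]:
    print(f"@{account}: {score:.4f}")
```

Because PageRank weights a retweet by the rank of the retweeter, it captures influence more faithfully than a raw retweet count.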
Content Analysis A random sample of 300 tweets drawn from this discussion network suggested a largely unhelpful, if not counter-productive, information environment in the initial stage of the pandemic. The preponderance of tweets drawn from these keyword queries (71%) included tangential references to masks but did not contain information regarding their efficacy in slowing or preventing the spread of COVID-19. For example, the most widely circulated tweet (retweeted 56 times in this 300-tweet sample) stated that “RT @Pog_llins: 900 people get Coronavirus and the whole world wants to wear surgical mask, 30 million people have AIDS but still nobody wants to wear a condom”. The second most common topic included news posts and political comments about Florida Congressman Matt Gaetz, who made early headlines by wearing a gas mask on the floor of the U.S. Congress. (RT @washingtonpost: Resident dies in Rep. Matt Gaetz’s district, days after congressman made light of epidemic with massive gas mask). The remaining 29% of tweets were roughly evenly split among those advising against (13%) and in favor (12%) of widespread mask use, with a small share (4%) unclear. Information provided by both medical experts and media outlets was contradictory, with many individuals/organizations proving to be early proliferators of information which they would later contradict. For example, CNN contributor Dr. Sanjay Gupta, who would later become an outspoken proponent of widespread mask use (Gupta 2020), offered different guidance in the initial phase of the pandemic: (RT @drsanjaygupta: Lots of questions about masks. Here is the difference between a surgical mask and a N95 respirator. Neither are necessary for healthy people unless you are a healthcare worker. #coronavirus @cnn https://t.co/VLf38nU83x). For the purpose of this case-study, it is important to note that misinformation was defined and classified using Benkler et al.’s (2018) definition: “Communication of false information without
intent to deceive, manipulate, or otherwise obtain an outcome” (p. 37). We found no evidence of malicious disinformation (i.e. “Dissemination of explicitly false or misleading information”). Furthermore, misinformation was classified using current guidance on the use of masks and face coverings (CDC 2020); at the time that these tweets were initially disseminated, there was no consistent guidance from the medical community on this topic. While classifying these initial tweets as “misinformation” may seem unfair given the lack of official guidance at the time, we determined that it was important to understand the nature and spread of inaccurate information during the initial phase of the crisis, as it lays the groundwork for subsequent communications and shapes the long-term information environment. The CERC framework underscores this point and emphasizes the importance of accurate and consistent messaging in the initial stage of a crisis, noting that “There are few second chances to get communications right during this phase of a crisis” (CDC 2014, p. 4).
Using current medical and public health guidance, 39 of 300 tweets (13%) were classified as “misinformation” (Table A3). Just over half of these (51.3%) indicated that only individuals who were sick should wear masks, while roughly a quarter (23.1%) went further and suggested that masks were not at all efficacious in preventing the spread of COVID-19. Some examples include:
RT @WHO: When to use mask 😷 • If you are healthy, you only need to wear a mask if you are taking care of a person with suspected #coronavirus infection. • Wear a mask if you are coughing or sneezing. More https://t.co/4odGgqxAKP #COVID19 https://t.co/1aM8MyaSmF

RT @DrAmalinaBakri: Yes, if you are well, wearing a mask does not really protect you from COVID-19. You only need to wear it if you have symptoms to prevent transmission to other people or if you have to deal/take care of people who have symptoms e.g. healthcare staff. https://t.co/buec7xbWUr

It is again important to note that these communications were not spreading information known to be incorrect at the time. However, the information is now known to be inconsistent with current guidance (i.e. CDC 2020) and is thus classified as misinformation due to its influence on the information environment during the initial phase of the pandemic.

Table A3. Summary of Content Analysis
                                           Frequency   Percent (Group Total)
Misinformation (Anti-Mask Tweets)          39          13.0%
  Only Required if Sick                    20          51.3
  Inefficacious                            9           23.1
  Only N-95 Masks Work                     4           10.2
  Commentary/Protest                       6           15.4
Corrective Information (Pro-Mask Tweets)   36          12.0%
  Encouragement                            19          52.8
  Guidance                                 7           19.4
  Correcting Misinformation                6           16.7
  News                                     2           5.5
  Other                                    2           5.5
Not Applicable                             214         71.0%
  AIDS Comment                             56          26.2
  News                                     43          20.1
  Supply                                   41          19.2
  Other                                    74          34.5
Unclear                                    11          4.0%
TOTAL                                      300         100
While a similar number of tweets (n=36, 12%) advocated in favor of masks, very few of these (only 16.7%) constituted direct attempts to counter incorrect messaging about their efficacy. More than half of these messages (52.8%) were simply individuals encouraging others to wear a mask (i.e. RT @sudhirchaudhary: On my way to #dubai for the @WIONews Global Summit. Armed with a mask to beat the #coronavirus. Precaution is better than panic. Stay safe. See you soon. 😷 https://t.co/OceewiFfDK). In many cases, these came from international (non-U.S. based) accounts. About a fifth of these tweets (19.4%) offered guidance on the correct way to wear a mask, though in many cases they did not directly advocate their use or counter prevailing misinformation (i.e. RT @evankirstel: 😷 Wearing a face mask does help if you do it properly. Seto Wing Hong of Hong Kong University demonstrates the correct way to wear a face mask #CoronaVirusChallenge #COVID19 #COVID😷19 #FridayThoughts #coronavirus https://t.co/euHLmwNmav).
In order to better understand the spread of misinformation during the initial phase of the crisis, we identified the sources of these messages using Twitter data (and additional research where needed). When possible, we endeavored to identify both the source of the original tweet as well as the retweeter, as both may send “cues” as to the authority and reliability of the information. As Table A4 shows, much of the content that would later be identified as misinformation was initially circulated by public health organizations, such as the WHO (20.5%), and identifiable healthcare professionals (20.5%). For example, one commonly retweeted message from the World Health Organization (WHO) read as follows: RT @WHO: When to use mask 😷 • If you are healthy, you only need to wear a mask if you are taking care of a person with suspected #coronavirus infection. • Wear a mask if you are coughing or sneezing. More https://t.co/4odGgqxAKP #COVID19 https://t.co/1aM8MyaSmF

Domestic media outlets/actors also played a considerable role in disseminating this early misinformation (15.4%). For example, along with the aforementioned tweet from Dr. Sanjay Gupta at CNN, tweets like this (from New York Magazine) were common in the initial phase of the pandemic: RT @NYMag: At best, buying a mask is unnecessary caution. At worst, it’s contributing to paranoia — and a supply shortage among people who actually need them https://t.co/4ahBLtH1Rq

Table A4. Misinformation by Type of Source
                           Frequency   Percentage of Misinformation
WHO                        8           20.5
Domestic Media             6           15.4
International Media        1           2.6
Individual Accounts        17          43.6
Healthcare Professionals   8           20.5
Politicians                2           5.1
Note: Percentages may exceed 100% as both the originator and retweeter were identified when possible.
Further analysis of the “corrective information” (Table A5) shows that much of the encouragement to wear masks came from individual/unidentifiable users (44.4%), though there was a demonstrably greater effort to promote masks among international media outlets (16.7%) than U.S. based media outlets (8.3%). At least in these early stages of the pandemic, there does not appear to have been any well-organized or collective effort to correct misinformation related to the efficacy of masks and face-coverings. This is likely due in large part to the newness of the pandemic and the lack of guidance available from public health officials. It’s also noteworthy that in the early stage of the pandemic, significant concerns were raised over disruptions of the medical supply chain and the availability of PPE to healthcare professionals (Jacobs et al. 2020; WHO 2020). These concerns may have factored into the early information environment.

Table A5. Corrective Information by Type of Source
                            Frequency   Percentage of Corrective Information
Academic                    3           8.3
CDC                         2           5.6
Domestic Media              3           8.3
International Media         6           16.7
Celebrity (International)   3           8.3
Individual Accounts         16          44.4
Healthcare Professionals    3           8.3
Note: Percentages may exceed 100% as both the originator and retweeter were identified when possible.
Collectively, the content analysis revealed a disjointed and confusing information environment, with conflicting guidance being circulated by various medical, media, and political actors. In many cases these same actors would later offer correct guidance, though by then it stood in conflict with their earlier messaging, creating unnecessary credibility concerns. For those who may have proactively sought out information regarding the efficacy of masks and face-coverings on Twitter, this guidance was embedded in a considerable amount of “noise” and political sniping. This is largely consistent with the observed challenges of the initial phase of a crisis communications cycle (CDC 2014) as well as the polarized nature of the discussion network. Nonetheless, the data suggest that there is substantial room for crisis communicators and public health professionals to improve their use of social media in the initial stages of a public emergency.
Bot Detection and Analysis We used the Botometer Application Programming Interface (API) to automatically calculate bot-likelihood scores for each Twitter account in this case-study. Botometer provides “bot likelihood scores” that indicate the probability of a Twitter account being a social bot (Varol et al. 2017). Research suggests that a Botometer score of 0.5 or higher indicates a likely social bot (OSoMe & Indiana University 2020). For this analysis, Botometer flagged fewer than 5% of Twitter accounts as likely bots, which is significantly lower than the percentage of bots found in a previous study of political discussions on Twitter, in which about 23% of accounts were identified as likely bots (Hagen et al. 2020). A before-and-after bot removal analysis showed that bot activities in this particular COVID-19 discussion did not have significant effects on the network structure or measurement of influential-ness6.
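The thresholding step described above (treating scores of 0.5 or higher as likely bots) can be sketched as follows. The account handles and scores here are hypothetical; in the study itself the scores came from the Botometer API.

```python
# Hypothetical bot-likelihood scores keyed by account handle, each in [0, 1];
# in the study these values were returned by the Botometer API.
scores = {
    "@news_feed": 0.12,
    "@auto_blaster": 0.91,
    "@jane_doe": 0.07,
    "@covid_updates_bot": 0.64,
    "@john_q": 0.33,
}

BOT_THRESHOLD = 0.5  # scores at or above this are treated as likely bots

likely_bots = {acct for acct, s in scores.items() if s >= BOT_THRESHOLD}
share_flagged = len(likely_bots) / len(scores)

print(sorted(likely_bots))  # accounts removed in the before/after comparison
print(f"{share_flagged:.0%} of accounts flagged as likely bots")
```

The "before-and-after" analysis then amounts to recomputing the network metrics (e.g. PageRank) on the graph with the flagged accounts removed and comparing the rankings.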
Major Takeaways and Conclusions
• The list of influential accounts (as measured by PageRank) shows no indications of a malicious disinformation campaign or highly influential bot activity, nor did a before-and-after analysis of the network once likely social bots were removed.
• The discussion network surrounding masks and face-coverings became highly politicized early on, even prior to the Trump administration’s controversial stance on masks.
  - Neither influential political group (Republicans/Democrats) was operating in the same network cluster as major medical or domestic media sources.
  - This finding may be influenced in part by the prominence of the gas mask story involving Florida Congressman Matt Gaetz.
  - The U.S. discussion network appears to be more politicized and less reliant on subject-matter-expertise than the broader, international network.
• A considerable amount of messaging from public health organizations, healthcare providers, and media outlets appears to have been rushed, premature, and undeveloped. This led to a confusing and contradictory information environment over the long-term.
• During the initial phase of the pandemic, there does not appear to have been an effectively organized and concerted effort on the part of public health organizations or the media to counter and correct early misinformation on social media.
6 It is important to note that when discussing bot activity in the context of this analysis, our focus was on the influential-ness of social bots, not their presence in the discussion network. Social bots are known to be present throughout discussion networks in many cases, though our analysis showed that in this case their presence did not heavily influence the network’s structure or the metrics used to identify influential actors.
Appendix B: Case-Study 2, Potential Treatments By mid-March, widespread discussions had begun over the potential efficacy of several medications in treating COVID-19. In many cases these discussions were prompted by anecdotal evidence and/or unofficial, “small-N” studies that suggested potential benefits (Alexander et al. 2020; Chowdhury et al. 2020). Among the most commonly discussed potential treatments was hydroxychloroquine (and a family of related medications), which was touted by President Trump as a potential breakthrough in treating the virus. On March 21st, President Trump cited a later disconfirmed study when tweeting that “Hydroxychloroquine and Azithromycin, taken together, have a real chance to be one of the biggest game changers in the history of medicine.” Approximately a week later, and at President Trump’s behest, the FDA approved the emergency use of hydroxychloroquine as a treatment for COVID-19 (Solender 2020), though to date the drug has no proven efficacy in treating the novel coronavirus (Kupferschmidt 2020; Penn Medicine 2020). At the same time, concerns were also being raised over potential adverse effects from the popular anti-inflammatory medication ibuprofen (Moore et al. 2020). In this case-study we examined initial discussions about these treatments, including the sources and spread of early misinformation about the efficacy of several medications. Keywords used to build the initial data set for this case-study included the following queries: [‘rx’, ‘malaria rx’, ‘hydroxy’,’hydroxychloroquine’, ‘chloroquine’,’zithromax’, ‘azithromycin’, ‘hydroxee’,’chloroqueene’,’chloroquene’, ‘arbidol’,’remdesivir’, ‘remdesiveer’,’steroids’, ’shuanghuanglian’, ‘ibuprofen’,’ibuprofeen’,’iboprofein’] From our initial sample of 1 million tweets, a total of 2,029 tweets were identified for analysis in this case-study using the queries above.
Network Structure Figure B1 provides a visual map of the network associated with discussions of potential treatments for COVID-19 during March of 2020. Utilizing Pew’s “community archetypes”, the network can be best described as a hybrid example of the “Polarized Crowd” and “Community Cluster” network structures. The polarized crowd structure is evident from the hourglass shape of the discussion network (Figure B1) and is perhaps unsurprising given the early politicization of the discussion over potential treatments, particularly hydroxychloroquine (Qiu 2020). However, it is also noteworthy that the polar ends of the network are not as dense or tightly clustered as in some polarized crowd structures. Rather, within them we see several distinct “Community Clusters”. This is also unsurprising in this context. Smith et al. (2014) point out that “…Community Cluster conversations look like bazaars with multiple centers of activity” and that these structures are common in cases such as the COVID-19 pandemic, noting that “Global news stories often attract coverage from many news outlets, each with its own following. That creates a collection of medium-sized groups – and a fair number of isolates” (p. 3).
Figure B1 Network Map, COVID-19 Treatment and Cures
Influential Actors Among the most influential actors in the discussion network (as measured by PageRank) we found a mix of news outlets and journalists (n=8), medical doctors (n=4), prominent businessmen/investors (n=3), and political actors (n=2) (see Table B1). Only two (2) of the 20 most influential accounts belonged to private citizens (i.e. non-media or political personalities). Both of these accounts were active and verified as legitimate users. These data suggest a somewhat more reputable and reliable information environment surrounding the discussion of potential treatments than was seen in the case of masks and face-coverings, with 12 of the top 20 accounts belonging to medical experts and media outlets/actors7. The verifiable nature of the most influential actors also indicates that despite the politicization of the discussion network, none of the most influential accounts were suspected social bots. This is a welcome and somewhat surprising finding, as it is not uncommon to find highly influential bot accounts active in politicized discussion networks. For example, in a previous study of the Trump/Russia investigation, Hagen et al. (2020) found that as many as 65% of accounts with the highest PageRank in some communities were either suspected bots or suspended accounts. Table B2 below provides a summary of the top 10 accounts by PageRank in this discussion network. A more complete list is available in Appendix E of this report.

Table B1. Top 20 PageRank by Category
Category                   Frequency
Businessman/Investor       3
Individual Account         2
Medical Doctor             4
News Outlet/Journalist     8
Political Actor            2
Social Media Personality   1
Source: Twitter API

Table B2. Most Influential Actors by PageRank
Twitter ID        Page Rank   Category
@MichaelCoudrey   0.023979    Businessman/Investor
@philbak1         0.014876    Businessman/Investor
@EdselSalvana     0.012301    Medical Doctor
@CNN              0.01198     News Outlet
@rsamaan          0.010269    Medical Doctor
@AdamMilstein     0.009978    Businessman/Investor
@paulsperry_      0.009728    Journalist
@Aco98rain        0.007618    Individual Account
@IngrahamAngle    0.007482    Journalist
@JamesTodaroMD    0.007434    Medical Doctor
Source: Twitter API
7 Note that we do not distinguish between news and opinion journalists for the purposes of this analysis. We acknowledge that this distinction may have significant implications, particularly in understanding the politicization of the discussion network.
The lack of observed bot activity in this instance may stem in part from the acute and global nature of the COVID-19 pandemic, which provided neither the time nor the incentive for adversarial actors to engage in “active measures”. In any case, the observed influence on the part of subject-matter-experts and reputable institutions suggests a potentially richer and more reliable information environment than is sometimes found in politicized discussion networks. Additionally, the prominence of “individual accounts” among the most influential actors (i.e. individual personalities rather than institutional accounts) underscores findings from previous network analyses (i.e. Hagen et al. 2018; Whelan et al. 2011), namely that social networks often favor individual personalities and expertise over organizational credibility. For example, social media users are more likely to follow a high-profile doctor or journalist than they are to follow the “official” account of the public health agency or media outlet for whom those individuals work. For crisis communications specialists, this suggests that effective communication networks may be better developed around individual subject-matter-experts as opposed to organizational accounts.
Content Analysis In analyzing a random sample of tweets (n=300) from this network, we focused specifically on the type and spread of misinformation related to potential treatments. Two predominant threads emerged from this analysis, including (1) misinformation about the efficacy of the malaria medication hydroxychloroquine (and the related family of treatments) for treating COVID-19 and (2) misinformation about the dangers of taking the anti-inflammatory ibuprofen if infected with COVID-19. In total, 48 tweets from the 300-tweet sample (16%) referenced hydroxychloroquine or a related medication. Table B3 breaks these into three common subcategories, including (1) misinformation indicating that these medications could “cure” COVID-19, (2) false hope about the general efficacy of these medications in treating COVID-19, and (3) misinformation about political threats against doctors prescribing these medications.

Table B3. Types of Misinformation Related to Hydroxychloroquine
                                     Frequency   Percent of Misinformation   Percent of Total Sample
Demonstrated Cure                    7           14.6                        2.3
False Hope (based on limited data)   35          72.9                        11.7
Prescribing Doctors Threatened       6           12.5                        2.0
TOTAL                                48          100                         16.0
Source: Twitter API
Taken together, 42 of the 48 tweets (87.5%) incorrectly suggested/argued that hydroxychloroquine could either cure or effectively treat COVID-19. However, there is currently no medical support for the drug’s efficacy as a COVID-19 treatment (Kupferschmidt 2020; Penn Medicine 2020). In many cases, the false hope advanced in these messages appears to have been based on misinterpretations and overstatements of “small-N” research studies (Alexander et al. 2020). For example, one user retweeted: RT @gatewaypundit: HUGE! Results from Breaking Chloroquine Study Show 100% Cure Rate for Patients Infected with the Coronavirus https://t.co/7m0MvOM5UK @JoeHoft @RiganoESQ @TuckerCarlson @RealDonaldTrump. Another user more boldly asserted: A REAL #CURE is Here!... now! #HydroxyChloroquine is Being Used in Worldwide Hospitals NOW!... without #Prescriptions or #Trials = It Works! #Death projection numbers... will Drop!!! you can #Bank on It! https://t.co/zIaVpt5XEk

Figure B2 (L) Amplified Network Map: Originators of misinformation (blue dots), Retweeters of misinformation (red dots); (R) Zoomed out network map for reference
These messages are coded as misinformation rather than disinformation because while demonstrably false, they do not contain indicators of malicious intent. They appear to stem primarily from poor analysis, wishful thinking, and errant political cues. It is noteworthy that over half of these tweets (52.4%) contained at least some political messaging, which is consistent with the polarized crowd network structure as well as the politicization of the subject matter early on in the pandemic. For example, RT @Rparkerscience: Michigan Man with Coronavirus Has Near-Death Experience - Is Saved by Hydroxychloroquine Treatment... Then UNLOADS on Liberal Gov. for Denying Life-Saving Drug to the Sick https://t.co/g4YEgts76N via @gatewaypundit Figure B2 provides an amplified view of the discussion network and depicts the spread of misinformation throughout. In this case, the originators of these tweets are identified by blue dots, while retweeters are identified by red. The light blue edges depict the spread of misinformation throughout the network. The data show that misinformation related to the efficacy of hydroxychloroquine was isolated to and circulated within only the top portion of the polarized crowd structure (most heavily influenced by several individual accounts, often with a “Pro-Trump” slant). These communications, by and large, did not reach the lower portion of the polarized crowd structure (more heavily influenced by medical experts and media outlets). This is a critical observation, as Smith et al. (2014) note that “Polarized Crowds on Twitter are not arguing. They are ignoring one another while pointing to different web resources and using different hashtags” (p. 4). This suggests that in the case of a polarized crowd, misinformation circulated in one network cluster may go largely unaddressed and un-countered by potentially corrective information from another network cluster. Again, this finding is consistent with the “echo-chamber” hypothesis (i.e. 
Sunstein 2001; 2017), which argues that the sorting and selection mechanisms provided by social media platforms result in the formation of homophilous network clusters that echo and reinforce existing beliefs while isolating themselves from counter-points or diverse opinions. Sunstein (2017) warns that under these conditions, isolated network clusters are prone to widespread acceptance of misinformation precisely because it is amplified and unchallenged. He goes on to suggest that these environments also create the conditions under which political extremism can thrive. Along with widespread misinformation about the efficacy of hydroxychloroquine as a treatment for COVID-19, we also found a significant example of misinformation related to the popular OTC medication, ibuprofen. This thread of misinformation began when a theoretical article published in The Lancet suggested that ibuprofen might cause an acceleration in the spread of COVID-19 (Fang et al. 2020). This prompted French health officials and the World Health Organization (WHO) to issue warnings to those with suspected cases of COVID-19 not to take ibuprofen to treat headaches and fever. This guidance was quickly retracted by the WHO due to a lack of corroborating evidence (Xu 2020), but only after the initial warning was widely disseminated throughout the social network. For example: RT @gmanews: Avoid taking ibuprofen for COVID-19 symptoms, says World Health Organization https://t.co/UDC6nztdb2 If you think you have Covid-19 DO NOT TAKE IBUPROFEN, there is a correlation between the pill, the disease and those who are dying from the two. Covid-19 thrives of this pill, retweet to save lives.
cyberflorida.org | 31
Using the keyword queries identified for this case study, we found 12 instances of this message being circulated in a random sample of 300 tweets (4.0%). The alacrity with which this misinformation spread highlights the potential of social networks to rapidly disseminate both factual and false information, as well as the importance of proper information vetting on the part of public health officials and crisis communicators. While the nature of misinformation involved in this instance may be less consequential than in some other cases, the conflicting guidance created widespread confusion, leading one health news outlet to note that “The World Health Organization (WHO) has changed its stance on taking ibuprofen if you have COVID-19, but people are still scratching their heads over what they should take if or when they contract the virus” (Fischer 2020).
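The 4.0% figure above is a sample proportion (12 flagged tweets out of 300). As an illustrative aside not computed in the report, the sketch below (the function name `prevalence_estimate` is ours) shows how such a point estimate and a normal-approximation (Wald) 95% confidence interval would be calculated from a random sample:

```python
import math

def prevalence_estimate(hits, n, z=1.96):
    """Point estimate and Wald 95% confidence interval for a prevalence
    estimated from a simple random sample of n tweets."""
    p = hits / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# 12 ibuprofen-misinformation tweets in a random sample of 300 (4.0%)
p, low, high = prevalence_estimate(12, 300)
print(f"{p:.1%} (95% CI {low:.1%} to {high:.1%})")
```

The interval widens as the sample shrinks, which is one reason the report treats small case-study counts as indicative rather than precise population estimates.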
Major Takeaways and Conclusions

• The overall structure of the discussion network was consistent with expectations for a global news story, though sharp political polarization was also evident, particularly in the case of hydroxychloroquine.
• The influential actors in this network underscore findings from prior research, which suggest that individual user accounts are more effective thought leaders and network hubs than organizational accounts.
• The cluster-specific/contained spread of misinformation related to hydroxychloroquine demonstrates how misinformation can spread unchallenged inside of a polarized network.
• The widespread dissemination of misinformation related to ibuprofen and COVID-19 demonstrates the importance of proper information vetting in the initial phase of a crisis communications cycle.
Appendix C: Case-Study 3, Conspiracy Theories

The third and final case-study considered a non-medical aspect of the COVID-19 pandemic, namely the early spread of conspiracy theories surrounding the origins of the virus. While a number of conspiracy theories became popularized in the initial phase of the pandemic, we chose to focus on theories that accused U.S. Army and CIA personnel of carrying the virus to China in late 2019.[8] The most prominent and specific of these stories claimed that Maatje Benassi, a U.S. Army reservist, was "patient zero" who carried the virus to China when competing in the Military World Games (hosted in Wuhan, China, in October of 2019). According to media reports, the claim was initially promoted by conspiracy theorist George Webb to his 100,000-plus YouTube subscribers (O'Sullivan 2020). While the claim did not gain widespread attention in the mainstream American media, it was picked up by Chinese state-run media in an attempt to insinuate that U.S. personnel may have been responsible for the initial outbreak of the virus in Wuhan (Patterson 2020). In this case-study we examined the extent to which this and similar conspiracy theories were circulated in the initial phase of the pandemic, the nature of the dis/misinformation observed, and the types of actors and Twitter accounts through which it was disseminated. Keywords used to build the initial data set for this case-study included the following queries:

['cia', 'maatje benassi', 'us military', 'soldiers brought', 'american military', 'american soldiers', 'us soldiers']

From our initial sample of 1 million tweets, a total of 432 tweets were identified for analysis in this case-study using the queries above.
Network Structure
As shown in Figure C1, this discussion network took on a "Community Cluster" form, which is unsurprising given the global nature of the discussion. The largest clusters formed around journalists and media personalities, such as @kylegriffin1 (MSNBC) and @carlquintanilla (CNBC). These network clusters were focused on legitimate news stories that were captured by the keyword queries, such as a recently uncovered CIA hacking operation and the U.S. military's response to the coronavirus. Discussion of conspiracy theories was isolated to some of the smaller clusters and isolates, with relatively little connection to other actors in the network. The network structure suggests that while these theories may have been amplified amongst members of the community clusters in which they circulated, they were not distributed or discussed widely outside of those clusters. As noted above, Sunstein (2017) has warned against the potential for these unchecked echo-chambers to foster extremism.
[8] Twitter data were collected for a variety of conspiracy theories, including popularized theories involving Bill Gates, the influence of 5G networks on COVID spread, and the intentional creation of the COVID-19 virus as a bioweapon. However, the keyword queries and data quality did not allow for a reliable analysis of these data in most instances.
Influential Actors
The most influential actors in this discussion network (as measured by PageRank) included a significantly larger number of individual and/or unidentifiable accounts (see Table C1). In total, eight (8) of the top 20 accounts fell into this category, while another eight (8) were classified as media/journalists and four (4) were classified as political accounts (Table C2). This is consistent with expectations for a keyword query focused on active conspiracy theories.
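The category tallies reported in Table C2 follow directly from the per-account classifications by frequency counting. A minimal sketch (the labels below are abbreviated illustration data, not the full top-20 list):

```python
from collections import Counter

# One manually assigned category label per influential account
# (abbreviated example; the report assigns these in its tables).
classifications = [
    "Media Organization/Journalist", "Media Organization/Journalist",
    "Political Activist (Conservative)", "Individual Account",
    "Individual Account", "Political Campaign Account",
]

# Tally accounts per category, as in the "Frequency" column of Table C2.
counts = Counter(classifications)
for category, freq in counts.most_common():
    print(f"{category}: {freq}")
```

Running the same tally over all 20 classified accounts reproduces the frequency table.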
Figure C1 Network Map, COVID-19 U.S. Army Conspiracy Theory
Content Analysis
From a random sample of 300 tweets, we identified 37 as "spreading" misinformation (12.3%). These were far less common in the discussion network than more general news reports about the military's response and readiness during the coronavirus pandemic, as well as stories about a reported hacking effort by the CIA. For example:

RT @kylegriffin1: Defense Secretary Mark Esper has urged American military commanders overseas not to make any decisions related to the coronavirus that might surprise the White House or run afoul of Trump's messaging on the virus, American officials said. https://t.co/ErYerjXPeL
Table C1 Most Influential Actors by PageRank

Twitter ID       Category                           PageRank
esaagar          Political Activist (Conservative)  0.029638
kylegriffin1     Media Organization/Journalist      0.027598
carlquintanilla  Media Organization/Journalist      0.024578
TribulationThe   Individual Account/Fiction Author  0.007157
IWF              Political Activist                 0.006291
TeamTrump        Political Campaign Account         0.00557
PacdWeu          Industry                           0.00547
EenaRuffini      Media Organization/Journalist      0.00543
ImtiazMadmood    Individual Account                 0.005298
PressTV          Media Organization/Journalist      0.005115
Source: Twitter API

Table C2 Top 20 PageRank by Category

Category                  Frequency
News Outlet/Journalist    8
Political Accounts        4
Individual Accounts       6
Industry                  1
Unclear                   1
Social Media Personality  1
Source: Twitter API

Table C3 Categories of Misinformation

Category                Frequency
World Military Games    8
Bio-Weapon/Chinese Lab  15
Miscellaneous           14
Source: Twitter API
Of the 37 messages identified as spreading misinformation, eight (8) directly referenced the aforementioned theory that U.S. military personnel spread the virus during the World Military Games hosted in Wuhan in 2019 (Table C3). For example:

RT @cynthiamckinney: Japan report says corona virus diseases originated in the US, taken to China by US military that participated in Military Games played in Wuhan; and some of the US flu deaths are actually coronavirus deaths?? https://t.co/OKh35IYAnG

RT @WhoareyouBO: .@GeorgWebb Patient Zero: Sgt. 1st Class Maatje Benassi, a security officer at Fort Belvoir Community Hospital and a member of the U.S. Armed Forces Cycling Team, pulls out in front during the women's road race event of the 2019 CISM Military World Games in Wuhan, China, Oct. 20. https://t.co/M28ZaxYxHO

Another 32 tweets in the discussion network countered these claims, including several that quoted New York Congressman Lee Zeldin:

RT @RepLeeZeldin: This is such a disgusting take by a Spokesperson for China's govt, desperately trying to blame the US military for a pandemic that started in their own country. This Chinese propaganda is a LIE! China should accept responsibility for the devastation it caused globally w/COVID-19. https://t.co/kjC1Fj4E6Y

However, in most cases these replies took the form of political commentary and did not directly address the story in a factual manner:

RT @TimMurtaugh: China has tried to pin the virus on the American military and now the American media wants @realDonaldTrump to stop identifying China as the place of origin. Astounding. https://t.co/4CWUXw54QP
Another commonly cited conspiracy theory claimed that the virus was created as a bioweapon and/or created in a Chinese lab. We identified 15 messages making such claims:

@RealJamesWoods @Marita_1010 The coronavirus is CCPvirus. It's Chinese Communist Party's bio-weapon targeting democratic countries, especially the United States. Now CCP is fabricating the narrative that coronavirus comes from US CIA... https://t.co/tMSmjpkqWM

One frequently recirculated tweet in this group appears to have been a potentially bot-generated/amplified message intended to further establish this narrative:

@MyDearestDenise Novel Coronavirus is from my TEAM. Spectre> masks/Suits/Background. I need the Government to pick me up. CIA knows. Longer I wait the worse it gets. Pick me up, it gets better. I leave, I think it might mutate and everybody starts dropping. Hear the sound> https://t.co/nPLx6CLMb2 https://t.co/JQay9BZQzS

Other miscellaneous messages claimed that the virus was created and/or being weaponized by the United States, Israel, and Russia, while others implicated actors such as the CIA and Microsoft founder Bill Gates. For example:

@GregRubini According to CIA Agent Robert Steele, Israel [...] released into China with the hopes of starting a war between the two countries. https://t.co/zywRZvNh6E

While these conspiracy-related tweets made up more than 10% of the discussion network's overall messaging, our content analysis revealed that none of these messages was sourced or retweeted by one of the network's 20 most influential actors. This underscores the fringe nature of these theories and suggests that they were not among the most influential messages circulated in the network. Additionally, the total sample drawn from these keyword queries included only 432 tweets from the initial sample of 1 million, indicating that these theories were a small portion of a relatively minor topic when compared to the larger COVID-19 discussion network.
Collectively, the results suggest that while these messages are troubling and insidious in many cases, they were not influential or widely received in the early stages of the pandemic. With that said, it should be reiterated that even in small network clusters, unchallenged disinformation such as this does have the potential to foster small pockets of extremism (Sunstein 2017).
Takeaways and Conclusions

• A broad array of conspiracy theories was circulated in the initial stage of the pandemic, but these did not gain widespread attention and appear to have been primarily contained to small, fringe communities within the network.
• Aside from comments made by NY Congressman Lee Zeldin, there does not appear to have been an organized or concerted strategy on the part of public officials or the media to counter this misinformation.
• Messages contradicting these theories generally took the form of politicized commentary from individual users, and these were often parlayed into additional conspiracy theories.
• Despite the insidious, political nature of this messaging, we did not find evidence of heavily influential bot activity in these discussions.[9]

[9] When discussing bot activity in the context of this analysis, our focus was on the influence of social bots, not merely their presence in the discussion network. Social bots are known to be present throughout discussion networks in many cases, though our analysis showed that in this case their presence did not heavily influence the network's structure or the metrics used to identify influential actors.
Appendix D: Case-Study Data Analysis Methods

For each of the three case-studies, we (1) examined network structure, (2) identified the most influential actors in the discussion network, and (3) conducted a manual content analysis. For case-study 1 (masks and face-coverings), we also examined potential bot activity.
Network Structure
As on other social media platforms, discussion networks on Twitter form spontaneously as individuals interact with one another through behaviors such as "following", "replying/retweeting", "mentioning", and "liking". Discussion networks tend to divide naturally into clusters (or "communities") based on existing social connections as well as the development of hubs around influential actors and thought leaders (based on the topic being discussed). These clusters can be identified and visually depicted by observing patterns of interaction with the aid of modularity algorithms. (Modularity is a measure of network structure, which groups and divides nodes into "modules" [or clusters] based on the density of interactions within the network.) These tools help to identify densely connected clusters of nodes (i.e. actors) and distinguish them from more sparsely connected nodes outside of the cluster. For this analysis, we created network maps for each of the case-studies in order to better understand the structure of the emerging discussion networks. This allowed us to identify factors such as the degree of political polarization in the discussion networks, the extent to which they relied on experts/thought leaders, and the amount of information sharing that occurred across diverse communities (Smith et al. 2014). Consistent with prior research (Boyd et al. 2010; Hagen et al. 2020), we used "retweets" to define network connections. For network visualization and analysis, we used Gephi (Version 0.9.1), an open-source software package (Bastian et al. 2009).
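The report performed modularity-based clustering in Gephi; as a rough analogue, the same idea can be sketched in Python with the networkx library (the account handles and edges below are hypothetical illustration data, not the study's dataset):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Build a retweet network: an edge (author, retweeter) is added each time
# an account retweets another. Handles are hypothetical.
retweets = [
    ("journalist_a", "user1"), ("journalist_a", "user2"), ("journalist_a", "user3"),
    ("activist_b", "user4"), ("activist_b", "user5"), ("activist_b", "user6"),
    ("user1", "user2"),
    ("user4", "user5"),
]
G = nx.DiGraph()
G.add_edges_from(retweets)

# Greedy modularity maximization groups densely connected accounts into
# clusters, analogous to Gephi's modularity partition.
communities = greedy_modularity_communities(G.to_undirected())
for i, community in enumerate(communities):
    print(i, sorted(community))
```

In this toy graph the two retweet hubs and their retweeters fall into two separate communities, mirroring how hub-centered clusters emerge around influential accounts.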
Influential Actors
Along with mapping network structures, it is important to identify and understand the most influential actors within a discussion network and its subordinate community clusters. Influential actors play a critical role in shaping the nature, content, and tone of communications within a discussion network, and they convey valuable information about the authenticity and reliability of a discussion network. For this analysis we used PageRank to identify the most influential actors in each discussion network. Prior research has identified PageRank as a reliable indicator of the trust that a user has among other members of the network (Caverlee et al. 2008; Giménez-García et al. 2016). PageRank is a variant of eigenvector centrality, formulated by Page and Brin (1998), which indicates the reliability or trusted-ness of a node (Caverlee et al. 2008; Giménez-García et al. 2016). In web searches, a website is considered to be highly endorsed if it has a high number of incoming links from other important pages. For example, when two nodes have an equal number of in-links, the node with incoming links from more "important" nodes has the larger PageRank. A simplified formula of PageRank is as follows (Page et al. 1999, p. 4):
PR(u) = c Σ_{v ∈ Bu} PR(v) / N(v)
PR(u) is the PageRank of a web page u; Bu is the set of pages pointing to (linking in to) u; v ranges over the pages contained in Bu; N(v) is the number of links from page v; and c is a normalization factor used to keep the total rank of all pages constant (Page et al. 1999). A webpage has a high rank when the sum of the ranks of its in-links is high. Similarly, in our dataset, a node with a high PageRank is highly endorsed by others because its content is frequently recirculated by important nodes.
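The simplified recurrence above can be computed by power iteration. The sketch below is ours (not the report's Gephi implementation) and uses the standard damped variant of the algorithm, which adds a teleportation term the simplified formula omits; edges `(u, v)` mean u links to (endorses) v, so rank flows from u to v:

```python
def pagerank(edges, damping=0.85, iterations=100):
    """Power-iteration PageRank over a directed graph given as (u, v)
    edge pairs, where u links to v. Returns a rank per node."""
    nodes = {n for edge in edges for n in edge}
    out_links = {n: [] for n in nodes}
    for u, v in edges:
        out_links[u].append(v)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        # Each node starts with the teleportation share (1 - d) / n.
        nxt = {node: (1.0 - damping) / n for node in nodes}
        for node, targets in out_links.items():
            if targets:
                share = damping * rank[node] / len(targets)
                for t in targets:
                    nxt[t] += share
            else:
                # Dangling node: redistribute its rank uniformly.
                for t in nodes:
                    nxt[t] += damping * rank[node] / n
        rank = nxt
    return rank

# "c" is endorsed by both "a" and "b", so it accumulates a higher rank
# than either of its endorsers, as described in the text above.
ranks = pagerank([("a", "c"), ("b", "c"), ("c", "d")])
```

As the text notes, a node pointed to by more (or more important) nodes ends up with the larger PageRank.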
Content Analysis After mapping the network structures and identifying the most influential actors in each network, we drew a random sample of 300 tweets from each case-study for manual content analysis. Content analysis was conducted by a team of medical and information/communications experts to ensure technical accuracy. An emergent coding scheme was used to identify the major themes/topics discussed in each network, as well as the type and frequency of mis/disinformation circulated. Content analysis was also used to classify the influential actors in each case-study so that we could examine how content and dis/misinformation varied across different categories of actors.
Bot Detection and Analysis
In order to detect potential bot activity within the discussion network, we utilized the Botometer Application Programming Interface (API) to automatically tag the likelihood of a Twitter account being a social bot. Botometer is one of the most popular bot detection algorithms; it uses over 1,100 Twitter features (user-, friends-, network-, temporal-, content-, and language-based features) to train the algorithm (Varol et al. 2017). Botometer provides "bot likelihood scores" that indicate the probability of a Twitter account being a social bot (Varol et al. 2017). Research suggests that a Botometer score of 0.5 or higher is likely to indicate a social bot (OSoMe & Indiana University 2020). The accuracy of Botometer was reported at approximately 86% by Varol et al. (2017) and Wojcik et al. (2018). The network analysis was conducted using a Twitter account as a node and a retweet relation as an edge. When account A retweets a tweet created by account B, B is a source node and A is a target node; the direction of the edge is B -> A. To reduce the total number of nodes submitted to Botometer (which cannot handle excessive loads), we selected all source nodes with more than six edges. With this selection, we ran 40,279 nodes (which included 69,503 edges) against Botometer. Using the Botometer results, we dropped all "bot-likely" nodes (that is, those with a cap_english score above 0.5). This left us with a total of 1,508 source nodes (which included 2,625 edges) for network analysis.
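The filtering step described above (drop source nodes whose cap_english score exceeds 0.5) can be sketched as a pure function. The account names and scores below are hypothetical illustration data, standing in for values that would be fetched from the Botometer API for each qualifying source node:

```python
def drop_likely_bots(cap_english_scores, threshold=0.5):
    """Keep only accounts whose Botometer cap_english score does not
    exceed the threshold; higher scores indicate likely social bots."""
    return {account for account, score in cap_english_scores.items()
            if score <= threshold}

# Hypothetical cap_english scores for four source nodes.
scores = {
    "newsdesk_acct": 0.12,    # likely human-operated
    "doctor_acct": 0.31,      # likely human-operated
    "spam_acct": 0.87,        # likely automated
    "amplifier_acct": 0.64,   # likely automated
}
kept = drop_likely_bots(scores)  # nodes retained for network analysis
print(sorted(kept))
```

In the study's pipeline, this filter reduced 40,279 candidate source nodes to the 1,508 retained for network analysis.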
Appendix E: Full Tables (Influential Actors)

Table E1 Top 20 Influential Actors, Potential Treatments

ID               Category                                   Country        PageRank
tedlieu          Political Actor                            USA            0.024043
realDonaldTrump  Political Actor                            USA            0.018735
spectatorindex   Media Org/Journalist                       International  0.012699
OH_mes2          Individual Account                         International  0.011515
DrDenaGrayson    Medical Doctor                             USA            0.011094
BNODesk          Media Org/Journalist                       International  0.010469
RealJamesWoods   Actor/Celebrity                            USA            0.008612
charliekirk11    Political Activist (Conservative)          USA            0.006729
mitchellvii      Suspended/Unverifiable                     n/a            0.005995
NorbertElekes    Entrepreneur                               International  0.005851
SteveGuest       Political Activist (Conservative)          USA            0.005584
ANI              Media Org/Journalist                       International  0.003717
WHO              Health Organization                        International  0.00371
CNN              Media Org/Journalist                       USA            0.003595
funder           Political Activist (Liberal)               USA            0.003384
narendramodi     Political Actor (Prime Minister of India)  International  0.003344
TheDemCoalition  Political Activist (Liberal)               USA            0.003234
Reuters          Media Org/Journalist                       International  0.003221
DrTedros         Director WHO                               International  0.003177
ani_digital      Media Org/Journalist                       International  0.003171
Source: Twitter API
Table E2 Top 20 Influential Actors, Potential Treatments
(Twitter ID | Classification | PageRank, with the account's Twitter bio on the indented line below)

MichaelCoudrey | Businessman/Investor | 0.023979
    Entrepreneur. Activist. Investor. CEO: http://YukoSocial.com Social Media For Politicians & Organizations. Media Requests: Michael@PharosInvestmentGroup.com
philbak1 | Businessman/Investor | 0.014876
    Global macro & capital markets. Pod: https://podcasts.apple.com/us/podcast/the-phil-bak-podcast/id1511530324
EdselSalvana | Medical Doctor | 0.012301
    Infectious Diseases Physician, Molecular Biologist, Manila Bulletin Columnist
CNN | News Outlet/Journalist | 0.01198
    It's our job to #GoThere & tell the most difficult stories. Join us! For more breaking news updates follow
rsamaan | Medical Doctor | 0.010269
    #preventive #cardiologist. #author https://elsevier.com/books/dietary-fiber-for-the-prevention-of-cardiovascular-disease/samaan/978-0-12-805130-6
AdamMilstein | Businessman/Investor | 0.009978
    Co-founder, Adam & Gila Milstein Foundation | American Advocate & Philanthropist RT's Are Not Endorsements http://facebook.com/adammilsteinCP
paulsperry_ | News Outlet/Journalist | 0.009728
    Former D.C. bureau chief for Investor's Business Daily, Hoover Institution media fellow, author of several books, including bestseller INFILTRATION
Aco98Rain | Individual Account | 0.007618
    Different mindset different energy.
IngrahamAngle | News Outlet/Journalist | 0.007482
    Mom, author, host, The Ingraham Angle, 10p ET
JamesTodaroMD | Medical Doctor | 0.007434
    Medical Degree, Columbia University. COVID-19 research. Not medical advice.
everydaypewpew | Individual Account | 0.007263
    I'm not getting involved, I'm just here to enjoy myself
cnni | News Outlet/Journalist | 0.006923
    Breaking news from around the world, plus business, style, travel, sport and entertainment. We #gothere.
HeidiNBC | News Outlet/Journalist | 0.006742
    NBC News Correspondent covering politics/government ethics "Prezbella" Heidi.Przybyla@nbcuni.com Insta:
eugenegu | Medical Doctor | 0.005897
    Founder and CEO of http://CoolQuit.com. With our team of Stanford trained doctors, we're fighting the coronavirus pandemic and tobacco addiction through telemedicine.
EmmaKinery | News Outlet/Journalist | 0.005435
    National political reporter covering 2020 for Bloomberg
RudyGiuliani | Political Actor | 0.005344
    Listen to the Common Sense podcast through the link below. https://rudygiulianics.com/
scrowder | Social Media Personality | 0.00439
    http://Youtube.com/StevenCrowder for all you need to know. Join #MugClub. Instagram: louderwithcrowder
kayleighmcenany | Political Actor | 0.004019
    Fmr National Press Secretary for @RealDonaldTrump 2020 campaign Fmr @GOP spox. Harvard Law JD. Georgetown. Oxford. Previvor. Wife of @GilmartinSean. Phil 4:6
brithume | News Outlet/Journalist | 0.004017
    Sr. Political Analyst, Fox News Channel. Arguments welcome. Name callers & verbal abusers blocked.
jsolomonReports | News Outlet/Journalist | 0.00387
    John Solomon is an award-winning investigative journalist and founder of Just the News. He has worked at AP, WaPo, TWT, and The Hill.
Source: Twitter API

Table E3 Top 20 Influential Actors, U.S. Army/CIA
Twitter ID       Category                           PageRank
esaagar          Political Activist (Conservative)  0.029638
kylegriffin1     Media Organization/Journalist      0.027598
carlquintanilla  Media Organization/Journalist      0.024578
TribulationThe   Fiction Author                     0.007157
IWF              Political Activist                 0.006291
TeamTrump        Political Campaign Account         0.00557
PacdWeu          Industry                           0.00547
EenaRuffini      Media Organization/Journalist      0.00543
ImtiazMadmood    Individual Account                 0.005298
PressTV          Media Organization/Journalist      0.005115
_whitneywebb     Media Organization/Journalist      0.004982
blackorbird      Unclear                            0.00497
CNN              Media Organization/Journalist      0.004655
kim              Individual Account                 0.00445
RepLeeZeldin     Political Actor                    0.004435
chenweihua       Media Organization/Journalist      0.004246
caitoz           Individual Account                 0.003822
Michell83095148  Individual Account                 0.003731
Mickeyinblack    Individual Account                 0.003578
TheHackersNews   Media Organization/Journalist      0.003557
Source: Twitter API
Appendix F: References Acar, Adam and Yuya Muraki. (2011) “Twitter for Crisis Communication: Lessons Learned from Japan’s Tsunami Disaster.”
International Journal of Web Based Communities 7(3): 392-402.
AJMC (2020). “A Timeline of COVID-19 developments in 2020”. American Journal of Managed Care, available at
https://www.ajmc.com/view/a-timeline-of-covid19-developments-in-2020
Alexander, Paul Elias; Debono, Victoria Borg; Mammen, Manoj J.; Iorio, Alfonso; Aryla, Komal; Deng, Dianna; Brocard, Eva; and
Waleed Alhazzani. (2020) “COVID-19 coronavirus research has overall low methodological quality thus far: case in
point for chloroquine/hydroxychloroquine”. Journal of Clinical Epidemiology 123: 120-126.
American Red Cross. (2012) “Social Media in Disasters and Emergencies”. The American Red Cross, July 10, 2012. Bastian, M; Heymann, S.; and Jacomy, M. (2009). “Gephi: An open source software for exploring and manipulating networks”.
International Conference on Web and Social Media, 8, 361-362.
Bauchner, Howard. (2017). “The Rush to Publication: An Editorial and Scientific Mistake”. Journal of the American Medical Association. Editorial: September 30, 2020. Available at https://jamanetwork.com/journals/jama/fullarticle/2654797 Benkler, Y., Faris, R., and H. Roberts. (2018). Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. Oxford University Press: New York, NY. Bishop, Bill. 2009. The Big Sort: Why the Clustering of Like-Minded America is Tearing Us Apart. Mariner Books: New York, NY. Boyd, D., Golder, S., & Lotan, G. (2010). Tweet, tweet, retweet: Conversational aspects of retweeting on twitter. System Sciences
(HICSS), 2010 43rd Hawaii International Conference On, 1–10. Retrieved from http://ieeexplore.ieee.org/xpls/abs_all.
jsp?arnumber=5428313 Caverlee, J., Liu, L., & Webb, S. (2008). Socialtrust: Tamper-resilient trust establishment in online communities. Proceedings of the
8th ACM/IEEE-CS Joint Conference on Digital Libraries, 104–114. Retrieved from http://dl.acm.org/citation.cfm?id=1378908
Centers for Disease Control and Prevention (CDC). (2020). “How to Protect Yourself and Others”. Available at
https://www.cdc.gov/coronavirus/2019-ncov/prevent-getting-sick/prevention.html
Centers for Disease Control and Prevention (CDC). (2014). “CERC: Crisis communication plans”. U.S. Department of Health and
Human Services: Centers for Disease Control and Prevention. Available at https://emergency.cdc.gov/cerc/ppt/CERC_
Crisis_Communication_Plans.pdf Chowdhury, S.; Rathod, J.; and Gernsheimer, J. (2020). “A Rapid Systematic Review of Clinical Trials Utilizing Chloroquine and
Hydroxychloroquine as a Treatment for COVID-19”. Academic Emergency Medicince, May 29, 2020. Available at
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7267507/
Conrado, Silvia Planella, Karen Neville, Simon Woodworth, and Sheila O’Riordan. (2016) “Managing Social Media Uncertainty to
Support the Decision Making Process During Emergencies.” Journal of Decision Systems 25(1): 171-181.
42 | Bots are Bad, Humans are Worse
Fang, L., Karakiulakis, G. and M. Roth. “Are patients with hypertension and diabetes mellitus at increased risk for COVID-19 infeciton?” The Lancet, March 11, 2020. Available at https://www.thelancet.com/journals/lanres/article/PIIS2213- 2600(20)30116-8/fulltext Fischer, K. (2020). “Here’s what we know about Ibuprofen and COVID-19”. Healthline. March 20, 2020. Available at https://www. healthline.com/health-news/what-to-know-about-ibuprofen-and-covid-19#Evidence-lacking Fung, I. C.-H., Tse, Z. T. H., Cheung, C.-N., Miu, A. S., & Fu, K.-W. (2014). Ebola and the social media. The Lancet, 384, 2207.
doi:10.1016/S0140-6736(14)62418 -1
Giménez-Garcıa, J. M., Thakkar, H., & Zimmermann, A. (2016). Assessing Trust with PageRank in the Web of Data. PROFILES 2016
3rd International Workshop on Dataset PROFIling and FEderated Search for Linked Data. Retrieved from
http://ceur-ws.org/Vol-1597/PROFILES2016_paper5.pdf
Greenwood, S., Perrin, A., & Duggan, M. (2016, November 11). Social media update 2016. Pew Research Center. Available at
https://www.pewresearch.org/internet/2016/11/11/social-media-update-2016/
Gupta, Sanjay. (2020). “ ‘I don’t know what it takes at this point’: Gupta vents about lack of mask wearing”. CNN Health, September 4,
2020. Available at https://www.cnn.com/videos/health/2020/09/04/coronavirus-death-toll-projection-gupta-new
day-vpx.cnn Guskin, Emily and Paul Hitlin. (2012) “Hurricane Sandy and Twitter”. The Pew Research Center, November 6, 2012,
http://www.journalism.org/2012/11/06/hurricane-sandy-and-twitter/
Hagen, L., Neely, S., Keller, T., Scharf, R. and Vazquez, F.E. (2020) Rise of the Machines? Examining the Influence of Social Bots on
a Political Discussion Network. Social Science Computer Review. (Forthcoming).
Henn, Steve. (2013). “Social Media’s Rush to Judgment in the Boston Bombings.” NPR, April 23, 2013, http://www.npr.org/ sections/alltechconsidered/2013/04/23/178556269/Social-Medias-Rush-To-Judgment-In-The-Boston-Bombings Hughes, Amanda L. and Leysia Palen. (2012) “The Evolving Role of the Public Information Officer: An Examination of Social Media in
Emergency Management.” Journal of Homeland Security and Emergency Management 9(1): 1-20.
Jacobs, Andrew; Richtel, Matt; and Baker, Mike. (2020). “At War with No Ammo: Doctors Say Shortage of Protective Gear is Dire”. New York Times, March 19, 2020. Available at https://www.nytimes.com/2020/03/19/health/ coronavirus-masks-shortage.html Joseph, Andrew. (2020). “CDC: Some Americans are misusing cleaning products — including drinking them — in effort to kill coronavirus”. Stat News, June 5, 2020. Available at https://www.statnews.com/2020/06/05/ cdc-misusing-bleach-try-kill-coronavirus/ Kim, K., Sin, S. J., & Tsai, T. (2014). Individual differences in social media use for information seeking. The Journal of Academic Librarianship, 40(2), 171–178. Kouzy, Ramez;… Baddour, Khalil. (2020). “Coronavirus Goes Viral: Quantifying the COVID-19 Misinformation Epidemic on Twitter”.
Cureus March 13, 2020. Available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7152572/
cyberflorida.org | 43
Krittanawong, Chayakrit; Narasimhan, Bharat; Virk, Hafeez Ul Hassan; Hahn, Joshua; Wang, Zhen; and Tang, W.H. Wilson.
“Misinformation Dissemination in Twitter in the COVID-19 Era”. American Journal of Medicine, (In Press) Available at
https://www.amjmed.com/article/S0002-9343(20)30686-0/fulltext
Kupferschmidt, Kai. (2020). “Three big studies dim hopes that hydroxychloroquine can treat or prevent COVID-19”. Science Magazine, June 9, 2020. Available at: https://www.sciencemag.org/news/2020/06/three-big-studies-dim hopes-hydroxychloroquine-can-treat-or-prevent-covid-19 Lachlan, Kenneth A., Patric R. Spence, Xialing Lin, Kristy Najarian, and Maria Del Greco. (2016) “Social Media and Crisis Management:
CERC, Search Strategies, and Twitter Content.” Computers in Human Behavior 54: 647-652.
Merchant, Raina M., Stacy Elmer, and Nicole Lurie. (2011) “Integrating Social Media into Emergency-Preparedness Efforts.”
New England Journal of Medicine 365(4): 289-291.
Miller M, Banerjee T, Muppalla R, Romine W, Sheth A: What are people tweeting about Zika? An exploratory study concerning its
symptoms, treatment, transmission, and prevention. JMIR Public Health Surveill. 2017, https://www.ncbi.nlm.nih.gov/
pmc/articles/PMC5495967/. Mitchell, A., Rosenstiel, T., & Christian, L. (2012). What Facebook and Twitter mean for news. The Pew Research Center.
http://www.pewresearch.org/2012/03/19/state-of-the-news-media-2012/
Moore, Nicholas; Carleton, Bruce; Blin, Patrick; Bosco-Levy, Pauline; and Cecile Droz. (2020). “Does Ibuprofen worsen COVID-19?”.
Nature: Public Health Emergency Collection. Available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7287029/
Moran, Patrick. (2020). “Social Media: A Pandemic of Misinformation”. American Journal of Medicine. June 27, 2020. Available at
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7320252/
Neely, S.R. and Collins, M. (2018) Social Media and Crisis Communications: A Survey of Local Governments in Florida. Journal of
Homeland Security and Emergency Management, 15(1): 1-13.
Nunneley, Chloe E.; Kumar, Vinayak; and Salzman, Sony. (2020). “Experts Debate Safety of Ibuprofen for COVID-19”. ABC News. March
19, 2020. Available at https://abcnews.go.com/Health/experts-debate-safety-ibuprofen-covid-19/story?id=69663495
OSoMe, & Indiana University. (2020). Botometer by OSoMe. https://botometer.iuni.iu.edu
Page, L., Brin, S., Motwani, R., & Winograd, T. (1999). The PageRank citation ranking: Bringing order to the web. http://ilpubs.stanford.edu:8090/422
O’Sullivan, Donnie. (2020). “Exclusive: She’s been falsely accused of starting the pandemic. Her life has been turned upside down”.
CNN, April 27, 2020. Available at https://www.cnn.com/2020/04/27/tech/coronavirus-conspiracy-theory/index.html
Oyeyemi SO, Gabarron E, and Wynn R. (2014) “Ebola, Twitter, and misinformation: a dangerous combination?” BMJ.
Patterson, Dan. (2020). “Trolls are spreading conspiracy theories that a U.S. Army reservist is ‘COVID-19 patient zero.’ China is amplifying that disinformation.” CBS News, April 30, 2020. Available at https://www.cbsnews.com/news/coronavirus-patient-zero-china-trolls/
Penn Medicine News. (2020). “Hydroxychloroquine No More Effective Than Placebo in Preventing COVID-19”. Penn Medicine News. September 30, 2020. Available at https://www.pennmedicine.org/news/news-releases/2020/september/hydroxychloroquine-no-more-effective-than-placebo-in-preventing-covid19
Pew Research Center. (2019). “Partisan Antipathy: More Intense, More Personal”. October 10, 2019. Available at https://www.pewresearch.org/politics/2019/10/10/partisan-antipathy-more-intense-more-personal/
Qiu, Linda. (2020). “Fact Check: Trump’s inaccurate claims on hydroxychloroquine”. New York Times, May 21, 2020. Available at
https://www.nytimes.com/2020/05/21/us/politics/trump-fact-check-hydroxychloroquine-coronavirus-.html
Redden, Elizabeth. (2020). “Rush to Publish Risks Undermining COVID-19 Research”. Inside Higher Ed. June 8, 2020. Available at
https://www.insidehighered.com/news/2020/06/08/fast-pace-scientific-publishing-covid-comes-problems
Rene, Peter Lyn. (2016) “The Influence of Social Media on Emergency Management.” PA Times, January 22, 2016,
http://patimes.org/influence-social-media-emergency-management/
Reuters Staff. (2020). TIMELINE – “Masks and the future of the virus: Trump in his own words”. July 21, 2020. Available at
https://in.reuters.com/article/instant-article/idUSKCN24M31K
Solender, Andrew. (2020). “All the times Trump has promoted hydroxychloroquine”. Forbes Magazine, May 22, 2020. Available at https://www.forbes.com/sites/andrewsolender/2020/05/22/all-the-times-trump-promoted-hydroxychloroquine/#104f32954643
Stewart, Margaret C., and B. Gail Wilson. (2016) “The Dynamic Role of Social Media During Hurricane #Sandy: An Introduction to the STREMII Model to Weather the Storm of the Crisis Lifecycle.” Computers in Human Behavior 54: 639-646.
Suciu, Peter. (2020). “COVID-19 Misinformation Remains Difficult to Stop on Social Media”. Forbes. April 17, 2020. Available at https://www.forbes.com/sites/petersuciu/2020/04/17/covid-19-misinformation-remains-difficult-to-stop-on-social-media/#3feb14fc4819
Sunstein, Cass R. (2017). #Republic: Divided Democracy in the Age of Social Media. Princeton University Press: Princeton, NJ.
Sunstein, Cass R. (2001). Republic.com. Princeton University Press: Princeton, NJ.
Weixel, Nathaniel. (2020). “CDC: Poisonings from cleaners, disinfectants rose sharply in March.” The Hill, April 20, 2020. Available at https://thehill.com/policy/healthcare/493725-cdc-poisonings-from-cleaners-disinfectants-rose-sharply-in-march
Whelan, E.; Golden, W.; and Donnellan, B. (2011). “Digitising the R&D social network: Revisiting the technological gatekeeper”.
Information Systems Journal, 23, 197-218.
World Health Organization (WHO). (2020). “Shortage of personal protective equipment endangering health workers worldwide”. News Release, March 3, 2020. Available at https://www.who.int/news-room/detail/03-03-2020-shortage-of-personal-protective-equipment-endangering-health-workers-worldwide
World Health Organization (WHO) et al. (2020). “Managing the COVID-19 infodemic: Promoting healthy behaviours and mitigating the harm from misinformation and disinformation.” Joint statement by WHO, UN, UNICEF, UNDP, UNESCO, UNAIDS, ITU, UN Global Pulse, and IFRC. September 23, 2020. Available at https://www.who.int/news/item/23-09-2020-managing-the-covid-19-infodemic-promoting-healthy-behaviours-and-mitigating-the-harm-from-misinformation-and-disinformation
Varol, O., Ferrara, E., Davis, C. A., Menczer, F., & Flammini, A. (2017). Online Human-Bot Interactions: Detection, Estimation, and Characterization. Proceedings of the Eleventh International AAAI Conference on Web and Social Media (ICWSM 2017), 280–289.
Wojcik, S., Messing, S., Smith, A., Rainie, L., & Hitlin, P. (2018, April 9). Bots in the Twittersphere. Pew Research Center: Internet, Science & Tech. http://www.pewinternet.org/2018/04/09/bots-in-the-twittersphere/
Xu, Juna. (2020). “WHO retracts their recommendation to not take ibuprofen for COVID-19 symptoms”. Body and Soul, March 20,
2020. Available at https://www.bodyandsoul.com.au/health/health-news/who-officially-recommends-to-not-taking-
ibuprofen-for-covid19-symptoms/news-story/7feffabb62238de08f718faff3d14603