Promoting and countering misinformation during Australia's 2019-2020
bushfires: A case study of polarisation
- URL: http://arxiv.org/abs/2201.03153v1
- Date: Mon, 10 Jan 2022 03:44:31 GMT
- Title: Promoting and countering misinformation during Australia's 2019-2020
bushfires: A case study of polarisation
- Authors: Derek Weber and Lucia Falzon and Lewis Mitchell and Mehwish Nasim
- Abstract summary: Misinformation blaming arson resurfaced on Twitter during Australia's unprecedented 2019-2020 bushfires.
We study Twitter communities spreading this misinformation during the population-level event, and investigate the role of online communities and bots.
We find that Supporters promoted misinformation by engaging others directly with replies and mentions using hashtags and links to external sources.
We speculate that the communication strategies observed here could be discoverable in other misinformation-related discussions and could inform counter-strategies.
- Score: 0.11470070927586014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: During Australia's unprecedented bushfires in 2019-2020, misinformation
blaming arson resurfaced on Twitter using #ArsonEmergency. The extent to which
bots were responsible for disseminating and amplifying this misinformation has
received scrutiny in the media and academic research. Here we study Twitter
communities spreading this misinformation during the population-level event,
and investigate the role of online communities and bots. Our in-depth
investigation of the dynamics of the discussion uses a phased approach --
before and after reporting of bots promoting the hashtag was broadcast by the
mainstream media. Though we did not find many bots, the most bot-like accounts
were social bots, which present as genuine humans. Further, we distilled
meaningful quantitative differences between two polarised communities in the
Twitter discussion, resulting in the following insights. First, Supporters of
the arson narrative promoted misinformation by engaging others directly with
replies and mentions using hashtags and links to external sources. In response,
Opposers retweeted fact-based articles and official information. Second,
Supporters were embedded throughout their interaction networks, but Opposers
obtained high centrality more efficiently despite their peripheral positions.
By the last phase, Opposers and unaffiliated accounts appeared to coordinate,
potentially reaching a broader audience. Finally, unaffiliated accounts shared
the same URLs as Opposers over Supporters by a ratio of 9:1 in the last phase,
having shared mostly Supporter URLs in the first phase. This foiled Supporters'
efforts, highlighting the value of exposing misinformation campaigns. We
speculate that the communication strategies observed here could be discoverable
in other misinformation-related discussions and could inform
counter-strategies.
Related papers
- Retweets Amplify the Echo Chamber Effect [7.684402388805108]
We reconstruct the retweet graph and quantify its impact on the measures of echo chambers and exposure.
We show that retweeted accounts share systematically more polarized content.
Our results suggest that studies relying on the retweet graphs overestimate the echo chamber effects and exposure to polarized information.
arXiv Detail & Related papers (2022-11-29T18:51:54Z)
- Predicting Hate Intensity of Twitter Conversation Threads [26.190359413890537]
We propose DRAGNET++, which aims to predict the intensity of hatred that a tweet can bring in through its reply chain in the future.
It uses the semantic and propagation structure of tweet threads to capture the contextual information leading up to, and the fall of, hate intensity at each subsequent tweet.
We show that DRAGNET++ outperforms all the state-of-the-art baselines significantly.
arXiv Detail & Related papers (2022-06-16T18:51:36Z)
- Identification of Twitter Bots based on an Explainable ML Framework: the US 2020 Elections Case Study [72.61531092316092]
This paper focuses on the design of a novel system for identifying Twitter bots based on labeled Twitter data.
A supervised machine learning (ML) framework is adopted, using an Extreme Gradient Boosting (XGBoost) algorithm.
Our study also deploys Shapley Additive Explanations (SHAP) for explaining the ML model predictions.
arXiv Detail & Related papers (2021-12-08T14:12:24Z)
- Characterizing Retweet Bots: The Case of Black Market Accounts [3.0254442724635173]
We characterize retweet bots that have been uncovered by purchasing retweets from the black market.
We detect whether they are fake or genuine accounts involved in inauthentic activities.
We also analyze their differences from human-controlled accounts.
arXiv Detail & Related papers (2021-12-04T15:52:46Z)
- Comparing the Language of QAnon-related content on Parler, Gab, and Twitter [68.8204255655161]
Parler, a "free speech" platform popular with conservatives, was taken offline in January 2021 due to the lack of moderation of hateful and QAnon- and other conspiracy-related content.
We compare posts with the hashtag #QAnon on Parler over a month-long period with posts on Twitter and Gab.
Gab has the highest proportion of #QAnon posts with hate terms, and Parler and Twitter are similar in this respect.
On all three platforms, posts mentioning female political figures, Democrats, or Donald Trump have more anti-social language than posts mentioning male politicians, Republicans, or
arXiv Detail & Related papers (2021-11-22T11:19:15Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in users engaging with both types of content, with a slight preference for questionable content, which may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Capitol (Pat)riots: A comparative study of Twitter and Parler [37.277566049536]
On 6 January 2021, a mob of right-wing conservatives stormed the US Capitol, interrupting the session of Congress certifying the 2020 Presidential election results.
Immediately after the start of the event, posts related to the riots started to trend on social media.
Our report presents a contrast between the trending content on Parler and Twitter around the time of riots.
arXiv Detail & Related papers (2021-01-18T07:46:14Z)
- Characterizing the roles of bots during the COVID-19 infodemic on Twitter [1.776746672434207]
An infodemic is an emerging phenomenon caused by an overabundance of information online.
We examined the roles of bots in the case of the COVID-19 infodemic and the diffusion of non-credible information.
arXiv Detail & Related papers (2020-11-12T08:04:32Z)
- Understanding the Hoarding Behaviors during the COVID-19 Pandemic using Large Scale Social Media Data [77.34726150561087]
We analyze the hoarding and anti-hoarding patterns of over 42,000 unique Twitter users in the United States from March 1 to April 30, 2020.
We find the percentage of females in both hoarding and anti-hoarding groups is higher than that of the general Twitter user population.
The LIWC anxiety mean for the hoarding-related tweets is significantly higher than the baseline Twitter anxiety mean.
arXiv Detail & Related papers (2020-10-15T16:02:25Z)
- Right and left, partisanship predicts (asymmetric) vulnerability to misinformation [71.46564239895892]
We analyze the relationship between partisanship, echo chambers, and vulnerability to online misinformation by studying news sharing behavior on Twitter.
We find that vulnerability to misinformation is most strongly influenced by partisanship for both left- and right-leaning users.
arXiv Detail & Related papers (2020-10-04T01:36:14Z)
- Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of content produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms like Facebook may elicit the emergence of echo-chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.