Why Do People Spread Disinformation on Social Media? The Role of Social Identity and Perceived Morality

Joyner, Laura 2023. Why Do People Spread Disinformation on Social Media? The Role of Social Identity and Perceived Morality. PhD thesis, University of Westminster, School of Social Sciences. https://doi.org/10.34737/w702w

Title: Why Do People Spread Disinformation on Social Media? The Role of Social Identity and Perceived Morality
Type: PhD thesis
Authors: Joyner, Laura
Abstract

“Disinformation” is false or misleading information that is deliberately created or spread. In recent years, social media platforms have been used to rapidly disseminate disinformation for personal, political, and financial gain, or by those wishing to cause harm. Yet how far disinformation spreads digitally also depends on whether other users interact with it, regardless of whether they know it is inaccurate (when spread unknowingly, it is termed “misinformation”).
A series of five studies within the present thesis therefore sought to understand whether social media users are more likely to amplify the spread of misinformation and disinformation that allows them to express their identity and beliefs. It also investigated whether misinformation and disinformation are morally evaluated in the context of social identity, and whether any identity-related adjustments can help explain intentions to spread the content further.
Study 1 used a correlational design to explore whether the degree of belief consistency influenced intentions to digitally interact (like, share privately, share publicly) with misinformation. Participants (N = 218) were presented with a series of 12 misinformation posts about the UK Government’s handling of the COVID-19 pandemic (framed either “favourably” or “unfavourably”) and about the risks of the COVID-19 virus (framed to either “minimise” or “maximise” risk). Related beliefs (i.e. trust in the UK Government’s handling of the pandemic and perceived risk of COVID-19) were also measured. Greater belief consistency predicted increased intentions to interact with the misinformation. After participants were informed that the content was inaccurate, the degree of belief consistency also predicted the moral acceptability of spreading the disinformation. The findings suggest that users may be more lenient towards, and more likely to amplify the spread of, inaccurate content when it is consistent with their beliefs about an issue.
Study 2 examined whether users would be more morally lenient towards misinformation or disinformation that may allow them to make favourable comparisons of their ingroup. London-based Conservative and Labour voters (N = 206) were recruited in the run-up to the 2021 London mayoral election. An experimental 2x2 between-groups design was employed, in which participants were shown a social media post featuring fabricated information that either supported or undermined their own party or the opposition party. Participants were more morally lenient towards spreading misinformation and disinformation that could help their ingroup (i.e. supported their ingroup or undermined an outgroup). However, exploratory analysis suggested that the biased moral judgements of disinformation may have been driven by Conservative voters only.
Study 3 then developed and tested a new scale incorporating digital actions that can potentially help reduce the wider spread of a post, as well as those that may amplify it further. This study replicated study 1 with the new scale (N = 251) and showed that the degree of belief consistency predicted the likelihood of contributing to the onwards spread of misinformation. It also found that users may be more morally lenient towards spreading belief-consistent misinformation, and that such leniency can help explain (but does not entirely mediate) the relationship between belief consistency and spread.
Study 4 used a 2x2 between-groups design to test the effect of message framing (i.e. positive or negative) and fact-check tags on moral evaluations of misinformation, and to understand how any moral leniency may influence intentions to spread. Supporters of five English Premier League teams (N = 262) were recruited and shown inaccurate posts about their own team. Moral judgements were again biased in favour of the ingroup, even when participants were aware the information was untrue, and this leniency helped to explain increased intentions to spread the content further. Participants also provided written explanations for their responses, which were analysed against the Extended Moral Foundations Dictionary. The computational text analysis indicated that engagement with “fairness”-related values differed across the four conditions. Specifically, participants were least likely to consider fairness when presented with positively framed ingroup misinformation, and this reduced consideration of fairness was related to increased moral acceptance of posts generally. Moreover, despite the content being unrelated to politics, political asymmetry was again observed in moral judgements of ingroup-supporting disinformation. The findings indicated that politically left-leaning participants may have been more likely than others to consider fairness when evaluating identity-affirming disinformation.
Finally, study 5 developed two moral reframing interventions aimed at reducing intentions to spread identity-affirming misinformation. These appeals framed the sharing of unverified content as a violation of individualising moral values (fairness, harm) or of binding moral values (loyalty, authority, sanctity), and were tested alongside a pre-existing accuracy nudge intervention in a 3x2x2 between-groups design. Democrat and Republican voters (N = 508) were recruited in the run-up to the 2022 US mid-term elections and shown political misinformation that positively compared their own party to the opposition. Both moral appeals were more effective than the pre-existing accuracy nudge at reducing the moral acceptability of, and intentions to spread, misinformation, but only among Democrat voters. The findings indicate that accuracy nudges may dissuade strong identifiers from amplifying misinformation further but have no influence on moral evaluations. In contrast, any reduced intention to spread misinformation after viewing a moral appeal may be explained by adjustments to the perceived moral acceptability of spreading the content further.
Together, this research demonstrates that moral evaluations of misinformation and disinformation are situational and change in relation to the viewer’s social identity. The present thesis also provides insight into the role of moral cognition in decisions to spread misinformation and disinformation, and may help explain why certain users appear more susceptible to spreading inaccurate content generally.

Year: 2023
File Access Level: Open (open metadata and files)
Project: Why Do People Spread Disinformation on Social Media? The Role of Social Identity and Perceived Morality
Publisher: University of Westminster
Published: 17 Nov 2023
Digital Object Identifier (DOI): https://doi.org/10.34737/w702w

Related outputs

Preprint: Individual differences in sharing false political information on social media: deliberate and accidental sharing, motivations and positive schizotypy
Buchanan, T., Perach, R., Husbands, D., Tout, A., Kostyuk, E., Kempley, J. and Joyner, L. 2024. Preprint: Individual differences in sharing false political information on social media: deliberate and accidental sharing, motivations and positive schizotypy. OSF Preprints. https://doi.org/10.31219/osf.io/hg9qb

Moral leniency towards belief-consistent disinformation may help explain its spread on social media
Joyner, L., Buchanan, T. and Yetkili, O. 2023. Moral leniency towards belief-consistent disinformation may help explain its spread on social media. PLoS ONE. 18 (3) e0281777. https://doi.org/10.1371/journal.pone.0281777

The Online Behaviour Taxonomy: A conceptual framework to understand behaviour in computer-mediated communication
Kaye, L.K., Rousaki, A., Joyner, L.C., Barrett, L.A.F. and Orchard, L.J. 2022. The Online Behaviour Taxonomy: A conceptual framework to understand behaviour in computer-mediated communication. Computers in Human Behavior. 137 107443. https://doi.org/10.1016/j.chb.2022.107443

Permalink - https://westminsterresearch.westminster.ac.uk/item/w702w/why-do-people-spread-disinformation-on-social-media-the-role-of-social-identity-and-perceived-morality

