TY - JOUR
T1 - Breaking the Silence
T2 - Investigating Which Types of Moderation Reduce Negative Effects of Sexist Social Media Content
AU - Sasse, Julia
AU - Grossklags, Jens
N1 - Publisher Copyright:
© 2023 Owner/Author.
PY - 2023/10/4
Y1 - 2023/10/4
AB - Sexist content is widespread on social media and can reduce women's psychological well-being and their willingness to participate in online discourse, making it a societal issue. To counter these effects, social media platforms employ moderators. To date, little is known about the effectiveness of different forms of moderation in creating a safe space and their acceptance, in particular from the perspective of women as members of the targeted group and users in general (rather than perpetrators). In this research, we propose that some common forms of moderation can be systematized along two facets of visibility, namely visibility of sexist content and of counterspeech. In an online experiment (N = 839), we manipulated these two facets and tested how they shaped social norms, feelings of safety, and intent to participate, as well as fairness, trustworthiness, and efficacy evaluations. In line with our predictions, deletion of sexist content - i.e., its invisibility - and (public) counterspeech - i.e., its visibility - against visible sexist content contributed to creating a safe space. Looking at the underlying psychological mechanism, we found that these effects were largely driven by changes in what was perceived as normative in the presented context. Interestingly, deletion of sexist content was judged as less fair than counterspeech against visible sexist content. Our research contributes to a growing body of literature that highlights the importance of norms in creating safer online environments and provides practical implications for moderators in selecting actions that can be both effective and accepted.
KW - behavior change
KW - computer-mediated communication
KW - gender and identity
KW - quantitative methods
KW - social media and online communities
KW - social networking site design and use
UR - http://www.scopus.com/inward/record.url?scp=85174493675&partnerID=8YFLogxK
U2 - 10.1145/3610176
DO - 10.1145/3610176
M3 - Article
AN - SCOPUS:85174493675
SN - 2573-0142
VL - 7
JO - Proceedings of the ACM on Human-Computer Interaction
JF - Proceedings of the ACM on Human-Computer Interaction
IS - CSCW2
M1 - 3610176
ER -