TY - JOUR
T1 - Who Can Say What?
T2 - Testing the Impact of Interpersonal Mechanisms and Gender on Fairness Evaluations of Content Moderation
AU - Weber, Ina
AU - Gonçalves, João
AU - Da Silva, Marisa Torres
AU - Masullo, Gina M.
AU - Hofhuis, Joep
N1 - info:eu-repo/grantAgreement/FCT/6817 - DCRRNI ID/UIDB%2F05021%2F2020/PT#
info:eu-repo/grantAgreement/FCT/6817 - DCRRNI ID/UIDP%2F05021%2F2020/PT#
UIDB/05021/2020
UIDP/05021/2020
PY - 2024/10
Y1 - 2024/10
N2 - Content moderation is commonly used by social media platforms to curb the spread of hateful content. Yet, little is known about how users perceive this practice and which factors may influence their perceptions. Publicly denouncing content moderation—for example, portraying it as a limitation to free speech or as a form of political targeting—may play an important role in this context. Evaluations of moderation may also depend on interpersonal mechanisms triggered by perceived user characteristics. In this study, we disentangle these different factors by examining how the gender, perceived similarity, and social influence of a user publicly complaining about a content-removal decision shape evaluations of moderation. In an experiment (n = 1,586) conducted in the United States, the Netherlands, and Portugal, participants witnessed the moderation of a hateful post, followed by a publicly posted complaint about moderation by the affected user. Evaluations of the fairness, legitimacy, and bias of the moderation decision were measured, as well as perceived similarity and social influence as mediators. The results indicate that arguments about freedom of speech significantly lower the perceived fairness of content moderation. Factors such as the social influence of the moderated user impacted outcomes differently depending on the moderated user’s gender. We discuss implications of these findings for content-moderation practices.
KW - Content moderation
KW - Free speech
KW - Gender stereotypes
KW - Online hate
KW - Social influence
UR - https://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=nova_api&SrcAuth=WosAPI&KeyUT=WOS:001363672800001&DestLinkType=FullRecord&DestApp=WOS_CPL
UR - http://www.scopus.com/inward/record.url?scp=85210383556&partnerID=8YFLogxK
U2 - 10.1177/20563051241286702
DO - 10.1177/20563051241286702
M3 - Article
SN - 2056-3051
VL - 10
SP - 1
EP - 15
JO - Social Media + Society
JF - Social Media + Society
IS - 4
ER -