Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01xg94hs40f
Full metadata record

dc.contributor.advisor: Mayer, Jonathan
dc.contributor.author: Li, Jiayang
dc.date.accessioned: 2019-09-04T17:45:47Z
dc.date.available: 2019-09-04T17:45:47Z
dc.date.created: 2019-05-06
dc.date.issued: 2019-09-04
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp01xg94hs40f
dc.description.abstract: Content moderation governs speech in the digital sphere, controlling the flow of expression online. As Internet platforms grow, they rely on both automated systems and human moderators to review and remove inappropriate text, photos, videos, and other media according to company guidelines. These guidelines, however, contain areas of ambiguity that require human moderators to apply personal judgment, which research has shown inherently involves personal bias. My research investigates whether people exhibit this bias when moderating flagged social media activity. I focus specifically on Facebook and its Community Standards, using two separate surveys of Princeton students: one to see how they rate sample posts against the Standards, and one to gather their demographic information. I found that students of different races, genders, and ideological leanings evaluate the same Facebook post differently depending on which Facebook user is presented as having shared it, and that this difference varies with both the user's and the students' own demographics. I aim to conduct a follow-up study comparing these students' biases with those of Facebook's content moderators, should such biases exist. As content moderation moves toward automation through machine learning and artificial intelligence, my results show that this bias must be addressed before moderation is fully automated. These findings can inform further developments in content moderation and extend the current literature on free speech theory and technology policy for today's online content providers.
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.title: Who Polices the Online Police? Measuring User Profile Bias in Online Content Moderation
dc.type: Princeton University Senior Theses
pu.date.classyear: 2019
pu.department: Computer Science
pu.pdf.coverpage: SeniorThesisCoverPage
pu.contributor.authorid: 960848018
pu.certificate: Program in Technology & Society, Technology Track
Appears in Collections:Computer Science, 1988-2020

Files in This Item:
LI-JIAYANG-THESIS.pdf (15.53 MB, Adobe PDF)


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.