Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp012f75rc09f
Full metadata record
dc.contributor.advisor: Chetty, Marshini
dc.contributor.author: Mathur, Arunesh
dc.contributor.other: Computer Science Department
dc.date.accessioned: 2020-11-20T05:59:39Z
dc.date.available: 2020-11-20T05:59:39Z
dc.date.issued: 2020
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp012f75rc09f
dc.description.abstract: Powerful and otherwise trustworthy actors on the web gain from manipulating users and pushing them into making sub-optimal decisions. While prior work has documented examples of such manipulative practices, we lack a systematic understanding of their characteristics and their prevalence on the web. Building up this knowledge can lead to solutions that protect individuals and society from their harms. In this dissertation, I focus on manipulative practices that manifest in the user interface. I first describe the attributes of manipulative user interfaces. I show that these interfaces engineer users' choice architectures by either modifying the information available to users, or by modifying the set of choices available to users, eliminating and suppressing choices that disadvantage the manipulator. I then present the core contribution of this dissertation: automated methods that combine web automation and machine learning to identify manipulative interfaces at scale on the web. Using these methods, I conduct three measurements. First, I examine the extent to which content creators fail to disclose their endorsements on social media, misleading users into believing they are viewing unbiased, non-advertising content. Collecting and analyzing a dataset of 500K YouTube videos and 2 million Pinterest pins, I discover that ~90% of these endorsements go undisclosed. Second, I quantify the prevalence of dark patterns on shopping websites. Analyzing data I collected from 11K shopping websites, I discover 1,818 dark patterns on 1,254 websites that mislead, deceive, or coerce users into making more purchases or disclosing more information than they would otherwise. Finally, I quantify the prevalence of dark patterns and clickbait in political emails. Collecting and analyzing a dataset of over 100K emails from U.S. political campaigns and organizations in the 2020 election cycle, I find that ~40% of emails sent by the median campaign/organization contain these manipulative interfaces. I conclude by describing how the lessons learned from these measurements can inform technical defenses and policy recommendations that mitigate the spread of these interfaces.
dc.language.iso: en
dc.publisher: Princeton, NJ : Princeton University
dc.relation.isformatof: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: http://catalog.princeton.edu
dc.subject.classification: Computer science
dc.title: Identifying and measuring manipulative user interfaces at scale on the web
dc.type: Academic dissertations (Ph.D.)
Appears in Collections: Computer Science

Files in This Item:
File: Mathur_princeton_0181D_13506.pdf
Size: 7.36 MB
Format: Adobe PDF


Items in DataSpace are protected by copyright, with all rights reserved, unless otherwise indicated.