Title: Multimodal Hate Speech Recognition
Authors: Johnson, Kyle
Advisors: Fellbaum, Christiane D
Department: Electrical Engineering
Class Year: 2021
Abstract: In the past few years, social media platforms have taken a more active role in moderating content to combat hate speech. Although these platforms have traditionally relied on human moderators to respond to user-reported violations, they increasingly use automatic systems to flag policy-violating content and proactively remove it. Usually, these systems are unimodal: they process text and images separately. Multimodal systems, which process text and images jointly, identify hate speech much more accurately. In this paper, we propose two new visio-linguistic models for classifying hate speech. We evaluate our models on a dataset of internet memes containing hate speech published by Facebook. Our models outperform previously employed unimodal models and other state-of-the-art multimodal language models.
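The abstract contrasts unimodal pipelines with multimodal ones that process text and images jointly. As a rough illustration only (not the thesis's actual models), one generic multimodal approach is late fusion: concatenate per-modality feature vectors and score the fused vector with a linear layer. All function names, feature values, and weights below are assumptions made for this sketch.

```python
# Illustrative late-fusion sketch, assuming precomputed per-modality
# feature vectors. This is a generic technique, not the models
# proposed in the thesis.

def fuse_and_score(text_features, image_features, weights, bias):
    """Concatenate modality features and apply a linear scoring layer."""
    fused = text_features + image_features  # list concatenation = fusion
    return sum(w * x for w, x in zip(weights, fused)) + bias

def classify(score, threshold=0.0):
    """Label content as hateful when the fused score exceeds a threshold."""
    return "hateful" if score > threshold else "benign"

# Toy 2-dimensional features per modality (hypothetical values).
text_vec = [0.9, 0.1]
image_vec = [0.8, 0.2]
w = [1.0, -0.5, 1.0, -0.5]  # one weight per fused dimension
b = -1.0

score = fuse_and_score(text_vec, image_vec, w, b)
print(classify(score))
```

A jointly trained visio-linguistic model goes further than this sketch by letting the two modalities interact during feature extraction rather than only at the final scoring step.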
URI: http://arks.princeton.edu/ark:/88435/dsp010r9676829
Type of Material: Princeton University Senior Theses
Language: en
Appears in Collections: Electrical and Computer Engineering, 1932-2023

Files in This Item:
File: JOHNSON-KYLE-THESIS.pdf
Size: 980.99 kB
Format: Adobe PDF


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.