Title: Improving Sound Separation and Localization Using Audio-Visual Scene Analysis
Abstract: This paper details the design of a self-supervised model for sound separation and localization that capitalizes on the natural correspondence between the audio and visual modalities of video. Because the auditory and visual components are temporally aligned, our deep learning approach can fuse the two signals to jointly learn the tasks of separation and localization. For every pixel region in a video, a binary mask is predicted and overlaid on a spectrogram representation of the input audio to estimate the sound coming from that region. To train our neural network, we employ a mix-and-separate framework that synthetically creates training data from our dataset of stabilized videos. Our joint audio-visual model achieved high performance, demonstrating the success of the proposed architecture in separating and localizing sound in videos.
Type of Material: Princeton University Senior Theses
Appears in Collections: Computer Science, 1988-2020
Files in This Item:
YOON-PHILLIP-THESIS.pdf (2.33 MB, Adobe PDF)
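The mix-and-separate framework described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not code from the thesis: the spectrogram shapes, the use of magnitude spectrograms, and the dominance-based definition of the ground-truth binary mask are all assumptions for the sake of the example. The key idea it shows is that mixing two known single-source clips yields a synthetic mixture for which target masks come for free, and that a binary mask overlaid on the mixture spectrogram estimates one source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical magnitude spectrograms (freq bins x time frames) standing in
# for the audio tracks of two single-source video clips.
spec_a = np.abs(rng.standard_normal((256, 100)))
spec_b = np.abs(rng.standard_normal((256, 100)))

# Mix-and-separate: synthesize a mixture from known sources, so the
# ground-truth separation targets are known by construction.
mixture = spec_a + spec_b

# Ground-truth binary mask for source A: 1 where A dominates the mixture
# (one common convention; the thesis's exact mask definition may differ).
mask_a = (spec_a >= spec_b).astype(np.float32)

# A predicted mask (here the ground truth itself, for illustration) is
# overlaid on the mixture spectrogram to estimate the separated source.
est_a = mask_a * mixture

print(est_a.shape)
```

In training, the network would predict `mask_a` from the video's visual features and the mixture spectrogram, and the known mask (or the known source spectrogram) would supervise it, so no human annotation is needed.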
Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.