Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01tx31qm09c
Full metadata record
DC Field: Value
dc.contributor.advisor: Kulkarni, Sanjeev
dc.contributor.advisor: Xiao, Jianxiong
dc.contributor.author: Xu, Pingmei
dc.contributor.other: Electrical Engineering Department
dc.date.accessioned: 2016-03-29T20:31:07Z
dc.date.available: 2016-03-29T20:31:07Z
dc.date.issued: 2016
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp01tx31qm09c
dc.description.abstract: An understanding of how the human visual system works is essential for many applications in computer vision, computer graphics, computational photography, psychology, sociology, and human-computer interaction. To give the research community easier, cheaper access to eye tracking data for developing and evaluating computational models of human visual attention, this thesis introduces a webcam-based gaze tracking system that supports large-scale eye tracking deployed on a crowdsourcing platform. Using this tool, we also provide a benchmark data set for quantitatively comparing existing and future models for saliency prediction. To explore where people look while performing complicated tasks in an interactive environment, we introduce a method to synthesize user interface layouts, present a computational model to predict users' spatio-temporal visual attention for graphical user interfaces, and show that our model outperforms existing methods. In addition, we explore how visual stimuli affect brain signals extracted by fMRI. Our tool for crowdsourced eye tracking, our large data set for scene image saliency, our models for user interface layout synthesis and visual attention prediction, and our study of stimulus-driven changes in brain connectivity should be useful resources for future researchers creating more powerful computational models of human visual attention.
dc.language.iso: en
dc.publisher: Princeton, NJ : Princeton University
dc.relation.isformatof: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: http://catalog.princeton.edu/
dc.subject: eye tracking
dc.subject: lasso
dc.subject: user interface design
dc.subject: visual attention
dc.subject.classification: Electrical engineering
dc.subject.classification: Computer science
dc.title: UNDERSTANDING AND PREDICTING HUMAN VISUAL ATTENTION
dc.type: Academic dissertations (Ph.D.)
pu.projectgrantnumber: 690-2143
Appears in Collections: Electrical Engineering

Files in This Item:
File: Xu_princeton_0181D_11642.pdf
Size: 82.76 MB
Format: Adobe PDF


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.