Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp011544bp214
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Rusinkiewicz, Szymon M | en_US
dc.contributor.author | Luo, Linjie | en_US
dc.contributor.other | Computer Science Department | en_US
dc.date.accessioned | 2013-09-16T17:27:10Z | -
dc.date.available | 2013-09-16T17:27:10Z | -
dc.date.issued | 2013 | en_US
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp011544bp214 | -
dc.description.abstract | Hair is one of the most distinctive human features and an important component of digital human models. However, capturing high-quality hair models from real hairstyles remains difficult because of the challenges arising from hair's unique characteristics: its view-dependent specular appearance, its geometric complexity, and the high variability of real hairstyles. In this thesis, we address these challenges toward the goal of accurate, robust, and structure-aware hair capture. We first propose an orientation-based matching metric to replace the conventional color-based metric for multi-view stereo reconstruction of hair. Our key insight is that while color appearance is view-dependent due to hair's specularity, orientation is more robust across views. Orientation similarity also identifies homogeneous hair structures that enable structure-aware aggregation along structural continuities. Compared to color-based methods, our method minimizes reconstruction artifacts due to specularity and faithfully recovers detailed hair structures in the reconstruction results. Next, we introduce a system with a more flexible capture setup that requires only 8 camera views to capture complete hairstyles. Our key insight is that the strand is a better aggregation unit for robust stereo matching against the ambiguities of wide-baseline setups, because it models hair's characteristic strand-like structural continuity. The reconstruction is driven by a strand-based refinement that optimizes a set of 3D strands for cross-view orientation consistency and iteratively refines the reconstructed shape from the visual hull. We are able to reconstruct complete hair models for a variety of hairstyles with an accuracy of about 3 mm, evaluated on synthetic datasets. Finally, we propose a method that reconstructs coherent, plausible wisps that respect the underlying hair structures from a set of input images. The system first discovers locally coherent wisp structures and then uses a novel graph data structure to reason about both the connectivity and the directions of the local wisp structures in a global optimization. The completed wisps are then used to synthesize hair strands that are robust against occlusion and missing data and plausible for animation and simulation. We show reconstruction results for a variety of complex hairstyles including curly, wispy, and messy hair. | en_US
dc.language.iso | en | en_US
dc.publisher | Princeton, NJ : Princeton University | en_US
dc.relation.isformatof | The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: http://catalog.princeton.edu | en_US
dc.subject | 3D reconstruction | en_US
dc.subject | computer graphics | en_US
dc.subject | hair capture | en_US
dc.subject | multi-view stereo | en_US
dc.subject.classification | Computer science | en_US
dc.title | Accurate, Robust and Structure-Aware Hair Capture | en_US
dc.type | Academic dissertations (Ph.D.) | en_US
pu.projectgrantnumber | 690-2143 | en_US
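The abstract above centers on an orientation-based matching metric for multi-view stereo of hair. Below is a minimal sketch of that idea in Python: a per-pixel orientation field is estimated with a bank of oriented (Gabor-style) filters, and two pixels are compared by a pi-periodic orientation distance rather than a color difference. All function names, parameter values, and the filter choice are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np
from scipy import ndimage

def gabor_kernel(theta, sigma=2.0, lam=4.0, size=15):
    """Real-valued Gabor kernel tuned to line structures at angle theta.
    (Illustrative; the thesis may use a different filter bank.)"""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)   # along the stroke
    yr = -x * np.sin(theta) + y * np.cos(theta)  # across the stroke
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * yr / lam)

def orientation_field(img, n_angles=32):
    """Dominant local orientation (in [0, pi)) at each pixel of a
    grayscale float image, taken as the angle of maximum filter response."""
    best_resp = np.full(img.shape, -np.inf)
    best_theta = np.zeros(img.shape)
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        resp = np.abs(ndimage.convolve(img, gabor_kernel(theta), mode='nearest'))
        better = resp > best_resp
        best_resp[better] = resp[better]
        best_theta[better] = theta
    return best_theta

def orientation_distance(theta_a, theta_b):
    """Distance between two line orientations. Orientations are
    pi-periodic (0 and pi describe the same line), so the distance
    wraps at pi rather than 2*pi."""
    d = np.abs(theta_a - theta_b) % np.pi
    return np.minimum(d, np.pi - d)

# Matching cost between a pixel in view A and a candidate match in
# view B: compare orientations, not colors, since orientation is far
# more stable than specular hair color across viewpoints.
# cost = orientation_distance(theta_A[y, x], theta_B[yb, xb])
```

Conceptually, such a term would stand in for the usual photo-consistency cost (e.g., a sum of squared color differences) in a multi-view stereo pipeline; aggregating it along directions of similar orientation is what the abstract calls structure-aware aggregation.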
Appears in Collections: Computer Science

Files in This Item:
File | Description | Size | Format
Luo_princeton_0181D_10645.pdf | | 39.53 MB | Adobe PDF


Items in DataSpace are protected by copyright, with all rights reserved, unless otherwise indicated.