Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp011544bp214
Title: Accurate, Robust and Structure-Aware Hair Capture
Authors: Luo, Linjie
Advisors: Rusinkiewicz, Szymon M
Contributors: Computer Science Department
Keywords: 3D reconstruction
computer graphics
hair capture
multi-view stereo
Subjects: Computer science
Issue Date: 2013
Publisher: Princeton, NJ : Princeton University
Abstract: Hair is one of the most distinctive human features and an important component of digital human models. However, capturing high-quality hair models from real hairstyles remains difficult because of the challenges arising from hair's unique characteristics: its view-dependent specular appearance, its geometric complexity, and the high variability of real hairstyles. In this thesis, we address these challenges toward the goal of accurate, robust, and structure-aware hair capture. We first propose an orientation-based matching metric to replace the conventional color-based metric for multi-view stereo reconstruction of hair. Our key insight is that while color appearance is view-dependent due to hair's specularity, orientation is more robust across views. Orientation similarity also identifies homogeneous hair structures that enable structure-aware aggregation along structural continuities. Compared to color-based methods, our method minimizes the reconstruction artifacts due to specularity and faithfully recovers detailed hair structures in the reconstruction results. Next, we introduce a system with a more flexible capture setup that requires only 8 camera views to capture complete hairstyles. Our key insight is that the strand is a better aggregation unit for robust stereo matching against ambiguities in wide-baseline setups, because it models hair's characteristic strand-like structural continuity. The reconstruction is driven by a strand-based refinement that optimizes a set of 3D strands for cross-view orientation consistency and iteratively refines the reconstructed shape from the visual hull. We are able to reconstruct complete hair models for a variety of hairstyles with an accuracy of about 3 mm, as evaluated on synthetic datasets. Finally, we propose a method that reconstructs coherent and plausible wisps, aware of the underlying hair structures, from a set of input images.
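The orientation-based matching idea can be illustrated with a minimal sketch. The key subtlety is that hair orientation is directionless (a strand at 10 degrees is the same as one at 190 degrees), so angular differences must be taken modulo pi. The function names and the simple per-pixel cost below are illustrative assumptions, not the thesis's actual formulation, which aggregates costs along structural continuities.

```python
import numpy as np

def orientation_distance(theta_a, theta_b):
    """Angular distance between two hair orientations in radians.
    Orientations are directionless, so the distance is computed
    modulo pi and is at most pi/2."""
    d = np.abs(theta_a - theta_b) % np.pi
    return np.minimum(d, np.pi - d)

def orientation_matching_cost(theta_ref, theta_src):
    """Hypothetical per-pixel matching cost between corresponding
    pixels of two views, replacing a color-difference cost in
    multi-view stereo: small when the projected orientations agree
    across views, large when they conflict."""
    return orientation_distance(theta_ref, theta_src)
```

For example, two pixels whose orientations differ by exactly pi (opposite arrow directions, same line) incur zero cost, which is exactly the invariance that makes orientation robust where specular color is not.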
The system first discovers locally coherent wisp structures and then uses a novel graph data structure to reason about both the connectivity and the directions of the local wisp structures in a global optimization. The wisps are then completed and used to synthesize hair strands that are robust to occlusion and missing data and plausible for animation and simulation. We show reconstruction results for a variety of complex hairstyles, including curly, wispy, and messy hair.
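The direction-reasoning step on the wisp graph can be sketched as follows. This is a toy greedy propagation, not the thesis's global optimization: nodes are local wisp segments, edges carry the dot product of the two segments' (unsigned) tangents, and a negative dot product means the neighbor's tangent must be flipped to keep growth directions consistent. All names here are hypothetical.

```python
from collections import deque

def propagate_directions(num_wisps, edges, seed=0):
    """Assign a consistent growth direction (+1 or -1) to each wisp
    segment by breadth-first propagation from a seed wisp.
    edges: list of (i, j, dot) where `dot` is the dot product of the
    unsigned tangent vectors of wisps i and j. Unreached wisps keep 0."""
    adj = {i: [] for i in range(num_wisps)}
    for i, j, dot in edges:
        adj[i].append((j, dot))
        adj[j].append((i, dot))
    sign = [0] * num_wisps
    sign[seed] = 1
    queue = deque([seed])
    while queue:
        u = queue.popleft()
        for v, dot in adj[u]:
            if sign[v] == 0:
                # Agreeing tangents share a direction; opposing ones flip.
                sign[v] = sign[u] if dot >= 0 else -sign[u]
                queue.append(v)
    return sign
```

A global optimization (as the abstract describes) would instead score all edge agreements jointly, which is more robust when the graph contains conflicting cycles; the greedy version above only conveys the shape of the problem.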
URI: http://arks.princeton.edu/ark:/88435/dsp011544bp214
Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Computer Science

Files in This Item:
File: Luo_princeton_0181D_10645.pdf
Size: 39.53 MB
Format: Adobe PDF


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.