Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01st74cs71m
Full metadata record
DC Field: Value [Language]

dc.contributor.advisor: Funkhouser, Thomas [en_US]
dc.contributor.author: Boyko, Aleksey Sergey [en_US]
dc.contributor.other: Computer Science Department [en_US]
dc.date.accessioned: 2015-03-26T14:30:41Z
dc.date.available: 2015-03-26T14:30:41Z
dc.date.issued: 2015 [en_US]
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp01st74cs71m
dc.description.abstract: Collecting massive 3D scans of real-world environments has become common practice for many private companies and government agencies. This data represents the real world accurately and densely enough to provide impressive visualizations. However, these scans are merely points. To truly tap the potential of such a precise digital depiction of the world, the scans need to be annotated in terms of the objects the points represent. Manually annotating this data is very costly. Existing machine-aided methods report high accuracies for object localization and segmentation, but the central task of annotation, proper label assignment, remains challenging for these approaches. The goal of this work is to design an interface that streamlines the process of labeling objects in large 3D point clouds. Since automatic methods are inaccurate and manual annotation is tedious, this work assumes that every object's label must be verified by the annotator, and it places the effort required of the user, at no loss of accuracy, at center stage. Inspired by work in the related fields of image, video, and text annotation, by techniques from machine learning, and by perceptual psychology, this work proposes and evaluates three interaction models and annotation interfaces for object labeling in 3D LiDAR scans. The first interface leaves control over the annotation session in the user's hands and offers additional tools, such as online prediction updates, group selection, and filtering, to increase the throughput of information from the user to the machine. In the second interface, non-essential yet time-consuming tasks (e.g., scene navigation, selection decisions) are offloaded to the machine via an active learning approach, reducing the fatigue and distraction these tasks cause the user. Finally, a third, hybrid approach is proposed: a group active interface. It queries objects in groups that are easy to understand and label together, aiming to combine the advantages of the first two interfaces. Empirical evaluation indicates an improvement in annotation time by a factor of 1.7 over the other methods discussed, without loss of accuracy. [en_US]
dc.language.iso: en [en_US]
dc.publisher: Princeton, NJ : Princeton University [en_US]
dc.relation.isformatof: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: http://catalog.princeton.edu [en_US]
dc.subject: annotation interface [en_US]
dc.subject: efficient annotation [en_US]
dc.subject: group active learning [en_US]
dc.subject: lidar [en_US]
dc.subject: object annotation [en_US]
dc.subject: point clouds [en_US]
dc.subject.classification: Computer science [en_US]
dc.title: Efficient interfaces for accurate annotation of 3D point clouds [en_US]
dc.type: Academic dissertations (Ph.D.) [en_US]
pu.projectgrantnumber: 690-2143 [en_US]
Appears in Collections: Computer Science

Files in This Item:
File: Boyko_princeton_0181D_11252.pdf (23.18 MB, Adobe PDF)


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.