Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp011544bs147
Full metadata record
DC Field: Value
dc.contributor.advisor: Verma, Naveen
dc.contributor.author: Pattnaik, Akash
dc.date.accessioned: 2020-10-02T21:30:22Z
dc.date.available: 2020-10-02T21:30:22Z
dc.date.created: 2020-05-03
dc.date.issued: 2020-10-02
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp011544bs147
dc.description.abstract: Human activity recognition has emerged as a breakthrough technology with applications in smart environments, healthcare, and global security. Previous work has relied on either remote sensing, such as images and videos, or physically integrated (PI) sensing, including location and biometrics, to design predictive models for human activity. Even when multiple sensor modalities are deployed in the same space, modern human activity recognition methods lack meaningful ways to integrate them while preserving each sensor's structured data for enhanced performance. In this study, we consider a sub-class of human activity detection problems based on human-object interactions, using a simulated home-environment dataset with vision data in the form of 400-by-400-pixel images and PI sensing data consisting of 2-dimensional real-world coordinates of humans and key objects. Combining both modalities, we propose a sensor-fusion approach for deep learning that, unlike previous work, preserves the spatial qualities of both modalities during model design and training. We present several approaches for representing PI sensor information as spatial maps and train a fusion-based deep convolutional neural network that predicts human activity with more than 90% accuracy while requiring 6.96 times less data than a model trained on image data alone. We also show that the optimal insertion point for PI spatial maps is immediately before the network's last convolutional layer, and that inserting the maps only there trains as well as inserting them at every hidden convolutional layer.
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.title: Investigating Sensor Fusion Models for Human Activity Recognition
dc.type: Princeton University Senior Theses
pu.date.classyear: 2020
pu.department: Electrical Engineering
pu.pdf.coverpage: SeniorThesisCoverPage
pu.contributor.authorid: 961198753
pu.certificate: Applications of Computing Program
Appears in Collections:Electrical Engineering, 1932-2020
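The abstract above describes representing PI coordinate readings as spatial maps before fusing them into a convolutional network. A minimal sketch of such a rasterization, assuming illustrative parameters (grid size, world extent, and Gaussian width are not specified in this record):

```python
import numpy as np

def pi_spatial_map(coords, grid=50, extent=10.0, sigma=0.5):
    """Rasterize 2-D real-world (x, y) coordinates into a grid x grid
    spatial map by placing a unit-height Gaussian bump at each location.
    All parameter values here are assumptions for illustration only."""
    cell = extent / grid
    ys, xs = np.mgrid[0:grid, 0:grid]
    wx = (xs + 0.5) * cell  # world x-coordinate of each cell centre
    wy = (ys + 0.5) * cell  # world y-coordinate of each cell centre
    m = np.zeros((grid, grid))
    for cx, cy in coords:
        m += np.exp(-((wx - cx) ** 2 + (wy - cy) ** 2) / (2 * sigma ** 2))
    return m

# Hypothetical scene: one person at (2, 3) and one key object at (7, 7)
m = pi_spatial_map([(2.0, 3.0), (7.0, 7.0)])
```

A map like this could then be concatenated as an extra channel alongside image-derived feature maps (e.g. `np.concatenate([features, m[None]], axis=0)`), which is one plausible reading of the fusion-by-insertion idea in the abstract.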

Files in This Item:
File: PATTNAIK-AKASH-THESIS.pdf (2.7 MB, Adobe PDF)


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.