Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01z603r1262
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Daw, Nathaniel | -
dc.contributor.author | Kim, Ji-Sung | -
dc.date.accessioned | 2019-07-24T18:16:10Z | -
dc.date.available | 2020-07-01T09:19:17Z | -
dc.date.created | 2019-05-13 | -
dc.date.issued | 2019-07-24 | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp01z603r1262 | -
dc.description.abstract | In 2016, Wang et al. and Duan et al. demonstrated that meta-learning can emerge in deep neural networks by showing that recurrent neural networks (RNNs) can be trained to implement reinforcement learning (RL) strategies for different families of tasks. This introduction of deep meta-reinforcement learning (deep meta-RL) has had significant implications not only in machine learning but also in neuroscience and behavioral psychology. Psychologists and neuroscientists are primarily concerned with how meta-learning works in the brain and are therefore interested in interpretable computational models of behavior and cognition. As a result, to maximize the utility of deep meta-RL for neuroscience and behavioral psychology, it is important to attain a mechanistic understanding of the learning strategy acquired by deep meta-RL agents (and implemented by the underlying RNNs). In general, understanding the behavior and mechanisms of deep neural networks remains an open problem in machine learning. Although deep neural networks are notoriously difficult to interpret (even more so for RNNs), we make tangible progress toward characterizing the underlying RNNs that implement the RL algorithms learned by deep meta-RL agents (meta-RNNs) on a family of bandit tasks (a minimal sketch of this setup appears below, after the metadata record). We demonstrate certain learning properties exhibited by these meta-learned RL strategies and elucidate the structure of the hidden state space used by the meta-RNNs. We also show that simple linear approximations (in the form of linear state machines) can be derived from deep meta-RL agents and achieve a high degree of replication accuracy. Our work serves as a bridge between demonstrating that meta-learning strategies can be implemented by computational models (e.g., deep neural networks) and applying those models to understanding how meta-learning works in the human brain. | en_US
dc.format.mimetype | application/pdf | -
dc.language.iso | en | en_US
dc.title | Characterization of learning algorithms learned by deep meta-reinforcement learning agents | en_US
dc.type | Princeton University Senior Theses | -
pu.embargo.terms | 2020-07-01 | -
pu.date.classyear | 2019 | en_US
pu.department | Computer Science | en_US
pu.pdf.coverpage | SeniorThesisCoverPage | -
dc.rights.accessRights | Walk-in Access. This thesis can only be viewed on computer terminals at the Mudd Manuscript Library (http://mudd.princeton.edu). | -
pu.contributor.authorid | 960824030 | -
pu.mudd.walkin | yes | en_US
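
For readers wanting a concrete picture of the deep meta-RL setup described in the abstract, the following is a minimal sketch in PyTorch (an illustrative assumption, not the thesis's actual code; all module and variable names are hypothetical) of the agent style introduced by Wang et al. and Duan et al.: an LSTM policy receives its own previous action and reward as input, and because the bandit's arm probabilities are resampled every episode, an RL strategy must emerge in the recurrent dynamics rather than in the fixed weights.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaRNNAgent(nn.Module):
    """LSTM policy whose inputs are the previous action (one-hot) and reward."""
    def __init__(self, n_arms, hidden_size=48):
        super().__init__()
        self.hidden_size = hidden_size
        self.rnn = nn.LSTMCell(n_arms + 1, hidden_size)
        self.policy = nn.Linear(hidden_size, n_arms)

    def forward(self, prev_action, prev_reward, state):
        x = torch.cat([prev_action, prev_reward], dim=-1)
        h, c = self.rnn(x, state)
        return self.policy(h), (h, c)

n_arms, horizon = 2, 100
agent = MetaRNNAgent(n_arms)

# One episode: a fresh two-armed Bernoulli bandit is drawn, so good performance
# requires the network to explore, infer the better arm, and exploit it;
# that is, to run an RL algorithm inside its hidden state.
arm_probs = torch.rand(n_arms)
state = (torch.zeros(1, agent.hidden_size), torch.zeros(1, agent.hidden_size))
prev_action = torch.zeros(1, n_arms)
prev_reward = torch.zeros(1, 1)
for t in range(horizon):
    logits, state = agent(prev_action, prev_reward, state)
    action = torch.distributions.Categorical(logits=logits).sample()
    reward = torch.bernoulli(arm_probs[action])
    prev_action = F.one_hot(action, n_arms).float()
    prev_reward = reward.view(1, 1)
# Training (omitted) would apply a policy-gradient loss such as A2C across many
# episodes, each with newly sampled arm_probs.

In the thesis's framing, the characterization problem is then to interpret the learned hidden-state dynamics of such a trained agent, for example by fitting linear state machines to them.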
Appears in Collections: Computer Science, 1988-2020

Files in This Item:
File | Description | Size | Format
KIM-JI-SUNG-THESIS.pdf | - | 2.96 MB | Adobe PDF (Request a copy)


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.