Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp0100000285r
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Braverman, Mark | -
dc.contributor.author | Witten, Paul | -
dc.date.accessioned | 2019-07-24T19:43:59Z | -
dc.date.available | 2019-07-24T19:43:59Z | -
dc.date.created | 2019-05-06 | -
dc.date.issued | 2019-07-24 | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp0100000285r | -
dc.description.abstract | Reinforcement Learning has become an important approach to developing Artificial Intelligence for games and systems. Using reinforcement learning, we can train agents for various environments without relying on human intuition or strategies. Many of the most important Artificial Intelligence programs being developed today rely on reinforcement learning, most notably in the context of perfect information games. This paper seeks to apply reinforcement learning to games of imperfect information, such as poker, where reinforcement learning has had less success. Specifically, this paper uses a Deep Q Network, a reinforcement learning algorithm, to train an agent to play a simple poker-like game. Through this process, the agent naturally learns to play the game at the Nash Equilibrium strategy. Because imperfect information games lack an intrinsically optimal strategy, the Nash Equilibrium represents a potential best strategy: it offers guaranteed worst-case performance, since it cannot be exploited by opponents. | en_US
dc.format.mimetype | application/pdf | -
dc.language.iso | en | en_US
dc.title | Learning the Nash Equilibrium Through Reinforcement in Imperfect Information Settings | en_US
dc.type | Princeton University Senior Theses | -
pu.date.classyear | 2019 | en_US
pu.department | Computer Science | en_US
pu.pdf.coverpage | SeniorThesisCoverPage | -
pu.contributor.authorid | 961168145 | -
Appears in Collections: Computer Science, 1988-2020
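The abstract above describes training a Deep Q Network agent toward a Nash Equilibrium strategy in a poker-like game. As a minimal sketch of the core DQN-style update only (this is not the thesis's code; the toy state/action spaces, the linear Q-function, and all names here are illustrative assumptions), the temporal-difference step might look like:

```python
import numpy as np

# Minimal DQN-style temporal-difference update with a linear Q-function.
# The toy environment, hyperparameters, and names are illustrative assumptions.

rng = np.random.default_rng(0)

n_states, n_actions = 4, 2           # tiny toy state/action spaces
W = np.zeros((n_states, n_actions))  # linear "network": Q(s, a) = W[s, a]
gamma, alpha, epsilon = 0.9, 0.1, 0.1

def q_values(state):
    # Q-values for every action in the given state
    return W[state]

def select_action(state):
    # epsilon-greedy exploration over the estimated Q-values
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(q_values(state)))

def dqn_update(state, action, reward, next_state, done):
    # TD target uses the max over next-state Q-values (the Q-learning target
    # that DQN approximates with a neural network and replay buffer)
    target = reward if done else reward + gamma * np.max(q_values(next_state))
    W[state, action] += alpha * (target - W[state, action])

# one illustrative transition: acting in state 0 yields reward 1, episode ends
dqn_update(state=0, action=1, reward=1.0, next_state=0, done=True)
print(W[0, 1])  # estimate moves a step of size alpha toward the target: 0.1
```

A full DQN as described in the abstract would replace the linear table `W` with a neural network trained by gradient descent on the same TD target, plus experience replay; in an imperfect information game the interesting question is whether repeated self-play with such updates converges to the equilibrium strategy.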

Files in This Item:
File | Description | Size | Format
WITTEN-PAUL-THESIS.pdf | | 1.16 MB | Adobe PDF


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.