Title: Learning the Nash Equilibrium Through Reinforcement in Imperfect Information Settings
Authors: Witten, Paul
Advisors: Braverman, Mark
Department: Computer Science
Class Year: 2019
Abstract: Reinforcement learning has become an important approach to developing artificial intelligence for games and systems. Using reinforcement learning, we can train agents for various environments without relying on human intuition or strategies. Many of the most important artificial intelligence programs being developed today rely on reinforcement learning, most notably in the context of perfect information games. This paper seeks to apply reinforcement learning to games of imperfect information, such as poker, where it has had less success. Specifically, this paper uses a Deep Q Network, a reinforcement learning algorithm, to train an agent to play a simple poker-like game. Through this process the agent naturally learns to play the game at the Nash Equilibrium strategy. Because imperfect information games lack an intrinsically optimal strategy, the Nash Equilibrium represents a potential best strategy: it offers a guaranteed worst-case performance, since it cannot be exploited by opponents.
URI: http://arks.princeton.edu/ark:/88435/dsp0100000285r
Type of Material: Princeton University Senior Theses
Language: en
Appears in Collections: Computer Science, 1988-2020
Files in This Item:
File | Description | Size | Format | Access
---|---|---|---|---
WITTEN-PAUL-THESIS.pdf | | 1.16 MB | Adobe PDF | Request a copy
Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.
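The abstract above names a Deep Q Network as the training algorithm, but the thesis file itself is access-restricted, so as a purely illustrative aid here is a minimal sketch of a standard DQN update step in PyTorch. Everything in it (the QNetwork architecture, the 4-dimensional observation, the 2-action betting game, the dqn_update helper and its hyperparameters) is an assumption made for this example, not the author's implementation.

```python
# A minimal sketch of a Deep Q Network update, assuming PyTorch. The 4-dimensional
# observation, 2 actions (e.g. check/bet), and all identifiers (QNetwork,
# dqn_update, replay, ...) are illustrative assumptions, not the thesis code.
import random
from collections import deque

import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Small fully connected network mapping an observation to action values."""

    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


obs_dim, n_actions, gamma = 4, 2, 0.99
q_net = QNetwork(obs_dim, n_actions)
target_net = QNetwork(obs_dim, n_actions)   # periodically synced copy of q_net
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)               # stores (obs, action, reward, next_obs, done)


def dqn_update(batch_size: int = 32) -> None:
    """One gradient step on the standard DQN temporal-difference loss."""
    if len(replay) < batch_size:
        return
    batch = random.sample(list(replay), batch_size)
    obs, action, reward, next_obs, done = (
        torch.tensor(column, dtype=torch.float32) for column in zip(*batch)
    )
    # Q(s, a) for the actions actually taken.
    q_values = q_net(obs).gather(1, action.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrapped target r + gamma * max_a' Q_target(s', a'), zeroed at terminal states.
        target = reward + gamma * (1.0 - done) * target_net(next_obs).max(dim=1).values
    loss = nn.functional.mse_loss(q_values, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()


# Usage: append transitions gathered from play against the environment, then update.
replay.append(([1.0, 0.0, 0.5, 0.0], 1, -1.0, [1.0, 0.0, 0.5, 1.0], 1.0))
dqn_update()  # returns early until the buffer holds a full batch
```

In a full training setup along the lines the abstract describes, an exploration policy such as epsilon-greedy and periodic synchronization of the target network would sit on top of this update loop.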