Please use this identifier to cite or link to this item:
http://arks.princeton.edu/ark:/88435/dsp01n009w504r
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Houck, Andrew | - |
dc.contributor.author | McFadden, Kraig | - |
dc.date.accessioned | 2018-08-20T15:19:54Z | - |
dc.date.available | 2018-08-20T15:19:54Z | - |
dc.date.created | 2018-05-07 | - |
dc.date.issued | 2018-08-20 | - |
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp01n009w504r | - |
dc.description.abstract | The purpose of this work is to develop a novel game artificial intelligence (AI) system. The current standard AI system for action games, the behavior tree, has been in use for over a decade, and players and developers alike are calling for innovation. A new system must meet four goals: adaptability, to allow natural game progression; simplicity, in both content and code; computational feasibility, so that it can run in a commercial game on consumer hardware; and, above all, believability, engagement, and enjoyment for the player, summed up here as "fun factor". To meet these goals, a machine learning model, specifically a neural network, was selected to control the behavior of AI agents in the game. Although neural networks can in principle approximate any function arbitrarily well, practical considerations and limitations remain, chief among them how best to train the network, and the bulk of the engineering work in this thesis was directed at that question. After settling on a feasible network architecture, the network was trained using three methods: genetic algorithms, reinforcement learning, and a hybrid of the two. Data were collected during training to track how agents improved under the fitness function applied to them, and agents were evaluated at regular intervals against control opponents (that is, opponents that behaved deterministically). Player testing was also conducted to gauge player perceptions of the AI agents' believability, difficulty, and fun factor. Every training method produced agents capable of learning and improving in fitness, but the genetic algorithm and the hybrid approach produced the agents that were most enjoyable to play against. The genetic algorithm was also the simplest of the three to implement, because it does not require backpropagation, which in turn reduced the computational power needed to run it. All things considered, a neural network trained with a genetic algorithm appears to be the most promising approach for controlling NPC enemy AI in an action game. (An illustrative sketch of such a training loop follows this metadata record.) | en_US |
dc.format.mimetype | application/pdf | - |
dc.language.iso | en | en_US |
dc.title | Making a Better Opponent: Machine Learning for Video Games | en_US |
dc.type | Princeton University Senior Theses | - |
pu.date.classyear | 2018 | en_US |
pu.department | Electrical Engineering | en_US |
pu.pdf.coverpage | SeniorThesisCoverPage | - |
pu.contributor.authorid | 961026040 | - |
pu.certificate | Applications of Computing Program | en_US |
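
The abstract describes evolving the weights of a neural-network controller with a genetic algorithm, so that training needs no backpropagation. The Python sketch below illustrates what such a loop can look like; it is not code from the thesis, and the network shape, population size, mutation rate, uniform-crossover scheme, and toy `fitness` function are all illustrative assumptions (in the actual work, fitness would come from running each agent in the game).

```python
# Illustrative sketch only: a small feed-forward network whose weights are
# evolved by a genetic algorithm. All sizes, hyperparameters, and the toy
# fitness function are assumptions, not taken from the thesis.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HIDDEN, N_OUT = 8, 16, 4      # assumed observation/action sizes
POP_SIZE, N_GENERATIONS = 50, 100
MUTATION_STD, ELITE_FRAC = 0.1, 0.2

def n_params():
    return N_IN * N_HIDDEN + N_HIDDEN + N_HIDDEN * N_OUT + N_OUT

def policy(weights, obs):
    """Map game observations to action scores with a one-hidden-layer net."""
    i = 0
    w1 = weights[i:i + N_IN * N_HIDDEN].reshape(N_IN, N_HIDDEN); i += N_IN * N_HIDDEN
    b1 = weights[i:i + N_HIDDEN]; i += N_HIDDEN
    w2 = weights[i:i + N_HIDDEN * N_OUT].reshape(N_HIDDEN, N_OUT); i += N_HIDDEN * N_OUT
    b2 = weights[i:i + N_OUT]
    h = np.tanh(obs @ w1 + b1)
    return h @ w2 + b2                # the highest-scoring action is taken

def fitness(weights):
    """Placeholder for 'run the agent in the game and score it'.
    Here: reward policies that pick action 0 on random observations."""
    obs = rng.standard_normal((32, N_IN))
    actions = policy(weights, obs).argmax(axis=1)
    return np.mean(actions == 0)

population = [rng.standard_normal(n_params()) * 0.5 for _ in range(POP_SIZE)]
n_elite = max(1, int(ELITE_FRAC * POP_SIZE))

for gen in range(N_GENERATIONS):
    scores = np.array([fitness(w) for w in population])
    order = np.argsort(scores)[::-1]            # best candidates first
    elites = [population[i] for i in order[:n_elite]]
    children = []
    while len(children) < POP_SIZE - n_elite:
        pa, pb = rng.choice(n_elite, size=2)    # pick two elite parents
        mask = rng.random(n_params()) < 0.5     # uniform crossover
        child = np.where(mask, elites[pa], elites[pb])
        child = child + rng.normal(0, MUTATION_STD, n_params())  # mutate
        children.append(child)
    population = elites + children
    if gen % 10 == 0:
        print(f"gen {gen:3d}  best fitness {scores[order[0]]:.3f}")
```

Because selection only needs a scalar fitness score per candidate, the loop runs forward passes alone and never computes gradients, which is the property the abstract credits for the genetic algorithm's relative simplicity and lower computational cost.
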
Appears in Collections: Electrical Engineering, 1932-2020
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
MCFADDEN-KRAIG-THESIS.pdf | | 1.71 MB | Adobe PDF | Request a copy |
Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.