Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01n009w504r
Title: Making a Better Opponent: Machine Learning for Video Games
Authors: McFadden, Kraig
Advisors: Houck, Andrew
Department: Electrical Engineering
Certificate Program: Applications of Computing Program
Class Year: 2018
Abstract: The purpose of this work is to develop a novel game artificial intelligence (AI) system. The current standard for action-game AI, behavior trees, has been in use for over a decade, and players and developers alike are calling for innovation. The new system must meet four goals: adaptability, to allow for natural game progression; simplicity, in both content and code; computational feasibility, so it can run in a commercial game on consumer hardware; and above all "fun factor," meaning believability, engagement, and fun for the player. To meet these goals, a machine learning model, specifically a neural network, was selected to control the AI agents in the game. While neural networks have been shown to approximate any function arbitrarily well, there are practical considerations and limitations, chief among them how best to train the network; the bulk of the engineering work in this thesis was directed at answering that question. After settling on a feasible network architecture, the network was trained using genetic algorithms, reinforcement learning, and a third learning algorithm that was a hybrid of the two. Data were collected during the training process to track how agents improved according to the fitness function applied to them, and at regular intervals during training agents were also evaluated against control opponents (that is, opponents that behaved deterministically). Player testing was conducted as well to gauge player perceptions of the AI agents: their believability, difficulty, and fun factor.
Overall, it was determined that each training method produced agents capable of learning and improving in fitness, but the genetic algorithm and hybrid approaches produced the AI agents that were the most enjoyable to play against. Additionally, the genetic algorithm was the simplest of the three to implement because it does not require backpropagation, which in turn reduced the computational power required to run it. All things considered, a neural network trained with a genetic algorithm seems the most promising approach for controlling NPC enemy AI in an action game.
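The training approach the abstract favors, evolving a neural network's weights with a genetic algorithm rather than backpropagation, can be illustrated with a minimal sketch. This is not the thesis's code: the tiny one-layer network, the placeholder fitness function, and all hyperparameters (population size, mutation rate, and so on) are illustrative assumptions, and a real game agent would compute fitness from gameplay rather than from a fixed target.

```python
import math
import random

def make_genome(n_weights, rng):
    """A genome is simply a flat list of network weights."""
    return [rng.uniform(-1.0, 1.0) for _ in range(n_weights)]

def forward(genome, inputs):
    """Tiny one-layer network: weighted sum -> tanh for each output unit."""
    n_in = len(inputs)
    n_out = len(genome) // n_in
    return [math.tanh(sum(w * x for w, x in
                          zip(genome[o * n_in:(o + 1) * n_in], inputs)))
            for o in range(n_out)]

def fitness(genome):
    """Placeholder fitness: negative squared error against a target policy.

    In the thesis's setting this would instead score the agent's behavior
    in the game (e.g. performance against opponents)."""
    target = [0.5, -0.5]
    out = forward(genome, [1.0, 0.5])
    return -sum((o - t) ** 2 for o, t in zip(out, target))

def evolve(pop_size=30, n_weights=4, generations=50, mut_rate=0.2, seed=0):
    """Evolve network weights with selection, crossover, and mutation.

    No gradients are computed anywhere, which is the practical appeal
    noted in the abstract: no backpropagation machinery is needed."""
    rng = random.Random(seed)
    pop = [make_genome(n_weights, rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]              # selection: keep the fitter half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_weights)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [w + rng.gauss(0, 0.3) if rng.random() < mut_rate else w
                     for w in child]              # Gaussian mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

Because fitness is only ever *evaluated*, never differentiated, the same loop works for any agent scoring scheme, which is one reason the abstract finds the genetic algorithm both simpler and cheaper than the gradient-based alternatives.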
URI: http://arks.princeton.edu/ark:/88435/dsp01n009w504r
Type of Material: Princeton University Senior Theses
Language: en
Appears in Collections: Electrical Engineering, 1932-2020

Files in This Item:
MCFADDEN-KRAIG-THESIS.pdf (1.71 MB, Adobe PDF)


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.