Please use this identifier to cite or link to this item:
http://arks.princeton.edu/ark:/88435/dsp0100000304q
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Soner, Mete | |
dc.contributor.author | Kim, Russell | |
dc.date.accessioned | 2020-09-30T14:18:16Z | - |
dc.date.available | 2020-09-30T14:18:16Z | - |
dc.date.created | 2020-05-05 | |
dc.date.issued | 2020-09-30 | - |
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp0100000304q | - |
dc.description.abstract | Option pricing is a notoriously difficult subject, both mathematically and empirically. However, the recent rise of reinforcement learning has opened a new avenue for pricing options through machine learning. This paper explores and extends the discrete-time option pricing model of Halperin [11]. More specifically, we focus on the well-known off-policy Q-Learning algorithm. To recast the problem of pricing a Black-Scholes-based derivative, we depart from the academic continuous-time limit and use a discretized version of the classical Black-Scholes model to construct a risk-neutral Markov Decision Process (MDP). In this formulation, the option price is an optimal Q-function of the MDP. Q-Learning then makes it possible to price and hedge an option even when the stock dynamics, such as the volatility, are unknown. The model makes no assumptions about the structure of the data it is given, and it is guaranteed to converge asymptotically to the optimal solution given enough data and time. Empirical results show that European option prices come very close to the classical Black-Scholes values, although notable differences and error deviations appear when controlling for different factors. The model also extends readily to practically any type of financial option with a tractable or estimable payoff function. Extensive numerical testing is performed, and specific recommendations for further research are given. (A minimal illustrative sketch of the pricing recursion follows the metadata table below.) | |
dc.format.mimetype | application/pdf | |
dc.language.iso | en | |
dc.title | A Reinforcement Learning Based Approach to Pricing and Hedging Financial Derivatives | |
dc.type | Princeton University Senior Theses | |
pu.date.classyear | 2020 | |
pu.department | Operations Research and Financial Engineering | |
pu.pdf.coverpage | SeniorThesisCoverPage | |
pu.contributor.authorid | 961165476 | |
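The abstract describes pricing as a backward recursion on a risk-neutral MDP built from a discretized Black-Scholes model, with the option price given by an optimal Q-function. Below is a minimal Python sketch of the dynamic-programming counterpart of that recursion (the model-based limit of Halperin's Q-Learning construction, often called QLBS): it simulates discretized geometric Brownian motion paths, fits the variance-minimizing hedge at each step by cross-sectional least squares, and rolls the hedged, discounted portfolio back to time 0. All parameter values and the polynomial basis are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

# Minimal sketch: backward Monte Carlo recursion for a European put
# under a discretized Black-Scholes (GBM) model. All parameters are
# illustrative; the thesis's exact setup and hyperparameters differ.
S0, K, T, r, sigma = 100.0, 100.0, 1.0, 0.03, 0.2
n_steps, n_paths = 24, 100_000
dt = T / n_steps
gamma = np.exp(-r * dt)  # one-step discount factor

rng = np.random.default_rng(0)
# Risk-neutral GBM paths, shape (n_paths, n_steps + 1).
z = rng.standard_normal((n_paths, n_steps))
log_ret = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
S = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)),
                           np.cumsum(log_ret, axis=1)]))

# Terminal condition: the hedge portfolio equals the option payoff.
pi = np.maximum(K - S[:, -1], 0.0)

# Backward in time: fit the variance-minimizing hedge a_t(S_t) by
# cross-sectional least squares on a polynomial basis, then roll the
# discounted, hedged portfolio one step back.
for t in range(n_steps - 1, -1, -1):
    dS = gamma * S[:, t + 1] - S[:, t]   # discounted increment, mean 0
    basis = np.vander(S[:, t] / S0, 4)   # cubic polynomial features
    # Solving min_c || gamma*pi - (basis @ c) * dS ||^2 approximates
    # a_t*(S_t) = Cov(pi_{t+1}, dS | S_t) / Var(dS | S_t).
    coef, *_ = np.linalg.lstsq(basis * dS[:, None], gamma * pi,
                               rcond=None)
    a = basis @ coef                     # state-dependent hedge ratio
    pi = gamma * pi - a * dS             # one-step portfolio rollback

print(f"time-0 price (optimal Q-value at t=0): {pi.mean():.4f}")
```

Replacing this model-based regression with updates driven only by observed (state, action, reward) samples yields the off-policy, data-driven Q-Learning variant the abstract refers to.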
Appears in Collections: Operations Research and Financial Engineering, 2000-2020
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
KIM-RUSSELL-THESIS.pdf | | 1.53 MB | Adobe PDF | Request a copy |