Please use this identifier to cite or link to this item:
http://arks.princeton.edu/ark:/88435/dsp01hq37vn61b
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Powell, Warren B. | en_US |
dc.contributor.author | Scott, Warren Robert | en_US |
dc.contributor.other | Operations Research and Financial Engineering Department | en_US |
dc.date.accessioned | 2012-08-01T19:34:08Z | - |
dc.date.available | 2012-08-01T19:34:08Z | - |
dc.date.issued | 2012 | en_US |
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp01hq37vn61b | - |
dc.description.abstract | We describe an adaptation of the knowledge gradient, originally developed for discrete ranking and selection problems, to the problem of calibrating continuous parameters for the purpose of tuning a simulator. The knowledge gradient for continuous parameters uses a continuous approximation of the expected value of a single measurement to guide the choice of where to collect information next. We show how to find the parameter setting that maximizes the expected value of a measurement by optimizing a continuous but nonconcave surface. We compare the method to sequential kriging on a series of test surfaces, and then demonstrate its performance in the calibration of an expensive industrial simulator. We next describe an energy storage problem which combines energy from wind and the grid along with a battery to meet a stochastic load. We formulate the problem as an infinite horizon Markov decision process. We first discretize the state space and action space on a simplified version of the problem to obtain optimal solutions using exact value iteration. We then implement several approximate policy iteration algorithms and evaluate their performance. We show that Bellman error minimization with instrumental variables is equivalent to projected Bellman error minimization, which were previously believed to be two different policy evaluation algorithms. Furthermore, we provide a convergence proof for Bellman error minimization with instrumental variables under certain assumptions. We compare approximate policy iteration and direct policy search on the simplified benchmark problems as well as on the full continuous problems. Finally, we describe a portfolio selection method for choosing virtual electricity contracts in the PJM electricity markets, contracts whose payoffs depend on the difference between the day-ahead and real-time locational marginal electricity prices in PJM. We propose an errors-in-variables factor model which is an extension of the classical capital asset pricing model. We show how the model can be used to estimate the covariance matrix of the returns of assets. For US equities and PJM virtual contracts, we show the benefits of the portfolios produced with the new covariance estimation method. | en_US |
dc.language.iso | en | en_US |
dc.publisher | Princeton, NJ : Princeton University | en_US |
dc.relation.isformatof | The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the [library's main catalog](http://catalog.princeton.edu). | en_US |
dc.subject.classification | Operations research | en_US |
dc.title | Energy Storage Applications of the Knowledge Gradient for Calibrating Continuous Parameters, Approximate Policy Iteration using Bellman Error Minimization with Instrumental Variables, and Covariance Matrix Estimation using an Errors-in-Variables Factor Model | en_US |
dc.type | Academic dissertations (Ph.D.) | en_US |
pu.projectgrantnumber | 690-2143 | en_US |
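
For readers skimming the abstract, the knowledge gradient criterion it refers to can be stated compactly. The following is a sketch of the standard definition (the dissertation's contribution is its adaptation to continuous parameters); here \(\mu^n\) denotes the estimated value of the surface after \(n\) measurements:

```latex
% Knowledge gradient: expected one-step improvement in the best estimate.
% \mu^n(x) is the estimate of the surface after n measurements; the next
% measurement point is chosen to maximize the expected value of measuring at x.
\nu^{KG,n}(x) \;=\;
  \mathbb{E}\!\left[\, \max_{x'} \mu^{n+1}(x') \,\middle|\, \mu^n,\; x^{n+1}=x \right]
  \;-\; \max_{x'} \mu^{n}(x')
```

Maximizing \(\nu^{KG,n}\) over a continuous domain is the "continuous but nonconcave surface" optimization the abstract mentions.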
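The equivalence claim about policy evaluation can be previewed with the familiar linear-architecture estimators. Assuming a linear value-function model \(V \approx \Phi\theta\) with discount factor \(\gamma\) (standard LSTD notation, not necessarily the dissertation's):

```latex
% Sample Bellman error minimization using the features \Phi as instruments:
%   \Phi  : n x k matrix of basis functions at visited states,
%   \Phi' : basis functions at the successor states,
%   c     : observed one-period costs.
\hat{\theta}_{IV} \;=\; \bigl(\Phi^{\top}(\Phi - \gamma \Phi')\bigr)^{-1} \Phi^{\top} c
```

This is the LSTD estimator, i.e. the sample form of projected Bellman error minimization, which is the equivalence the abstract states for this instrumental-variables choice.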
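The covariance construction behind the portfolio chapter starts from the classical single-factor (CAPM) decomposition \(\Sigma = \beta\beta^{\top}\sigma_m^2 + D\). Below is a minimal Python sketch of that classical baseline estimator, not the dissertation's errors-in-variables refinement; all names and the simulated data are illustrative:

```python
import numpy as np

def single_factor_covariance(returns, market):
    """Classical CAPM-style covariance estimate:
        Sigma = beta beta' * Var(market) + diag(idiosyncratic variances).

    returns : (T, N) array of asset excess returns
    market  : (T,)  array of market excess returns
    """
    X = np.column_stack([np.ones_like(market), market])   # intercept + factor
    coef, *_ = np.linalg.lstsq(X, returns, rcond=None)    # OLS fit per asset
    beta = coef[1]                                        # factor loadings
    resid = returns - X @ coef                            # idiosyncratic part
    return (np.outer(beta, beta) * market.var(ddof=1)
            + np.diag(resid.var(axis=0, ddof=1)))

# Tiny usage example with simulated data.
rng = np.random.default_rng(0)
m = rng.normal(0.0, 0.04, size=250)                       # market returns
b = np.array([0.8, 1.0, 1.3])                             # true betas
r = np.outer(m, b) + rng.normal(0.0, 0.02, size=(250, 3)) # three assets
print(single_factor_covariance(r, m))
```

The errors-in-variables extension described in the abstract modifies how the factor loadings are estimated when the factor itself is measured with noise; that refinement is developed in the dissertation.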
Appears in Collections: Operations Research and Financial Engineering
Files in This Item:
File | Description | Size | Format
---|---|---|---
Scott_princeton_0181D_10229.pdf | | 1.4 MB | Adobe PDF