Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01hq37vn61b
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Powell, Warren B. | en_US
dc.contributor.author | Scott, Warren Robert | en_US
dc.contributor.other | Operations Research and Financial Engineering Department | en_US
dc.date.accessioned | 2012-08-01T19:34:08Z | -
dc.date.available | 2012-08-01T19:34:08Z | -
dc.date.issued | 2012 | en_US
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp01hq37vn61b | -
dc.description.abstract | We describe an adaptation of the knowledge gradient, originally developed for discrete ranking and selection problems, to the calibration of continuous parameters for the purpose of tuning a simulator. The knowledge gradient for continuous parameters uses a continuous approximation of the expected value of a single measurement to guide the choice of where to collect information next. We show how to find the parameter setting that maximizes the expected value of a measurement by optimizing a continuous but nonconcave surface. We compare the method to sequential kriging on a series of test surfaces and then demonstrate its performance in the calibration of an expensive industrial simulator. We next describe an energy storage problem that combines energy from wind and the grid with a battery to meet a stochastic load. We formulate the problem as an infinite horizon Markov decision process. We first discretize the state space and action space of a simplified version of the problem to obtain optimal solutions using exact value iteration. We then implement several approximate policy iteration algorithms and evaluate their performance. We show that Bellman error minimization with instrumental variables is equivalent to projected Bellman error minimization, although the two were previously believed to be different policy evaluation algorithms. Furthermore, we provide a convergence proof for Bellman error minimization with instrumental variables under certain assumptions. We compare approximate policy iteration and direct policy search on both the simplified benchmark problems and the full continuous problems. Finally, we describe a portfolio selection method for choosing virtual electricity contracts in the PJM electricity markets, contracts whose payoffs depend on the difference between the day-ahead and real-time locational marginal electricity prices in PJM. We propose an errors-in-variables factor model that extends the classical capital asset pricing model and show how it can be used to estimate the covariance matrix of asset returns. For US equities and PJM virtual contracts, we show the benefits of the portfolios produced with the new covariance estimation method. | en_US
dc.language.iso | en | en_US
dc.publisher | Princeton, NJ : Princeton University | en_US
dc.relation.isformatof | The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog (http://catalog.princeton.edu). | en_US
dc.subject.classification | Operations research | en_US
dc.title | Energy Storage Applications of the Knowledge Gradient for Calibrating Continuous Parameters, Approximate Policy Iteration using Bellman Error Minimization with Instrumental Variables, and Covariance Matrix Estimation using an Errors-in-Variables Factor Model | en_US
dc.type | Academic dissertations (Ph.D.) | en_US
pu.projectgrantnumber | 690-2143 | en_US
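
The knowledge gradient policy summarized in the abstract above values each candidate measurement by the expected improvement in the best estimated value that one more observation would produce. A minimal sketch in generic notation (the dissertation's own symbols may differ): with posterior mean \mu^n over the surface after n measurements,

\[
\nu^{\mathrm{KG},n}(x) = \mathbb{E}\Big[ \max_{x'} \mu^{n+1}(x') \;\Big|\; x^n = x,\ \mathcal{F}^n \Big] - \max_{x'} \mu^{n}(x'),
\]

and the next measurement is taken at \arg\max_x \nu^{\mathrm{KG},n}(x). When x ranges over a continuous domain, \nu^{\mathrm{KG},n} is a continuous but generally nonconcave surface, which is the optimization challenge the abstract refers to.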
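
The equivalence result mentioned in the abstract concerns policy evaluation with a linear value function approximation V(s) \approx \phi(s)^\top \theta. A hedged sketch in standard notation, not necessarily the dissertation's: given sampled transitions (s_i, r_i, s_i') and discount factor \gamma, least-squares Bellman error minimization regresses r_i on (\phi(s_i) - \gamma\,\phi(s_i'))^\top \theta, but the noisy next-state features \phi(s_i') are correlated with the regression error, so ordinary least squares is biased. Using the current-state features \phi(s_i) as instruments yields

\[
\hat{\theta} = \Big( \sum_i \phi(s_i)\,\big( \phi(s_i) - \gamma\, \phi(s_i') \big)^{\top} \Big)^{-1} \sum_i \phi(s_i)\, r_i,
\]

which is the familiar LSTD fixed point, i.e., the solution of projected Bellman error minimization. This is consistent with the abstract's claim that the two methods coincide.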
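
The covariance estimator in the final part of the abstract builds on the classical one-factor (CAPM-style) decomposition \Sigma = \sigma_f^2\, \beta\beta^\top + D, where D is a diagonal matrix of idiosyncratic variances. Below is a minimal Python sketch of that classical baseline, not the dissertation's errors-in-variables extension; the function name and variable names are illustrative.

    import numpy as np

    def factor_model_covariance(returns, factor):
        # returns: (T, N) array of asset excess returns
        # factor:  (T,) array of factor (e.g., market) excess returns
        T, N = returns.shape
        f = factor - factor.mean()                 # demeaned factor
        r = returns - returns.mean(axis=0)         # demeaned returns
        betas = r.T @ f / (f @ f)                  # OLS slope of each asset on the factor, shape (N,)
        residuals = r - np.outer(f, betas)         # idiosyncratic component
        var_f = f @ f / (T - 1)                    # factor variance
        resid_var = residuals.var(axis=0, ddof=2)  # per-asset residual variance
        # Sigma = var(f) * beta beta^T + diag(residual variances)
        return var_f * np.outer(betas, betas) + np.diag(resid_var)

With several factors, the rank-one term var(f) * beta beta^T becomes B \Sigma_F B^\top for a loading matrix B; as its name suggests, the errors-in-variables extension additionally treats the regressors as observed with noise.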
Appears in Collections:Operations Research and Financial Engineering

Files in This Item:
File | Description | Size | Format
Scott_princeton_0181D_10229.pdf |  | 1.4 MB | Adobe PDF


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.