Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp013x816q57c
Title: Stochastic Dual Dynamic Programming and Backward Approximate Dynamic Programming with Integrated Crossing State Stochastic Models for Wind Power in Energy Storage Optimization
Authors: Durante, Joseph Lawrence
Advisors: Powell, Warren B.
Contributors: Electrical Engineering Department
Keywords: Backward Approximate Dynamic Programming
Crossing State Stochastic Model
Energy Storage Optimization
Risk-Directed Importance Sampling
Stochastic Dual Dynamic Programming
Subjects: Operations research
Energy
Issue Date: 2020
Publisher: Princeton, NJ : Princeton University
Abstract: This dissertation brings to light the importance of stochastic models that accurately capture the crossing times of stochastic processes. A crossing time is a contiguous block of time during which a stochastic process remains above or below some benchmark, such as a forecast. Proper modeling of crossing times is especially important in stochastic sequential decision-making problems with a storage element, which arise often in energy systems optimization. We present a family of models, called crossing state models (both univariate and multivariate models are introduced), that outperform standard time series models in their ability to replicate these crossing times. Furthermore, for multivariate processes (which may be spatially distributed), we address the problem of replicating crossing times at both the disaggregate and aggregate levels. We then consider two vastly different energy storage applications and develop robust algorithms that incorporate the univariate crossing state model to produce control policies. The first application optimizes the operation of a paired wind farm and storage device to satisfy a time-varying load in the presence of stochastic electricity prices. We show that the crossing state model yields more robust solutions than more common stochastic models. The new model introduces additional complexity because its information states are partially hidden. We derive a near-optimal time-dependent policy using backward approximate dynamic programming (ADP), which overcomes the computational hurdles of exact backward dynamic programming while producing higher-quality solutions than more familiar forward ADP methods. The second application is the control of a power grid with distributed grid-level storage and high penetrations of offshore wind. Our control policy relies on a variant of stochastic dual dynamic programming (SDDP), an algorithm well suited to certain high-dimensional control problems, modified to accommodate hidden Markov uncertainty in the stochastic process. However, the algorithm can be impractical to use because it exhibits relatively slow convergence. To accelerate convergence, we apply both quadratic regularization and a risk-directed importance sampling technique in the backward pass of the algorithm. Again, we show that the resulting policies are more robust than those developed using classical SDDP modeling assumptions and algorithms.
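
To make the notion of a crossing time concrete, the sketch below extracts crossing-time blocks from a realized process measured against its forecast. This is a minimal illustration under assumed inputs, not code from the dissertation; the function name and sample data are invented for the example.

    import numpy as np

    def crossing_times(actual, forecast):
        """Return (sign, length) for each contiguous block where the
        process is strictly above (+1) or at/below (-1) its forecast."""
        sign = np.where(np.asarray(actual) > np.asarray(forecast), 1, -1)
        blocks, start = [], 0
        for t in range(1, len(sign)):
            if sign[t] != sign[t - 1]:          # process crossed the forecast
                blocks.append((int(sign[start]), t - start))
                start = t
        blocks.append((int(sign[start]), len(sign) - start))
        return blocks

    # Example: wind power vs. a flat forecast over 10 periods (made-up data)
    actual   = [5.1, 5.3, 4.8, 4.7, 4.9, 5.6, 5.8, 5.2, 4.9, 4.8]
    forecast = [5.0] * 10
    print(crossing_times(actual, forecast))
    # [(1, 2), (-1, 3), (1, 3), (-1, 2)]  -> up/down blocks and their durations

A crossing state model, as the abstract describes it, would be fit so that simulated sample paths reproduce the empirical distribution of these block lengths, rather than only the marginal forecast errors.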
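The dissertation's backward ADP policy is not reproduced here, but the following toy sketch shows the structure of the backward pass it builds on for a single storage device: discretize the storage level, step backward in time, and estimate each state's value against sampled exogenous prices. All quantities (horizon, grid, price model) are illustrative assumptions; backward ADP would replace the exhaustive sweep over states with sampled states and a fitted value function approximation.

    import numpy as np

    # Toy backward dynamic program for one storage device (illustrative only).
    T = 24                                        # decision periods
    levels = np.linspace(0.0, 10.0, 21)           # discretized storage levels (MWh)
    actions = np.linspace(-2.0, 2.0, 9)           # charge (+) / discharge (-) per period
    rng = np.random.default_rng(0)
    prices = rng.lognormal(mean=3.0, sigma=0.4, size=(T, 50))  # sampled prices ($/MWh)

    V = np.zeros((T + 1, len(levels)))            # terminal value function = 0
    for t in reversed(range(T)):
        for i, s in enumerate(levels):
            best = -np.inf
            for a in actions:
                s_next = s + a
                if s_next < levels[0] or s_next > levels[-1]:
                    continue                      # skip infeasible storage transitions
                j = int(np.abs(levels - s_next).argmin())  # nearest grid point
                # Contribution: discharging (a < 0) sells energy at the mean sampled price
                reward = -a * prices[t].mean()
                best = max(best, reward + V[t + 1, j])
            V[t, i] = best

    print(V[0, 0])   # estimated value of starting the horizon with empty storage

In the dissertation's setting, the expectation at each stage would be taken over the crossing state model's partially hidden information states rather than the i.i.d. price samples assumed above.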
URI: http://arks.princeton.edu/ark:/88435/dsp013x816q57c
Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: catalog.princeton.edu
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Electrical Engineering

Files in This Item:
File: Durante_princeton_0181D_13280.pdf
Size: 5.65 MB
Format: Adobe PDF


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.