Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp0105741t92j
Full metadata record
DC Field: Value (Language)
dc.contributor.advisor: van Handel, Ramon (en_US)
dc.contributor.author: Rebeschini, Patrick (en_US)
dc.contributor.other: Operations Research and Financial Engineering Department (en_US)
dc.date.accessioned: 2014-09-25T22:40:26Z
dc.date.available: 2014-09-25T22:40:26Z
dc.date.issued: 2014 (en_US)
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp0105741t92j
dc.description.abstract: The goal of filtering theory is to compute the filter distribution, that is, the conditional distribution of a stochastic model given observed data. While exact computations are rarely possible, sequential Monte Carlo algorithms known as particle filters have been successfully applied to approximate the filter distribution, providing estimates whose error is uniform in time. However, the number of Monte Carlo samples needed to approximate the filter distribution is typically exponential in the number of degrees of freedom of the model. This issue, known as the curse of dimensionality, has rendered sequential Monte Carlo algorithms largely useless in high-dimensional applications such as multi-target tracking, weather prediction, and oceanography. While over the past twenty years many heuristics have been suggested for running particle filters in high dimension, no principled approach has been proposed to address the core of the problem. In this thesis we develop a novel framework for investigating high-dimensional filtering models and for designing algorithms that can avoid the curse of dimensionality. Using concepts and tools from statistical mechanics, we show that the decay of correlations property of high-dimensional models can be exploited by implementing localization procedures on ordinary particle filters, yielding estimates whose approximation error is uniform both in time and in the model dimension. Ergodic and spatial mixing properties of conditional distributions play a crucial role in the design of filtering algorithms, and they are of independent interest in probability theory. To capture ergodicity quantitatively, we develop new comparison theorems that establish dimension-free bounds on high-dimensional probability measures in terms of their local conditional distributions. At a qualitative level, we investigate previously unknown phenomena that can only arise from conditioning in infinite dimension. In particular, we exhibit the first known example of a model in which ergodicity of the filter undergoes a phase transition in the signal-to-noise ratio. (en_US)
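The localization idea described in the abstract, computing particle weights and resampling block-by-block so that the approximation error stays local rather than accumulating across all degrees of freedom, can be illustrated with a toy sketch. The model (a linear-Gaussian signal on a periodic lattice with nearest-neighbor interactions), the block partition, and all parameters below are invented for illustration and are not the thesis's actual construction:

```python
import numpy as np

def block_particle_filter(y, n_particles=500, block_size=2,
                          sigma_x=1.0, sigma_y=1.0, rng=None):
    """Toy localized particle filter on a d-site periodic lattice.

    Signal:      X_t[i] = 0.5*X_{t-1}[i] + 0.2*(left + right neighbors) + noise
    Observation: Y_t[i] = X_t[i] + noise, independently at each site i.

    Instead of one global importance weight per particle, weights are
    computed per block from that block's own observations, and each
    block is resampled independently; particles are reassembled
    block-wise. This keeps the effective dimension of each resampling
    step equal to the block size, not d.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    T, d = y.shape
    X = rng.standard_normal((n_particles, d))          # initial particle cloud
    blocks = [list(range(k, min(k + block_size, d)))   # partition of the sites
              for k in range(0, d, block_size)]
    means = []
    for t in range(T):
        # Local propagation: each site interacts only with its neighbors.
        left, right = np.roll(X, 1, axis=1), np.roll(X, -1, axis=1)
        X = 0.5 * X + 0.2 * (left + right) + sigma_x * rng.standard_normal(X.shape)
        new_X = np.empty_like(X)
        for blk in blocks:
            # Block weights use only the block's own observations.
            logw = -0.5 * np.sum((y[t, blk] - X[:, blk]) ** 2, axis=1) / sigma_y**2
            w = np.exp(logw - logw.max())
            w /= w.sum()
            # Independent resampling within the block.
            idx = rng.choice(n_particles, size=n_particles, p=w)
            new_X[:, blk] = X[idx][:, blk]
        X = new_X
        means.append(X.mean(axis=0))   # filter-mean estimate at time t
    return np.array(means)             # shape (T, d)
```

If the per-block weighting and resampling are replaced by a single global step over all d sites, this collapses to the ordinary bootstrap particle filter, whose sample requirements grow exponentially in d; the block-wise resampling is the localization that the abstract refers to, at the price of a bias at block boundaries.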
dc.language.iso: en (en_US)
dc.publisher: Princeton, NJ : Princeton University (en_US)
dc.relation.isformatof: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the <a href="http://catalog.princeton.edu">library's main catalog</a> (en_US)
dc.subject.classification: Statistics (en_US)
dc.subject.classification: Computer science (en_US)
dc.title: Nonlinear Filtering in High Dimension (en_US)
dc.type: Academic dissertations (Ph.D.) (en_US)
pu.projectgrantnumber: 690-2143 (en_US)
Appears in Collections: Operations Research and Financial Engineering

Files in This Item:
File: Rebeschini_princeton_0181D_11032.pdf (1.9 MB, Adobe PDF)


Items in DataSpace are protected by copyright, with all rights reserved, unless otherwise indicated.