Please use this identifier to cite or link to this item:
http://arks.princeton.edu/ark:/88435/dsp018p58ph207
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Sircar, Ronnie | |
dc.contributor.author | Geng, Sinong | |
dc.contributor.other | Computer Science Department | |
dc.date.accessioned | 2023-07-06T20:21:56Z | - |
dc.date.available | 2023-07-06T20:21:56Z | - |
dc.date.created | 2023-01-01 | |
dc.date.issued | 2023 | |
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp018p58ph207 | - |
dc.description.abstract | Thanks to the growing availability of high-dimensional data, recent developments in machine learning (ML) have redefined decision-making in numerous domains. However, the unreliability of ML in decision-making caused by the lack of high-quality data remains an important obstacle in almost every application. Questions arise such as: (i) Why does an ML method fail to replicate decision-making behaviors in a new environment? (ii) Why does ML give unreasonable interpretations of existing expert decisions? (iii) How should we make decisions in a noisy, high-dimensional environment? Many of these issues can be attributed to the lack of an effective and sample-efficient model underlying ML methods. This thesis presents our research efforts dedicated to developing model-regularized ML for decision-making to address these issues in the areas of inverse reinforcement learning and reinforcement learning, with applications to customer/company behavior analysis and portfolio optimization. Specifically, by applying regularizations derived from suitable models, we propose methods for two different goals: (i) to better understand and replicate the existing decision-making of human experts and businesses; (ii) to conduct better sequential decision-making while overcoming the need for large amounts of high-quality data in situations where there may not be enough. | |
dc.format.mimetype | application/pdf | |
dc.language.iso | en | |
dc.publisher | Princeton, NJ : Princeton University | |
dc.subject | dynamic discrete choice model | |
dc.subject | inverse reinforcement learning | |
dc.subject | reinforcement learning | |
dc.subject | stochastic factor model | |
dc.subject | stochastic optimal control | |
dc.subject.classification | Computer science | |
dc.title | Model-Regularized Machine Learning for Decision-Making | |
dc.type | Academic dissertations (Ph.D.) | |
pu.date.classyear | 2023 | |
pu.department | Computer Science | |
Appears in Collections: | Computer Science |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Geng_princeton_0181D_14472.pdf | | 1.67 MB | Adobe PDF | View/Download |
Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.