Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp018p58ph207
Full metadata record
DC Field: Value
dc.contributor.advisor: Sircar, Ronnie
dc.contributor.author: Geng, Sinong
dc.contributor.other: Computer Science Department
dc.date.accessioned: 2023-07-06T20:21:56Z
dc.date.available: 2023-07-06T20:21:56Z
dc.date.created: 2023-01-01
dc.date.issued: 2023
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp018p58ph207
dc.description.abstract: Thanks to the increasing availability of high-dimensional data, recent developments in machine learning (ML) have redefined decision-making in numerous domains. However, the unreliability of ML in decision-making caused by a lack of high-quality data remains an important obstacle in almost every application. Questions arise such as: (i) Why does an ML method fail to replicate decision-making behaviors in a new environment? (ii) Why does ML give unreasonable interpretations of existing expert decisions? (iii) How should we make decisions in a noisy, high-dimensional environment? Many of these issues can be attributed to the lack of an effective and sample-efficient model underlying ML methods. This thesis presents our research efforts dedicated to developing model-regularized ML for decision-making to address the above issues in the areas of inverse reinforcement learning and reinforcement learning, with applications to customer/company behavior analysis and portfolio optimization. Specifically, by applying regularizations derived from suitable models, we propose methods with two different goals: (i) to better understand and replicate the existing decision-making of human experts and businesses; and (ii) to conduct better sequential decision-making while overcoming the need for large amounts of high-quality data in situations where there may not be enough.
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.publisher: Princeton, NJ : Princeton University
dc.subject: dynamic discrete choice model
dc.subject: inverse reinforcement learning
dc.subject: reinforcement learning
dc.subject: stochastic factor model
dc.subject: stochastic optimal control
dc.subject.classification: Computer science
dc.title: Model-Regularized Machine Learning for Decision-Making
dc.type: Academic dissertations (Ph.D.)
pu.date.classyear: 2023
pu.department: Computer Science
Appears in Collections: Computer Science

Files in This Item:
File: Geng_princeton_0181D_14472.pdf
Size: 1.67 MB
Format: Adobe PDF
Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.