Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp018910jx35z
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Hazan, Elad | -
dc.contributor.author | Agarwal, Naman | -
dc.contributor.other | Computer Science Department | -
dc.date.accessioned | 2018-10-09T21:11:41Z | -
dc.date.available | 2018-10-09T21:11:41Z | -
dc.date.issued | 2018 | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp018910jx35z | -
dc.description.abstract | In recent years, first-order stochastic methods have emerged as the state of the art in large-scale machine learning optimization, primarily due to their efficient per-iteration complexity. Second-order methods, while able to provide faster convergence, are less popular because of the high cost of computing second-order information. The main question considered in this thesis is whether efficient second-order methods can be developed for optimization problems arising in machine learning that improve upon the best known first-order methods. We consider the empirical risk minimization (ERM) model of learning and propose linear-time second-order algorithms, for both convex and non-convex settings, that improve upon the state-of-the-art first-order algorithms. In the non-convex setting, second-order methods are also shown to converge efficiently to better-quality solutions. For the convex case, the proposed algorithms make use of a novel estimator for the inverse of a matrix and of better sampling techniques for stochastic methods derived from the notion of leverage scores. For the non-convex setting, we propose an efficient implementation of the cubic regularization scheme of Nesterov and Polyak. Furthermore, we develop second-order methods for achieving approximate local minima on Riemannian manifolds that match the convergence rates of their Euclidean counterparts. Finally, we show the limitations of second- and higher-order methods by deriving oracle complexity lower bounds for such methods on sufficiently smooth convex functions. | -
dc.language.iso | en | -
dc.publisher | Princeton, NJ : Princeton University | -
dc.relation.isformatof | The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: catalog.princeton.edu (http://catalog.princeton.edu) | -
dc.subject | Machine Learning | -
dc.subject | Optimization | -
dc.subject.classification | Computer science | -
dc.title | Second-Order Optimization Methods for Machine Learning | -
dc.type | Academic dissertations (Ph.D.) | -
pu.projectgrantnumber | 690-2143 | -
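
The abstract mentions a novel estimator for the inverse of a matrix used by the convex-case algorithms. As an illustrative aside (not a reconstruction of the thesis's own estimator), the sketch below shows one standard way such an estimator can be built from a truncated Neumann series, A^{-1} = sum_{i>=0} (I - A)^i, valid when the eigenvalues of A lie in (0, 1]. The function name, recursion depth, and test matrix are assumptions chosen for the demonstration; in a stochastic setting the exact matrix-vector product would typically be replaced by a sampled Hessian-vector product.

    import numpy as np

    def neumann_inverse_vector(A, v, depth=500):
        """Approximate A^{-1} @ v with a truncated Neumann series.

        Uses the identity A^{-1} = sum_{i>=0} (I - A)^i, valid when the
        eigenvalues of A lie in (0, 1]. The recursion x_{k+1} = v + (I - A) x_k
        converges to A^{-1} v without ever forming the inverse explicitly.
        """
        x = v.copy()
        for _ in range(depth):
            x = v + x - A @ x  # x_{k+1} = v + (I - A) x_k
        return x

    # Illustrative usage on a small synthetic positive-definite matrix.
    rng = np.random.default_rng(0)
    B = rng.standard_normal((5, 5))
    A = B @ B.T
    A = A / (1.1 * np.linalg.norm(A, 2)) + 0.05 * np.eye(5)  # push eigenvalues into (0, 1]
    v = rng.standard_normal(5)

    approx = neumann_inverse_vector(A, v)
    exact = np.linalg.solve(A, v)
    print(np.linalg.norm(approx - exact))  # small residual

Each iteration only needs a matrix-vector (or Hessian-vector) product, which is what makes this style of estimator attractive for linear-time second-order methods.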
Appears in Collections: Computer Science

Files in This Item:
File | Description | Size | Format
Agarwal_princeton_0181D_12701.pdf | | 1.63 MB | Adobe PDF


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.