Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01v692t937q
dc.contributor.author: Li, Shuangping
dc.contributor.other: Applied and Computational Mathematics Department
dc.date.accessioned: 2022-06-16T20:34:25Z
dc.date.available: 2022-06-16T20:34:25Z
dc.date.created: 2022-01-01
dc.date.issued: 2022
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp01v692t937q
dc.description.abstract: This thesis studies two problems in discrete probability theory motivated by modern machine learning applications.

The first topic is the binary perceptron model, a simple model of neural networks that has attracted significant attention in the statistical physics, information theory, and probability theory communities. For the symmetric binary perceptron model, we establish that the partition function, normalized by its expected value, converges to a log-normal distribution. As a consequence, we establish several conjectures for this model, including the contiguity, sharp threshold, and strong freezing conjectures. The strong freezing property further points to an interesting phenomenon not present in sparse constraint satisfaction problems: on the one hand, the perceptron model is conjectured to be typically hard; on the other hand, efficient algorithms have been shown empirically to succeed in finding solutions, suggesting that such algorithms find atypical solutions. We formally establish this phenomenon for both the symmetric and asymmetric binary perceptrons: we show that at low constraint density there exists a subdominant connected cluster of solutions with almost maximal diameter, and that an efficient multiscale majority algorithm finds solutions in this cluster with high probability.

The second topic is learning sparse graphons. The problem of learning graphons has attracted considerable attention across several scientific communities, with significant progress in recent years on sparser regimes. Yet current techniques still require diverging degrees for efficient algorithms to succeed in the challenging cases where the local structure of the graph is homogeneous. We provide an efficient algorithm that learns graphons in the constant expected degree regime. The algorithm is shown to estimate the rank-k projection of a graphon in the L2 metric whenever the top k eigenvalues of the graphon satisfy a generalized Kesten-Stigum condition.
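The abstract's notion of a rank-k projection can be illustrated generically. This is a minimal sketch of the standard spectral construction (best rank-k approximation of a symmetric kernel matrix in Frobenius/L2 norm), not the thesis's estimation algorithm; `rank_k_projection` is a hypothetical helper name.

```python
import numpy as np

def rank_k_projection(W, k):
    """Best rank-k approximation of a symmetric matrix W in Frobenius (L2) norm:
    keep only the k eigenvalue/eigenvector pairs of largest magnitude."""
    vals, vecs = np.linalg.eigh(W)               # eigenvalues in ascending order
    idx = np.argsort(np.abs(vals))[::-1][:k]     # indices of top-k by |eigenvalue|
    return (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T

# Toy symmetric "kernel" matrix standing in for a discretized graphon
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
W = (A + A.T) / 2
W2 = rank_k_projection(W, 2)
print(np.linalg.matrix_rank(W2))
```

In the graphon setting, the Kesten-Stigum-type condition in the abstract governs which of these top eigenvalue components are statistically recoverable from a sparse sample.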
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.publisher: Princeton, NJ : Princeton University
dc.relation.isformatof: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: <a href="http://catalog.princeton.edu">catalog.princeton.edu</a>
dc.subject: graphon
dc.subject: inference
dc.subject: perceptron
dc.subject: sharp threshold
dc.subject: solution space
dc.subject.classification: Mathematics
dc.subject.classification: Statistics
dc.title: The Solution Space of the Binary Perceptron Model and Inference of Sparse Graphons