Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01g732dd159
Full metadata record
dc.contributor.advisor: Seung, Sebastian
dc.contributor.author: Luther, Kyle
dc.contributor.other: Physics Department
dc.date.accessioned: 2022-06-16T20:33:30Z
dc.date.available: 2022-06-16T20:33:30Z
dc.date.created: 2022-01-01
dc.date.issued: 2022
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp01g732dd159
dc.description.abstract: It is generally believed that the learning capabilities of real biological brains are unmatched by modern artificial intelligence systems. However, state-of-the-art artificial systems usually do not closely resemble more biologically inspired models of learning. For visual object recognition, typical state-of-the-art systems are convolutional networks trained by backpropagating errors from millions of labeled images. Many biological models do not rely on backpropagation of supervised error signals, and instead use unsupervised local learning rules to update synaptic connections. In this thesis we investigate the recognition capabilities of one such biological model, sparse coding, and then propose novel methods to improve the recognition capabilities of bio-inspired models using unsupervised local learning rules. In chapter 1 we discuss previous work on biologically inspired models of visual cortex. In chapter 2 we discuss quantitative methods to evaluate such models by measuring the invariance properties of their generated neural responses. In chapter 3 we examine the popular learning model, sparse coding, from the perspective of invariant object recognition. We argue that the generated neural responses are actually less invariant than raw pixels to small input distortions, and speculate that this is undesirable for invariant object recognition. In chapter 4 we propose an algorithm, kernel similarity matching, based on local learning rules, which learns sparse, high-dimensional representations that preserve the relative ordering of similarities between pairs of patterns. Finally, in chapter 5 we propose another algorithm, invariant subspace modules, based on applying subspace clustering to image patches. Computing the norm of an input patch within these learned subspaces can be regarded as invariant feature detection. Coupled with proper normalization, subspace clustering can be applied in a hierarchical fashion, leading to representations that support classification, at least for simple handwritten MNIST digits.
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.publisher: Princeton, NJ : Princeton University
dc.relation.isformatof: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: http://catalog.princeton.edu
dc.subject: brain-inspired computation
dc.subject: local learning rules
dc.subject: Object recognition
dc.subject: unsupervised learning
dc.subject.classification: Artificial intelligence
dc.subject.classification: Neurosciences
dc.title: Generating invariant representations with cortex-inspired models of unsupervised learning
dc.type: Academic dissertations (Ph.D.)
pu.date.classyear: 2022
pu.department: Physics
Appears in Collections: Physics

Files in This Item:
Luther_princeton_0181D_14063.pdf (12.44 MB, Adobe PDF)
Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.