Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01h128nh20m
Full metadata record
dc.contributor.advisor: Verma, Naveen
dc.contributor.author: Wang, Zhuo
dc.contributor.other: Electrical Engineering Department
dc.date.accessioned: 2017-04-28T15:45:10Z
dc.date.available: 2017-04-28T15:45:10Z
dc.date.issued: 2017
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp01h128nh20m
dc.description.abstract: The increasing deployment of large numbers of sensors is giving us access to physical signals of potentially high informational value on an unprecedented scale. However, these signals generally arise from complex physical processes for which we often lack adequate analytical or physics-based models. Fortunately, machine-learning algorithms enable us to model such complex signals in a data-driven manner. The challenge, however, is that the resulting models and algorithms can pose computational requirements beyond those of highly resource-constrained sensing platforms. While many recent works have focused on hardware optimizations at the architecture and circuit levels for energy-efficient implementation of widely used machine-learning kernels (e.g., digital accelerators), this thesis addresses the inverse problem: how algorithmic tools emerging from statistical optimization and machine learning can be exploited to ease the hardware implementation itself. Rather than having the implementations serve only the algorithms, we are also interested in how the algorithms can serve the implementations. To approach this, we first propose three opportunities enabled by machine learning and optimization. Data-Driven Hardware Resilience (DDHR) leverages machine-learning algorithms to train not only on the sensor signals but also on the error statistics arising from hardware non-idealities. Hardware-Driven Kernel Learning (HDKL) explores ways to adapt the training algorithm itself to accommodate inference functions, and attributes of inference functions, that are preferred from the perspective of resource-constrained implementations. The third opportunity leverages statistical optimization to substantially reduce the quantization errors of computation. These principles enable substantial hardware relaxations, which can lead to extremely efficient hardware implementations of embedded machine-learning systems. We then present several demonstration systems mapping the principles to hardware architectures, with a strong focus on a comparator-based classification accelerator that performs inference directly on analog sensor data. Through these examples, we demonstrate either transformational new system capabilities or orders-of-magnitude energy reductions compared to conventional realizations.
dc.language.iso: en
dc.publisher: Princeton, NJ : Princeton University
dc.relation.isformatof: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: http://catalog.princeton.edu
dc.subject: embedded system
dc.subject: hardware relaxation
dc.subject: machine learning
dc.subject: sensing system
dc.subject.classification: Electrical engineering
dc.subject.classification: Computer engineering
dc.subject.classification: Computer science
dc.title: Relaxing the Implementation of Embedded Sensing Systems through Machine Learning and Statistical Optimization
dc.type: Academic dissertations (Ph.D.)
pu.projectgrantnumber: 690-2143
Appears in Collections: Electrical Engineering

Files in This Item:
File: Wang_princeton_0181D_12054.pdf
Size: 9.86 MB
Format: Adobe PDF


Items in DataSpace are protected by copyright, with all rights reserved, unless otherwise indicated.