Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01h128nh20m
Title: Relaxing the Implementation of Embedded Sensing Systems through Machine Learning and Statistical Optimization
Authors: Wang, Zhuo
Advisors: Verma, Naveen
Contributors: Electrical Engineering Department
Keywords: embedded system
hardware relaxation
machine learning
sensing system
Subjects: Electrical engineering
Computer engineering
Computer science
Issue Date: 2017
Publisher: Princeton, NJ : Princeton University
Abstract: The increasing deployment of large numbers of sensors is giving us access to physical signals, potentially of high informational value, on an unprecedented scale. However, these signals generally arise from complex physical processes for which we often lack adequate analytical or physics-based models. Fortunately, machine-learning algorithms enable us to model such complex signals in a data-driven manner. The challenge, however, is that the models and algorithms can pose computational requirements beyond those of highly resource-constrained sensing platforms. While many recent works have focused on hardware optimizations at the architecture and circuit levels for energy-efficient implementation of widely used machine-learning kernels (e.g., digital accelerators), this thesis addresses the inverse problem: how algorithmic tools emerging from statistical optimization and machine learning can be exploited to ease the hardware implementation itself. Rather than having the implementations serve only the algorithms, we are also interested in how the algorithms can serve the implementations. To approach this, we first propose three opportunities enabled by machine learning and optimization. Data-Driven Hardware Resilience (DDHR) leverages machine-learning algorithms to train not only on the sensor signals but also on the error statistics arising from hardware non-idealities. Hardware-Driven Kernel Learning (HDKL) explores ways to adapt the training algorithm itself to yield inference functions, and attributes of inference functions, that are preferred from the perspective of resource-constrained implementations. The third opportunity leverages statistical optimization to substantially reduce the quantization errors of computation. These principles enable substantial hardware relaxations, leading to extremely efficient hardware implementations of embedded machine-learning systems.
We then present several demonstration systems mapping the principles to hardware architectures, with a strong focus on a comparator-based classification accelerator that performs inference directly on analog sensor data. Through these examples, we demonstrate either transformational new system capabilities or orders-of-magnitude energy reductions compared to conventional realizations.
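To make the DDHR idea concrete, the toy sketch below (not code from the thesis; the 4-bit stuck-bit error model, the Gaussian signal classes, and the histogram classifier are all hypothetical stand-ins) passes training data through the same faulty "hardware" that deployment data sees, so the learned model absorbs the error statistics rather than being surprised by them:

```python
# Toy illustration of Data-Driven Hardware Resilience (DDHR).
# Assumption: a relaxed 4-bit converter whose MSB is occasionally stuck at 1;
# a simple per-class histogram (generative) model is trained either on clean
# codes or on codes produced by the same faulty hardware used at deployment.
import random

random.seed(0)
N_CODES = 16  # 4-bit output codes


def clean_adc(x):
    """Ideal 4-bit quantizer."""
    return max(0, min(N_CODES - 1, int(round(x))))


def faulty_adc(x, stuck_prob=0.2):
    """Non-ideal quantizer: with some probability the MSB is stuck at 1."""
    code = clean_adc(x)
    if random.random() < stuck_prob:
        code |= 0b1000  # hardware non-ideality from aggressive relaxation
    return code


def sample(n, mean):
    """Synthetic 1-D sensor readings for one class."""
    return [random.gauss(mean, 1.2) for _ in range(n)]


def histogram_model(codes):
    """Per-class code distribution with Laplace smoothing."""
    counts = [1] * N_CODES
    for c in codes:
        counts[c] += 1
    total = sum(counts)
    return [c / total for c in counts]


def accuracy(model_a, model_b, obs_a, obs_b):
    """Maximum-likelihood classification accuracy over both classes."""
    hits = sum(model_a[c] > model_b[c] for c in obs_a)
    hits += sum(model_b[c] > model_a[c] for c in obs_b)
    return hits / (len(obs_a) + len(obs_b))


train_a, train_b = sample(2000, 2.0), sample(2000, 6.0)
test_a, test_b = sample(500, 2.0), sample(500, 6.0)

# Deployment data always passes through the faulty converter.
obs_a = [faulty_adc(x) for x in test_a]
obs_b = [faulty_adc(x) for x in test_b]

# Error-oblivious training: models fit to ideally quantized codes.
acc_clean = accuracy(histogram_model([clean_adc(x) for x in train_a]),
                     histogram_model([clean_adc(x) for x in train_b]),
                     obs_a, obs_b)

# DDHR-style training: models fit to codes from the same faulty hardware,
# so the learned statistics include the error distribution.
acc_ddhr = accuracy(histogram_model([faulty_adc(x) for x in train_a]),
                    histogram_model([faulty_adc(x) for x in train_b]),
                    obs_a, obs_b)

print(f"error-oblivious accuracy: {acc_clean:.2f}")
print(f"DDHR-trained accuracy:    {acc_ddhr:.2f}")
```

In this sketch the DDHR-trained model recovers most of the accuracy lost to the stuck bit, because corrupted codes appear in its training statistics; the error-oblivious model assigns them near-zero likelihood under the correct class.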
URI: http://arks.princeton.edu/ark:/88435/dsp01h128nh20m
Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: catalog.princeton.edu
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Electrical Engineering

Files in This Item:
File: Wang_princeton_0181D_12054.pdf
Size: 9.86 MB
Format: Adobe PDF


Items in DataSpace are protected by copyright, with all rights reserved, unless otherwise indicated.