Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01kp78gk50n
Title: Energy-Efficient Implementation of Machine Learning Algorithms
Authors: Lu, Jie (Lucy)
Advisors: Jha, Niraj K.
Verma, Naveen
Contributors: Electrical Engineering Department
Keywords: Approximate learning
Convolutional neural networks
Energy reduction
Machine learning
Multi-task images
Transfer learning
Subjects: Electrical engineering
Computer engineering
Issue Date: 2021
Publisher: Princeton, NJ : Princeton University
Abstract: Pattern-recognition algorithms from the domain of machine learning play a prominent role in embedded sensing systems, where they are used to derive inferences from sensor data. Very often, such systems face severe energy constraints. The focus of this thesis is on mitigating the energy required for computation, communication, and storage through algorithmic techniques.

In the first part of our work, we focus on reducing the computations required for linear signal processing. To achieve this reduction, we consider random projection, a form of compression that preserves a similarity metric widely used in pattern recognition: the inner product between source vectors. Given the prominence of random projections within compressive sensing, previous research has explored this idea for application to compressively-sensed signals. We show that random projections can be exploited more generally, without compressive sensing, enabling significant reduction in computational energy while avoiding a significant source of error. The approach, referred to as compressed signal processing (CSP), applies directly to Nyquist-sampled signals.

The second part of our work addresses signal processing that may not be linear. We investigate approximate computing and its potential as an algorithmic approach to reducing energy. Approximate computing is a broad approach that has recently received considerable attention in the context of inference systems. This stems from the observation that many inference systems exhibit various forms of tolerance to data noise. While some systems have demonstrated significant approximation-vs.-energy knobs to exploit this, they have been applicable only to specific kernels and architectures; the more generally available knobs have been relatively weak, incurring large data noise for relatively modest energy savings (e.g., voltage overscaling, bit-precision scaling). In this work, we explore the use of genetic programming (GP) to compute approximate features. Further, we leverage a method that enhances tolerance to feature-data noise through directed retraining of the inference stage. Previous work in GP has shown that it generalizes well to enable approximation of a broad range of computations, raising the potential for broad applicability of the proposed approach. The focus on feature extraction is deliberate because it involves diverse, often highly nonlinear, operations, which challenge the general applicability of energy-reducing approaches.

The third part of our work considers multi-task algorithms. By exploiting the concept of transfer learning together with energy-efficient accelerators, we show that convolutional autoencoders can enable various levels of reduction in computational energy while avoiding a significant reduction in inference performance when multiple task categories are targeted for inference. To minimize inference computational energy, a convolutional autoencoder is used to learn a generalized representation of the inputs. We consider three scenarios: transferring layers from convolutional autoencoders, transferring layers from convolutional neural networks trained on different tasks, and no layer transfer. We also compare the performance obtained when transferring only convolutional layers versus transferring convolutional layers together with a fully connected layer. We validate the generalizability of our methodologies through applications to clinical and image data.
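The first part's claim rests on random projections approximately preserving inner products between source vectors. The sketch below illustrates that property; it is a minimal, hypothetical example (not the thesis's CSP implementation), and the dimensions and Gaussian projection matrix are illustrative assumptions.

```python
# Minimal sketch (not the thesis's CSP pipeline): a random projection
# approximately preserves inner products between Nyquist-sampled signals,
# so similarity computations can run in the lower-dimensional domain.
import numpy as np

rng = np.random.default_rng(0)

N = 1024   # original (Nyquist-sampled) signal dimension
M = 128    # projected dimension (illustrative compression factor of 8)

# Random projection matrix with i.i.d. Gaussian entries, scaled so that
# inner products are preserved in expectation.
Phi = rng.normal(scale=1.0 / np.sqrt(M), size=(M, N))

x = rng.normal(size=N)
y = x + 0.1 * rng.normal(size=N)   # a correlated signal, so the inner product is meaningful

exact = x @ y                      # inner product in the original domain
approx = (Phi @ x) @ (Phi @ y)     # inner product in the projected domain

print(f"exact  inner product: {exact:.2f}")
print(f"approx inner product: {approx:.2f}")
print(f"relative error: {abs(approx - exact) / abs(exact):.2%}")
```

The approximation error shrinks roughly as 1/sqrt(M), so the projected dimension trades accuracy against the reduction in multiply-accumulate operations.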
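For the second part, the following hand-rolled sketch shows genetic programming evolving an arithmetic expression that approximates a nonlinear feature, here the magnitude feature sqrt(x^2 + y^2), from cheap primitive operations. The primitive set, fitness data, and evolutionary parameters are illustrative assumptions, not the thesis's GP framework.

```python
# Minimal GP sketch (illustrative only): evolve an arithmetic expression
# that approximates the nonlinear feature sqrt(x^2 + y^2).
import math
import random

random.seed(0)

TERMINALS = ["x", "y", 1.0]
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
    "div": lambda a, b: a / b if abs(b) > 1e-6 else 1.0,  # protected division
}

def random_tree(depth):
    """Grow a random expression tree at most `depth` levels deep."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x, y):
    """Recursively evaluate an expression tree at a point (x, y)."""
    if tree == "x":
        return x
    if tree == "y":
        return y
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x, y), evaluate(right, x, y))

# Sample points at which the exact feature is matched during evolution.
DATA = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(100)]

def fitness(tree):
    """Mean squared error of the approximate feature (lower is better)."""
    return sum((evaluate(tree, x, y) - math.hypot(x, y)) ** 2
               for x, y in DATA) / len(DATA)

def subtrees(tree, path=()):
    """Yield (path, subtree) for every node; used by crossover and mutation."""
    yield path, tree
    if isinstance(tree, tuple):
        yield from subtrees(tree[1], path + (1,))
        yield from subtrees(tree[2], path + (2,))

def replace(tree, path, new):
    """Return a copy of `tree` with the node at `path` replaced by `new`."""
    if not path:
        return new
    node = list(tree)
    node[path[0]] = replace(node[path[0]], path[1:], new)
    return tuple(node)

def crossover(a, b):
    """Graft a random subtree of `b` into a random position of `a`."""
    pos, _ = random.choice(list(subtrees(a)))
    _, sub = random.choice(list(subtrees(b)))
    child = replace(a, pos, sub)
    return child if sum(1 for _ in subtrees(child)) <= 60 else a  # limit bloat

def mutate(tree):
    """Replace a random node with a freshly grown subtree."""
    pos, _ = random.choice(list(subtrees(tree)))
    return replace(tree, pos, random_tree(2))

def tournament(pop, k=3):
    """Select the fittest of k randomly chosen individuals."""
    return min(random.sample(pop, k), key=fitness)

pop = [random_tree(3) for _ in range(40)]
for _ in range(20):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(len(pop))]
best = min(pop, key=fitness)
print("MSE of evolved approximate feature:", round(fitness(best), 4))
```

In a real system, the evolved expression would stand in for a costly exact feature computation, and the residual error (the MSE printed above) would be absorbed by retraining the downstream inference stage.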
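For the third part, the PyTorch sketch below illustrates transferring a convolutional autoencoder's encoder layers into classifiers for multiple tasks. The layer sizes, the 28x28 single-channel inputs, and the choice to freeze the transferred layers are assumptions for illustration, not the thesis's architecture.

```python
# Minimal sketch (illustrative, not the thesis architecture): pretrain a
# convolutional autoencoder, then transfer its encoder (convolutional)
# layers into task-specific classifiers so multiple tasks share one
# generalized representation of the input.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(              # layers to be transferred
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(              # used only for pretraining
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class TaskClassifier(nn.Module):
    """Classifier for one task, reusing the autoencoder's encoder."""
    def __init__(self, encoder, num_classes, freeze=True):
        super().__init__()
        self.encoder = encoder
        if freeze:                                 # transferred layers are not retrained
            for p in self.encoder.parameters():
                p.requires_grad = False
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, num_classes),    # assumes 28x28 inputs
        )

    def forward(self, x):
        return self.head(self.encoder(x))

# Autoencoder pretraining (reconstruction loss) would go here; omitted for brevity.
autoenc = ConvAutoencoder()

# Two different inference tasks share the transferred convolutional layers.
task_a = TaskClassifier(autoenc.encoder, num_classes=10)
task_b = TaskClassifier(autoenc.encoder, num_classes=2)

x = torch.randn(4, 1, 28, 28)                      # dummy batch of 28x28 images
print(task_a(x).shape, task_b(x).shape)            # torch.Size([4, 10]) torch.Size([4, 2])
```

Because the shared encoder runs once per input, its computational energy is amortized across tasks, while only the small task-specific heads differ between inference targets.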
URI: http://arks.princeton.edu/ark:/88435/dsp01kp78gk50n
Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: catalog.princeton.edu
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Electrical Engineering

Files in This Item:
File: Lu_princeton_0181D_13763.pdf
Size: 5.76 MB
Format: Adobe PDF


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.