Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01vh53x004w
Title: AI framework for improved system design and explainable decisions
Authors: Terway, Prerit
Advisors: Jha, Niraj K.
Contributors: Electrical and Computer Engineering Department
Keywords: Design space exploration
Error correction
Explainable decision
Gaussian mixture model
Inverse design
Subjects: Engineering
Artificial intelligence
Computer science
Issue Date: 2023
Publisher: Princeton, NJ : Princeton University
Abstract: System design space exploration requires searching for inputs that achieve the desired performance. One popular technique for searching a large space is to sweep over many possible input values and select the combination that attains the best performance. While randomly trying different component values may work for a small set of inputs, the search space explodes combinatorially as the number of inputs increases. Moreover, when system evaluation is expensive, the designer needs to minimize the number of evaluation calls. If the system has multiple performance metrics, the designer needs to identify the set of inputs that achieves the best tradeoff among competing metrics and select the input corresponding to the desired tradeoff.

Real-world system design often requires designing the same system multiple times with varying specifications to meet different customer needs. For example, when designing an electrical circuit, the requirements for acceptable noise and gain may differ depending on the use case. In such scenarios, the designer may wish to drive a system dynamically from one use case to another without restarting the search from scratch. Piggybacking on an existing solution to cater to different specifications can reduce design time while leveraging past knowledge. Furthermore, when system evaluation is expensive, the designer may wish to gauge how much confidence to place in the solution suggested by the design tool before performing an evaluation. A confidence measure lets the designer decide whether to evaluate a lower-confidence solution that may yield a larger performance improvement, or vice versa. In addition, many systems exhibit multi-modal behavior: the same performance over multiple choices of inputs. However, the cost of different input combinations may vary even when the system attains the same performance. In such scenarios, the design tool should identify all input combinations corresponding to the same performance so that the designer can choose among them.

When designing a complex system, the designer may also be limited in the number of inputs that can be varied. For instance, the designer may only have the flexibility to vary three out of 10 system inputs. Such limitations may arise when the system requires evaluation across different domains and the designer wishes to vary only the inputs corresponding to the domain with fast evaluation times, while still characterizing performance across all domains. In such scenarios, the design tool should let the designer fix some system inputs and obtain values for the remaining ones. Finally, real-world data often contain errors, where some feature values may be corrupted. Using erroneous data to make decisions may have catastrophic consequences. A popular safeguard is to check whether an observation lies within the distribution of past (error-free) data before using it for downstream decision-making. However, this methodology ignores the valuable information that resides in the features with correct values. A tool that detects errors and locates/corrects the erroneous feature values would therefore be beneficial.

In this thesis, we address the system challenges mentioned above. First, we introduce a sample-efficient system design framework called ASSENT. ASSENT uses a two-step methodology. In Step 1, we use a genetic algorithm (GA) to discover the rough tradeoff between performance metrics. Because the GA is sample-inefficient, we terminate it prematurely, avoiding the large number of simulations required to discover the entire tradeoff curve across the performance metrics. The designer then selects a solution of interest from the tradeoff curve. In Step 2, we convert a neural network verifier into an optimizer to further improve the performance of the selected solution. We use an inverse design methodology that specifies the desired system performance to perform targeted simulations and drive the selected solution to perform better on all metrics. The targeted simulations, enabled by inverse design, make the design process sample-efficient.

Next, we present a framework called INFORM, which performs constrained multi-objective optimization for system design. We introduce three inverse design techniques based on a neural network verifier, a neural network, and a Gaussian mixture model (GMM). Combinations of these methods lead to a total of seven inverse design schemes. Similar to ASSENT, we use a two-step methodology. In the first step, we modify a GA to make the design process sample-efficient: instead of determining the candidate solutions for the next generation using only crossover and mutation, as in a standard GA, we inject candidate solutions generated by the inverse design methods into the GA population. The candidates for the next generation are thus a mix of solutions generated by crossover/mutation and solutions generated by inverse design. At the end of the first step, we obtain a set of non-dominated solutions. In the second step, we choose a region of interest around the non-dominated solutions (or another reference solution) and further improve the objective function values using the inverse design methods.

Finally, we present REPAIRS, an explainable decision framework that completes/optimizes partially specified systems and detects/locates/corrects errors in a data instance. We use a GMM to learn the joint distribution of the system inputs and the corresponding output response (objectives/constraints). We use the learned model to complete a partially specified system in which only a subset of the component values and/or the system response is given. When the system response exhibits multiple modes (i.e., the same response for different combinations of input values), REPAIRS determines the input combinations corresponding to the several modes. We also use REPAIRS to verify the integrity of a given data instance; when the integrity check fails, we provide a mechanism to identify and correct the error.
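The injection idea in INFORM's first step can be illustrated with a minimal sketch. This is not the thesis's implementation: the objective below is a toy function standing in for an expensive simulator, and the `inverse_design` helper (fitting a Gaussian around elite individuals and sampling from it) is a simple stand-in for the learned inverse models (neural network verifier, neural network, or GMM) that the framework actually uses. The GA operators are deliberately bare-bones.

```python
import random

def evaluate(x):
    # Toy objective standing in for an expensive simulator:
    # minimize a Rosenbrock-style function with optimum at (1, 1).
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def crossover_mutate(p1, p2, sigma=0.1):
    # Uniform crossover followed by Gaussian mutation.
    child = [random.choice(pair) for pair in zip(p1, p2)]
    return [g + random.gauss(0, sigma) for g in child]

def inverse_design(pop, k=5):
    # Stand-in for a learned inverse model: fit a per-dimension
    # Gaussian to the k best individuals and sample near that region.
    elite = sorted(pop, key=evaluate)[:k]
    means = [sum(v) / k for v in zip(*elite)]
    return [random.gauss(m, 0.05) for m in means]

def run(pop_size=30, gens=40, inject=5):
    pop = [[random.uniform(-2, 2) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(gens):
        parents = sorted(pop, key=evaluate)[:pop_size // 2]
        children = [crossover_mutate(*random.sample(parents, 2))
                    for _ in range(pop_size - inject)]
        # Injection step: a few population slots are filled by
        # inverse design instead of crossover/mutation.
        children += [inverse_design(pop) for _ in range(inject)]
        pop = children
    return min(pop, key=evaluate)
```

The key structural point is the mixed population: most candidates come from standard GA operators, while a few are proposed by a model that maps desired performance back to inputs, which is what makes each generation more sample-efficient.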
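The GMM conditioning at the heart of REPAIRS can also be sketched. The fragment below is illustrative only: it uses a hand-specified two-component mixture over a single input x and a single response y (the thesis fits the GMM from data over all inputs and responses), and it omits the conditional variance for brevity. Conditioning on an observed response yields one candidate input per mixture component, which is how multiple modes (different inputs, same response) are recovered.

```python
import math

def normal_pdf(v, mean, var):
    # Univariate Gaussian density.
    return math.exp(-(v - mean)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Hand-specified two-component GMM over (input x, response y).
# Both components place the response near y = 0, mimicking a
# multi-modal system: the same response is reachable from x ~ -1
# and from x ~ +1.
components = [
    # (weight, mean=(mx, my), cov=((sxx, sxy), (sxy, syy)))
    (0.5, (-1.0, 0.0), ((0.05, 0.02), (0.02, 0.05))),
    (0.5, (+1.0, 0.0), ((0.05, -0.02), (-0.02, 0.05))),
]

def condition_on_response(y0):
    """Per-mode (posterior weight, conditional mean of x) given y = y0."""
    modes = []
    for w, (mx, my), ((_sxx, sxy), (_, syy)) in components:
        # Responsibility: how well this component explains the observed y.
        resp = w * normal_pdf(y0, my, syy)
        # Standard Gaussian conditioning for the input mean
        # (conditional variance omitted for brevity).
        cond_mean = mx + sxy / syy * (y0 - my)
        modes.append((resp, cond_mean))
    total = sum(r for r, _ in modes)
    return [(r / total, m) for r, m in modes]
```

Conditioning on y = 0 here returns two equally weighted modes near x = -1 and x = +1; the same mechanism, run on a fitted GMM with some dimensions observed and others missing, completes a partially specified system, and a low likelihood of the observed dimensions under every component is the basis of the integrity check.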
URI: http://arks.princeton.edu/ark:/88435/dsp01vh53x004w
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Electrical Engineering

Files in This Item:
File: Terway_princeton_0181D_14716.pdf
Size: 40.59 MB
Format: Adobe PDF


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.