Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01v405sd22b
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Verma, Naveen | -
dc.contributor.author | Ham, Eric | -
dc.date.accessioned | 2019-08-16T18:48:01Z | -
dc.date.available | 2019-08-16T18:48:01Z | -
dc.date.created | 2019-04-22 | -
dc.date.issued | 2019-08-16 | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp01v405sd22b | -
dc.description.abstract | With steady advances in neural network technology, it has become necessary to develop hardware better suited to running neural networks. Many different hardware systems have been developed for this purpose, but here we focus on an Application-Specific Integrated Circuit (ASIC) developed by Professor Naveen Verma's group at Princeton University. This chip has demonstrated superior efficiency and power usage compared with other architectures designed for the same purpose. However, it lacks sufficient memory capacity to easily run the more complex neural networks currently being developed. In this work, I propose adding a Synchronous Dynamic Random Access Memory (SDRAM) component to the existing ASIC to address this limitation. This addition should allow the chip to run more complex neural networks, with larger training datasets, more efficiently. | en_US
dc.format.mimetype | application/pdf | -
dc.language.iso | en | en_US
dc.title | Optimizing Neural Network Performance by Integrating ASICs Optimized for Machine Learning with SDRAM | en_US
dc.type | Princeton University Senior Theses | -
pu.date.classyear | 2019 | en_US
pu.department | Electrical Engineering | en_US
pu.pdf.coverpage | SeniorThesisCoverPage | -
pu.contributor.authorid | 960846697 | -
pu.certificate | Robotics & Intelligent Systems Program | en_US
Appears in Collections:Electrical and Computer Engineering, 1932-2023

Files in This Item:
File | Description | Size | Format
HAM-ERIC-THESIS.pdf | | 7.09 MB | Adobe PDF


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.