Title: Algorithms for Data Normalization with Applications to Stop and Frisk
Abstract: Data modeling is an already difficult task, and it is further exacerbated by errors introduced during data entry. Inconsistencies in large quantities of data can make it difficult to perform any kind of automated analysis. We motivate our investigation into improved data-cleaning methods by revealing disastrous non-uniformity in data related to the controversial Stop and Frisk policy as implemented by the NYPD. These inconsistencies guide our construction of workflow F, which consults multiple similarity measurements in order to dictate proper transformations of non-uniform data into standardized values. F increases the volume of non-standardized data that is correctly transformed by 887% in comparison to common existing methods, such as the Levenshtein distance. We conclude by presenting additional pathways for improvement and describing how to most effectively apply workflow F as part of an interactive tool.
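The abstract describes a workflow that consults multiple similarity measurements to map non-uniform entries onto standardized values. The thesis itself is not reproduced here, so the sketch below is only a generic illustration of that idea, not workflow F: it combines two string-similarity signals (a normalized Levenshtein distance and Python's stdlib `difflib.SequenceMatcher` ratio) and maps a raw value to the closest entry in a hypothetical canonical vocabulary. All names and the example values are assumptions for illustration.

```python
from difflib import SequenceMatcher

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,           # deletion
                            curr[j - 1] + 1,       # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def normalize(value: str, canon: list[str]) -> str:
    """Map a raw entry to the canonical value with the highest combined
    similarity score (average of two measures), illustrating the idea of
    consulting multiple similarity measurements rather than one."""
    def score(c: str) -> float:
        v, w = value.lower(), c.lower()
        lev_sim = 1 - levenshtein(v, w) / max(len(v), len(w))
        seq_sim = SequenceMatcher(None, v, w).ratio()
        return (lev_sim + seq_sim) / 2
    return max(canon, key=score)

# Hypothetical canonical vocabulary for one categorical field:
canon = ["BLACK", "WHITE", "HISPANIC"]
print(normalize("Blck", canon))   # a misspelled entry resolves to "BLACK"
```

In practice such a scorer would also need a confidence threshold below which a value is flagged for human review rather than silently transformed, which fits the interactive-tool use the abstract mentions.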
Type of Material: Princeton University Senior Theses
Appears in Collections: Computer Science, 1988-2016
Files in This Item: PUTheses2015-Fillmore_Mark.pdf (858.51 kB, Adobe PDF)
Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.