Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp019019s5171
Full metadata record (DC field: value)
dc.contributor.advisor: Freedman, Michael J
dc.contributor.author: Zhang, Haoyu
dc.contributor.other: Computer Science Department
dc.date.accessioned: 2018-06-12T17:45:25Z
dc.date.available: 2018-06-12T17:45:25Z
dc.date.issued: 2018
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp019019s5171
dc.description.abstract:
The rapidly growing size of data and the complexity of analytics present new challenges for large-scale data analytics systems. Modern distributed computing frameworks need to support not only embarrassingly parallelizable batch jobs, but also advanced applications analyzing text and multimedia data using complex queries and machine learning (ML) models. Given the computation and storage costs of advanced data analytics, resource management is crucial. New applications and workloads expose vastly different characteristics which make traditional scheduling systems inadequate, and at the same time offer great opportunities that lead to new system designs for better performance. In this thesis, we present resource management systems that significantly improve cloud resource efficiency by leveraging the specific characteristics of advanced data analytics applications. We present the design and implementation of the following systems:

(i) VideoStorm: a video analytics system that scales to process thousands of vision queries on live video streams over large clusters. VideoStorm's offline profiler generates resource-quality profiles for vision queries, and its online scheduler allocates resources to maximize performance in terms of vision processing quality and lag.

(ii) SLAQ: a cluster scheduling system for approximate ML training jobs that aims to maximize the overall model quality. In iterative and exploratory training settings, better models can be obtained faster by directing resources to jobs with the most potential for improvement. SLAQ allocates resources to maximize the cluster-wide quality improvement based on highly-tailored model quality predictions.

(iii) Riffle: an optimized shuffle service for big-data analytics frameworks that significantly improves I/O efficiency. The all-to-all data transfer (i.e., shuffle) in modern big-data systems (such as Spark and Hadoop) becomes the scaling bottleneck for multi-stage analytics jobs, due to the superlinear increase in disk I/O operations as data volume grows. Riffle boosts system performance by merging fragmented intermediate files and efficiently scheduling the merge operations.

Taken together, this thesis demonstrates a novel set of methods in both job-level and task-level scheduling for building scalable, highly-efficient, and cost-effective resource management systems. We have performed extensive evaluation with real production workloads, and our results show significant improvement in resource efficiency, job completion time, and system throughput for advanced data analytics.
dc.language.iso: en
dc.publisher: Princeton, NJ : Princeton University
dc.relation.isformatof: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: http://catalog.princeton.edu
dc.subject: Big-Data Analytics
dc.subject: Cloud Computing
dc.subject: Distributed Machine Learning
dc.subject: Distributed Systems
dc.subject: Resource Scheduling
dc.subject: Video Analytics
dc.subject.classification: Computer science
dc.title: Resource Management for Advanced Data Analytics at Large Scale
dc.type: Academic dissertations (Ph.D.)
pu.projectgrantnumber: 690-2143
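
The abstract above describes SLAQ as directing resources to the training jobs with the most potential for quality improvement, based on predicted quality gains. The sketch below illustrates that greedy, prediction-driven allocation idea in Python; it is a minimal illustration under assumed names (the Job class, predict_gain function, and allocate helper are all hypothetical) and is not taken from SLAQ's actual implementation or API.

import heapq
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Job:
    name: str
    # Predicted quality gain from receiving one more resource unit,
    # given the number of units the job already holds.
    predict_gain: Callable[[int], float]
    units: int = 0

def allocate(jobs: List[Job], total_units: int) -> Dict[str, int]:
    """Greedily hand each resource unit to the job predicted to improve
    model quality the most with its next unit."""
    heap = [(-job.predict_gain(job.units), i) for i, job in enumerate(jobs)]
    heapq.heapify(heap)
    for _ in range(total_units):
        if not heap:
            break
        _neg_gain, i = heapq.heappop(heap)
        jobs[i].units += 1
        # Re-insert with the updated (typically diminishing) predicted gain.
        heapq.heappush(heap, (-jobs[i].predict_gain(jobs[i].units), i))
    return {job.name: job.units for job in jobs}

if __name__ == "__main__":
    # Toy diminishing-returns predictions for two hypothetical training jobs.
    jobs = [
        Job("logreg", lambda u: 1.0 / (u + 1)),
        Job("svm", lambda u: 0.6 / (u + 1)),
    ]
    print(allocate(jobs, total_units=10))

Re-evaluating the predicted gain after each granted unit is what lets resources flow to whichever job currently has the steepest quality curve, matching the abstract's stated goal of maximizing cluster-wide quality improvement.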
Appears in Collections: Computer Science

Files in This Item:
File: Zhang_princeton_0181D_12602.pdf
Size: 20.87 MB
Format: Adobe PDF


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.