Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp017w62fc456
Title: A Serverless Architecture for Application-Level Orchestration
Authors: Liu, Hao
Advisors: Levy, Amit
Contributors: Computer Science Department
Keywords: Cloud Computing
Serverless Computing
Subjects: Computer science
Issue Date: 2023
Publisher: Princeton, NJ : Princeton University
Abstract: This thesis examines the problem of building large-scale applications using the serverless computing model and proposes decentralized, application-level orchestration for serverless workloads. We demonstrate that application-level orchestration is possible and practical using just the basic APIs of existing serverless infrastructures, and that it benefits both cloud users and cloud providers compared with standalone orchestrators, the state-of-the-art solution for building large-scale serverless applications. It gives cloud users the flexibility to apply application-specific optimizations, and it frees cloud providers from hosting and maintaining yet another performance-critical service. Furthermore, the performance and efficiency of application-level orchestration improve as the underlying systems develop. Thus, cloud providers can direct freed-up resources to core services in their serverless infrastructure and automatically reap the benefits of a better orchestrator.

This thesis describes mechanisms and implementations that help realize the goal of application-level orchestration. In particular, we explain the necessity and challenges of decentralizing orchestration and present Unum, a system for decentralized orchestration. Unum introduces an intermediate representation (IR) language that expresses execution graphs using only node-local information, thereby decentralizing the orchestration logic of applications. Unum implements orchestration as a library that runs in situ with user-defined FaaS functions, rather than as a standalone service. The library relies on a minimal set of existing serverless APIs---function invocation and a few basic datastore operations---that are common across cloud platforms. Unum ensures workflow correctness despite multiple executions of non-deterministic functions by using checkpoints to commit to exactly one output per function invocation.

Our results show that a representative set of applications scales better, runs faster, and costs significantly less with Unum than with a state-of-the-art centralized orchestrator. We also show that Unum's IR allows hand-tuned applications to run faster by using application-specific optimizations and by supporting a richer set of application patterns.

We hope the results of this thesis inspire cloud practitioners to reconsider the approach of supporting new functionality by simply adding more services to the cloud infrastructure, and we hope to encourage the serverless community to build other application-level orchestration systems.
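The checkpoint-then-invoke pattern described in the abstract can be illustrated with a short sketch. The Python fragment below is not taken from the thesis; it is a hypothetical wrapper showing how an orchestration library co-located with a FaaS function might commit exactly one output via a conditional datastore write and then invoke the next function in the graph, using only basic cloud APIs (DynamoDB PutItem with a condition expression and asynchronous Lambda Invoke). Names such as the "unum-checkpoints" table, the invocation_id field, and next_function are assumptions made for illustration only.

    # Hypothetical sketch of application-level orchestration running in situ
    # with a FaaS function. Table and field names are illustrative, not from Unum.
    import json
    import boto3

    dynamodb = boto3.client("dynamodb")
    lambda_client = boto3.client("lambda")

    CHECKPOINT_TABLE = "unum-checkpoints"   # assumed datastore for checkpoints


    def user_function(event):
        # Placeholder for the user-defined, possibly non-deterministic business logic.
        return {"value": event.get("value", 0) + 1}


    def orchestrate(event, context):
        # Run the user code, then commit its output exactly once: the conditional
        # write succeeds only for the first execution that finishes this invocation.
        result = user_function(event)
        invocation_id = event["invocation_id"]   # unique per logical invocation
        try:
            dynamodb.put_item(
                TableName=CHECKPOINT_TABLE,
                Item={
                    "invocation_id": {"S": invocation_id},
                    "output": {"S": json.dumps(result)},
                },
                ConditionExpression="attribute_not_exists(invocation_id)",
            )
        except dynamodb.exceptions.ConditionalCheckFailedException:
            # A retried or concurrent execution already committed; reuse its output
            # so downstream functions observe a single, consistent value.
            stored = dynamodb.get_item(
                TableName=CHECKPOINT_TABLE,
                Key={"invocation_id": {"S": invocation_id}},
            )
            result = json.loads(stored["Item"]["output"]["S"])

        # Node-local continuation: the function knows only its immediate successor,
        # so no centralized orchestrator is consulted.
        next_function = event.get("next_function")
        if next_function:
            lambda_client.invoke(
                FunctionName=next_function,
                InvocationType="Event",   # asynchronous invocation
                Payload=json.dumps({
                    "invocation_id": invocation_id + "/next",
                    "value": result["value"],
                }),
            )
        return result

In the actual system, continuation information comes from Unum's IR rather than the event payload; the sketch is only meant to convey the in-situ, checkpoint-then-invoke pattern that the abstract describes.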
URI: http://arks.princeton.edu/ark:/88435/dsp017w62fc456
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Computer Science

Files in This Item:
File: Liu_princeton_0181D_14389.pdf (1.02 MB, Adobe PDF)


Items in DataSpace are protected by copyright, with all rights reserved, unless otherwise indicated.