Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01br86b6867
Title: Towards Machine Learning for Network Optimization: Buffer Sizing via Reinforcement Learning
Authors: Veizi, Artemis
Advisors: Apostolaki, Maria
Department: Electrical and Computer Engineering
Class Year: 2023
Abstract: Buffer management helps optimize network performance by controlling the flow of data and preventing congestion, which can lead to delays and packet loss. Buffer size is a critical aspect of the buffer management problem, as it determines how much data a network device can store and process. Technological advances have made larger buffers available for network devices at lower cost, but larger buffers are not always the best solution for network optimization: increasing buffer size beyond a certain limit leads to "bufferbloat," where excessive buffering increases latency, jitter, and packet loss, ultimately degrading network performance. Previous works have aimed to characterize the ideal buffer size for a given network under various network and traffic conditions, but they do not extend to all topologies and network profiles. Furthermore, existing buffer management schemes set buffer thresholds according to fixed heuristic algorithms; static, heuristic buffer management adapts poorly to rapidly changing network traffic profiles and often exhibits poor burst tolerance and significant throughput degradation during congestion. We present a reinforcement learning algorithm that selects optimal buffer thresholds, with the aim of developing a buffer management strategy that responds more dynamically to changes in network traffic. We use NS-3 (Network Simulator 3) to simulate network configurations with varying traffic loads, TCP protocols, and per-priority queue weights on a fixed topology, and compare our reinforcement learning algorithm (RLBM) against a statically configured buffer management scheme (SB). Within this limited topological scope, simulation results indicate that RLBM achieves throughput equal to or better than SB's in simulations with larger physical buffers. RLBM also significantly improves the worst observed FCT (flow completion time) slowdown and end-to-end delay at small buffer sizes. Our findings indicate that reinforcement learning algorithms could improve network performance over traditional buffer management schemes, and they warrant further exploration of reinforcement learning solutions to the buffer management problem.
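For context on the "fixed heuristic algorithms" the abstract contrasts against: shared-buffer switches commonly use the Dynamic Threshold rule, in which each queue may grow only up to a constant multiple of the currently unused buffer space. The thesis's SB scheme is not specified here, so the sketch below is a generic illustration of that heuristic, not the author's implementation; the function names and parameters are hypothetical.

```python
def dynamic_threshold(alpha, total_buffer, occupied):
    """Dynamic Threshold heuristic: a queue's admission threshold is
    alpha times the currently free buffer space. As the shared buffer
    fills, every queue's threshold shrinks automatically."""
    return alpha * (total_buffer - occupied)

def admits_packet(queue_len, alpha, total_buffer, occupied):
    """Admit a packet only while the queue is below its threshold."""
    return queue_len < dynamic_threshold(alpha, total_buffer, occupied)
```

Because alpha is fixed, the rule cannot distinguish a transient burst from sustained congestion; an RL-based scheme such as RLBM can instead adjust the effective threshold in response to observed traffic, which is the adaptability gap the abstract describes.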
URI: http://arks.princeton.edu/ark:/88435/dsp01br86b6867
Type of Material: Princeton University Senior Theses
Language: en
Appears in Collections: Electrical and Computer Engineering, 1932-2024

Files in This Item:
File: VEIZI-ARTEMIS-THESIS.pdf (10.48 MB, Adobe PDF) - Request a copy


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.