Deep Reinforcement Learning-Based Joint Routing and Capacity Optimization in an Aerial and Terrestrial Hybrid Wireless Network
As the airspace accommodates an increasing number of low-altitude aircraft, spectrum sharing between aerial and terrestrial users emerges as a compelling way to improve spectrum utilization efficiency. In this paper, we consider a new Aerial and Terrestrial Hybrid Network (ATHN) comprising aerial vehicles (AVs), ground base stations (BSs), and terrestrial users (TUs). In this ATHN, the AVs and BSs collaboratively form a multi-hop ad-hoc network with the objective of minimizing the average end-to-end (E2E) packet transmission delay. Meanwhile, the BSs and TUs form a terrestrial network aimed at maximizing the uplink and downlink sum capacity. Given the concept of spectrum sharing between aerial and terrestrial users in the ATHN, we formulate a joint routing and capacity optimization (JRCO) problem, which is a multi-stage combinatorial problem subject to the curse of dimensionality.
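For intuition, a schematic form of such a joint objective could combine the two goals with a trade-off weight; the weight $\omega$ and the specific delay and capacity terms below are illustrative assumptions, not the paper's exact formulation:

$$\min_{\text{routes},\;\text{resource allocation}} \;\; \omega \,\bar{D}_{\mathrm{E2E}} \;-\; (1-\omega)\sum_{u \in \mathcal{U}} \big(C_u^{\mathrm{UL}} + C_u^{\mathrm{DL}}\big),$$

where $\bar{D}_{\mathrm{E2E}}$ denotes the average end-to-end packet delay over the AV–BS ad-hoc network, $C_u^{\mathrm{UL}}$ and $C_u^{\mathrm{DL}}$ denote the uplink and downlink capacities of TU $u$, and $\omega \in [0,1]$ balances the two objectives.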
To address this problem, we propose a Deep Reinforcement Learning (DRL) based algorithm. Specifically, a Dueling Double Deep Q-Network (D3QN) structure is constructed to learn an optimal policy through trial and error. Extensive simulation results demonstrate the efficacy of the proposed solution.
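To make the D3QN idea concrete, the following is a minimal sketch in PyTorch of a dueling Q-network together with a double-DQN target computation. The state dimension, action space, network sizes, and hyperparameters are assumptions for illustration only, not the paper's architecture or settings.

```python
# Minimal D3QN sketch: dueling Q-network + double-DQN target (illustrative only).
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        # Dueling heads: state value V(s) and per-action advantage A(s, a).
        self.value = nn.Linear(hidden, 1)
        self.advantage = nn.Linear(hidden, num_actions)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v = self.value(h)                      # shape: (batch, 1)
        a = self.advantage(h)                  # shape: (batch, num_actions)
        # Subtracting the mean advantage keeps the V/A decomposition identifiable.
        return v + a - a.mean(dim=1, keepdim=True)

def double_dqn_target(online: DuelingQNet, target: DuelingQNet,
                      reward, next_state, done, gamma: float = 0.99):
    # Double DQN: the online net selects the next action, the target net evaluates it.
    with torch.no_grad():
        next_actions = online(next_state).argmax(dim=1, keepdim=True)
        next_q = target(next_state).gather(1, next_actions).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q

# Dummy usage with placeholder shapes (state_dim=16, num_actions=8):
if __name__ == "__main__":
    online, tgt = DuelingQNet(16, 8), DuelingQNet(16, 8)
    tgt.load_state_dict(online.state_dict())
    s_next = torch.randn(4, 16)
    r, d = torch.ones(4), torch.zeros(4)
    y = double_dqn_target(online, tgt, r, s_next, d)
    print(y.shape)  # torch.Size([4])
```

In a setting like the JRCO problem, the state would encode the network topology and queue/channel conditions, the discrete actions would encode routing and resource-allocation choices, and the reward would reflect the delay and sum-capacity objectives; those mappings are design choices the paper defines and are not shown here.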