Author:

Wang, Jun [1] | Jiang, Weibin [2] | Xu, Haodong [3] | Hu, Jinsong [4] | Wu, Liang [5] | Shu, Feng [6] | Fang, Zhou [7]

Indexed by:

EI

Abstract:

Efficient and fair resource allocation is a critical challenge in vehicular networks, especially under high mobility and unknown channel state information (CSI). Existing works mainly focus on centralized optimization with perfect CSI or decentralized heuristics with partial CSI, which may not be practical or effective in real-world scenarios. In this paper, we propose a novel hierarchical deep reinforcement learning (HDRL) framework to address the joint channel and power allocation problem in vehicular networks with high mobility and unknown CSI. The main contributions of this work are twofold. Firstly, this paper develops a multi-agent reinforcement learning architecture that integrates both centralized training with global information and decentralized execution with local observations. The proposed architecture leverages the strengths of deep Q-networks (DQN) for discrete channel selection and deep deterministic policy gradient (DDPG) for continuous power control while learning robust and adaptive policies under time-varying channel conditions. Secondly, this paper designs efficient reward functions and training algorithms that encourage cooperation among vehicles and balance the trade-off between system throughput and individual fairness. By incorporating Jain's fairness index into the reward design and adopting a hybrid experience replay strategy, the proposed algorithm achieves a good balance between system efficiency and user equity. Extensive simulations demonstrate the superiority of the proposed HDRL method over state-of-the-art benchmarks, including DQN, DDPG, and fractional programming, in terms of both average throughput and fairness index under various realistic settings. The proposed framework provides a promising solution for intelligent and efficient resource management in future vehicular networks. © 2025 Elsevier B.V.
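
The abstract names two concrete mechanisms: a hierarchical split between discrete channel selection (DQN) and continuous power control (DDPG), and a reward that incorporates Jain's fairness index to trade off system throughput against per-vehicle fairness. The exact formulas are not given in the abstract, so the Python sketch below is only illustrative: the weighting parameter `alpha`, the rate normalization, and all function names are assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's code) of a Jain's-fairness-aware
# reward and a hierarchical discrete/continuous action split for one vehicle agent.
import numpy as np

def jains_fairness(throughputs):
    """Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2); equals 1 when all equal."""
    x = np.asarray(throughputs, dtype=float)
    denom = len(x) * np.sum(x ** 2)
    if denom == 0.0:
        return 0.0
    return float(x.sum() ** 2 / denom)

def hybrid_reward(throughputs, alpha=0.5, rate_scale=10.0):
    """Assumed reward shape: weighted sum of normalized sum throughput and fairness.

    alpha and rate_scale are illustrative placeholders; the paper balances the two
    terms but the abstract does not state the exact weighting.
    """
    sum_rate = float(np.sum(throughputs))
    return alpha * (sum_rate / rate_scale) + (1.0 - alpha) * jains_fairness(throughputs)

def select_action(q_values, power_policy_output, p_max=0.2):
    """Hierarchical action: DQN-style greedy channel pick plus DDPG-style
    continuous transmit power scaled into [0, p_max] watts (hypothetical scaling)."""
    channel = int(np.argmax(q_values))
    power = float(np.clip(power_policy_output, 0.0, 1.0)) * p_max
    return channel, power

if __name__ == "__main__":
    rates = np.array([3.2, 1.1, 2.7, 0.4])  # made-up per-vehicle throughputs (Mbps)
    print("Jain's index:", round(jains_fairness(rates), 3))
    print("Reward:", round(hybrid_reward(rates, alpha=0.6), 3))
    print("Action (channel, power):", select_action(np.array([0.1, 0.8, 0.3]), 0.75))
```

Jain's index is bounded in (0, 1] and reaches 1 only when every vehicle sees the same throughput, which is what makes it a convenient additive fairness term in a scalar reward.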

Keyword:

Deep reinforcement learning; Reinforcement learning

Community:

  • [ 1 ] [Wang, Jun]College of Electrical Engineering and Automation, Fuzhou University, Fujian, Fuzhou; 350108, China
  • [ 2 ] [Jiang, Weibin]College of Electrical Engineering and Automation, Fuzhou University, Fujian, Fuzhou; 350108, China
  • [ 3 ] [Xu, Haodong]College of Electrical Engineering and Automation, Fuzhou University, Fujian, Fuzhou; 350108, China
  • [ 4 ] [Hu, Jinsong]College of Physics and Information Engineering, Fuzhou University, Fujian, Fuzhou; 350108, China
  • [ 5 ] [Wu, Liang]Mobile Communications Research Laboratory, Frontiers Science Center for Mobile Information Communication and Security, Southeast University, Jiangsu, Nanjing; 210006, China
  • [ 6 ] [Shu, Feng]School of Information and Communication Engineering, Hainan University, Hainan, Haikou; 570228, China
  • [ 7 ] [Shu, Feng]School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Jiangsu, Nanjing; 210094, China
  • [ 8 ] [Fang, Zhou]College of Electrical Engineering and Automation, Fuzhou University, Fujian, Fuzhou; 350108, China

Reprint Author's Address:

  • [Fang, Zhou]College of Electrical Engineering and Automation, Fuzhou University, Fujian, Fuzhou; 350108, China


Related Keywords:

Source:

Computer Networks

ISSN: 1389-1286

Year: 2025

Volume: 264

Impact Factor (JCR@2023): 4.400

Cited Count:

WoS CC Cited Count:

SCOPUS Cited Count:

ESI Highly Cited Papers on the List: 0

WanFang Cited Count:

Chinese Cited Count:


Affiliated Colleges:
