Abstract:
As an emerging computing paradigm, Mobile Edge Computing (MEC) significantly enhances user experience and alleviates network congestion by deploying edge servers in close proximity to mobile users. However, the effectiveness of MEC hinges on the placement of these edge servers, a critical factor in determining the Quality of Experience (QoE) of mobile users. Existing studies predominantly optimize edge server placement in static scenarios and often fall short under user mobility, resulting in degraded QoE. To address this challenge, we propose an adaptive edge server placement approach that leverages Deep Reinforcement Learning (DRL) to select the base stations that host edge servers in a dynamic MEC environment. Our objective is to minimize access delay by adapting the placement to the changing environment. To tackle the vast action space of edge server placement, we introduce a novel activation function in the actor neural network for efficient exploration. Furthermore, to enhance the adaptability of the derived placement strategy, we design a new reward function that accounts for minimizing the total access delay in dynamic MEC scenarios. Finally, we validate the proposed method through extensive experiments on the Shanghai Telecom dataset. The results demonstrate that our approach outperforms baseline methods in minimizing access delay for users in dynamic MEC scenarios.
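To make the optimization objective in the abstract concrete, the sketch below computes the total access delay for one candidate placement, assuming each user connects to its nearest edge-server-equipped base station. This nearest-server assumption, the function name, and the toy delay matrix are illustrative choices, not details taken from the paper.

```python
import numpy as np

def total_access_delay(delay, placement):
    """Total access delay for one edge-server placement.

    delay: (n_users, n_stations) matrix of user-to-base-station delays.
    placement: indices of base stations chosen to host edge servers.
    Assumes each user is served by its lowest-delay placed server
    (a common modeling choice; the paper's exact model may differ).
    """
    # Restrict to the placed stations, then take each user's best option.
    return delay[:, placement].min(axis=1).sum()

# Toy example: 3 users, 4 candidate base stations.
delay = np.array([[5.0, 2.0, 9.0, 4.0],
                  [1.0, 6.0, 3.0, 8.0],
                  [7.0, 7.0, 2.0, 5.0]])
print(total_access_delay(delay, [1, 2]))  # 2 + 3 + 2 = 7.0
```

In the DRL formulation the abstract describes, a quantity like this would feed the reward signal: placements with lower total access delay receive higher reward.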
Source: COMPUTER NETWORKS
ISSN: 1389-1286
Year: 2025
Volume: 268
Impact Factor: 4.400 (JCR@2023)