Abstract:
By effectively assigning and migrating tasks according to service requirements, the success rate of task execution in cloud-edge-end collaborative computing can be significantly improved, thereby ensuring high-quality services for users. Most conventional cloud-edge-end task offloading approaches focus on static scenarios and therefore struggle to guarantee the task execution success rate in mobile scenarios. It is thus imperative to construct a joint optimization scheme for task allocation and migration that suits mobile scenarios. This paper redefines the latency, energy, and migration models for task processing in mobile scenarios. Furthermore, we propose a Deep Reinforcement Learning (DRL)-based Task allocation and Migration optimization algorithm (DRTM) to improve task-completion efficiency and minimize the total cost. DRTM extends the traditional Actor-Critic framework with a mirrored deep deterministic policy gradient (DDPG) and establishes a dual Q-network whose parameters are updated along their respective gradients to acquire the optimal policy. DRTM also incorporates two target networks, which improve stability and convergence speed during training while reducing computational complexity. The experimental results demonstrate that DRTM offers a high-performance task assignment and migration scheme in mobile scenarios, significantly reducing the total cost over the task-execution life cycle. © 2023 IEEE.
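The "dual Q-network with two target networks" described in the abstract resembles the clipped double-Q target used in twin-critic DDPG variants. A minimal NumPy sketch of that target computation is given below; the network shapes, dimensions, and batch data are hypothetical stand-ins, since the paper's actual DRTM architecture is not reproduced in this record.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(in_dim, out_dim):
    # single hidden-layer weights as a stand-in for a Q-network or actor
    return (rng.normal(size=(in_dim, 8)) * 0.1,
            rng.normal(size=(8, out_dim)) * 0.1)

def forward(params, x):
    w1, w2 = params
    return np.tanh(x @ w1) @ w2

state_dim, action_dim, gamma = 4, 2, 0.99
actor_t = mlp(state_dim, action_dim)           # target actor mu'
q1_t = mlp(state_dim + action_dim, 1)          # target critic 1
q2_t = mlp(state_dim + action_dim, 1)          # target critic 2

s_next = rng.normal(size=(16, state_dim))      # batch of next states
r = rng.normal(size=(16, 1))                   # per-step rewards (costs negated)
done = np.zeros((16, 1))                       # episode-termination flags

a_next = forward(actor_t, s_next)              # mu'(s')
sa = np.concatenate([s_next, a_next], axis=1)
# taking the minimum of the two target critics curbs Q overestimation
q_min = np.minimum(forward(q1_t, sa), forward(q2_t, sa))
y = r + gamma * (1.0 - done) * q_min           # shared TD target for both critics
print(y.shape)  # (16, 1)
```

In this scheme both online critics regress toward the same target `y`, while the target networks are refreshed slowly (e.g. by Polyak averaging), which is one common way the stability benefit mentioned in the abstract is obtained.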
Year: 2023
Page: 265-270
Language: English
ESI Highly Cited Papers on the List: 0