Query:
Scholar name: 陈哲毅 (Chen, Zheyi)
Abstract :
Through deploying computing resources at the network edge, Mobile Edge Computing (MEC) alleviates the contradiction between the high requirements of intelligent mobile applications and the limited capacities of mobile End Devices (EDs) in smart communities. However, existing solutions for computation offloading and resource allocation commonly rely on prior knowledge or centralized decision-making, which cannot adapt to dynamic MEC environments with changeable system states and personalized user demands, resulting in degraded Quality-of-Service (QoS) and excessive system overheads. To address this important challenge, we propose a novel Personalized Federated deep Reinforcement learning based computation Offloading and resource Allocation method (PFR-OA). The PFR-OA considers the personalized demands in smart communities when generating proper policies for computation offloading and resource allocation. To relieve the negative impact of local updates on global model convergence, we design a new proximal term that improves on the classic reinforcement-learning practice of optimizing only local Q-value loss functions. Moreover, we develop a new partial-greedy based participant selection mechanism that reduces the complexity of federated aggregation while permitting sufficient exploration. Using real-world system settings and a testbed, extensive experiments demonstrate the effectiveness of the PFR-OA. Compared to benchmark methods, the PFR-OA achieves better trade-offs between delay and energy consumption and higher task execution success rates under different scenarios.
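The abstract above names a proximal term added to the local Q-value loss but does not give its form. As a rough illustration only, a FedProx-style penalty that keeps each participant's weights near the global model might look like the following sketch (the names `mu`, `local_w`, `global_w`, and the flat weight lists are illustrative assumptions, not the paper's notation):

```python
# Hedged sketch: local Q-loss augmented with a FedProx-style proximal term so
# that local updates do not drift too far from the global model during
# federated training. All parameter names here are illustrative assumptions.

def local_loss(q_pred, q_target, local_w, global_w, mu=0.01):
    """Mean-squared TD loss plus a proximal penalty toward the global weights."""
    td = sum((p - t) ** 2 for p, t in zip(q_pred, q_target)) / len(q_pred)
    prox = (mu / 2.0) * sum((l - g) ** 2 for l, g in zip(local_w, global_w))
    return td + prox
```

With `mu = 0`, this reduces to the local-only TD objective that the abstract contrasts against.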
Keyword :
computation offloading; deep reinforcement learning; mobile edge computing; personalized federated learning; resource allocation
Cite:
GB/T 7714: Chen, Z., Xiong, B., Chen, X., et al. Joint Computation Offloading and Resource Allocation in Multi-edge Smart Communities with Personalized Federated Deep Reinforcement Learning [J]. IEEE Transactions on Mobile Computing, 2024: 1-16.
MLA: Chen, Z., et al. "Joint Computation Offloading and Resource Allocation in Multi-edge Smart Communities with Personalized Federated Deep Reinforcement Learning." IEEE Transactions on Mobile Computing (2024): 1-16.
APA: Chen, Z., Xiong, B., Chen, X., Min, G., & Li, J. Joint Computation Offloading and Resource Allocation in Multi-edge Smart Communities with Personalized Federated Deep Reinforcement Learning. IEEE Transactions on Mobile Computing, 2024, 1-16.
Abstract :
Mobile Edge Computing (MEC) deploys computing and storage resources at the network edge, allowing users to offload tasks from mobile devices to nearby edge servers and obtain a low-latency, highly reliable service experience. However, due to dynamic system states and changeable user demands, computation offloading and resource allocation in MEC environments face huge challenges. Existing solutions usually rely on prior system knowledge and cannot adapt to dynamic MEC environments under multiple constraints, resulting in excessive delay and energy consumption. To address this important challenge, this paper proposes a novel Joint computation Offloading and resource Allocation method with deep Reinforcement Learning (JOA-RL). For multi-user sequential tasks, JOA-RL can generate suitable computation offloading and resource allocation schemes according to computing resources and network conditions, improving the task execution success rate while reducing the delay and energy consumption of task execution. Meanwhile, JOA-RL incorporates a task-priority preprocessing mechanism that assigns priorities to tasks according to task data volume and mobile device performance. Extensive simulation experiments verify the feasibility and effectiveness of JOA-RL. Compared with other benchmark methods, JOA-RL achieves a better balance between delay and energy consumption under the constraints of maximum task tolerance delay and device battery level, and exhibits a higher task execution success rate.
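The task-priority preprocessing mechanism mentioned above assigns priorities from task data volume and device performance; the exact rule is not given in the abstract. A minimal sketch under the assumption that heavier tasks on weaker devices should be served first (the scoring rule and data shapes are illustrative assumptions):

```python
# Hedged sketch of priority preprocessing: rank tasks by data volume relative
# to device capability, so large tasks on slow devices are scheduled first.
# The score data_size / device_speed is an illustrative assumption.

def prioritize(tasks):
    """tasks: list of (task_id, data_size, device_speed); returns ids, highest priority first."""
    return [tid for tid, size, speed in
            sorted(tasks, key=lambda t: t[1] / t[2], reverse=True)]
```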
Keyword :
multi-constraint optimization; deep reinforcement learning; mobile edge computing; computation offloading; resource allocation
Cite:
GB/T 7714: 熊兵, 张俊杰, 黄思进, et al. 多约束边环境下计算卸载与资源分配联合优化 [J]. 小型微型计算机系统, 2024, 45(02): 405-412.
MLA: 熊兵, et al. "多约束边环境下计算卸载与资源分配联合优化." 小型微型计算机系统 45.02 (2024): 405-412.
APA: 熊兵, 张俊杰, 黄思进, 陈哲毅, 于正欣, & 陈星. 多约束边环境下计算卸载与资源分配联合优化. 小型微型计算机系统, 2024, 45(02), 405-412.
Abstract :
To better support edge computing service providers in provisioning and allocating resources in advance, load prediction is regarded as an important technical underpinning of edge computing. Traditional load prediction methods achieve good accuracy on loads with clear trends or regularity, but they cannot accurately predict the highly variable loads found in edge environments. Moreover, these methods usually fit prediction models to independent time series and then produce single-point real-valued load predictions. In practical edge computing scenarios, however, obtaining the probability distribution of future load changes is of greater application value than directly predicting real future load values. To address these problems, this paper proposes an Edge Load Prediction method with Deep Auto-regressive Recurrent networks (ELP-DAR). The proposed ELP-DAR uses edge load time-series data to train a deep auto-regressive recurrent neural network that integrates LSTM into a sequence-to-sequence (S2S) framework, and then directly predicts all parameters of the load probability distribution at the next time point. ELP-DAR can therefore efficiently extract important representations of edge loads and learn complex edge load patterns, achieving accurate probability-distribution predictions for highly variable edge loads. Based on real-world edge load datasets, extensive simulation experiments verify and analyze the effectiveness of the proposed ELP-DAR. The experimental results show that, compared with other benchmark methods, ELP-DAR achieves higher prediction accuracy and exhibits superior performance across different prediction horizons.
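Predicting "all parameters of the load probability distribution" as described above typically means the network emits, say, a mean and a standard deviation per time step and is trained by maximum likelihood. A minimal sketch of such a training criterion, assuming a Gaussian output distribution (the paper's actual distribution choice is not stated in this abstract):

```python
import math

# Hedged sketch: a DeepAR-style objective. The network would output (mu, sigma)
# for the next time step, and training minimizes the Gaussian negative
# log-likelihood of the observed load. Gaussian output is an assumption here.

def gaussian_nll(y, mu, sigma):
    """Negative log-likelihood of observed load y under N(mu, sigma^2)."""
    return (0.5 * math.log(2.0 * math.pi * sigma ** 2)
            + (y - mu) ** 2 / (2.0 * sigma ** 2))
```

Sampling from the predicted distribution then yields calibrated intervals rather than a single point forecast, which is the advantage the abstract emphasizes.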
Keyword :
recurrent neural networks; probability distribution; deep autoregression; load prediction; edge computing
Cite:
GB/T 7714: 陈礼贤, 梁杰, 黄一帆, et al. 基于深度自回归循环神经网络的边缘负载预测 [J]. 小型微型计算机系统, 2024, 45(02): 359-366.
MLA: 陈礼贤, et al. "基于深度自回归循环神经网络的边缘负载预测." 小型微型计算机系统 45.02 (2024): 359-366.
APA: 陈礼贤, 梁杰, 黄一帆, 陈哲毅, 于正欣, & 陈星. 基于深度自回归循环神经网络的边缘负载预测. 小型微型计算机系统, 2024, 45(02), 359-366.
Abstract :
As an effective technique to relieve the problem of resource constraints on mobile devices (MDs), computation offloading utilizes powerful cloud and edge resources to process the computation-intensive tasks of mobile applications uploaded from MDs. In cloud-edge computing, the resources (e.g., cloud and edge servers) that can be accessed by mobile applications may change dynamically. Meanwhile, the parallel tasks in mobile applications may lead to a huge solution space of offloading decisions. Therefore, it is challenging to determine proper offloading plans in response to such high dynamics and complexity in cloud-edge environments. Existing studies often preset the priority of parallel tasks to simplify the solution space of offloading decisions, and thus proper offloading plans cannot be found in many cases. To address this challenge, we propose a novel real-time and Dependency-aware task Offloading method with Deep Q-networks (DODQ) in cloud-edge computing. In DODQ, mobile applications are first modeled as Directed Acyclic Graphs (DAGs). Next, a Deep Q-Network (DQN) is customized to train the decision-making model of task offloading, which considers the parallelism of tasks without presetting task priorities, aiming to quickly complete the decision-making process and generate new offloading plans when the environment changes. Simulation results show that DODQ can well adapt to different environments and efficiently make offloading decisions. Moreover, DODQ outperforms the state-of-the-art methods and quickly reaches optimal/near-optimal performance.
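The key structural idea above is that, with the application modeled as a DAG, no fixed task priority is needed: at each decision step the agent chooses among the tasks whose predecessors have all finished. A minimal sketch of that ready-set computation (the dictionary-of-sets representation is an illustrative assumption):

```python
# Hedged sketch: compute the set of schedulable tasks in a DAG-modeled mobile
# application. The DQN agent would pick its next offloading action from this
# set, preserving task parallelism without any preset priority ordering.

def ready_tasks(deps, done):
    """deps: {task: set of predecessor tasks}; done: set of finished tasks."""
    return sorted(t for t, preds in deps.items()
                  if t not in done and preds <= done)
```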
Keyword :
cloud computing; cloud-edge computing; computational modeling; deep reinforcement learning; dependent and parallel tasks; heuristic algorithms; mobile applications; real-time offloading; real-time systems; servers; task analysis
Cite:
GB/T 7714: Chen, Xing, Hu, Shengxi, Yu, Chujia, et al. Real-Time Offloading for Dependent and Parallel Tasks in Cloud-Edge Environments Using Deep Reinforcement Learning [J]. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2024, 35(3): 391-404.
MLA: Chen, Xing, et al. "Real-Time Offloading for Dependent and Parallel Tasks in Cloud-Edge Environments Using Deep Reinforcement Learning." IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS 35.3 (2024): 391-404.
APA: Chen, Xing, Hu, Shengxi, Yu, Chujia, Chen, Zheyi, & Min, Geyong. Real-Time Offloading for Dependent and Parallel Tasks in Cloud-Edge Environments Using Deep Reinforcement Learning. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2024, 35(3), 391-404.
Abstract :
The emergence of 5G network slicing and computation offloading is expected to enable Mobile Edge Computing (MEC) systems to reduce service delay while improving resource utilization, thereby better satisfying the demands of different users. However, due to the dynamics of MEC system states and the variability of user demands, effectively combining network slicing with computation offloading still faces huge challenges. Existing solutions usually rely on static network resource partitioning or prior system knowledge and cannot adapt to dynamic, changeable MEC environments, resulting in excessive service delay and unreasonable resource provisioning. To address these important challenges, this paper proposes a Computation Offloading method towards 5G Network Slicing (CONS) in MEC environments. First, based on an analysis of historical user requests, a gated recurrent neural network is designed to accurately predict the number of user requests in future time slots, and network slices are dynamically adjusted by combining these predictions with user resource demands. Next, based on the resulting slice resource partitioning, a twin-delayed deep reinforcement learning method is designed to make computation offloading and resource allocation decisions; by resolving Q-value overestimation and high variance, it effectively approximates the optimal policy in dynamic MEC environments. Based on real-world user traffic datasets, extensive simulation experiments verify the feasibility and effectiveness of the proposed CONS method. Compared with five benchmark methods, CONS effectively improves the profits of service providers and exhibits superior performance in different scenarios.
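The twin-delayed reinforcement learning step above curbs Q-value overestimation with a well-known trick: the Bellman target takes the minimum of two critics' estimates. A minimal scalar sketch of that target, as one illustration of the mechanism the abstract relies on (scalar inputs are a simplifying assumption):

```python
# Hedged sketch of the clipped double-Q target used by TD3-style methods:
# taking min(q1, q2) biases the target downward, counteracting the systematic
# overestimation of a single learned critic.

def td_target(reward, gamma, q1_next, q2_next):
    """Bellman target with clipped double-Q estimation."""
    return reward + gamma * min(q1_next, q2_next)
```

The "delayed" half of the name refers to updating the actor less frequently than the critics, which further reduces target variance.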
Keyword :
deep reinforcement learning; mobile edge computing; network slicing; computation offloading; resource allocation
Cite:
GB/T 7714: 张俊杰, 王鹏飞, 陈哲毅, et al. MEC环境中面向5G网络切片的计算卸载方法 [J]. 小型微型计算机系统, 2024, 45(9): 2285-2293.
MLA: 张俊杰, et al. "MEC环境中面向5G网络切片的计算卸载方法." 小型微型计算机系统 45.9 (2024): 2285-2293.
APA: 张俊杰, 王鹏飞, 陈哲毅, 于正欣, & 苗旺. MEC环境中面向5G网络切片的计算卸载方法. 小型微型计算机系统, 2024, 45(9), 2285-2293.
Abstract :
In mobile edge computing (MEC) systems, unmanned aerial vehicles (UAVs) enable edge service providers (ESPs) to offer flexible resource provisioning with broader communication coverage, thus improving the Quality of Service (QoS). However, dynamic system states and various traffic patterns seriously hinder efficient cooperation among UAVs. Existing solutions commonly rely on prior system knowledge or complex neural network models, lacking adaptability and causing excessive overheads. To address these critical challenges, we propose DisOff, a novel profit-aware cooperative offloading framework for UAV-enabled MEC with lightweight deep reinforcement learning (DRL). First, we design an improved DRL method with twin critic networks and a delay mechanism, which mitigates Q-value overestimation and high variance and thus approximates the optimal UAV cooperative offloading and resource allocation. Next, we develop a new multi-teacher distillation mechanism for the proposed DRL model, where the policies of multiple UAVs are integrated into one DRL agent, compressing the model size while maintaining superior performance. Using real-world datasets of user traffic, extensive experiments are conducted to validate the effectiveness of the proposed DisOff. Compared to benchmark methods, DisOff enhances ESP profits while reducing the DRL model size and training costs.
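The multi-teacher distillation step above integrates several UAV agents' policies into one student. At its simplest, the teachers' per-action outputs are blended into a single soft target the student is trained toward; the following sketch illustrates that blending (uniform teacher weighting and the list-of-lists representation are illustrative assumptions):

```python
# Hedged sketch: blend several teacher agents' per-action values into one
# distillation target for a single lightweight student agent. The student
# would then be trained to match these blended targets.

def distill_target(teacher_outputs, weights=None):
    """teacher_outputs: list of per-action value lists, one list per teacher."""
    n = len(teacher_outputs)
    weights = weights or [1.0 / n] * n
    n_actions = len(teacher_outputs[0])
    return [sum(w * out[a] for w, out in zip(weights, teacher_outputs))
            for a in range(n_actions)]
```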
Keyword :
autonomous aerial vehicles; computational modeling; computation offloading; deep reinforcement learning (DRL); Internet of Things; mobile edge computing (MEC); model compression; optimization; quality of service; resource management; training; unmanned aerial vehicle (UAV)
Cite:
GB/T 7714: Chen, Zheyi, Zhang, Junjie, Zheng, Xianghan, et al. Profit-Aware Cooperative Offloading in UAV-Enabled MEC Systems Using Lightweight Deep Reinforcement Learning [J]. IEEE INTERNET OF THINGS JOURNAL, 2024, 11(12): 21325-21336.
MLA: Chen, Zheyi, et al. "Profit-Aware Cooperative Offloading in UAV-Enabled MEC Systems Using Lightweight Deep Reinforcement Learning." IEEE INTERNET OF THINGS JOURNAL 11.12 (2024): 21325-21336.
APA: Chen, Zheyi, Zhang, Junjie, Zheng, Xianghan, Min, Geyong, Li, Jie, & Rong, Chunming. Profit-Aware Cooperative Offloading in UAV-Enabled MEC Systems Using Lightweight Deep Reinforcement Learning. IEEE INTERNET OF THINGS JOURNAL, 2024, 11(12), 21325-21336.
Abstract :
In edge computing, to alleviate the limited computing power and storage capacity of mobile devices, some computation-intensive tasks are usually offloaded to edge servers. However, due to differences in the computing power of mobile devices, a single unified offloading scheme cannot be devised for all devices, while training a separate model for each device cannot meet delay requirements. To address this problem, this paper proposes a task offloading method for heterogeneous devices based on federated deep reinforcement learning. The method uses the offloading experience of existing mobile devices in the environment and combines deep Q-networks with a federated learning framework to build a global model. A personal model is then built by fine-tuning the global model with a small amount of experience from a new mobile device. In extensive experiments across multiple scenarios, the proposed method is compared with the ideal scheme and the Naive, global-model, and rule-based algorithms. The experimental results verify the effectiveness of the proposed method for the task offloading problem on heterogeneous devices: it obtains offloading schemes close to the ideal scheme while incurring only a short delay.
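The two stages described above — federated aggregation of experienced devices' models, then a brief local fine-tune for a new device — can be sketched minimally as follows (flat weight lists, plain gradient descent, and the `lr` value are illustrative assumptions standing in for the actual DQN training loop):

```python
# Hedged sketch of the abstract's two stages: FedAvg builds a global model
# from local models, and a new device personalizes it with a few local
# gradient steps on its own small experience buffer.

def fedavg(models):
    """Average the weights of several local models (FedAvg aggregation)."""
    n = len(models)
    return [sum(ws) / n for ws in zip(*models)]

def fine_tune(global_w, grads, lr=0.1):
    """One personalization step: adjust the global model with local gradients."""
    return [w - lr * g for w, g in zip(global_w, grads)]
```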
Keyword :
task offloading; dependency awareness; deep reinforcement learning; federated learning; edge computing
Cite:
GB/T 7714: 余楚佳, 胡晟熙, 林欣郁, et al. 针对差异化设备的任务卸载方法 [J]. 小型微型计算机系统, 2024, 45(8): 1816-1824.
MLA: 余楚佳, et al. "针对差异化设备的任务卸载方法." 小型微型计算机系统 45.8 (2024): 1816-1824.
APA: 余楚佳, 胡晟熙, 林欣郁, 陈哲毅, & 陈星. 针对差异化设备的任务卸载方法. 小型微型计算机系统, 2024, 45(8), 1816-1824.
Abstract :
With flexible mobility and broad communication coverage, unmanned aerial vehicles (UAVs) have become an important extension of multiaccess edge computing (MEC) systems, exhibiting great potential for improving the performance of federated graph learning (FGL). However, due to the limited computing and storage resources of UAVs, they may not handle redundant data and complex models well, causing inefficient FGL inference in UAV-assisted MEC systems. To address this critical challenge, we propose a novel LightWeight FGL framework, named LW-FGL, to accelerate the inference speed of classification models in UAV-assisted MEC systems. Specifically, we first design an adaptive information bottleneck (IB) principle, which enables UAVs to obtain well-compressed, worthy subgraphs by filtering out information that is irrelevant to downstream classification tasks. Next, we develop improved tiny graph neural networks (GNNs), which are used as the inference models on UAVs, reducing computational complexity and redundancy. Using real-world graph datasets, extensive experiments are conducted to validate the effectiveness of the proposed LW-FGL. The results show that LW-FGL achieves higher classification accuracy and faster inference speed than state-of-the-art methods.
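The IB-based compression step above filters out task-irrelevant graph structure before inference. As a deliberately simplified illustration of that filtering (the per-edge relevance scores and threshold `tau` are assumptions — in the paper they would be learned under the IB objective rather than given):

```python
# Hedged sketch: keep only edges whose estimated relevance to the downstream
# classification task exceeds a threshold, yielding a compressed subgraph for
# the tiny GNN to run on. Scores and tau stand in for a learned IB criterion.

def compress_subgraph(edges, relevance, tau=0.5):
    """edges: list of (u, v); relevance: {(u, v): score in [0, 1]}."""
    return [e for e in edges if relevance[e] > tau]
```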
Keyword :
autonomous aerial vehicles; biological system modeling; classification inference; computational modeling; data models; federated graph learning (FGL); graph neural networks; lightweight model; multiaccess edge computing (MEC); task analysis; training; unmanned aerial vehicle (UAV)
Cite:
GB/T 7714: Zhong, Luying, Chen, Zheyi, Cheng, Hongju, et al. Lightweight Federated Graph Learning for Accelerating Classification Inference in UAV-Assisted MEC Systems [J]. IEEE INTERNET OF THINGS JOURNAL, 2024, 11(12): 21180-21190.
MLA: Zhong, Luying, et al. "Lightweight Federated Graph Learning for Accelerating Classification Inference in UAV-Assisted MEC Systems." IEEE INTERNET OF THINGS JOURNAL 11.12 (2024): 21180-21190.
APA: Zhong, Luying, Chen, Zheyi, Cheng, Hongju, & Li, Jie. Lightweight Federated Graph Learning for Accelerating Classification Inference in UAV-Assisted MEC Systems. IEEE INTERNET OF THINGS JOURNAL, 2024, 11(12), 21180-21190.
Abstract :
Mobile Edge Computing (MEC) effectively reduces task response time and improves resource utilization by deploying computing and storage resources at the network edge. Due to the dynamics of MEC system states and the variability of user demands, effective task scheduling faces huge challenges, and unreasonable scheduling policies seriously degrade overall system performance. Existing work usually allocates resources to tasks evenly or applies rule-based policies, which cannot effectively handle dynamic MEC environments and may cause excessive resource consumption, thereby degrading the Quality of Service (QoS). To address these important problems, this paper proposes a Task Scheduling method based on Actor-Critic deep reinforcement learning (TSAC) for MEC. First, a task scheduling model for edge environments is proposed, with task waiting time and task completion rate as the optimization objectives. Second, based on the proposed system model and the deep reinforcement learning framework, the joint optimization problem is formalized as a Markov decision process. Finally, based on the proximal policy optimization method, a novel masking mechanism is designed that prevents the agent from taking actions that violate system constraints and from abrupt policy changes, while improving the convergence of TSAC. Simulation experiments based on a real-world Google cluster trace show that, compared with the deep Q-network method, TSAC reduces task waiting time by at least 6% while improving the task completion rate by 4%, verifying its feasibility and effectiveness.
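The masking mechanism described above keeps the agent from sampling constraint-violating actions. The standard way to do this is to push invalid actions' logits to negative infinity before the softmax, so they receive zero probability; a minimal sketch (the flat-list policy representation is an illustrative assumption, and at least one action is assumed valid):

```python
import math

# Hedged sketch of action masking for a PPO-style policy: logits of invalid
# actions are set to -inf before the softmax, so exp(-inf) == 0.0 and those
# actions get exactly zero probability.

def masked_policy(logits, valid):
    """Softmax over logits with invalid actions masked out."""
    masked = [l if ok else float("-inf") for l, ok in zip(logits, valid)]
    m = max(masked)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in masked]
    z = sum(exps)
    return [e / z for e in exps]
```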
Keyword :
task scheduling; multi-objective optimization; masking mechanism; deep reinforcement learning; mobile edge computing
Cite:
GB/T 7714: 黄一帆, 曾旺, 陈哲毅, et al. 移动边缘计算中基于Actor-Critic深度强化学习的任务调度方法 [J]. 计算机应用, 2024, 44(S1): 150-155.
MLA: 黄一帆, et al. "移动边缘计算中基于Actor-Critic深度强化学习的任务调度方法." 计算机应用 44.S1 (2024): 150-155.
APA: 黄一帆, 曾旺, 陈哲毅, 于正欣, & 苗旺. 移动边缘计算中基于Actor-Critic深度强化学习的任务调度方法. 计算机应用, 2024, 44(S1), 150-155.
Abstract :
Green innovation is the inevitable trend in the development of the supply chain, and thus governments adopt subsidy policies to enhance the relevant enterprises' enthusiasm for green development. In view of manufacturers' fairness concerns in the dual-channel green supply chain composed of manufacturers and retailers, we propose a novel Stackelberg game model led by retailers and analyze the impact of manufacturers' fairness concerns on the decision-making of manufacturers and retailers in the dual-channel green supply chain under government subsidies. The results show that only the wholesale price of products, manufacturers' profits, and retailers' profits are affected by the manufacturer's fairness concerns. When the manufacturer has fairness concerns, product greenness and the profits of supply chain members rise as government subsidies increase. The results can offer an effective reference for dual-channel supply chain members with fairness concerns to make optimal decisions under government subsidies.
Keyword :
dual-channel green supply chain; fairness concerns; government subsidies; retailer-led
Cite:
GB/T 7714: Song, Lei, Xin, Qi, Chen, Huilin, et al. Optimal Decision-Making of Retailer-Led Dual-Channel Green Supply Chain with Fairness Concerns under Government Subsidies [J]. MATHEMATICS, 2023, 11(2).
MLA: Song, Lei, et al. "Optimal Decision-Making of Retailer-Led Dual-Channel Green Supply Chain with Fairness Concerns under Government Subsidies." MATHEMATICS 11.2 (2023).
APA: Song, Lei, Xin, Qi, Chen, Huilin, Liao, Lutao, & Chen, Zheyi. Optimal Decision-Making of Retailer-Led Dual-Channel Green Supply Chain with Fairness Concerns under Government Subsidies. MATHEMATICS, 2023, 11(2).